No Rules Are Implicit Rules - The European view on enlightened American management policy: “Greg, I hate to bring it to you, but working for ten fucking hours a day is not the normal hour. I don’t care if you live in America or not.” The section continues with other “grand” examples of managers taking “up to” 14 days a year off to show their employees they should do so too. Let’s assume the best here: 14 workdays is almost three weeks. A year. The statutory minimum for full-time employees working a forty-hour week is 20 days (thus 4 weeks) in Belgium. Oops.

AI Agents and the CEOs - “At the risk of saying the quiet part out loud, the way CEOs are talking about agents sure sounds like how they talk about employees–only cheaper!” // “Companies are dedicating significant spend to AI–approximately 5% of the revenue of large enterprises (revenues over $500 million) according to one survey by Boston Consulting Group, and yet only 25% claim they are seeing value from their AI investment.”

Semiconductors, Security, and the DeepSeekFreak, along with Ass Semiotics

In this episode: AI eschatology, assology, and a deep, intellectual commitment to hating mayonnaise. Tariff trouble, security panic, and NVIDIA shrugging off DeepSeek. Young voters shift rightward, no one agrees on ‘medium roast,’ and Hollywood still relies on glue to critique its own youth obsession.

Wastebook

“immanentize the AI eschaton,” Charlie Stross.

“The ass is a very strong symbol of how our body is not neutral in the public space. How our body is constantly scrutinized, has been shaped to please the man’s eyes, has been seen as a body part that was objectified, that was detached from the person who was simply bearing it.”

A head full of bologna

Lots of links and stuff this episode: AI isn’t a coworker, it’s just automation wrapped in hype. Tech moves fast, but nothing lasts—except bad takes, questionable business models, and the creeping realization that managers just want fewer humans to manage. Meanwhile, we live like kings and don’t even notice.

Put it on ice

Good episode of Software Defined Talk this week, especially the opening moment of absurdity where we, yet again, try to solve Europe’s ice problem.

The danger of relying on OpenAI’s Deep Research - Some valid critiques of Deep Research, though none of them really amount to “it’s not good.” To sum up: it can’t do complex research, let alone come up with original ideas or cover obscure topics. It can only tell you what the Internet knows. This is actually not fully accurate: you can also upload your own files and put in your own knowledge. For me, the main problem is the readability of the reports. While they are long and detailed, they’re not written in an engaging way that makes them easy to read. I have a pile of them that I’ve yet to fully pick through. // Yeah, these robots have little creativity and original thought of their own; they can only do the predictable. But, man, they sure can do a lot of it.

[Learning from my mistakes…](https://open.substack.com/pub/chrisduncania/p/learning-from-my-mistakes?r=2d4o&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false) - It’s tough to monetize content that has near-zero value or originality and can be easily pirated. This is especially true if the price is wrong. That sort of applies to every product. // “In the end though, you can’t optimise your way out of a black hole, the gravity is too heavy. We were marketing a product at a price point that was material to our customers, and giving them content which was largely available from our competitors for free. All the tweaks in the world couldn’t change that.”