The broader lesson extends beyond music. When outcomes worsen, we are tempted to regulate expression rather than confront underlying issues. School bans, censorship campaigns and moral lectures target what is easiest to see. But culture, more often than not, is a lagging indicator—not a driver—of economic life.

If hip hop had never existed, the trajectory of the communities most exposed to it would have remained the same. Silencing the music wouldn’t have created jobs, stabilized families or reduced violence. If we want gentler lyrics, we should solve the problems they describe.

Every generation thinks the music The Kids listen to is corrupting society. Of course, it never is. And usually it’s doing the opposite.

Relevant to your interests, Tuesday

OpenAI’s Codex taking aim at Claude Code, plus growing global pushback against U.S. tech, with Europe flirting with the kill switch. And a spicy take on AWS’s future: when developers don’t choose the cloud, AI tools do.

"This is how the new announcement economy works. You declare a massive number. The headlines write themselves. The stock moves. Mission accomplished. Whether the deal actually closes becomes almost irrelevant. The momentum already happened." As a generalized marketing strategy, my doctrine is that velocity has replaced authority as the organizing principle of information. What and who moves fastest wins. Truth and facts are optional and get lost in the race to dominate attention.

🔗 OpenAI and the New Announcement Economy

The real problem is that developers don’t choose AWS anymore (because honestly, given a choice between AWS and a vendor who thinks deeply about developer experience, who would?). They choose Vercel, or Netlify, or whatever their AI coding assistant suggests when they type “deploy this.”

Some charts, surveys, or data would be nice to see here. Still, I hear this sentiment a lot. If it gets into the chatter, true or not, it will have negative consequences for AWS.

🔗 AWS destiny: becoming the next Lumen, Corey Quinn

Relevant to your interests, Monday

Longevity data from the CDC, why CIOs keep missing AI infra costs, and what agentic AI can actually learn from low-code and fintech failures. Plus: vibe coding’s limits, the end of the AI “free lunch,” and what platform teams can do about it.

Enterprise ROI continues to be elusive, but 48% of those surveyed say they've been cutting humans

AI is still stuck in “efficiency mode.” In many organizations, a technology organization leads AI efforts and is treated as a cost center. Those CIOs are incentivized to optimize for efficiency, not growth. The result is predictable: Tech-led AI strategies deliver productivity improvements, but not transformation. Equally concerning is that technology and AI executives tell us their business partners can’t articulate what they want from AI beyond saving money. That leadership gap bleeds over into the measurement of AI value.

Runciman’s thesis, roughly, is that any given week in a democracy looks like a clown show of dithering, delay, u-turns, squabbling and complacency. But from the longer perspective the chaos is effective – a symptom of democracy’s capacity to criticise itself, to change tack, to spread its bets widely, keep its options open and correct its failures.

🔗 Why Homer Simpson went MAGA

Researchers at the University of Manchester followed 25,000 11- to 14-year-olds over three school years, tracking their self-reported social media habits, gaming frequency and emotional difficulties to find out whether technology use genuinely predicted later mental health difficulties.

Participants were asked how much time on a normal weekday in term time they spent on TikTok, Instagram, Snapchat and other social media, or gaming. They were also asked questions about their feelings, mood and wider mental health.

The study found no evidence for boys or girls that heavier social media use or more frequent gaming increased teenagers’ symptoms of anxiety or depression over the following year.

🔗 Social media time does not increase teenagers’ mental health problems – study

Google shows that large language models, when paired with traditional tools and validation pipelines, can accelerate enterprise code migrations by 50% or more, unblocking projects stalled for years.

The robot says:

The most dramatic impact occurred in previously stalled migrations, which AI unblocked after years of delay. One migration achieved an 89% estimated time savings, while another processed 149,000+ lines across 5,359 files in three months. Review, not generation, became the primary bottleneck. Google mitigated this by throttling weekly AI-generated changes to avoid overwhelming reviewers.
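The article says Google throttled weekly AI-generated changes to keep reviewers from being overwhelmed, but doesn't describe the mechanism. A minimal sketch of one plausible approach, assuming a rolling seven-day window and a fixed cap (both the `WeeklyThrottle` name and the cap logic are my illustration, not Google's actual system):

```python
from collections import deque
from datetime import datetime, timedelta

class WeeklyThrottle:
    """Caps how many AI-generated changes go out for review per rolling week.

    Hypothetical sketch: the cap and rolling-window logic are assumptions,
    not a description of Google's internal tooling.
    """

    def __init__(self, max_per_week: int):
        self.max_per_week = max_per_week
        self.sent: deque = deque()  # timestamps of recently submitted changes

    def try_submit(self, now: datetime) -> bool:
        week_ago = now - timedelta(days=7)
        # Drop submissions that have aged out of the rolling window.
        while self.sent and self.sent[0] < week_ago:
            self.sent.popleft()
        if len(self.sent) >= self.max_per_week:
            return False  # reviewers at capacity; hold the change for later
        self.sent.append(now)
        return True
```

The point of a rolling window rather than a calendar week is that review load stays smooth instead of spiking every Monday; held changes simply queue until capacity frees up.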

🔗 🤖 How is Google using AI for internal code migrations?