AI has shifted from a helpful add-on to the bedrock of modern software engineering, according to Google Cloud’s 2025 DORA Report. While nearly all developers now use AI, trust remains tentative, and the real differentiator lies in how organizations structure their practices and platforms to manage speed without chaos.
Summarized by AI.
Source summarized:
AI Is the New Baseline: What Google Cloud’s 2025 DORA Report Means for Developers – ADTmag
Google Cloud’s 2025 DORA Report signals a definitive shift in software engineering: AI isn’t just an accelerator anymore; it’s the baseline. A staggering 90% of developers now actively employ AI tools, with two-thirds integrating AI into at least half their workflows. Yet the paradox is clear—while 80% report productivity gains and 60% report higher code quality, only a quarter feel strong trust in AI’s outputs. Developers are embracing the “trust but verify” model, funneling AI-generated work through layers of version control, automated testing, and human oversight.
The report frames the AI transition through both velocity and risk. Teams that integrate AI effectively see higher throughput—faster deployments, quicker recoveries, and better responsiveness to change. But without robust pipelines, small-batch workflows, and platform-based delivery, AI can inadvertently increase instability. DORA’s new AI Capabilities Model offers a blueprint: seven practices ranging from clear governance and healthy data ecosystems to user-centric product focus and high-quality internal developer platforms separate the high performers from the firefighting teams.
One of the report’s more colorful contributions is its taxonomy of seven AI-era team archetypes. At the top are “Harmonious High-Achievers,” who manage to accelerate without burning out. At the bottom sit “Foundational Challenges,” where process gaps and cultural friction prevent AI from delivering meaningful improvements. In between are nuanced archetypes like “Stable and Methodical” or “Legacy Bottleneck,” illustrating that AI doesn’t magically fix organizational dysfunction; it amplifies what’s already there.
Beyond team structures, the report probes how AI is reshaping developer experience. Prompt engineering is rising as a core skill, and engineers using AI report a boost in “authentic pride” even if their sense of meaning stays neutral. But the evolution comes with a warning: AI’s efficiency may reduce opportunities for junior engineers to learn by doing. Organizations are urged to consciously balance throughput with mentorship and skill-building to avoid hollowing out their talent pipelines. As the report concludes, AI adoption is no longer the question—how teams transform around it will determine who actually thrives in the AI-native era.
#tech #culture #AI #DevOps #softwareengineering
Summarized by ChatGPT on Sep 25, 2025 at 6:59 AM.
Generative AI stormed into software development promising a coding revolution, but so far, it’s making developers only marginally faster—and in some cases, slower. Bain & Company’s Technology Report 2025 argues that real productivity gains will require a wholesale reinvention of the entire development lifecycle, not just sprinkling AI on coding tasks.
AI-generated summary of: AI coding hype overblown, Bain shrugs.
Generative AI entered software development with promises of exponential efficiency, but Bain’s new report suggests the industry’s expectations were wildly optimistic. While roughly two-thirds of firms have rolled out tools such as AI code copilots, actual developer adoption and measurable returns remain tepid. Many early pilots saw productivity bumps of just 10 to 15 percent, with the time “saved” often consumed by error correction or stuck in processes that weren’t reimagined to leverage AI’s potential. A separate METR study even found that some AI coding initiatives slowed developers down, puncturing the narrative of a frictionless AI coding future.
A core problem is that software development is far more than coding. Bain points out that writing and testing code accounts for only a third of the lifecycle; design, requirements gathering, deployment, and maintenance dominate the rest. Simply speeding up code generation won’t meaningfully change delivery timelines or business outcomes. This is why Bain advocates an “AI-native” approach, which implies embedding generative AI across every lifecycle phase. That means automating planning, integrating AI into testing and deployment pipelines, and rethinking workflows so AI-driven work can move seamlessly to production. Without this comprehensive shift, most pilot projects will plateau, yielding disappointing returns that are hard to justify to CFOs.
The report also casts doubt on the next big buzzword: agentic AI. Tools like Cognition’s Devin promised autonomous software engineering, but in practice, struggles abound. Devin completed just 3 of 20 tasks in early tests, often producing unusable solutions or getting stuck. Carnegie Mellon’s benchmark study found AI agents fail 70% of multi-step office tasks, while Gartner forecasts 40% of these projects will be abandoned by 2027. Cultural and organizational hurdles may be just as challenging: engineers resist AI adoption, firms skimp on training, and executives fail to define clear KPIs. Bain’s takeaway is blunt—without bold, top-down leadership and a total process rethink, generative AI in software development risks being a costly sideshow rather than a transformative tool.
Summarized by ChatGPT on Sep 24, 2025 at 8:28 AM.
[2509.13348] Towards an AI-Augmented Textbook - Adapting/changing textbooks to match learning style. // This is an additive use of AI: you’re not replacing humans, you’re doing more work that humans couldn’t do.
Enterprise AI Looks Bleak, But Employee AI Looks Bright - I think a takeaway is: AI ROI is accruing to individuals, not the enterprise as a whole. This must drive executives crazy. Is that some kind of digital Marxist thing?
Also, people have suggested that having just two roles on the AI Center of Excellence Gatekeepers Guild is a good idea: security and legal. The theory is that the security people are used to this kind of thing and have processes in place, and that they’re narrowly focused on a handful of things. You need the lawyers for all the wacky IP stuff; hopefully they focus mostly on licenses. I think what you probably want to avoid in the AI policy board is focusing on endless hypotheticals about what can go wrong and ways of preventing it. But, who knows at this point?
Unlocking content potential: a report on organising structures and capability - not sure about the conclusions, but good for the archives.
The art of Jean-Michel Nicollet - “French cover design can be unsympathetic to cover illustration, crowding the paintings with poor type choices and purposeless graphics.” // Great looking paperback covers. // That basic framing would be good for video thumbnails…?
Manton predicts the AI oligopoly - “I’m increasingly thinking that we’ll have OpenAI and Google for the mainstream, Anthropic carving out an enterprise niche, Meta doing the ads thing, open source models… and the rest of the industry is going to fade away.”
The new Lost Generation - What people without kids are up to. “Must be nice,” etc.
How Target is rethinking search for generative AI - 🤖 To stay ahead, Target is training its AI agents to understand product context deeply and surface assortments that reflect how people shop in the real world. Bhosale gave an example of a “summer party” query that should generate not just tableware, but grills, décor, sunscreen, and more—a holistic, curated experience.
🤖 Shaping the Future of VMware Cloud Foundation: Broadcom’s Focused Strategy for VMware Cloud Service Providers - Broadcom outlines its plan to consolidate VMware Cloud partners, emphasizing quality, customer value, and strategic growth for its VCF 9.0 ecosystem.
AI-Generated “Workslop” Is Destroying Productivity - “We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.” And: “Each incidence of workslop carries real costs for companies. Employees reported spending an average of one hour and 56 minutes dealing with each instance of workslop.” Related: “It turns out having an always-available ‘marriage therapist’ with a sycophantic instinct to always take your side is catastrophic for relationships.” // Some additional shitting all over AI if you’re into that.
I Blame the AI - “Accountability sinks are systems designed so that when things go wrong, no individual human can be held responsible for fixing them. He argues that ‘decisions are delegated to a complex rule book or set of standard procedures, making it impossible to identify the source of mistakes when they happen.’” So, therefore: “Redesigning roles and AI systems with human overrides will be essential to ensuring accountability. At least for now, every key decision still needs a person behind it & maybe that’s enough to rebook quickly enough to get home for dinner.”
The intelligence is in the user - The robot is only as good as what you bring to it. Related:
The Man Calling Bullshit on the AI Boom - “To what end? What happened there? Because we get all these stories about ‘Oh, they fed all the data into the LLM,’ and then what?” // Plus, a list of previous big tech things that have been utter bullshit.
Lots of time to catch up on things this week. You know, the “important, but not urgent” quadrant.
For example, I set up my desk so that I can do the Larry King video style. You know, leaning on the desk. Dan often does this, and so do Noah and Whitney. People tell me this is better. And, as a viewer, I do like that look more.
Everyday AI and AI Everyday, with Hannah Foxwell - Software Defined Talk
Check out last week’s Software Defined Interviews with Hannah Foxwell: > In this episode, Whitney and Coté talk about the integration of AI into daily life with Hannah Foxwell, organizer of AI for the Rest of Us, among many other doings. They cover everything from practical applications of AI in daily tasks like finding recipes and tech support to the complexities of adopting AI in professional settings. Hannah also talks about building AI communities and conferences in general. Also, you hear about the upcoming conference AI for the Rest of Us.