Executive translation. - “Many high-agency managers try to prevent executives from doing silly things, but it’s almost always more effective to translate their energy for a silly thing into energy for a useful thing. It also leaves the executive feeling supported by your work rather than viewing you as an obstacle to their progress.”

How DevOps can come back from the dead, and why it must

The DevOps community is running on fumes, at the lowest point in mindshare and interest it’s ever been. This is stupid. The practices, tools, and mindset of DevOps are vital to how most organizations run their software, and DevOps has improved the way the software we use every day is built and run, improving all of our lives. If DevOps weren’t a thing, the world of software would be worse and each day would be a little more tedious because the apps we depend on would be worse.

Beyond mystic management mind-games

Searching for Cheap Tricks

The amount of AI content and conversations out there is getting exhausting. It’s almost as bad as the burbling font of platform engineering content that gets rewritten and published each week. The RAND paper and commentary below got me thinking, though: a lot of the AI talk is just talk about applying new technologies in general. We are still really bad at the basic tasks of communicating requirements to developers, checking in on their work to see if they’re solving the right problems in a helpful way…and even knowing which business problems need attention in the first place.

Is AI a Silver Bullet? - “This is the trade off we have seen before: eliminating the accidental complexity of authoring code has less value than we think because code spends most of its time in maintenance, so the maintainability of the code is more important than the speed to author it.”

The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed: Avoiding the Anti-Patterns of AI | RAND - As ever, communicating the requirements and the problem to be solved to IT is difficult and often results in solving the wrong problem, or at least not solving it in the right way: “In failed projects, either the business leadership does not make themselves available to discuss whether the choices made by the technical team align with their intent, or they do not realize that the metrics measuring the success of the AI model do not truly represent the metrics of success for its intended purpose. For example, business leaders may say that they need an ML algorithm that tells them the price to set for a product, but what they actually need is the price that gives them the greatest profit margin instead of the price that sells the most items. The data science team lacks this business context and therefore might make the wrong assumptions. These kinds of errors often become obvious only after the data science team delivers a completed AI model and attempts to integrate it into day-to-day business operations.”