Using AI to help with SRE, ops, etc.:
The problem, said Palcuie, is that Claude "will get wrong correlation versus causation." Like a new joiner on the team, it will think "oh, it's a capacity problem, when actually you lost your cache." "This is why we can't trust LLMs for incident response," he said. The root issue is the model's inability to "step back and start discerning between causation and correlation… For us humans, it is hard as well."
And:
The Jevons Paradox, said Palcuie, is “the favorite paradox in the AI industry. It’s when technological improvements increase the efficiency of our resources used, but the resulting lower cost causes consumption to rise rather than fall.”
In the case of software, “it’s easier to write software, so we write much more of it, so the complexity goes up and not down, which means things break in more interesting ways, which means more incidents, more on call… all the improvements in the tooling will be cancelled by this ever-growing complexity.”
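The arithmetic behind that claim can be sketched with a toy calculation (all numbers below are hypothetical, chosen only to illustrate the mechanism; none come from the talk): if tooling makes each feature several times cheaper to build, but the lower cost causes teams to ship even more features than the efficiency gain, total effort and complexity still rise.

```python
# Illustrative sketch of the Jevons Paradox applied to software.
# All figures are made up for illustration, not from the source.

cost_per_feature_before = 10.0   # engineer-days per feature (assumed)
features_before = 100            # features shipped per year (assumed)

efficiency_gain = 5.0            # tooling makes each feature 5x cheaper
demand_multiplier = 8.0          # cheaper software -> far more gets written

cost_per_feature_after = cost_per_feature_before / efficiency_gain
features_after = features_before * demand_multiplier

total_effort_before = cost_per_feature_before * features_before
total_effort_after = cost_per_feature_after * features_after

# Efficiency improved 5x, yet total effort (a stand-in for system
# complexity and incident surface) grew, because consumption rose
# faster than per-unit cost fell.
print(total_effort_before, total_effort_after)  # 1000.0 1600.0
```

The paradox only bites when demand grows faster than efficiency; if the demand multiplier were below the efficiency gain, total effort would fall as intuition expects.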
From: Fixing Claude with Claude: Anthropic reports on AI site reliability engineering