Because it seems fancy-pants and authoritative:
The generally accepted hypothesis ties this overuse back to LLMs’ training and reinforcement processes. Models learn to predict language patterns, so punctuation that is common in their training data naturally shows up in their output. But training data isn’t the only factor determining which patterns get used more often. Models like Claude and ChatGPT have an additional goal with their responses: to provide users with clarity. Em-dashes, which allow for explanatory pauses and for breaking down complex ideas, are an ideal tool for that. So LLMs are not only exposed to plenty of em-dashes during training; the reinforcement process also rewards using them. The result is that em-dashes appear more frequently in LLM output than in typical human writing.
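The "more frequently than typical human writing" claim is measurable. A minimal sketch of how you might compare em-dash density across text samples (the `em_dash_rate` helper and both sample strings are invented for illustration, not real corpus data):

```python
def em_dash_rate(text: str, per_chars: int = 1000) -> float:
    """Return em-dash (U+2014) occurrences per `per_chars` characters."""
    if not text:
        return 0.0
    return text.count("\u2014") / len(text) * per_chars

# Invented examples: a plain sentence vs. an em-dash-heavy rephrasing.
human_sample = "The meeting ran long, but we finished the agenda on time."
llm_sample = ("The meeting ran long\u2014longer than planned\u2014but we "
              "finished the agenda\u2014every item\u2014on time.")

print(f"human: {em_dash_rate(human_sample):.2f} per 1,000 chars")
print(f"llm:   {em_dash_rate(llm_sample):.2f} per 1,000 chars")
```

Run over real human and LLM corpora, a normalized rate like this is how you would actually test the hypothesis rather than rely on anecdote.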