You can see the pattern and understand the root cause: ChatGPT can’t actually understand the rules of Lasers & Feelings (in the sense of having the words of the rulebook create a mental model that it can then use independent of the words) and, therefore, cannot truly use them. It can only generate a sophisticated pattern of babble, guessing what the next word of a transcript of a Lasers & Feelings game session would look like based on the predictive patterns generated from its training data.

And:

It turns out that the GM’s primary responsibility is to create and hold a mental model of the game world in their mind’s eye, which they then describe to the players. This mental model is the canonical reality of the game, and it’s continuously updated – and redescribed by the GM – as a result of the players’ actions.

From the grumpy DM.