LLMs Do Not Succeed in Games by Default: The Benchmark and API Layer Is Doing Much of the Work
Recent game-playing results from large language models are easy to overread. The stronger finding is not that LLMs can simply be dropped into games and succeed, but that their performance improves sharply when researchers add task-specific evaluation harnesses, game interfaces, and supporting modules that compensate for weak planning, inconsistent action formatting, and limited memory. LMGAME-BENCH shows the gap between…