Don't get me wrong, I have no hate for AI. The potential for LLMs in gaming, especially for creating dynamic NPC dialogues or complex, adaptive game masters, is immense. The point of my previous post wasn't to dismiss AI as a whole, but to question its practical application for this specific, solved problem.
However, the example given (basic farming) is the worst possible use case to demonstrate this potential. It's the equivalent of using a fusion reactor to power a desk lamp. The overhead is astronomical compared to the task.
The core of my argument is about efficiency and the right tool for the job. For the predictable, loop-based behavior of auto-farming, a state machine is not just adequate; it is superior. It is lightweight, incredibly fast, reliable, and consumes negligible resources.
To prove it's not about the volume of code but the efficiency of execution,
here is the entirety of the auto-play logic for my server:
- AutoPlayTaskManager: ~400 lines of code
- AutoUseTaskManager: ~470 lines of code
This code provides full, retail-like auto-play support for all classes, including offline play.
It runs on any standard VPS without a dedicated GPU, using a tiny fraction of CPU cycles.
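For anyone unfamiliar with the pattern, the overall shape of such a loop can be sketched as a tiny state machine. This is a hypothetical illustration (the `FarmBot` and `Mob` names and the simplified game model are made up for this post, not taken from the actual AutoPlayTaskManager):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a state-machine auto-farm loop.
// Hypothetical names and game model, for illustration only.
public class FarmBot {
    enum State { FIND_TARGET, ATTACK, LOOT }

    static class Mob {
        int hp;
        Mob(int hp) { this.hp = hp; }
        boolean isDead() { return hp <= 0; }
    }

    private State state = State.FIND_TARGET;
    private Mob target;
    private int lootCount = 0;
    private final Deque<Mob> nearbyMobs;

    FarmBot(Deque<Mob> mobs) { this.nearbyMobs = mobs; }

    // One decision per game tick: a handful of cheap branches,
    // no inference latency, no GPU, no response parsing.
    void tick() {
        switch (state) {
            case FIND_TARGET:
                target = nearbyMobs.poll();
                if (target != null) state = State.ATTACK;
                break;
            case ATTACK:
                target.hp -= 10;                         // stand-in for a real attack
                if (target.isDead()) state = State.LOOT; // "if mob dead -> loot"
                break;
            case LOOT:
                lootCount++;
                state = State.FIND_TARGET;
                break;
        }
    }

    int getLootCount() { return lootCount; }

    public static void main(String[] args) {
        Deque<Mob> mobs = new ArrayDeque<>();
        mobs.add(new Mob(30));
        mobs.add(new Mob(20));
        FarmBot bot = new FarmBot(mobs);
        for (int i = 0; i < 100; i++) bot.tick();
        System.out.println("looted: " + bot.getLootCount()); // looted: 2
    }
}
```

Each tick is a single switch over an enum, so the per-decision cost is effectively a couple of integer comparisons. That is the whole point: the real managers are just more states and more transitions of exactly this kind.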
An LLM-based solution for this same task, even a "weaker" one, would:
- Introduce significant latency (response time) for each decision.
- Require expensive GPU hardware to run locally, or incur API costs for cloud services.
- Add immense complexity for parsing natural-language responses back into game actions.
- Be inherently less reliable than a simple "if mob dead -> loot" check.
So, while I agree the research is "quite cool" as a proof-of-concept, championing it as a practical solution for auto-farming is where the "AI hype" label fits. The real innovation would be applying that LLM power to a problem a state machine can't easily solve, not one it already solves perfectly.