

Posted

I don't really work on L2j anymore, but from time to time I check the forums in case someone has built something cool, and I stumbled upon this:

It's actually really easy to integrate LLMs (both local, like gpt-oss, and online, like ChatGPT or Gemini) into any sort of action in L2.

Even without training a model explicitly, a good GPU and some basic LLM knowledge can take you very far. 

Here is an example of an auto-play bot I made in 10 minutes that uses gpt-oss (which is mega overkill) locally on a 5090 to do some basic farming.
As you can see from the LM Studio responses, it is fairly fast, and it uses reasoning to work out the best course of action for a given situation.
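The setup described above can be sketched roughly like this: build a small prompt from the game state, send it to LM Studio's local OpenAI-compatible endpoint, and map the reply back to a game action. Everything here is illustrative: the state fields, action vocabulary, and model name are my assumptions, not actual L2J API, and the only thing taken from LM Studio's documented behavior is the `localhost:1234/v1/chat/completions` endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: ask a local LM Studio instance which action a
// farming bot should take next. Not actual L2J code.
public class LlmFarmAdvisor {

    // Build a compact prompt from the current game state.
    // Field names and action vocabulary are illustrative.
    static String buildPrompt(String charState, String nearbyMobs) {
        return "You control an L2 character. State: " + charState
             + ". Nearby monsters: " + nearbyMobs
             + ". Reply with exactly one of: ATTACK <id>, LOOT, REST.";
    }

    // Parse the model's reply into an action token the server can act on,
    // with a safe fallback when the model rambles.
    static String parseAction(String reply) {
        String line = reply.strip().toUpperCase();
        if (line.startsWith("ATTACK") || line.equals("LOOT") || line.equals("REST")) {
            return line;
        }
        return "REST";
    }

    // Send the prompt to LM Studio's OpenAI-compatible chat endpoint.
    // Requires a running local server, so it is not called in main below.
    static String queryModel(String prompt) throws Exception {
        String body = "{\"model\":\"gpt-oss\",\"messages\":[{\"role\":\"user\",\"content\":\""
                    + prompt.replace("\"", "\\\"") + "\"}]}";
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:1234/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        return HttpClient.newHttpClient()
                .send(req, HttpResponse.BodyHandlers.ofString())
                .body();
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt("HP 80%, level 40 warrior", "Gremlin(id=1012)"));
        System.out.println(parseAction("attack 1012")); // ATTACK 1012
    }
}
```

The deliberately tiny action vocabulary is the point: constraining the model's output makes parsing trivial and failures harmless.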
 

 

Basically just a quick proof of concept. If I had time, I would make something like this for auto-play/farming or bots, since LLMs would work really nicely for fake players that can actually think instead of being completely pre-programmed. It's quite cool.

  • Like 3
Posted

Interesting. If I use one of those weaker local LLMs, one that could even run on a VPS with a few GB of RAM, would it also work?

If it's just for simple jobs.

Posted
18 minutes ago, Litch said:

Interesting. If I use one of those weaker local LLMs, one that could even run on a VPS with a few GB of RAM, would it also work?

If it's just for simple jobs.

You'll need to test it, but you need at least some decent GPU power for it to be reasonably fast. What you can also do is predictive actions: send a single, larger request, let it take some time, and instruct the model to respond with a batch of 5-10 follow-up actions, which can then be processed on the client (the L2 server).
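The batching idea above could look something like this on the server side: parse one batch reply into a queue and drain it one action per AI tick, only going back to the (slow) model when the queue runs dry. The one-action-per-line reply format is an assumption for illustration, not anything the post specifies.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of "predictive actions": one larger LLM request
// returns a batch of follow-up actions, and the game server drains them
// one per AI tick instead of querying the model every time.
public class PredictiveActionQueue {

    private final Deque<String> pending = new ArrayDeque<>();

    // Parse a batch reply, assumed to be one action per line,
    // e.g. "ATTACK 1012\nLOOT\nREST". The format is an assumption.
    void loadBatch(String reply) {
        for (String line : reply.split("\\R")) {
            String action = line.strip();
            if (!action.isEmpty()) {
                pending.addLast(action);
            }
        }
    }

    // Called every AI tick: hand back the next queued action,
    // or null when the caller should fire off the next LLM request.
    String nextAction() {
        return pending.isEmpty() ? null : pending.pollFirst();
    }

    boolean needsRefill() {
        return pending.isEmpty();
    }

    public static void main(String[] args) {
        PredictiveActionQueue q = new PredictiveActionQueue();
        q.loadBatch("ATTACK 1012\nLOOT\nREST\n");
        System.out.println(q.nextAction()); // ATTACK 1012
        System.out.println(q.nextAction()); // LOOT
    }
}
```

This hides the model's latency behind the queue: the character keeps acting on cached decisions while the next batch is being generated.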

  • Like 1
Posted

AI hype from dumb AF devs at its best.

The video shows a character doing the most predictable loop imaginable: attack, loot, repeat. You don't need a "reasoning" LLM for that. A lightweight task scheduler or even a basic script can handle it with a fraction of the resources and zero GPU overhead.

 

Spinning up a big language model to decide "hit the mob and pick up the drop" isn't clever; it is wasteful. It's like hiring a rocket scientist to flip a light switch: impressive hardware, pointless job.

Posted (edited)
1 hour ago, Mobius said:

AI hype from dumb AF devs at its best.

The video shows a character doing the most predictable loop imaginable: attack, loot, repeat. You don't need a "reasoning" LLM for that. A lightweight task scheduler or even a basic script can handle it with a fraction of the resources and zero GPU overhead.

 

Spinning up a big language model to decide "hit the mob and pick up the drop" isn't clever; it is wasteful. It's like hiring a rocket scientist to flip a light switch: impressive hardware, pointless job.

 

Ofc it does 🙂. All it was trained on was farming for levelling. It's not about doing something unpredictable; it's about potential.

Feed it a bigger context and train it on class-specific fighting patterns, buffing, item values, trading, chatting in L2 lingo, reading trade chat, party-based farming, crafting goals, etc., and you have something that would take a really long time to achieve by coding a normal deterministic bot.

 

Add to that all of the work being offloaded to a model instead of the gameserver, and you have a much better solution that feels way more natural. Ofc you can do pretty much everything with a state machine, but you'd have to write insane amounts of code. It's a waste of time.

Don't be blinded by your hate for AI. There is considerable potential here.

Edited by Elfo
Posted

Don't get me wrong, I have no hate for AI. The potential for LLMs in gaming, especially for creating dynamic NPC dialogues or complex, adaptive game masters, is immense. The point of my previous post wasn't to dismiss AI as a whole, but to question its practical application for this specific, solved problem.
 

However, the example given (basic farming) is the worst possible use case to demonstrate this potential. It's the equivalent of using a fusion reactor to power a desk lamp. The overhead is astronomical compared to the task.


The core of my argument is about efficiency and the right tool for the job. For the predictable, loop-based behavior of auto-farming, a state machine is not just adequate; it is superior. It is lightweight, incredibly fast, reliable, and consumes negligible resources.
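The loop being argued about really is this small when written deterministically. Here is a minimal sketch of attack/loot farming as a three-state machine; the states and the boolean inputs are illustrative, not the server's actual AutoPlayTaskManager code.

```java
// Minimal sketch of the deterministic alternative: the attack/loot
// farming loop as a three-state machine. Illustrative only, not the
// actual AutoPlayTaskManager implementation.
public class FarmStateMachine {

    enum State { FIND_TARGET, ATTACKING, LOOTING }

    State state = State.FIND_TARGET;

    // One tick of the loop; the booleans stand in for real world queries
    // (target acquired, target HP == 0, drop picked up).
    State tick(boolean targetFound, boolean targetDead, boolean lootPicked) {
        switch (state) {
            case FIND_TARGET -> { if (targetFound) state = State.ATTACKING; }
            case ATTACKING   -> { if (targetDead)  state = State.LOOTING; }
            case LOOTING     -> { if (lootPicked)  state = State.FIND_TARGET; }
        }
        return state;
    }

    public static void main(String[] args) {
        FarmStateMachine fsm = new FarmStateMachine();
        System.out.println(fsm.tick(true, false, false));  // ATTACKING
        System.out.println(fsm.tick(false, true, false));  // LOOTING
        System.out.println(fsm.tick(false, false, true));  // FIND_TARGET
    }
}
```

Every decision here is a branch that costs nanoseconds on a CPU, which is the efficiency contrast being drawn with a per-decision model call.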
 

To prove it's not about the volume of code but the efficiency of execution, here is the entirety of the auto-play logic for my server:
AutoPlayTaskManager (~400 lines of code):
AutoUseTaskManager (~470 lines of code):


This code provides full, retail-like auto-play support for all classes, including offline play.
It runs on any standard VPS without a dedicated GPU, using a tiny fraction of CPU cycles.


An LLM-based solution for this same task, even a "weaker" one, would:
- Introduce significant latency (response time) for each decision.
- Require expensive GPU hardware to run locally, or incur API costs for cloud services.
- Add immense complexity for parsing natural-language responses back into game actions.
- Be inherently less reliable than a simple "if mob dead -> loot" check.


So, while I agree the research is "quite cool" as a proof-of-concept, championing it as a practical solution for auto-farming is where the "AI hype" label fits. The real innovation would be applying that LLM power to a problem a state machine can't easily solve, not one it already solves perfectly.

Posted

Yeah, that's fair. I'll create a full AI-based fake player engine out of this to see how far it can go.
