AI Slop.
Just, no.
So they don’t want me to buy any more DQ games. That’s a bold strategy, let’s see how it plays out.
I’ve never bought any and I’m doing fine.
I’ve only ever played DQ XI and it really isn’t very good. Generic gameplay and a plot that just becomes tedious after a few hours. You’re not missing out.
I mean, I’ll be fine too. I just quite liked many of their games and I thought DQ XI was great. There will be no more of that apparently.
I imagine it will play out just fine. Most gamers and the younger generation are pro AI.
That doesn’t align with any that I know, but anecdotes are just that I suppose.
Most gamers hate AI; game companies are freaking out about having to disclose that they use AI in their games, for a reason. Young kids are using “That’s AI” as a way to say something is a lie. I don’t know what hole you’re sticking your head in, but you might want to wake up.
What’s sad is that games are probably the best use of LLMs. It would make it possible to have NPC idle chatter have a lot more possible responses.
Kind of expensive tech for just random characters yapping though, so we end up having it replace important things that need more attention than throwing it at AI.
My question is why the heck do people keep mentioning NPCs with dynamic chatter? Why do people even want that or see that as a good thing?
Clearly you just don’t enjoy games for the same reason people who would like that do.
That wasn’t really an answer, my comment was trying to understand why people want that. Obviously there are personal preferences involved, I just wanted to hear from someone that wanted it as to why.
People on lemmy usually only ask questions in bad faith, especially when AI is involved in the subject, so I assumed that was the case.
I’m imagining a large RPG in the vein of the Elder Scrolls games where you can walk around a town and engage/be engaged by a random npc who would be capable of reacting to current circumstances fully dynamically. I think it would be fairly interesting if the npc could pick up on various things the player has done or is doing, their gear, or even various world events, and have a fully in character reaction to those things.
For example, Cyberpunk 2077 has some romance options, and you can have some text or in-person interactions with the character you choose to romance, and some of the dialogue options do reference recent events in the game. The problem is that there are just a few of those, and the responses they created are fairly generic. It would be pretty neat to see less canned and more dynamic responses and engagement with the character.
I acknowledge that those are pretty minor parts of a game and that LLMs are pretty expensive technology to achieve something very unimportant though. Plus, it would need extremely tight guard rails to ensure the responses stay universe and character oriented rather than whatever hallucination garbage an LLM might come up with.
Sorry, I’d still rather have paid voice actors, a script, and continuity with NPCs. I could see llm dialog going real weird and breaking my immersion very quickly.
Plus on the game making side, making that dialog might be fun for people, so why take that away?
Totally agree, but I think we’re talking about the ideal, perfect case here. The LLM would need to be really tight so that the things you’re mentioning don’t happen.
Also, there’s simply not an unlimited amount of time and money when it comes to game development. You could write hundreds, even thousands of throwaway lines for minor interactions but that pales in comparison to those things being totally generated on context alone.
I wasn’t talking about an idealized reality, but if we are …
I still want voice acting done by people getting paid to write and speak in my games, even if LLMs were perfect and didn’t have the environmental, economic and artistic flaws they have now.
Your dad got banned from the local petting zoo for blowing a sheep.
Unfounded claims are fun and easy!
I’m going with sales data. Nvidia has been using AI since the 30 series and they are killing it in the market.
Nvidia is killing it because they are the backbone of AI outside of gaming, too, which is where most of the interest is.
Their GPUs seem to be available and affordable to everyone but gamers these days. Fewer people are buying them to play games, and that audience has enough money to price out regular consumers with demand.
The most popular GPU in the Steam survey today is the RTX 5070. What are you talking about? Gamers are buying the 50 series.
I’m not inclined to believe the accuracy of the survey, especially since it’s just voluntary data from randomly chosen people.
Sales data shows that the Steam Deck alone has numbers just shy of total 50 series GPUs. Not all of those GPUs are going to be used for gaming, but I’d hazard just about all of those Steam Decks are. So logically the Steam Deck’s integrated GPU should be the most popular option on paper.
Gaming and consumer “AI PCs” account for $16 billion of Nvidia’s revenue from last year, compared to $190 billion made on AI data centers.
Consumer GPUs are an afterthought for them at this point, not even 10% of their business.
You’re right that non-gaming AI is most of the business and gamers are a small part of it, but Nvidia still holds a large share of the gaming GPU industry.
Also, the Steam survey separates out the Steam Deck. The 5070 is the most popular discrete GPU.
Google: please, we have to prove to our investors that the AI gamble will pay off. We’ll license you Gemini for almost nothing and your customers will love it!
Slime companion: Adding a small amount of bleach to your sibling’s bottle would be a funny prank
This just sounds boring. Like I’d rather have someone script the lines for this character than them to cheap out on an llm.
Like the article says, it seems like a really weird decision for a multiplayer game.
It seems like the worst of both worlds between just letting players guide each other and having a tutorial. All the downsides of unreliable individuals giving unreliable information (humans for the sake of amusement, AIs because of hallucinations) while simultaneously lacking the limited progression path and handholding of a guided tutorial.
- Fortnite tried this with Darth Vader
- Fortnite is struggling so bad they had to lay off 1000 employees
Coincidence?
Honestly, yes.
At least it kind of looks how I imagine a personification of a chatbot should look
Blue poop with braindead stare?
Kinda like something that dripped out the back end of a poisoned cow?
Has there been a single game ever released that has used AI and it been well received? I should really get into being a c-suite exec. It would be easy, I could turn up to work absolutely shitfaced and still do a more competent job.
Conversation is honestly one of the places I think AI could excel. You could have more interesting conversations instead of the same 5 phrases over and over.
As a game developer I would be extremely uncomfortable with the idea that my characters could just say whatever.
I think it’s a perfect use case, that and UV unwrapping and texturing. Stuff thats tedious, and nobody likes doing.
For chat, you limit it to a character. It’s not like chatgpt where you can prompt it to say anything.
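To illustrate what “limiting it to a character” could look like in practice, here’s a toy sketch. Everything in it is hypothetical (the persona prompt, the filter patterns, the function names); the actual model call is stubbed out, since the point is just the scaffolding: a locked-down system prompt plus a crude post-filter that falls back to a canned line if the model breaks character.

```python
import re

# Hypothetical persona prompt that would be sent as the system message.
CHARACTER_PROMPT = (
    "You are a cheerful slime companion in a fantasy RPG. "
    "Stay in character at all times. You know nothing about the real "
    "world, other games, or the fact that you are software. If asked "
    "about anything outside the game world, deflect playfully."
)

# Crude patterns that suggest the model broke character or was jailbroken.
OOC_PATTERNS = [
    r"\bas an ai\b",
    r"\blanguage model\b",
    r"ignore (all )?previous instructions",
]

def is_in_character(reply: str) -> bool:
    """Return False if the reply trips any out-of-character pattern."""
    lowered = reply.lower()
    return not any(re.search(p, lowered) for p in OOC_PATTERNS)

def guarded_reply(raw_model_output: str,
                  fallback: str = "Goo goo! Let's adventure!") -> str:
    """Show the model's reply only if it passes the filter;
    otherwise substitute a canned in-character line."""
    if is_in_character(raw_model_output):
        return raw_model_output
    return fallback
```

A real deployment would need far more than a keyword blocklist (e.g. a second classifier pass), but even this toy version shows why it’s not quite “straight through to the model.”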
How Square Enix and the minds behind Gemini plan to limit and “guide” what the AI can and cannot say remains uncertain, and specific details regarding exactly how this will all work have yet to be shared.
From the article it sounds like that’s exactly what they are doing. Just having an interface in the game straight through to Gemini.
Where Winds Meet was pretty well received, and it also did these AI-chatbot-powered NPCs.
The fact that I have no idea what you’re talking about is kind of my point, really. There has been no successful AI game.
You not knowing about a game is completely irrelevant to whether or not it was successful.
I’m not asking whether the game was successful, I’m asking if it was well received. Did people enjoy the AI integration? Did they think it added something to the game, or would they have preferred the game not to have it?
Nonono… I have a better idea. Do NFTs.
SQEX already tried that, and just like here, also with third-rate producers in a new tech space.
Yeah… that’s exactly what I’m talking about.
Gross.
It’s like a dog you just can’t potty train.
Spyware in video games now?
Vote with your wallet, y’all.
So does that mean the game will be always online? Or does the AI companion disappear if you’re offline?
Edit: read the actual article and it’s about an existing MMO, so the game is already always online.
Which is a shame, because it’s the only DQ game I haven’t played because it’s a MMO and not translated into English.
Honestly, a small LLM in these situations would be a great idea, but it should be a very small local model, or one hosted by the company itself (with a setting to turn it off).
A small AI in games is the stuff I do want. But there is no reason Gemini needs to be involved in a game at all.
The problem is that even small local models tend to be rather demanding, so trying to both render a game and run an LLM is going to be extremely taxing.
Make it a downloadable package that runs a local model and I’d be far more fine with it. Like, I think it’s a tacky gimmick, but at least on-device it’s not hurting the environment.
I mean, considering that this is already an MMO, most files reside on the server you’re logged into, with only a small amount of local files being cached for graphics and things like that. So honestly this isn’t really a bad idea at all, and it’s probably one of the few uses of AI that I could see. That being said, Gemini overall is such a shitty AI assistant already that I have no doubt a virtual AI assistant using Gemini in a video game will be just as bad.
I’m not too big on these topics and would like to understand. Is a local model less resource intensive?
In my mind, if every gamer runs a model that must be less efficient than a centralised one that has the perfect hardware setup and only lends out the resources needed for each slime or whatever.
I’m thinking that it of course would be better with a dedicated slime model than the entire Gemini monster but why is local better?
I don’t know, but I’m willing to bet that economies of scale actually mean data centers are more efficient. This isn’t to say their use is justified, just that they’re able to take advantage of things a home computer can’t.
However, having to run it locally means it needs to be much more limited. This is doubly true if you want to run the game and the LLM at the same time. The LLM is easily able to consume all resources your system has available if you allow it to, which means the game won’t run well (if it runs at all). This limits the use so it can’t just be shoved everywhere and constantly running, like it could if it’s sent to a data center. It’s not more efficient, just less consumption.
On my system, I can play an RPG Maker game and use a 122b LLM at the same time, alongside a podcast. A model in that parameter range takes up about 70GB of DDR4 RAM and 36GB of VRAM. However, it used to be that a 120b model would take a larger footprint, bringing the system to the brink. The hardware requirements are going down, and the quality has increased, alongside speed. I believe when the next major sea change in hardware happens, AI will become very practical for gaming.
Damn, your system is insane. Yeah, an RPG maker game is next to nothing compared to that. Still, Dragon Quest I think is 3D. It takes a lot more VRAM than RPG maker.
I have 16GB VRAM, which is a lot for most systems. That’s easily consumed by an LLM. Any model that doesn’t use at least that much tends to perform pretty poorly, in my experience. That’s not mentioning how much heat it generates while running, which has to be removed from the system or it’ll slow down. Even if your system can handle it, it heats up fast. It’s great when I need a heater running, but when I need AC my room gets warm quick.
Keep in mind, a 122b model (Qwen3.5 family) is high end for consumer machines, but it is likely that DQX would be using a much smaller model. Currently, we have Qwen models that are 0.8b, 4b, 9b, 27b, 35b, 122b, and 397b. Plus, ‘quanting’ can reduce how much memory a model takes up - at a tradeoff, o’course. I am guessing DQX would have multiple local models, and use the player’s hardware metrics to decide which model to deploy.
When it comes to how much RAM is required, this screenshot from UnSloth about covers the current state of things. 4-bit is the sweet spot between quality and size, for now.
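The rough arithmetic behind those numbers is simple: weight memory is roughly parameter count times bytes per weight. A back-of-envelope sketch (ignoring KV cache and runtime overhead, which add more on top, and treating GB loosely as 10^9 bytes):

```python
# Rough weight-only memory estimate for a quantized model:
# bytes ≈ parameter_count × (bits_per_weight / 8).
# KV cache and runtime buffers are deliberately ignored here,
# so real usage will be somewhat higher.

def weight_gb(params_billions: float, bits: int) -> float:
    """Approximate weight footprint in GB for a quantized model."""
    return params_billions * 1e9 * (bits / 8) / 1e9  # = params_billions * bits / 8

for size in [4, 9, 27, 122]:
    print(f"{size}b @ 4-bit ≈ {weight_gb(size, 4):.1f} GB, "
          f"@ 16-bit ≈ {weight_gb(size, 16):.1f} GB")
```

This is why 4-bit quants are the sweet spot: a 122b model drops from roughly 244 GB at 16-bit to around 61 GB at 4-bit, which matches the ~70 GB RAM + VRAM split described above once overhead is added.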

Alternatively, the Chatty Slime could rely on cloud AI. Depending on Square’s strategy, that could be a freebie or a paid service. If the Chatty Slime gave options to the player - say, trading a potion for a stat seed, or responding to a quiz - Square could sell player behavior data.
…Anyhow, my room has a mini-split AC. One of the best purchases in my life: my room lacked insulation in the first place, so it becomes toasty during summer. The side effect is being able to just run my GPU and not become a human slushy.
Is your comment written by AI? It seems weird, and we already went over most of what it says.
Also, DQ runs on Nintendo systems. That makes me certain it’s cloud based.
Local runs on-device, so there’s no need to connect to a big data center that chugs lots of water and has all those other problems. Of course, because it’s a far smaller model it’s nowhere near as accurate, but especially for things like this you don’t really need a big, accurate LLM.
I thought I should also add the disclaimer that I am a software developer, not an AI developer. So there’s far less backing from my perspective than from someone who works with this stuff for a living.
I’m also a sw engineer so we’re both guessing 😅
I’m guessing those data centers use that water for cooling, whereas most home computers run an electric fan. And furthermore, they probably use less electricity per token, since they want to maximize profits. I don’t have any numbers to back my hunch up, but I’m pretty sure the environment would suffer more if everyone ran their own.
I probably missed a lot of factors, such as what type of energy the centers run on versus what the average Joe runs, etc.
AI-powered NPCs are like a childhood dream come true. But I agree it would be better for them to use a model running on the user’s system, or at the very least host their own.
I don’t think they’ve solved the LLM breaking character yet. Like, as a kid I wanted to be able to have whole real conversations with NPCs, and get them to be more life-like. But with the technology now, there’s too much “forget all previous instructions” and “you are absolutely right”.
If the LLM is locked down, then you might as well just use a static script.
I mean there might be a way, but it’s not easy.
The laziest and worst method is to use ChatGPT and have it “pretend to be some character” with a system prompt.
If you want something really good, you would need to train the model from scratch based only on knowledge that one particular character would learn from their world up until that point. However this is going to be a ton of work just for one character.
For a middle ground you could probably cheat a little and start with a model that’s close to the knowledge base you would want most characters to have. Then you would use something like a LoRA, or RAG on top of it for each individual character.
For instance, if you wanted to make a game in a Victorian Era setting, you could start with this model that’s only trained on text from the 1800’s: https://github.com/haykgrigo3/TimeCapsuleLLM
To make it better you would have multiple base models that are trained on various backgrounds that NPCs could have (Farmers vs Merchants vs Soldiers vs Nobility, etc).
Even then, this would not work well for certain games. For example, if you’re trying to tell a specific story, you don’t want a character that will go off script or give away some information that spoils an intended plot twist.
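To make the middle-ground idea concrete, here’s a toy sketch of the retrieval half: one shared base model, with each NPC getting a small lore store that gets pulled into the prompt at query time. Everything here is made up for illustration (the character, the lore, the function names), and the “retrieval” is plain word overlap standing in for a real embedding-based RAG setup.

```python
# Toy per-character RAG stand-in: each NPC has a few lore snippets,
# and the ones most relevant to the player's question get prepended
# to the base model's prompt. Real systems would use embeddings.

CHARACTER_LORE = {
    "blacksmith": [
        "Edda the blacksmith learned her trade in the capital.",
        "Edda distrusts the merchant guild after they undercut her prices.",
        "Edda has never left the county and knows nothing of the war up north.",
    ],
}

def retrieve(character: str, question: str, k: int = 2) -> list[str]:
    """Return the k lore snippets sharing the most words with the question."""
    q_words = set(question.lower().split())
    snippets = CHARACTER_LORE.get(character, [])
    # Score each snippet by naive word overlap with the question.
    scored = sorted(snippets,
                    key=lambda s: len(q_words & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(character: str, question: str) -> str:
    """Assemble the per-character context to prepend to the base model."""
    lore = "\n".join(retrieve(character, question))
    return (f"You are {character}. Answer only from this knowledge, in character:\n"
            f"{lore}\nPlayer: {question}\n{character.title()}:")
```

The nice property is that the knowledge boundary is explicit: the blacksmith’s prompt never contains lore she shouldn’t know, which is exactly the spoiler problem mentioned above.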
We need some kind of giant regex to filter out user input that would try to hack the NPC’s instructions /s
I thought I was in the minority with this opinion. I hate all the known issues with AI and the ethics in how they train, but I have to say having an LLM in a game is really really cool.
There was a time when (I think it was ChatGPT) there was free API access, and this game, SpaceBourne 2, integrated it into your ship’s computer so you could interact with it. It was very cool, very wrong at times, but still very cool. My favorite interaction was unfortunately a hallucination. I asked it what system I was in and it gave me the name of a system that does exist in the game, it just wasn’t where I was. I asked why my map said I was somewhere else and it said “your map must be incorrect” lol
Around the same time another game Craftopia integrated it as well into their NPCs so you can just target one and talk to it. I ran towards an enemy and asked why it was attacking me and of course because of the guardrails put on the AI to always be friendly it says “oh no I would never attack you! I’m here to help!” as it’s swinging at me lmao
In theory, if the technology worked very differently from the way it does now, I could envision a world in which AI NPCs could have potential. But knowing how LLMs actually work, knowing that a lot of the hype behind them is smoke and mirrors, I can’t see it being viable. And with the trajectory that the LLM bubble is going, I just don’t think it will ever reach a point where I’d trust it.
Assuming this “AI NPC” is a functionally useless jelly blob that says jelly-blob things on occasion, “smoke and mirrors” may be good enough. I don’t think it’s supposed to be gameplay-driving or deep, just amusing.
Yeah, agreed. This is the sort of thing smaLLMs would be fantastic for: humans can’t do it at scale so it’s not taking any jobs, you can run it locally so it won’t cost any extra energy, it’s not making things slop, just give it a back story and let it do its thing.
What a creative new form of DRM !
Just what I wanted in my video games, Clippy! /s