Barotrauma

AI NPCs
417 comments
RubbingMyAxe [author] Oct 26 @ 8:18am
@Voids Yes, if your computer is good enough. I like to use LM Studio for the orders prompt since it is always short and not very complicated.
Voids Oct 26 @ 2:48am
So, what I'm thinking: can I use LM Studio to host the AI and skip the API key?
Leechard Oct 25 @ 8:40pm
@RubbingMyAxe
Okay, I will give it a try. I'm worried that these two mods might be incompatible. Your mod has brought me a lot of enjoyment, thanks!
RubbingMyAxe [author] Oct 25 @ 8:09pm
@Leechard Yeah, the name thing can happen if it's a really unique and famous name.

Making him interact with an item would require some kind of change to their AI. I think the Smarter Bots mod does it with alcohol for bots that have specific talents.
Leechard Oct 25 @ 6:40pm
Coincidentally, I recruited a bot named Einstein, and he seems to truly believe he is the historical Einstein; he even teaches me how to make rockets orbit Europa. :D But he also seems to have realized that he is an AI in a game and said many despairing yet philosophical things. :-O
Headshotkill Oct 23 @ 2:54pm
ok
RubbingMyAxe [author] Oct 23 @ 2:06pm
It does support longer conversations; it's just that you have to specify who you're talking to each time.


This will work:
!artie how are you?
Artie Dolittle: I am doing well. How about you, captain?
!artie i am doing good, are you ready to depart?


This will not work:
!artie how are you?
Artie Dolittle: I am doing well. How about you, captain?
i am doing good, are you ready to depart?
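The addressing rule in these examples can be sketched as a tiny parser (a hypothetical helper, not the mod's actual code):

```python
# Sketch of the "!<name> message" addressing convention described above.
# parse_chat is a hypothetical helper, not taken from the mod's source.
def parse_chat(line: str):
    """Return (target, message) if the line addresses an NPC, else None."""
    if not line.startswith("!"):
        return None  # untargeted lines are never routed to the AI
    target, _, message = line[1:].partition(" ")
    if not target or not message:
        return None
    return (target.lower(), message.strip())

# Works: the target is named every time.
assert parse_chat("!artie how are you?") == ("artie", "how are you?")
# Does not work: no "!<name>" prefix, so the AI never sees it.
assert parse_chat("i am doing good, are you ready to depart?") is None
```

The key point is that there is no conversational state tying a bare follow-up line to the previous speaker; every message must carry its own target.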
Headshotkill Oct 23 @ 1:42pm
Ok so it doesn't support longer conversations but is more focused on issuing orders to specific people.
RubbingMyAxe [author] Oct 23 @ 1:32pm
Are you using voice or text? If voice, you have to say their name before your message; that's how the mod knows who you're talking to. Same with the text commands: you have to use !<name> before the message.
Headshotkill Oct 23 @ 12:54pm
So it's kind of working now, but very limited. I can call someone over to me with a very direct order like "Renato come here" and they will respond, but when they ask what's up and I reply with a random question like "Where are you from?" or "Check for any leaks", they won't respond.
RubbingMyAxe [author] Oct 23 @ 12:46pm
Yes, using the same information for other parts is fine if you are using the same endpoint. I just have them separated in case someone wants to host locally or use different models for different features.

You can have the speech to text set to local if your PC is good enough, but using the API is much faster and more accurate.

And just leave Memory turned off at this point, it's not very useful.
Headshotkill Oct 23 @ 12:32pm
Should I use the same API key for all parts of this mod? (chat, orders, memory, text to speech,...) Or should I use a different one for each?
RubbingMyAxe [author] Oct 23 @ 12:01pm
@Headshotkill If you're using OpenAI directly and not going through OpenRouter then you just need to enter the information in the boxes and not select anything from the dropdown. Selecting from the dropdown changes the endpoint to OpenRouter.

It should look like this: https://i.imgur.com/HZkYITg.png

Then enter your API key.
Headshotkill Oct 23 @ 11:04am
I may be stupid, but I just can't get this mod to work. I've installed Lua and the mod itself as described and have access to the AI mod settings. I selected the OpenAI GPT-4o mini model, got an API key from OpenAI, and put it in the box, but it still won't work.
Matthew_Mei Oct 22 @ 10:34pm
@RubbingMyAxe
Oh, I get it now. Sorry, I just started playing this game recently and don't know much about these things.
RubbingMyAxe [author] Oct 22 @ 2:58pm
@Matthew Whenever the game updates you have to update Lua for Barotrauma and reinstall client-side Lua.
Matthew_Mei Oct 22 @ 8:52am
The mod stopped working after the game update.
RubbingMyAxe [author] Oct 21 @ 11:11pm
Glad you got it working, let me know if you have any other problems.
Matthew_Mei Oct 21 @ 9:31pm
@RubbingMyAxe
I found the issue. I needed to manually enable API calls in the F3 console. Now it's running perfectly. Thank you so much for helping me resolve this. I really appreciate it! :D
RubbingMyAxe [author] Oct 21 @ 8:57pm
Endpoint: https://api.deepseek.com/chat/completions
Model: deepseek-chat

Both of those look correct to me. What error do you get when using those?
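The settings above can be sanity-checked outside the game with a minimal OpenAI-style request (a stdlib-only sketch; it assumes the DeepSeek API is OpenAI-compatible, and `build_request` is a hypothetical helper, not part of the mod):

```python
import json
import urllib.request

# Endpoint and model name come from the thread above.
ENDPOINT = "https://api.deepseek.com/chat/completions"
MODEL = "deepseek-chat"

def build_request(api_key: str, user_message: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (not yet sent)."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

# To actually send (needs a real key and network access):
#   with urllib.request.urlopen(build_request("sk-...", "Say hello.")) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

If this returns a normal chat completion from outside the game, the key and endpoint are fine and the problem is on the mod/game side (as it turned out to be here, with API calls disabled in the console).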
Matthew_Mei Oct 21 @ 8:50pm
@RubbingMyAxe
To be honest, I don't really understand these settings either. Here's what I filled in, but I'm not sure if it's correct, and the mod still isn't working.

For the API, I tried "https://api.deepseek.com/chat/completions" and "https://api.deepseek.com" .
For the Model, I tried "deepseek-chat".
The key is the one provided on the official website.

However, it's still not working. I'm not sure if what I entered is correct—I just followed the instructions from "https://api-docs.deepseek.com/zh-cn/" .
RubbingMyAxe [author] Oct 21 @ 8:22pm
@Matthew Don't bother with the dropdown, just enter the endpoint and model into the boxes. The dropdown is just there for less tech-savvy people to be able to easily select free models and get started quickly.
Matthew_Mei Oct 21 @ 8:08pm
@RubbingMyAxe It's very strange. I tried using "https://api.deepseek.com/chat/completions" , but when I click REFRESH, it still lists the AI language models from "https://openrouter.ai/api/v1/chat/completions" . When I select one, the API automatically changes to "https://openrouter.ai/api/v1/chat/completions" .

The reason I don't use OpenRouter is that it doesn't seem to accept RMB payments. I'm from China, which makes it very inconvenient for me. I'm also using a translator to communicate with you. Moreover, DeepSeek performs better in understanding Chinese compared to other AI language models. As you know, meanings can get lost in translation, especially with Chinese.

That's why I chose DeepSeek and purchased an API key on their official website (https://api-docs.deepseek.com/zh-cn/) . However, your mod doesn't seem to be compatible with it, which is really frustrating.
RubbingMyAxe [author] Oct 21 @ 11:15am
@Matthew Have you tried using https://api.deepseek.com/chat/completions as the endpoint? From what I am seeing, it should be compatible but I haven't tested it myself.
Matthew_Mei Oct 21 @ 7:42am
Why can't I use the DeepSeek official website's API? It doesn't work at all in the mod.
Pawsy Oct 13 @ 8:41pm
Do I have to pay OpenRouter even for a single first prompt on an extremely light model? :(
Diyar Sep 29 @ 11:22am
I've joined the discord.
RubbingMyAxe [author] Sep 29 @ 11:10am
@Diyar Are you sure it didn't paste any spaces before/after? I can't remember if I am stripping those out.

If you want to join the discord it would be easier to figure out what's going on: https://discord.gg/uWpFN4NPx3
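The stray-whitespace problem mentioned above is easy to guard against when handling pasted keys (a generic sketch, not the mod's actual code):

```python
# Pasted API keys often carry an invisible leading/trailing space or newline,
# which makes an otherwise valid key fail with "invalid API key" errors.
# clean_api_key is a hypothetical helper illustrating the fix.
def clean_api_key(raw: str) -> str:
    """Strip surrounding whitespace that clipboard pastes often include."""
    return raw.strip()

assert clean_api_key(" sk-example-key \n") == "sk-example-key"
assert clean_api_key("sk-example-key") == "sk-example-key"
```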
Diyar Sep 29 @ 10:53am
Hi, it says I don't have a valid API key even though I copied my API key. How do I fix this?
Diyar Sep 29 @ 10:41am
Hi, I added my API key and applied it, but it doesn't work. I used one of the free models.
RubbingMyAxe [author] Sep 23 @ 7:56am
@common Yes, personality trait is included in the prompt.
common Sep 23 @ 3:11am
Does the character's personality trait (e.g. tough, joker) influence the responses?
The Pale Knight Sep 13 @ 2:29am
@mohawk_mark This mod isn't quite a normal mod on the workshop and requires a bit of setup to work. I'm sure RubbingMyAxe or another member in the discord would be glad to help you set it up. :chivsalute:
classic-zombie Sep 10 @ 7:43am
I want to talk with an NPC, but he doesn't say anything in chat.
classic-zombie Sep 10 @ 7:43am
also
RubbingMyAxe [author] Aug 24 @ 11:20am
@metroload What isn't working? If you need help, can you join the discord and I can try to figure out what's wrong? https://discord.gg/uWpFN4NPx3
classic-zombie Aug 24 @ 6:05am
Doesn't work :(
RubbingMyAxe [author] Aug 19 @ 4:44am
I've been thinking about this. I've considered role-based targeting, so "mechanic" or "security" would apply any commands to all of your crew with that role, but only one of them would respond in the chat so that it doesn't use up a lot of tokens. I could do the same with "everyone" or "all".
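The role-based targeting idea could be sketched like this (purely illustrative; the crew names and helper are hypothetical, and none of this is in the mod yet):

```python
# Hypothetical sketch of role-based targeting: "mechanic" orders every
# mechanic, but only one crew member answers in chat to save tokens.
CREW = [
    {"name": "Artie", "role": "mechanic"},
    {"name": "Renato", "role": "mechanic"},
    {"name": "Dola", "role": "security"},
]

def resolve_targets(target: str):
    """Return (crew members to order, the single one who replies in chat)."""
    if target in ("everyone", "all"):
        matched = list(CREW)
    else:
        matched = [c for c in CREW
                   if c["role"] == target or c["name"].lower() == target]
    responder = matched[0] if matched else None
    return matched, responder

ordered, responder = resolve_targets("mechanic")
assert [c["name"] for c in ordered] == ["Artie", "Renato"]
assert responder["name"] == "Artie"  # only one replies, saving tokens
```

Splitting "who gets the order" from "who replies" is what keeps the token cost flat no matter how many crew members match.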
Seti Aug 19 @ 4:25am
If it would be possible, could you add a target for chat (I mean like how you type a name for it to work) that would target everyone who is part of the crew? Or the closest ones to you, or something like that?
RubbingMyAxe [author] Aug 18 @ 11:09pm
Sorry I missed your message earlier. There are full logs of the responses from the various APIs in the root of the mod folder; sometimes, if a response isn't formatted the way my code expects, it doesn't display in the console properly, but it should be in those files. Turning on some of the debug options in the AI NPCs Options screen can help too.

Glad you got it working and are enjoying it. I'm working on trying to put more contextual information into the prompt instead of just dumping everything into it each time, so I may use that to add some crew information to it.
Seti Aug 18 @ 2:53pm
I got it working and just want to say BRAVO. I expected it to be very basic, but you integrated it so well! The AI is very aware of the missions and what exactly is going on. Also, just for other people: I really recommend putting crew details into the prompt; it makes for much more interesting characters.
Seti Aug 18 @ 8:53am
Urgh, this isn't specifically an issue with your mod, but it always says the API provider returned an error. Is there a longer log you store somewhere, or is there no way to test what it is?
RubbingMyAxe [author] Aug 15 @ 5:43am
Found a couple of small issues with outpost NPCs, so I will be doing a hotfix later today.
RubbingMyAxe [author] Aug 15 @ 4:31am
@mman Yeah, it depends on your hardware. Context length is also important: you need enough to handle some conversation history and mission text, which can go over 4k.

If you don't have a powerful PC you could still host a local model and offload the Orders onto it, since it doesn't use many tokens. That way you would use less tokens and API calls on the endpoint you use for chat. I've used mistralai/mathstral-7b-v0.1 when local hosting and it seems to work well.
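The offloading setup described here boils down to pointing the two features at different endpoints (illustrative values: the field names are hypothetical, and LM Studio's default local server at port 1234 is assumed):

```python
# Illustrative split: chat goes to a hosted API, while the short Orders
# prompts go to a local model so they cost no hosted tokens.
# Field names are hypothetical, not the mod's actual config keys.
settings = {
    "chat": {
        "endpoint": "https://openrouter.ai/api/v1/chat/completions",
        "model": "YOUR_HOSTED_MODEL",
        "api_key": "YOUR_KEY",
    },
    "orders": {
        # LM Studio's OpenAI-compatible server defaults to port 1234.
        "endpoint": "http://localhost:1234/v1/chat/completions",
        "model": "mistralai/mathstral-7b-v0.1",  # local model from the thread
        "api_key": "",  # local servers usually need no key
    },
}
```

Because both endpoints speak the same OpenAI-style protocol, the mod's request code doesn't need to care which feature is local and which is hosted.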
mman Aug 15 @ 4:09am
Works well with LM Studio; use a small model or it will take more than 15 seconds.
RubbingMyAxe [author] Aug 14 @ 7:50pm
Finally pushed the update! Let me know if you have any questions! I am monitoring the mod's discord channel here: https://discord.gg/uWpFN4NPx3
RubbingMyAxe [author] Aug 11 @ 10:34am
Also local hosting with LM Studio works on the current workshop version.

If anyone wants to help test the latest update I am putting versions of it up in the discord: https://discord.gg/uWpFN4NPx3
RubbingMyAxe [author] Aug 11 @ 10:12am
@mman I am not sure if it helps, but if you go to page 10 of the comments here, Trish said they got it to work with local hosting ollama. The current workshop version only works with an API key, but I am working on a big update that will remove that.
mman Aug 11 @ 9:57am
So I'm using Ollama for local hosting on a server, and it doesn't need an API key. Is there a way to remove the API key requirement?
mman Aug 11 @ 7:41am
Sorry if this is a stupid question, but can you use a local LLM?