RimWorld

EchoColony
Kobold Toppy 'Unrecognized response format'
Even after the two updates you made to try to resolve the issue, I can still only see the pawn's output in the Kobold console.

Here are a couple of prompts and responses from Kobold's console, given after talking inside Rimworld:
https://pastebin.com/PBhHe5vD
https://pastebin.com/23Tknce1

There seems to be a LOT of extraneous data in there, which is probably for the benefit of the AI the mod was originally designed for.
Kobold, however, appears to take it all as plain text, including the 'model' section from the mod's config.
I also recently added the 'Speak Up' mod, so the casual chats the colonists have with each other are more expressive, which may make them harder for the AI to interpret (vanilla's simple interactions are likely easier to parse than the actual conversations that mod generates).
Last edited by Darian Stephens; Apr 19 @ 11:05am
Showing 1-12 of 12 comments
Gerik Uylerk [developer] Apr 19 @ 4:04pm
Alright. I'm not very familiar with Kobold, but after installing KoboldCpp version 1.88 and loading the toppy-m-7b model, I finally started getting positive results.
Try again and let me know if you can now see your colonists' replies.

https://i.imgur.com/EnnIMJe.png
Last edited by Gerik Uylerk; Apr 19 @ 4:04pm
Yeah, I'm getting the output, now!
I just need to figure out how to get it to output things properly, since I just had it produce quite a few lines of dialogue between itself and me, and it keeps repeating the 'stay in character' instructions as the first line of the output:

https://i.imgur.com/0qMmfbp.png
For context, I only gave two prompts there: one at the top, and one below. The parts with no line break between them were all from the AI.
Last edited by Darian Stephens; Apr 19 @ 5:04pm
I'm trying some different models, but they all appear to output in different ways.
Is there any way you could provide a new section in the config where we can specify our own regex which will be used to grab the output we want?
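Something like this could work for a configurable extraction step. This is only a minimal Python sketch of the idea, not the mod's actual code; the `### Response:` marker is one format some instruct models emit, and the pattern and function name are hypothetical.

```python
import re

# Hypothetical user-configurable pattern, e.g. read from the mod's settings.
# This one keeps only the text after the last "### Response:" marker;
# other models would need other patterns.
OUTPUT_PATTERN = r"### Response:\s*(.*)"

def extract_reply(raw_output: str, pattern: str = OUTPUT_PATTERN) -> str:
    """Return the last captured group if the pattern matches,
    otherwise fall back to the raw text unchanged."""
    matches = re.findall(pattern, raw_output, flags=re.DOTALL)
    return matches[-1].strip() if matches else raw_output.strip()
```

With a setting like that, each player could adapt the mod to whatever wrapper text their particular model produces, instead of the mod hard-coding one format.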
I'm actually starting to get decent results using a Deepseek model, but it requires a pretty huge context size to fit all the thinking.
It seems the mod always gives it a token limit of 512. Though, since I don't see one given with the input at all, it's possible that's just a default; Kobold's own Lite interface includes a context and token size in its requests.
Maybe we could fully customize it at some point? Or at least be able to stick extra data in there?

It's also possible things are getting cut off, since the prompt just ends with '...' in the middle of one of the events, and I don't see the question I gave in the console at all.



I also had the AI refer to Brendan as another character, possibly influenced by the events which specify Brendan. Maybe you can cleverly replace 'Brendan' with 'me' or 'I'?
For example, there's an event like this listed:
'Brendan made a comment about sports to Edmund.'
If it could be replaced before sending it to the AI with 'I made a comment about sports to Edmund.', that may improve things.

Another type, 'Brendan and Edmund chatted about deep space mining.', would be better replaced with 'Me and Edmund chatted about deep space mining.'
Or, even better, 'Edmund and I chatted about deep space mining.'
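A rewrite like that could be done with a small substitution pass before the event text is sent to the AI. The sketch below is hypothetical Python (the mod itself is presumably C#), handling just the two patterns from the examples above; real event strings would need more cases, and a bare name-to-'I' substitution is wrong when the name is in object position ('chatted with Brendan' should become 'with me').

```python
import re

def personalize(event: str, speaker: str) -> str:
    """Rewrite a third-person event line so the speaking pawn
    refers to itself in first person."""
    if event.startswith(speaker + " "):
        # "X and Y chatted..." -> "Y and I chatted..."
        m = re.match(rf"{re.escape(speaker)} and (\w+) (.*)", event)
        if m:
            return f"{m.group(1)} and I {m.group(2)}"
        # "X made a comment..." -> "I made a comment..."
        return "I " + event[len(speaker) + 1:]
    # Name elsewhere in the sentence: naive swap (see caveat above).
    return re.sub(rf"\b{re.escape(speaker)}\b", "I", event)

print(personalize("Brendan made a comment about sports to Edmund.", "Brendan"))
# -> "I made a comment about sports to Edmund."
print(personalize("Brendan and Edmund chatted about deep space mining.", "Brendan"))
# -> "Edmund and I chatted about deep space mining."
```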
Last edited by Darian Stephens; Apr 19 @ 7:29pm
Oh, actually, yeah: I really need to either find a way to adjust the token limit on Kobold's end, or there needs to be a way to do it on the mod's side, because the first time it managed not to overflow the token count while thinking, I got a really good response.
If it helps, the max token count seems to be passed as part of the input to Kobold; when I use the web UI, the prompts I give include it, and it shows up in the console.
Last edited by Darian Stephens; Apr 20 @ 3:58am
Gerik Uylerk [developer] Apr 20 @ 10:06am
That’s really interesting—thank you so much for the feedback!

It seems that since my mod wasn’t explicitly setting a token limit, Kobold was defaulting to around 512 tokens, which is way too short for a colonist to respond properly.

I also noticed that the model was taking the prompt very literally, often trying to write a story instead of having a natural conversation.

I’ve made several improvements to how instructions are sent and how responses are handled, to make things much clearer for Kobold. On top of that, I’ve increased the token limit to 4096. So far, this has been working well on my end.

But if needed, I can also add a new setting that lets players customize the token count to their preference.

Thanks again for your help—and please let me know if the responses are looking better now!

Also, don’t forget to set Kobold as your model provider in the mod’s configuration!

https://imgur.com/a/uYjMI64
Last edited by Gerik Uylerk; Apr 20 @ 10:08am
This is awkward, but I think you may have used the wrong parameter for the length; it's still 512, despite there being a '"max_tokens": 2048' in the request. The one used in the official documentation is '"max_length"'.
This is from the console, giving a prompt with the in-built Kobold WebUI:
https://pastebin.com/ZGt1RTee
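For reference, KoboldCpp's native generate endpoint reads `max_length` for the number of tokens to generate and `max_context_length` for the prompt window; an OpenAI-style `max_tokens` field is not what that endpoint expects. A minimal sketch of the kind of request the mod would need to send (the URL and port are KoboldCpp's defaults, the prompt text and function names are illustrative):

```python
import json
import urllib.request

KOBOLD_URL = "http://localhost:5001/api/v1/generate"  # KoboldCpp default port

def build_payload(prompt: str, gen_tokens: int = 2048, context: int = 8192) -> dict:
    # "max_length" = tokens to generate, "max_context_length" = prompt window.
    # "max_tokens" is the OpenAI-style name and does not apply here.
    return {
        "prompt": prompt,
        "max_length": gen_tokens,
        "max_context_length": context,
        "temperature": 0.7,
    }

def generate(prompt: str) -> str:
    req = urllib.request.Request(
        KOBOLD_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"][0]["text"]

# generate("You are a RimWorld colonist. Stay in character...")
# (calling this requires a running KoboldCpp instance)
```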

I did also make sure that Kobold is the provider set in the mod settings.
Last edited by Darian Stephens; Apr 20 @ 1:15pm
Gerik Uylerk [developer] Apr 20 @ 7:02pm
Alright, can we try again?
I’ve replaced "max_tokens" with "max_length" to see if that works better now.
Sorry for the inconvenience, and thank you for your patience.
Well, that definitely worked to increase the token limit!
But it seems like the prompts are far less detailed, now?
The previous one had all sorts of information: telling the AI that it's a colonist, that it's aware of its body, to stay in character, plus details about recent events, relationships, character traits, even the personality from the 1,2,3 Personalities mod.
Now it seems to just be their name, age, location, activity, numbers for their health and mood, their inventory, and their top skill.
Before, I was getting pretty good responses when within the token limit, but now the AI didn't even try to stay in character; it just imagined a story from the short prompt it was given.
This has happened every time I've tried it.
Last edited by Darian Stephens; Apr 20 @ 9:45pm
Gerik Uylerk [developer] Apr 21 @ 8:25am
Hey, how’s it going? Thanks for your patience.
To avoid hitting the token limit, I had removed some information from the final prompt, but I think I cut out too much. I've now added more context to the colonist.
Could you show me the latest console output you're getting with this update?
If the token usage is still low, I’ll gradually add more detail to the colonist’s prompt.

It’s a bit tricky for me to calibrate this from my side since I’m not too familiar with local models yet, but I’m determined to get this working properly for everyone.
What limit are you talking about?
Both the input and output token limits can be customized in Kobold.
I had my input set to 16k, and output at 4k. Manually passing in one of the original types of prompts, I got really good results.
If it's a problem, though, perhaps you could add a setting for how detailed the prompt should be, or checkboxes for which types of information should be passed? Then you could enable/disable the allied settlements and names, the recent events, and so on, to trim it down if necessary.
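The checkbox idea boils down to assembling the prompt from toggleable sections. A minimal sketch, assuming hypothetical setting names that mirror the kinds of information mentioned in this thread (not the mod's actual config keys):

```python
# Hypothetical mod settings: one toggle per prompt section.
DEFAULT_TOGGLES = {
    "traits": True,
    "relationships": True,
    "recent_events": True,
    "allied_settlements": False,  # disabled here to trim the prompt
}

def build_prompt(sections: dict, toggles: dict = DEFAULT_TOGGLES) -> str:
    """Concatenate only the enabled, non-empty sections, so players can
    trade prompt detail against their model's context size."""
    return "\n".join(
        text for name, text in sections.items()
        if toggles.get(name, True) and text
    )

prompt = build_prompt({
    "traits": "Traits: Kind, Industrious.",
    "recent_events": "Recent events: a raid was repelled.",
    "allied_settlements": "Allies: New Haven.",
})
```

With something like this, small-context local models could keep just the essentials while larger setups enable everything.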