RimWorld

RimTalk
337 comments
ZIRI 11 minutes ago
This link contains the error message: https://pastebin.com/9zDKXkiC
AUV 28 minutes ago
OK, I figured out why: I was accessing it via the local IP, and I need to change it to localhost.
Juicy [author] 1 hour ago
@Cosmosteller Thanks for reporting. This is a known issue when pawns share the same name. For now, the workaround is to rename one of the pawns as you described.

@ZIRI Please post error logs in the Discussions section. If the log is too long, you can upload it to pastebin.com and share the link here.
ZIRI 1 hour ago
How can I report an error message to you? RimTalk with Vanilla Ideology Expanded gets a large number of errors when running the game.
Cosmosteller 2 hours ago
I found a small issue related to the 'Corrupted Obelisk' from the Anomaly DLC.

When Pawn A is duplicated (resulting in B), Pawn B outputs all dialogue generated by Pawn A. Additionally, the dialogue process seems to become corrupted for both characters.

The issue was resolved by editing the name using the Character Editor mod and reloading the save.
Juicy [author] 2 hours ago
@BadBlackLion @AUV
For local servers, only HTTP is supported, so make sure your address starts with http:// and not https:// . If you’re trying to connect from another machine on your local network, then you’ll need to use HTTPS. You can follow the guide here: https://psteamcommunity.yuanyoumao.com/workshop/filedetails/discussion/3551203752/600789021066589686/
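The rule above can be sketched as a quick check. This is a hypothetical helper for illustration only (the function name and messages are invented, not part of the mod): plain http:// for a server on the same machine, https:// when reaching a server over the local network.

```python
from urllib.parse import urlparse

def check_endpoint(url: str) -> str:
    """Hypothetical helper illustrating the rule above: plain HTTP for a
    server on this machine, HTTPS when connecting from another machine."""
    parts = urlparse(url)
    host = parts.hostname or ""
    is_local = host in ("localhost", "127.0.0.1")
    if is_local and parts.scheme != "http":
        return "use http:// for a server on this machine"
    if not is_local and parts.scheme != "https":
        return "use https:// when connecting from another machine"
    return "ok"
```

For example, `check_endpoint("https://localhost:11434")` flags the scheme (11434 is Ollama's default port, and a local server of that kind speaks plain HTTP), while `check_endpoint("http://192.168.1.50:11434")` flags a LAN address that should use HTTPS.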
AUV 3 hours ago
Why can't a locally deployed model use HTTP? Is HTTPS the only option?
BadBlackLion 6 hours ago
Hey there,
I set up Ollama on my Windows machine, the same one I play RimWorld on, as described in the description. But now I get an error saying 'Unable to complete SSL connection'. What can I do?
Juicy [author] 9 hours ago
@Fuyun Thanks for the feedback!

1. RiMind is a fork of an early RimTalk version by someone else, and I wasn’t involved in its development, so I’m not familiar with its internal logic or design.
2. That’s an interesting idea. I will keep it in mind for a future update. Visualizing character histories or interactions could make the experience richer.
3. Regarding missed events, characters currently only respond to nearby events, but I plan to fix this so they will be more responsive. Lines that ignore distance can happen because the AI generates a batch of text at once, which is displayed one by one while RimWorld pawns keep moving. You can also adjust the prompt to limit conversations to 2 or 3 turns to help prevent this issue.
Cosmosteller 11 hours ago
Hi Juicy, The detailed feedback on the "Thought/Dialogue Context" has been posted to the Discussions tab. Check it out when you have a chance!
Fuyun 13 hours ago
A great mod! I’ve used it for two days and really enjoy it. Here are some questions/suggestions:

1. I tried both RimTalk and RiMind but couldn’t tell their dialogue logic/output differences. RimTalk seems lighter on performance—what sets them apart?

2. Could you add a "Memory" UI like RiMind’s (or similar) to visualize character histories/interactions? Current logs lack readable "story" text.

3. Characters often miss/lag in reacting to in-game events (e.g., visitors/caravans: dialogue doesn’t match roles/goals; distance-ignoring lines). Is this fixable via AI/model tweaks, prompt adjustments, or a logic issue in info-gathering?

4. More custom prompt presets would boost usability—hoping to see those added!
erya 14 hours ago
Amazing!
pmrn 15 hours ago
@Juicy you're absolutely right - the 3B model I linked before was actually designed to work with a different RimWorld LLM mod, RimDialogue. I swapped it out for a 24B model and now it works perfectly! I'm using Cydonia 4.1 and Dan's Personality Engine 1.3, both 24B and both based on Mistral; they work fantastically.

https://huggingface.co/bartowski/PocketDoc_Dans-PersonalityEngine-V1.3.0-24b-GGUF

https://huggingface.co/bartowski/TheDrummer_Cydonia-R1-24B-v4.1-GGUF

24GB VRAM, 500ish mods, works just fine.
Juicy [author] 20 hours ago
@ZIRI That error’s actually from Dubs Mint Menus, not my mod. I don’t have control over that one, so you’ll need to check with the Dubs author.
ZIRI 21 hours ago
System.Action Close, System.Boolean Editing) [0x0044e] in <4a61d8a323544f168b948c321aa111ae>:0
at DubsMintMenus.MainTabWindow_MayaMenu.ExtraOnGUI () [0x00014] in <4a61d8a323544f168b948c321aa111ae>:0
at Verse.WindowStack.WindowStackOnGUI () [0x00038] in <24d25868955f4df08b02c73b55f389fe>:0
at RimWorld.UIRoot_Play.UIRootOnGUI () [0x00089] in <24d25868955f4df08b02c73b55f389fe>:0
- POSTFIX cj.rimtalk: Void RimTalk.UI.OverlayPatch:Postfix()
at Verse.Root.OnGUI () [0x00046] in <24d25868955f4df08b02c73b55f389fe>:0
- TRANSPILER net.pardeike.rimworld.lib.harmony: IEnumerable`1 VisualExceptions.ExceptionsAndActivatorHandler:Transpiler(IEnumerable`1 instructions, MethodBase original)
- POSTFIX UnlimitedHugs.HugsLib: Void HugsLib.Patches.Root_OnGUI_Patch:OnGUIHookUnfiltered()
ZIRI 21 hours ago
When I click the plan mark button in the Command dial from Dubs menu with this mod, this error message shows and the button doesn't work. Can you fix it?


Exception in Verse.Root.OnGUI: System.NullReferenceException: Object reference not set to an instance of an object
[Ref E708A9A5]
at RimWorld.Designator_Plan_Add.ProcessInput (UnityEngine.Event ev) [0x00032] in <24d25868955f4df08b02c73b55f389fe>:0
at DubsMintMenus.MainTabWindow_MayaMenu.DrawGrid (UnityEngine.Vector2 center,
Juicy [author] 22 hours ago
Thanks, glad you’re enjoying it! The reason you still see dialogue is that the AI often prepares a batch of lines in advance, so those will appear one by one with a delay, but no new dialogue will be generated while speed-up pause is in effect.
小浪蹄子 Oct 2 at 9:02 AM
Honestly, Juicy, the mod you've created makes me see the future of video game development. You're truly a pioneer!
小浪蹄子 Oct 2 at 8:57 AM
Thank you for creating such an amazing mod! However, I noticed that the speed-up function doesn't pause dialogue production. This really bothers me because I don’t want to miss any discussions, but it significantly slows down my gameplay progress.
Gloomy Oct 2 at 8:24 AM
@Juicy: Found this mod yesterday. I tried it out and I loved it! I am doing a Gravship / Anomaly horror run and it was a lot of fun writing the personalities for each individual colonist and how they react to the horrors around them. Works like a charm.

I also didn't want to use Google for it (because f*ck em) and had to learn how to use Ollama with a local model, which was also a fun little trip until it was running. It does now, and I just wanted to drop a thank you for the additional life the mod breathed into the game.
Wug54 Oct 1 at 2:22 AM
Thanks for the great mod. I was wondering if there's a way to prevent a specific colonist from talking? I've tried setting their Chattiness to 0, but they still speak sometimes. I'm hoping to find a way to have the mod ignore a particular colonist.
Juicy [author] Sep 30 at 8:32 PM
@Sea salt_helper That’s a cool idea about manually controlling dialogue. I’ll keep it in mind. Right now, pawns only remember recent things they said or heard to keep context short, no long-term memory yet. The next update will show what context is sent to the AI in the debug window. I’m curious, how would you want to customize prompts?
Juicy [author] Sep 30 at 8:20 PM
@pmrn looks like it might be a JSON deserialization error. There’s an FAQ in the discussion that covers this. Could you take a look and see if it helps?
SirMalFet Sep 30 at 8:18 PM
@pmrn If you check the output on llama.cpp of the assistant, is it **only** the JSON output? I used to get the JSON, and some other stuff, and then that error is caused. Being more forceful in the prompt about the output would fix that.
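SirMalFet's point can be illustrated with a minimal sketch (the model reply below is invented, and the extraction helper is just one common workaround, not what RimTalk actually does): a strict JSON parse fails as soon as the model wraps its output in extra prose, while pulling out the embedded object first recovers it.

```python
import json
import re

# Invented example of a model reply that wraps the JSON payload in prose.
reply = 'Sure! Here is the dialogue:\n{"name": "Lee", "text": "Good morning"}\nHope that helps!'

try:
    json.loads(reply)  # strict parse chokes on the surrounding text
    strict_ok = True
except json.JSONDecodeError:
    strict_ok = False

# One common workaround: extract the first {...} span before parsing.
match = re.search(r"\{.*\}", reply, re.DOTALL)
payload = json.loads(match.group(0)) if match else None
```

As SirMalFet suggests, the more reliable fix is on the prompt side: being forceful that the model must emit only the JSON object and nothing else.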
pmrn Sep 30 at 6:41 PM
@SirMalFet I'm not seeing anything about the JSON output, here's the first part of what I'm seeing

[RimTalk] Failed to process response: [Ref 73D5426A] Duplicate stacktrace, see ref for original


[Harmony, 0Harmony.dll] MonoMod.Utils.DynamicMethodDefinition.UnityEngine.StackTraceUtility.ExtractStackTrace_Patch1()
[Core, Assembly-CSharp.dll] Verse.Log.Warning(System.String text)
[RimTalk, RimTalk.dll] RimTalk.Util.Logger.Warning(System.String message) at /Users/chris/RiderProjects/RimTalk/Source/Util/Logger.cs:21
[RimTalk, RimTalk.dll] RimTalk.Service.TalkService.ProcessSuccessfulResponse(System.Collections.Generic.List`1[Verse.Pawn] allInvolvedPawns, System.Collections.Generic.List`1[RimTalk.Data.TalkResponse] talkResponses, System.String request) at /Users/chris/RiderProjects/RimTalk/Source/Service/TalkService.cs:109
[RimTalk, RimTalk.dll] RimTalk.Service.TalkService+<>c__DisplayClass0_0+
SirMalFet Sep 30 at 5:21 PM
@pmrn I posted about it a while ago. What do you see on the Development Log in Rimworld? It may be failing to generate the JSON output.
pmrn Sep 30 at 4:42 PM
I'm using a local llama.cpp endpoint - I can see in the CLI that tokens are being generated successfully but in the rimtalk screen it just says "Generating..." next to each pawn's name. Any ideas?
Sea salt_helper Sep 30 at 12:13 PM
@Juicy Will you allow players to manually control characters for dialogue in the future? By controlling an avatar, generate a typing interface for players to converse with colonists.
Also, I still don’t know how much memory context the mod sends to the AI—will they remember the previous events?
And, which game data is read and sent to the AI? Will you allow more customizable prompt writing in the future?
心跳猎鹰部的m0Niko本人 Sep 30 at 8:31 AM
nice mod :steamthumbsup:
DeleteC Sep 30 at 6:30 AM
Thank you very much, the problem has been fixed.
Juicy [author] Sep 30 at 6:18 AM
@DeleteC This should be fixed now. Let me know if it still persists.
DeleteC Sep 30 at 4:56 AM
When I use OpenRouter and open the model, the game freezes.
3304441968 Sep 30 at 4:08 AM
I am a player from the Chinese gaming community, TIEBA, and I am very grateful for the immersive mod you created, which greatly enhanced my gaming experience, thank you
Nimn Sep 30 at 1:01 AM
As a Chub user, I'd personally love to use my Chub sub models here.
littleC Sep 29 at 11:45 PM
I am a player from the Chinese gaming community, TIEBA, and I am very grateful for the immersive mod you created, which greatly enhanced my gaming experience, thank you
Tomori Takamatsu Sep 29 at 10:13 PM
@Juicy Perfect UI Now!
SZ0695 Sep 29 at 12:24 PM
I am a player from the Chinese gaming community, TIEBA, and I am very grateful for the immersive mod you created, which greatly enhanced my gaming experience, thank you
Juicy [author] Sep 29 at 8:06 AM
@pmrn Thanks for sharing! Always nice to see more options for local, uncensored LLMs.

@Maiya0126 Thanks for your contribution! The “generate voice” mention was actually a mistranslation from an AI translation, which I’ve fixed. I’ve seen quite a few requests for a voice-over feature, and I’m just looking into whether the technology is in a good enough state to integrate and sound natural.
Maiya0126 Sep 28 at 9:34 PM
Since I did the Chinese localization and tutorial for RimTalk and posted it on XIAOHEIHE and TIEBA, more and more Chinese players have said that this is a great mod. Watching your GitHub keep updating and optimizing, and the comment section continuously answering everyone's questions is really impressive. When I updated the localization recently, I saw that there was a mention of "generate voice," is this going to add a new feature?
pmrn Sep 28 at 4:51 PM
Hey anyone interested in this who doesn't want to use APIs and wants an uncensored LLM, TheDrummer made a special model just to use with RimTalk, even at high precision it's only around 3GB so most of you will be able to fit it into your video card alongside the game. If you're not familiar TheDrummer is one of the biggest names in uncensored local LLMs. Here's a link to the model, if you're an uncensored local LLM enjoyer like myself you should check out the rest of his page, lots of options that will run on as little as 8-12GB VRAM.

https://huggingface.co/TheDrummer/RimTalk-Mini-v1-GGUF

To run this locally you'll need an LLM inference repo; my personal favorite is llama.cpp, but oobabooga and Ollama work fine and are more user friendly. You're basically going to run an OpenAI-compatible endpoint on localhost or 0.0.0.0. Lots of great walkthroughs out there, or just ask the free version of ChatGPT or Gemini or Le Chat or Qwen and they will walk you through it.
Juicy [author] Sep 28 at 10:38 AM
@ƎNA ok np, let me know if you got it working. Sometimes if the instruction is too long, the model may fail to respond properly and not respect the correct JSON format.

@sasurai1674 I tested with those two mods but couldn’t reproduce the issue, so maybe it's colliding with some other mod? I’ll keep that on my radar though, thanks for reporting.

@Fu @Tomori Takamatsu thanks a lot for the feedback! I’ve officially added an overlay log window. Please give it a try and let me know if you run into any issues.
Tomori Takamatsu Sep 28 at 9:22 AM
I use chatlog to view the historical conversations of RimTalk directly, but its records are incomplete and still log the original interactions.
I think the record UI of RimTalk could be slightly improved in the following aspects:
1. Allow setting the background transparency of the UI.
2. Allow hiding redundant button UIs that are irrelevant to reading.
3. Allow changing the order of conversation records to chronological order from top to bottom.
These are just my small suggestions. For me, RimTalk is one of the greatest community mods.
sasurai1674 Sep 28 at 7:17 AM
Thank you for the amazing MOD! I always use it.
I encountered a really strange bug and got a red error, so I came to report it. Although I'm not sure if that's actually the cause.
https://gist.github.com/HugsLibRecordKeeper/ab9cec78b66f6b88a844cf7934641486
https://gist.github.com/HugsLibRecordKeeper/058bc2a27cda286baabfb3241f18ad41
It's these two.
In terms of content, there is a bug where if you use this MOD together with Kyulen and the Character Editor, you cannot create a character at the start (specifically, the button for editing disappears). After starting, it is possible to adjust stats and perform other operations.
Fortunately, your MOD can be added later, so if you install the first two first and then add this MOD afterward, you can play without any problems. That concludes my report.
Fu Sep 28 at 6:06 AM
How do I open that chat box that I can drag around the screen? I had it last time I was playing, but not today when I loaded the save. (Never mind if it's from another mod; I'm not really sure because I installed too many mods, sorry.)
ƎNA Sep 28 at 1:31 AM
Yes, the responses received look like:

"Good morning, Kitch! The weather is great, it's time to collect herbs and medicinal plants," Lee says cheerfully, adjusting his apron. Matthews comes closer: "Good morning too, Lee! I've already walked around the camp for a bit, it seems like we're going to have a good day to study!"

This is a copy of what I received as a response. Apparently, the changes in prompt have affected this. Previously, the responses were generated correctly and also contained emotions or actions *designated like this*. I think we can assume that the problem was on my side, sorry to bother you.
ƎNA Sep 28 at 1:20 AM
It seems the LLM gives the answer in the wrong format, which is why the errors occur. It's sad, but I'll be looking for another one.
ƎNA Sep 28 at 12:56 AM
I will upload the full JSON as soon as I get back to my PC and the error appears again. I am using the LM Studio model "Saiga Gemma3 12b" or "Amoral Gemma3 12b Vision I1". I am also unable to test with Gemini due to regional restrictions.
Juicy [author] Sep 27 at 3:06 PM
@ƎNA in the dev console, if you click the message “Json deserialization failed” it should show the full json that failed to process. Could you upload that in the discussion? Also, which model are you using? Does it also fail when using the default Gemini model?
ƎNA Sep 27 at 2:28 PM
Uh... It stopped after a couple of minutes. I don't understand at all, I didn't change anything.