Exception ticking Proj_Foam5953082 (at (114, 0, 148)): System.IndexOutOfRangeException: Index was outside the bounds of the array.
[Ref 33B796A] Duplicate stacktrace, see ref for original
UnityEngine.StackTraceUtility:ExtractStackTrace ()
(wrapper dynamic-method) MonoMod.Utils.DynamicMethodDefinition:Verse.Log.Error_Patch1 (string)
Verse.TickList:Tick ()
(wrapper dynamic-method) MonoMod.Utils.DynamicMethodDefinition:Verse.TickManager.DoSingleTick_Patch2 (Verse.TickManager)
Verse.TickManager:TickManagerUpdate ()
If this were improved, it would be truly awesome.
The developer of Chatlog Overlay has already fixed this, and he asked that the RimTalk and RiMind developers not be contacted about this issue, since he will handle it himself in his mod.
Even when I enable only "RiMind conversation" in ChatlogOverlay, it doesn't output anything, and the text is classified and displayed under Core's "Chitchat" category instead.
Is there any way to fix this?
Also, if I may, I’d like to suggest a feature :)
It would be really nice to have an option to filter vanilla interactions. For example, to only allow deep talks or romance attempts to feed the AI. Or maybe a word filter. Sometimes I feel there are a lot of “garbage” interactions that end up being fed to the AI dialogues, which leads to nonsensical talk…
Anyway, thanks for the mod! I'm really enjoying it.
This prompt seems to be working better!
---
Write in the first person and in no more than 2 sentences.
Each character has Skills that go from 1-20. The Skills are the comma-delimited fields following "Skills: ". E.g., an "Intellectual" of 0 or 1 means they’re not so bright; 20 means super genius. The "Intellectual" value should guide the literacy of each character's speech. The same scale applies to other skills.
Consider each character's Skills when writing for the character. (Don't have them talk about their Skills or buffs)
Role play each RimWorld character per their profile.
Natural tone over perfect sentence structure.
Will look at the payload. Thanks for the pointer! I didn't know where to find it!
I've been playing with AI, also as a dev, so having the prompt will help loads. Thanks!
Thanks for the reply. I really enjoy your mod; it adds exactly the kind of atmosphere the game was missing. I understand your point about the bubbles and possible conflicts; it makes sense to be cautious. But the fact that you take time to explain things in detail and actually talk with players already shows you’re a great creator. I really appreciate that you don’t ignore even small requests and just speak honestly.
Secondly, could you add more toggles? For example, for third-party insertion, Job Change Detection, and perception of the environment. Yes, I know those are great features, but I'm worried that enabling them all will hurt performance.
It seems Intelligence isn't a stat officially provided by RimWorld. Which mod are you using, if any? Or are you perhaps referring to the Research skill? I haven't yet attempted to make the pawn's speech change based on their intelligence level.
To see the actual form of the prompt that is generated and finally sent to the AI, you can activate Developer Mode and check the debug logs where it outputs something like [RiMind] {yourApi} request to {modelName}.
The data for the speaking pawn is always included, but the AI decides on its own which data from that payload to use. I recommend checking the payload data for a request.
I understand there are various ways to run a local AI. Currently, Ollama is properly set up and supported.
In the mod settings, the options for Cloud AI (which require an API key) and Local LLM are separated by a dropdown menu. The input fields are also different: Cloud requires an API key and model name, while Local only asks for the localhost address.
I'm unsure exactly what local environment you're using that forces an API key. The ways of running AI are more diverse than they seem, and I haven't been able to try every single one.
It would be helpful to have a little more information, such as the specific environment you are running and which model you are running locally.
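Not part of the mod, just a quick self-check: below is a minimal Python sketch, assuming the requests library is installed and Ollama is listening on its default port 11434, that sends the same kind of OpenAI-compatible request RiMind logs (see the debug log quoted further down, which uses the gemma3:4b model). If this prints a reply, the Local LLM option should only need that localhost address.

# Minimal sketch: verify a local Ollama instance answers the OpenAI-compatible
# chat completions endpoint shown in RiMind's debug log.
# Assumptions: Ollama on the default port 11434, model "gemma3:4b" pulled locally.
import requests

payload = {
    "model": "gemma3:4b",
    "max_tokens": 2048,
    "messages": [{"role": "user", "content": "Say hello in one short sentence."}],
}
resp = requests.post("http://localhost:11434/v1/chat/completions", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])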
Ideology precepts and memes do have an influence, but they don't intervene very aggressively in the dialogue. When there's a rich supply of conversation topics, the frequency of mentions about religion or precepts is expected to decrease quite a bit.
Surprisingly, even I, the developer, don't fully know the exact extent of their influence—they operate on their own!
I'm not familiar with the 1-2-3 Personality mod, so I don't currently know what data I would need to pull from it to automatically fill the prompt.
Could you elaborate on what information or specific part of the pawn's personality from that mod you'd like to be used? This would help me look into the feasibility of integrating it!
That's a very understandable request, as I also feel the game would look much cleaner without the vanilla bubbles. However, I have some reservations regarding the side effects.
While I agree that the vanilla bubbles can break immersion, I'm hesitant to suppress the default vanilla logic entirely. Although not suppressing them doesn't necessarily hinder play, explicitly overriding that logic to hide them might cause conflicts with other mods that rely on or interact with the default bubble functionality.
I'm afraid the negative impact of potential side effects might outweigh the benefit of simply tidying up the UI. We'll definitely have to give this a lot of thought!
In-game > Developer Mode > Debug log: when a specific pawn speaks, a log like the one below appears. Is there something other than this that you're looking for?
[RiMind] Local LLM API request to gemma3:4b: http://localhost:11434/v1/chat/completions
{"max_tokens":2048,"messages":[{"content":"1069905949 Role play RimWorld character per profile.\nSpeech style: Age\/personality appropriate.\nExpression: Absolutely follow the prompt instructions. Natural tone over perfect sentence structure.\nLength: shorty and casual 1~2 sentences.\nTone: Natural but do not explain your action.\r\n\r\nFOCUS: Continue conversation if someone is nearby, or think aloud if alone.\r\nYou are Lisa:\r\n(female, age 58)\r\nRace: 일반인
SNAC: What do you use as your prompt?
I'm currently trying:
"Write in the first person.
Write no more than 2 sentences.
Each character has stats that go from 1-20. E.g., Intelligence of 0 or 1 means they’re not so bright. 20 means super genius.
Have each character speak appropriately to their ability relative to each stat.
Role play each RimWorld character per their profile.
Natural tone over perfect sentence structure."
Trouble is, I'm not sure what metadata you're feeding for each character.
It would be helpful to have some of that information about the LLM payload in your description for the mod. That way we could (try to) write better prompts to work within the mod's supplied prompts.
The request is logged accurately, so you can already see it right now. A feature for choosing which models to cycle through wasn't in the plan, but I'll think about how it could be done.
I set the endpoint to .com because an error occurred when I used .cn. I'll check it again when I have time.
@EldenFu
I haven't found this issue during testing or in the current version. I'll definitely look into it once I can reliably reproduce it.
@Senilia
The issue is that the topic designed to detect and comment on a nearby pawn's Hediff status currently only distinguishes between "treated" and "untreated," causing it to interpret scars or old wounds as injuries that require medical attention. I've confirmed this. I'll look into a way to handle this more naturally.
@Waffz_The_Pancake
I don't know what that is. It's not a site that I use.
Please try adjusting your prompt settings again. Just adding something like "Keep lines short, 1-2 sentences." works well.
I've just updated the UI for custom providers. Please check it out.
Thank you for the report. There are various factors that can determine the final state where a pawn is unable to speak, so I will add more detailed debug logs.