Could you try updating to the latest version of the mod? It should support debugging logs once you enable the flag.
To do this, turn on Dev Mode, then go to Options -> Dev -> Verbose Logging.
With that enabled, you should see proper request and response information in the console, which will make it much easier to diagnose issues.
That said, I highly suspect the problem is related to the model size. Using a very small model (like 1B) can cause the LLM to produce malformed JSON or a structure that doesn't match what Rimtalk expects. I recommend using at least a 4B model with a temperature of at most 0.7. This usually ensures the model follows instructions well enough to generate a valid response.
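To make that failure mode concrete, here's a rough Python sketch of what goes wrong when a small model returns broken output. The field names ("speaker", "text") are hypothetical stand-ins, not RimTalk's actual schema, and the parsing logic is only illustrative:

```python
# Illustrative only: a minimal sketch of validating an LLM reply.
# Field names are hypothetical, not RimTalk's real schema.
import json

def parse_llm_reply(raw: str) -> dict:
    """Parse the model's raw output and check the fields we expect."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as err:
        # Typical failure with very small (~1B) models: truncated braces,
        # trailing commas, or prose mixed into the JSON.
        raise ValueError(f"Malformed JSON from model: {err}") from err

    for field in ("speaker", "text"):
        if field not in data:
            # Valid JSON, but not the structure the caller expects.
            raise ValueError(f"Missing expected field: {field}")
    return data

# A well-formed reply parses cleanly...
print(parse_llm_reply('{"speaker": "Colonist", "text": "Hello."}'))

# ...while a truncated one (common with tiny models at high temperature) fails.
try:
    parse_llm_reply('{"speaker": "Colonist", "text": "Hel')
except ValueError as err:
    print(err)
```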
Let me know if the issue persists after these changes.