I really appreciate your feedback, but I’d kindly ask you to verify if there’s actually an issue before posting a comment. If you have further suggestions, please consider sharing them on the discussion board, as I’m unable to respond directly to your comment.
However, since Ollama's API streams its reply back as a large number of JSON chunks, you need to process them yourself.
Alternatively, you can find other projects on GitHub that wrap the Ollama service in C#, such as OllamaSharp.
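For anyone wondering what "processing the JSON" looks like: Ollama's native /api/generate endpoint streams one JSON object per line, each carrying a small "response" fragment, with "done": true on the final object. Here is a minimal Python sketch of stitching such a stream back together (the sample data below is illustrative, not a real server reply):

```python
import json

def join_stream(ndjson_text: str) -> str:
    """Concatenate the "response" fragments from Ollama's streamed reply.

    /api/generate emits one JSON object per line; each object carries a
    small "response" fragment, and the last one has "done": true.
    """
    parts = []
    for line in ndjson_text.splitlines():
        line = line.strip()
        if not line:
            continue
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):  # final chunk: stop reading
            break
    return "".join(parts)

# Illustrative sample in the same shape Ollama streams back:
sample = (
    '{"model":"llama3","response":"Hel","done":false}\n'
    '{"model":"llama3","response":"lo!","done":true}\n'
)
print(join_stream(sample))  # -> Hello!
```

The same line-by-line accumulation is what a C# client (or OllamaSharp internally) has to do when streaming is enabled.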
If you want to know how to use the OpenAI SDK to access Ollama, I got some help from https://ollama.com/blog/openai-compatibility . At least I found that /v1/chat/completions does accept requests and return replies, though I'm not sure whether I actually retrieved the reply through the OpenAI SDK itself. I hope this helps! :)
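To make the wire format concrete (and independent of any particular SDK), here is a hedged Python sketch of talking to that OpenAI-compatible endpoint with plain HTTP. The URL uses Ollama's default port 11434; the payload and response shapes follow the standard OpenAI chat-completions format described in the blog post above. A C# client would need to send exactly the same JSON:

```python
import json
import urllib.request

# Ollama's default local port; adjust if your server runs elsewhere.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model: str, user_message: str) -> dict:
    """Request body in the OpenAI chat-completions shape that Ollama accepts."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # ask for one JSON reply instead of a stream
    }

def extract_reply(response_json: dict) -> str:
    """Pull the assistant text out of an OpenAI-style response body."""
    return response_json["choices"][0]["message"]["content"]

def chat(model: str, user_message: str) -> str:
    """Send one chat turn to a locally running Ollama server."""
    data = json.dumps(build_request(model, user_message)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return extract_reply(json.load(resp))
```

If the Python version works but the C# OpenAI package doesn't, comparing the raw request it sends against `build_request` above is a good way to spot the difference.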
I know Ollama exposes a standard OpenAI-compatible API, and I have chatted with it successfully from Python, but when I tried to get a completion using the C# OpenAI package, I never succeeded. Entering the corresponding URL into other GPT mods didn't work either, which is why I wrote this mod. If anyone knows how to access that endpoint from C# using that package, please let me know!
Also, I am trying to add tool and memory support to this mod, but I need to be able to access Ollama before I start coding the tool support. If you go to my GitHub repo page, you'll see I have already added the tool-support parameters, but they are not used yet.
Anyway, thanks for pointing that out.
ChatVPet Process (https://psteamcommunity.yuanyoumao.com/sharedfiles/filedetails/?id=3403460695)
The above mod is set up so that, instead of using structured output, it makes assumptions about the response style, leaving it prone to failure when pointed at a local model.
If you could resolve that however, that'd be fantastic.
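One way to resolve it, sketched below under the assumption that the mod talks to Ollama's native /api/chat endpoint: recent Ollama versions accept a JSON schema in the request's "format" field, which constrains the model to emit JSON matching that schema, so the mod can parse a guaranteed shape instead of guessing the response style. The schema fields here ("say", "action") are purely illustrative, not anything the mod actually defines:

```python
import json

# Illustrative schema: force the model to reply with a spoken line and a
# named action, so the mod can parse the reply instead of guessing its style.
PET_REPLY_SCHEMA = {
    "type": "object",
    "properties": {
        "say": {"type": "string"},
        "action": {"type": "string"},
    },
    "required": ["say", "action"],
}

def build_chat_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's native /api/chat with structured output."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "format": PET_REPLY_SCHEMA,  # recent Ollama versions accept a JSON schema here
        "stream": False,
    }

def parse_reply(body: dict) -> dict:
    """The constrained reply arrives as a JSON string in message.content."""
    return json.loads(body["message"]["content"])
```

With the schema enforced server-side, a reply that fails `json.loads` is a hard error to surface rather than a formatting quirk to paper over.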
If you're open to suggestions, a more useful mod would be one that does the same thing as the enhanced mod (the one that adds support for memory and tool use), but refactored so it works with Llama-based models. If you could take that on, it would be a massively useful mod!