Virtual Desktop Pet Simulator

Chat API: ChatWithOllama
30 comments
Bennet Aug 18 @ 3:21am
Must-have mod. I connected my 27B AI model to my VPet.
Mxpea Jul 20 @ 6:34pm
Could you make one for LM Studio?
Honkai, Start [author] May 21 @ 10:36am
Are you using a reasoning model? Try turning off the "hide r1 thinking process" option. This can happen when the model has too few parameters and fails to output the correct end-of-thinking token. If the problem persists, feel free to join the QQ group, where you can send screenshots and it's much easier to troubleshoot!
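
For anyone curious what that option does, here is a minimal C# sketch of the idea. It assumes the DeepSeek-R1 convention of wrapping reasoning in <think>...</think> tags; this is an illustration, not the mod's actual code:

    using System;
    using System.Text.RegularExpressions;

    class StripThinking
    {
        static string StripThink(string output)
        {
            // Remove a complete <think>...</think> block from the reply.
            // Note: if a very small model never emits the closing </think>,
            // a filter that instead cuts everything from <think> onward would
            // return an empty string, matching the blank-reply symptom below.
            return Regex.Replace(output, @"<think>.*?</think>", string.Empty,
                                 RegexOptions.Singleline).Trim();
        }

        static void Main()
        {
            Console.WriteLine(StripThink("<think>step by step...</think>Hi!"));
            // prints: Hi!
        }
    }
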
氢氧化钙钙哥 May 20 @ 8:18am
Why is it that when I type what I want to chat about and send it, her reply is completely blank? Nothing at all, and I've already followed the whole setup process from start to finish.
Honkai, Start [author] Apr 17 @ 6:05pm
Check whether the URL was entered correctly. If it's local, enter http://localhost:11434/ (you can try opening it in a browser first; the page should show "Ollama is running", which tells you the server is actually up). You can also try saving the settings and reopening. If you still have problems, you can join the QQ group for easier troubleshooting.
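
As a quick programmatic version of that browser check, here is a minimal C# sketch (it assumes the default Ollama address http://localhost:11434/):

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class OllamaCheck
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            try
            {
                // The root endpoint answers with the plain text "Ollama is running".
                Console.WriteLine(await http.GetStringAsync("http://localhost:11434/"));
            }
            catch (HttpRequestException e)
            {
                Console.WriteLine($"Ollama is not reachable: {e.Message}");
            }
        }
    }
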
阿兰达瓦的文爹 Apr 17 @ 5:42am
Why does it keep telling me Ollama isn't running? It is running in the taskbar, and there are no models available to select in the model list.
Bennet Mar 26 @ 11:37am
It's insanely good to have this kind of feature. 10/10
Honkai, Start [author] Mar 23 @ 5:58pm
You can use non-stream mode, which supports TTS. Stream mode currently cannot use TTS because of the fundamental design of VPet. I am keeping in touch with the author of VPet and trying to get a streaming talk box added. @wwhitepigeon29
wwhitepigeon29 Mar 23 @ 5:06pm
Any way to add TTS?
Honkai, Start [author] Mar 17 @ 12:47pm
Could you tell me how you got this error? If possible, you can join my Discord channel for further assistance.
Liv Mar 17 @ 11:37am
"ChatOllama API 出现错误: The given key was not present in the dictionary". How do I solve that, please?
Honkai, Start [author] Mar 13 @ 8:18pm
No, this is only for Ollama.
Kawzier Mar 13 @ 7:33pm
Can this also be used with alternatives like LM Studio, or is it hardcoded?
Honkai, Start [author] Mar 8 @ 8:52pm
Change the URL to http://localhost:11434/v1/chat/completions, check that Ollama is actually running, and go through the setup again following the description.
曙光-ShuGuang Mar 8 @ 12:30pm
It shows a model loading error on my end. What should I do?
Honkai, Start [author] Mar 2 @ 6:06pm
Open the Ollama settings and select a model.
Sept Mar 2 @ 6:05pm
How do I fix this error: ChatOllama API 出现错误: {"error":"model is required"}?
Honkai, Start [author] Mar 1 @ 10:10am
Yes, this mod is written specifically for Ollama.
Ch1h4ya An0n Mar 1 @ 7:27am
Can it connect to a DeepSeek model running in local Ollama?
LeamonR Feb 24 @ 1:50am
My local deployment finally comes in handy ()
Honkai, Start [author] Feb 17 @ 5:10pm
I noticed that Ollama provides an OpenAI-compatible API, and I have included instructions in the description on how to use the ChatGPT plugin to access Ollama. My plugin does let users choose from the installed models, with dynamic updates, and also lets them type in their own model names; this has been a feature since the very first release.

I really appreciate your feedback, but I'd kindly ask you to verify whether there's actually an issue before posting a comment. If you have further suggestions, please consider sharing them on the discussion board, since I'm unable to respond directly to your comment.
BunBox Feb 17 @ 4:33pm
Your best bet is allowing the user to pick a model they have installed (a list of models can be retrieved via something like "curl http://localhost:11434/api/tags", which returns a JSON object containing the installed models). You could use this to populate a drop-down selector. Alternatively, have the user type the model in.
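
A minimal C# sketch of that suggestion (the endpoint and the "models"/"name" fields follow the public Ollama REST API; error handling is omitted):

    using System;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    class ListModels
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            var json = await http.GetStringAsync("http://localhost:11434/api/tags");
            using var doc = JsonDocument.Parse(json);
            // The response contains a "models" array; each entry's "name"
            // (e.g. "hermes3:8b") is what a drop-down selector would show.
            foreach (var model in doc.RootElement.GetProperty("models").EnumerateArray())
                Console.WriteLine(model.GetProperty("name").GetString());
        }
    }
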
BunBox Feb 17 @ 4:31pm
Mhmm, like Kaguya said, you send the standard OpenAI completions API call, but to localhost at the port Ollama is listening on instead. Anything that uses OpenAI calls can be redirected to Ollama. You do need to include the model you're sending the completion to (for example, I commonly use hermes3:8b), and you need to include an API key, which for Ollama is just "ollama".
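
Sketched in C#, that call could look like the following (the endpoint and response shape follow Ollama's OpenAI-compatible API; the model name hermes3:8b is just the example from the comment above):

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class ChatCompletion
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            // Ollama does not validate the key; SDK-style clients just need
            // some value, and "ollama" is the conventional placeholder.
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", "ollama");

            var body = JsonSerializer.Serialize(new
            {
                model = "hermes3:8b",
                messages = new[] { new { role = "user", content = "Hello!" } },
                stream = false
            });

            var resp = await http.PostAsync(
                "http://localhost:11434/v1/chat/completions",
                new StringContent(body, Encoding.UTF8, "application/json"));
            using var doc = JsonDocument.Parse(await resp.Content.ReadAsStringAsync());

            // The assistant reply sits at choices[0].message.content.
            Console.WriteLine(doc.RootElement.GetProperty("choices")[0]
                .GetProperty("message").GetProperty("content").GetString());
        }
    }
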
Kaguya Feb 15 @ 8:16pm
If you want to know how to use C# to access the Ollama service, you only need to send a POST request to its address (such as /api/generate). This works from most languages that support HTTP requests. (You can even get a reply using Postman.)

However, the response comes back as a large amount of JSON, so you need to process it.

That is also why you can find projects on GitHub that wrap the Ollama service in C#, such as OllamaSharp.

If you want to know how to use the OpenAI SDK to access Ollama, I got some help from https://ollama.com/blog/openai-compatibility . At least I found that /v1/chat/completions does answer requests, though I'm not sure whether I actually received the replies through the OpenAI SDK. I hope this helps! :)
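
To make the raw-POST route concrete, here is a minimal C# sketch against /api/generate (the model name is just an example; with "stream": false the server returns a single JSON object instead of a stream of chunks, which avoids the JSON processing mentioned above):

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Text.Json;
    using System.Threading.Tasks;

    class Generate
    {
        static async Task Main()
        {
            using var http = new HttpClient();
            var body = JsonSerializer.Serialize(new
            {
                model = "llama3",
                prompt = "Why is the sky blue?",
                stream = false  // one JSON object instead of streamed chunks
            });
            var resp = await http.PostAsync(
                "http://localhost:11434/api/generate",
                new StringContent(body, Encoding.UTF8, "application/json"));
            using var doc = JsonDocument.Parse(await resp.Content.ReadAsStringAsync());
            // The generated text is in the "response" field.
            Console.WriteLine(doc.RootElement.GetProperty("response").GetString());
        }
    }
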
Honkai, Start [author] Feb 15 @ 4:26pm
I know Ollama has a standard OpenAI API, and I have chatted with it successfully from Python. However, when I try to use the C# OpenAI package to get a response, it never succeeds. Pointing other VPet ChatGPT plugins at the same URL doesn't work either. So I decided to make this mod. If you know how to reach that endpoint from C# with the OpenAI package, please tell me!

Also, I am trying to add tool and memory support to this mod, but I need working access to Ollama before I start coding the tool support. If you go to my GitHub repo page, you'll see I have already added the tool-support parameters; they just aren't used yet.

Anyway, thanks for pointing that out. :nagisaclannad:
BunBox Feb 15 @ 1:29pm
For context: referring to this mod:
ChatVPet Process (https://psteamcommunity.yuanyoumao.com/sharedfiles/filedetails/?id=3403460695)

The above mod has been set up in such a way that, instead of using structured output, it makes assumptions about the response style, leaving it prone to failing when pointed at a local model.

If you could resolve that, however, that'd be fantastic.
BunBox Feb 15 @ 1:27pm
Is this necessary? The existing "ChatGPT" module (actually just the OpenAI API) works fine with Ollama, since they both use the same API schema. (You just point it at localhost and it's fine.)

A more useful mod, if you are open to suggestions, would be one that does the same thing as the enhanced module (the one that adds support for memory and tool use), but factored in a way that works with llama-based models. If you could take that on, it would be massively useful as a mod!
银花今天吃什么 Feb 14 @ 4:45pm
good
Kyle Feb 13 @ 6:39pm
Already put it to use! My daughter has become a walking encyclopedia :CuteGraySmile:
时光映画 Feb 11 @ 6:11am
Awesome!