VPet Simulator

Shetani Aug 16, 2023 @ 1:25pm
any way to chat with it without chatgpt?
Is there a way to AI chat with it without chatgpt, I don't have the money for it lmao
Showing 1-10 of 10 comments
Dycool51 Aug 18, 2023 @ 1:01pm
true I was hoping we could use maybe character.ai characters or something like that
FN [Crazy Ones] Aug 20, 2023 @ 11:24pm
the free chat install says "server overload, please try again later" lmao :V
insomnyawolf Aug 22, 2023 @ 8:08am
Maybe we could create a plugin/mod that lets us use llama.cpp or exllama locally. Do you think it would be worth investigating further?
Shetani Sep 6, 2023 @ 4:54am
Originally posted by insomnyawolf:
Maybe we could create a plugin/mod that lets us use llama.cpp or exllama locally. Do you think it would be worth investigating further?
idk what that is, but if it works then yes ofc
ninefid Nov 16, 2023 @ 10:55am
Originally posted by insomnyawolf:
Maybe we could create a plugin/mod that lets us use llama.cpp or exllama locally. Do you think it would be worth investigating further?
The llama.cpp Python bindings already ship a server that exposes an OpenAI-compatible API. Perhaps we can replace OpenAI's endpoint with the local one.
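For anyone curious what that swap might look like: a minimal sketch, assuming llama-cpp-python's bundled server is running on its default port 8000 (it can be started with `python -m llama_cpp.server --model <path>`). The routes mirror OpenAI's chat completions API, so in principle only the base URL changes; everything here beyond the route name is illustrative.

```python
import json
import urllib.request

# Assumption: llama-cpp-python's server listens on port 8000 by default
# and exposes the OpenAI-style /v1/chat/completions route.
LOCAL_BASE_URL = "http://127.0.0.1:8000/v1"

def build_chat_request(prompt, base_url=LOCAL_BASE_URL):
    """Build the same JSON request the OpenAI chat API expects."""
    payload = {
        "model": "local-model",  # the local server loads whatever model it was started with
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(prompt):
    """Send the request to the local server (requires it to be running)."""
    with urllib.request.urlopen(build_chat_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Since the request shape is identical, a mod that already talks to OpenAI would mostly need its base URL made configurable.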
Drudge Jan 17, 2024 @ 1:11am
Originally posted by den0620:
Originally posted by insomnyawolf:
Maybe we could create a plugin/mod that lets us use llama.cpp or exllama locally. Do you think it would be worth investigating further?
The llama.cpp Python bindings already ship a server that exposes an OpenAI-compatible API. Perhaps we can replace OpenAI's endpoint with the local one.

I beg of you
insomnyawolf Jan 19, 2024 @ 6:41am
Things have advanced a lot lately, so we could try it nowadays; even ooba's UI has an "OpenAI"-compatible API, even with streaming.

And models/libraries have advanced a lot as well.

Exllama2 is nuts for speed; if you can load the whole model in VRAM it can answer in less than 5 seconds (but it requires modern NVIDIA GPUs).

Llamacpp is a bit slower but can work anywhere (even without a GPU, though it will be slow) and can use some system memory if VRAM is not enough.

Which model to use depends on what you are looking for; I have found that nowadays there are models that can hold a nice conversation with the user, but they still narrate things and break sometimes.

If any of you are still really interested, send me a message anywhere and we can try to make it real.
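On the streaming point: OpenAI-compatible servers generally stream completions as server-sent events, one `data:` line per token delta, ending with a `data: [DONE]` sentinel. A small sketch of the client-side parsing, independent of any particular server; the payload shape assumed here is the OpenAI chat-delta format:

```python
import json

def parse_sse_chunk(line: str):
    """Extract the text delta from one server-sent-event line, if any.

    OpenAI-style streaming sends lines like:
        data: {"choices":[{"delta":{"content":"Hi"}}]}
    and finishes with:
        data: [DONE]
    """
    line = line.strip()
    if not line.startswith("data:"):
        return None  # blank keep-alive lines and comments carry no text
    data = line[len("data:"):].strip()
    if data == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(data)
    return chunk["choices"][0].get("delta", {}).get("content")
```

For a desktop pet, feeding each delta straight into the speech bubble is what makes the reply feel instant instead of arriving as one block after several seconds.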
Shetani Jan 19, 2024 @ 11:03am
Originally posted by insomnyawolf:
Things have advanced a lot lately, so we could try it nowadays; even ooba's UI has an "OpenAI"-compatible API, even with streaming.

And models/libraries have advanced a lot as well.

Exllama2 is nuts for speed; if you can load the whole model in VRAM it can answer in less than 5 seconds (but it requires modern NVIDIA GPUs).

Llamacpp is a bit slower but can work anywhere (even without a GPU, though it will be slow) and can use some system memory if VRAM is not enough.

Which model to use depends on what you are looking for; I have found that nowadays there are models that can hold a nice conversation with the user, but they still narrate things and break sometimes.

If any of you are still really interested, send me a message anywhere and we can try to make it real.
if you actually can that would be sick lol
ninefid Jan 21, 2024 @ 8:05am
Originally posted by insomnyawolf:
Things have advanced a lot lately, so we could try it nowadays; even ooba's UI has an "OpenAI"-compatible API, even with streaming.

And models/libraries have advanced a lot as well.

Exllama2 is nuts for speed; if you can load the whole model in VRAM it can answer in less than 5 seconds (but it requires modern NVIDIA GPUs).

Llamacpp is a bit slower but can work anywhere (even without a GPU, though it will be slow) and can use some system memory if VRAM is not enough.

Which model to use depends on what you are looking for; I have found that nowadays there are models that can hold a nice conversation with the user, but they still narrate things and break sometimes.

If any of you are still really interested, send me a message anywhere and we can try to make it real.

Does ExLlama have a server? I tried to move from llama.cpp to exl as soon as I got a powerful enough GPU, but I couldn't find one.
v1ckxy Apr 9, 2024 @ 9:28am
Connect the ChatGPT mod to text-generation-webui:

https://github.com/oobabooga/text-generation-webui/wiki/12-%E2%80%90-OpenAI-API

ZERO cost. You can run an instance almost anywhere.
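A quick way to sanity-check the local endpoint before pointing the mod at it is the `/v1/models` route that OpenAI-compatible servers expose. The sketch below assumes text-generation-webui was started with its API enabled on the default port 5000; adjust the base URL to whatever the wiki page above says for your setup.

```python
import json
import urllib.request

# Assumption: text-generation-webui's OpenAI-compatible API listens on
# port 5000 by default; see the wiki link above for the exact flags.
WEBUI_BASE_URL = "http://127.0.0.1:5000/v1"

def models_url(base_url=WEBUI_BASE_URL):
    """Endpoint that lists the models the server has loaded."""
    return f"{base_url.rstrip('/')}/models"

def check_server(base_url=WEBUI_BASE_URL):
    """Return the model ids the local server reports (server must be running)."""
    with urllib.request.urlopen(models_url(base_url), timeout=5) as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

If `check_server()` returns a non-empty list, the mod's API URL can be pointed at the same base and requests should go through unchanged.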
Last edited by v1ckxy; Apr 12, 2024 @ 1:11am