Thanks for the info. I know about llama.cpp and such, but the problem is that I'm looking to run speech-to-text, an LLM, and text-to-speech all at the same time. I only have 8 GB, so yeah, even CPU won't cut it.
I'm planning to upgrade once I get a job or something.
8 GB of regular RAM? That's not much. No, that won't cut it if you also want all the bells and whistles. Maybe try something like Mistral-7B-OpenOrca with llama.cpp quantized to 4-bit, without the STT and TTS. It's small and quite decent. Otherwise you might want to rent a cloud GPU by the hour on something like runpod.io, use a free service like Google Colab, or just upgrade.
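To see why 4-bit quantization is the suggestion here, a rough back-of-the-envelope memory estimate helps. This is a minimal sketch with illustrative numbers: the 1 GB flat overhead for the KV cache and runtime is an assumption, and real llama.cpp GGUF files vary a bit by quant type.

```python
# Rough RAM estimate for running a quantized model on CPU.
# Illustrative only: the 1 GB overhead for KV cache/runtime is an assumption.
def model_ram_gb(n_params_b: float, bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    """Approximate resident size: weight bytes plus a flat runtime overhead."""
    weights_gb = n_params_b * bits_per_weight / 8  # billions of params * bytes per weight
    return weights_gb + overhead_gb

print(f"7B @ 4-bit:  ~{model_ram_gb(7, 4):.1f} GB")   # fits in 8 GB, barely
print(f"7B @ 16-bit: ~{model_ram_gb(7, 16):.1f} GB")  # far too big for 8 GB
```

This is why a 4-bit 7B model alone is borderline on 8 GB, and why adding STT and TTS models on top of it tips the machine over.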