Local model execution via Ollama for elizaOS
Operation | Models | Notes |
---|---|---|
TEXT_GENERATION | llama3, mistral, gemma | Various sizes available |
EMBEDDING | nomic-embed-text, mxbai-embed-large | Local embeddings |
OBJECT_GENERATION | All text models | JSON generation |
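The operations above map onto Ollama's local REST API: `/api/generate` for text (with `"format": "json"` constraining output for object generation) and `/api/embeddings` for embeddings. Below is a minimal sketch that builds those request bodies, assuming a default Ollama server at `localhost:11434`; the helper names are illustrative, not part of the plugin.

```python
import json

OLLAMA_URL = "http://localhost:11434"  # default Ollama endpoint (assumption: local install)

def generate_payload(model: str, prompt: str, json_mode: bool = False) -> dict:
    """Build a request body for Ollama's POST /api/generate endpoint."""
    body = {"model": model, "prompt": prompt, "stream": False}
    if json_mode:
        # Ollama's JSON mode constrains the model to emit valid JSON
        # (this backs the OBJECT_GENERATION operation).
        body["format"] = "json"
    return body

def embedding_payload(model: str, text: str) -> dict:
    """Build a request body for Ollama's POST /api/embeddings endpoint."""
    return {"model": model, "prompt": text}

# TEXT_GENERATION with llama3
gen = generate_payload("llama3", "Summarize the plugin in one sentence.")
# EMBEDDING with nomic-embed-text
emb = embedding_payload("nomic-embed-text", "local embeddings")

print(json.dumps(gen))
```

Send each payload with an HTTP POST to `OLLAMA_URL` plus the endpoint path; with `"stream": False` the server returns a single JSON response instead of a token stream.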
Available embedding models:

- nomic-embed-text: Balanced performance
- mxbai-embed-large: Higher quality
- all-minilm: Lightweight option

Hardware requirements by model size:

Model Size | RAM Required | GPU Recommended |
---|---|---|
7B | 8GB | Optional |
13B | 16GB | Yes |
70B | 64GB+ | Required |