# Ollama

[Ollama](https://ollama.com/) allows you to run open-source large language models, such as Llama 3, locally.

## 🚀 Deployment

This service is deployed using Docker Compose.

### Files

- `compose/compose.yml`: Docker Compose configuration.

## 🔗 Links

- [Local Homelab](../)
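A minimal sketch of what `compose/compose.yml` might look like; the actual file in this repo may differ (image tag, port mapping, and volume name here are assumptions, though `11434` is Ollama's default API port):

```yaml
# Hypothetical example — check compose/compose.yml for the real configuration.
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"   # Ollama's HTTP API (default port)
    volumes:
      - ollama:/root/.ollama   # persist downloaded models across restarts

volumes:
  ollama:
```

Once the container is up, a model can be pulled and queried, e.g. `docker compose exec ollama ollama run llama3`.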