🦙 Ollama

Two guides covering common problems, patterns, and production issues in Ollama.

Ollama lets you run open-source LLMs locally on your own hardware on macOS, Linux, or Windows, with optional GPU acceleration. It provides a simple CLI and an OpenAI-compatible REST API, making it straightforward to plug local models (Llama, Mistral, Qwen, Gemma, and 100+ others) into any agent framework.
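Because the API is OpenAI-compatible, existing OpenAI client code works against a local model just by pointing the base URL at the local server. A minimal sketch in Python, assuming `ollama serve` is running on the default port (11434) and a model has been pulled with `ollama pull`; the `llama3.2` tag here is only an example, so substitute any model you have locally:

```python
# Minimal sketch: chat with a local Ollama model through its
# OpenAI-compatible endpoint. Assumes `ollama serve` is running on the
# default port and that the `llama3.2` tag (an example) has been pulled.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # the client requires a key, but Ollama ignores it
)

response = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Explain quantisation in one sentence."}],
)
print(response.choices[0].message.content)
```

The same base-URL swap is all most agent frameworks need, since they accept a custom OpenAI endpoint in their configuration.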

  • 100+ models: Llama, Mistral, Qwen, Gemma, DeepSeek, and more
  • OpenAI-compatible REST API for seamless framework integration
  • GPU acceleration on Apple Silicon, NVIDIA, and AMD GPUs
  • Model quantisation to run large models on consumer hardware
  • Support for multimodal and specialised models, such as vision and code models (see the native-API sketch after this list)
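Beyond the OpenAI-compatible surface, Ollama also exposes a native REST API under `/api`, which is how vision input is usually sent. A minimal sketch, assuming a multimodal model such as `llava` has been pulled and with `photo.jpg` as a placeholder path:

```python
# Minimal sketch: send an image to a local vision model through
# Ollama's native /api/chat endpoint. Assumes a multimodal model such
# as `llava` has been pulled; the image path is a placeholder.
import base64
import requests

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "llava",
        "messages": [
            {
                "role": "user",
                "content": "Describe this image in one sentence.",
                "images": [image_b64],  # the native API takes base64-encoded images
            }
        ],
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])
```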
