In progress: this page is being written.
Quickstart on Minikube
Run LLMKube end-to-end on a local Minikube cluster; no cloud account is required. This is the shortest path from a fresh laptop to a working OpenAI-compatible endpoint.
What this page will cover
- System requirements (4+ CPUs, 8 GB RAM, 20 GB disk) and tool installation on macOS, Linux, and Windows.
- Starting Minikube with the right resource flags so the operator and a small model both fit.
- Installing the LLMKube operator with Helm and verifying the controller is healthy.
- Deploying a TinyLlama or Phi-3 model from the catalog and port-forwarding to test the API.
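The steps above can be sketched as a single shell session. This is an illustrative outline under stated assumptions, not the final instructions: the Helm repository URL, chart and release names, the model manifest filename, and the service name are placeholders; the full page will give the exact values from the LLMKube project.

```shell
# Start Minikube with enough resources for the operator plus a small model
# (flags mirror the system requirements listed above).
minikube start --cpus=4 --memory=8192 --disk-size=20g

# Install the LLMKube operator with Helm.
# NOTE: the repo URL and chart name below are placeholders, not the real ones.
helm repo add llmkube https://example.com/llmkube-charts   # assumed URL
helm repo update
helm install llmkube llmkube/llmkube-operator \
  --namespace llmkube-system --create-namespace            # assumed chart/namespace

# Verify the controller is healthy before deploying a model.
kubectl get pods -n llmkube-system
kubectl wait --for=condition=Available deployment --all \
  -n llmkube-system --timeout=120s

# Deploy a small catalog model (manifest name is hypothetical),
# then port-forward its service and test the OpenAI-compatible API.
kubectl apply -f tinyllama.yaml                            # hypothetical manifest
kubectl port-forward svc/tinyllama 8080:8080 &             # assumed service name/port
curl http://localhost:8080/v1/models                       # OpenAI-style model listing
```

The `minikube start` resource flags, `helm install --create-namespace`, `kubectl wait`, and `kubectl port-forward` invocations are standard usage of those tools; only the LLMKube-specific names are assumptions pending the finished page.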