Run production LLMs
on your own hardware
Overnight, a local model on two $400 GPUs wrote the operator's next feature. Merged as PR #283. LLMKube makes self-hosted inference production-ready.
See it in action
Deploy LLMs with any runtime in seconds using the llmkube CLI
What's happening here
Recent posts from the lab
62.2% on Aider Polyglot from a MacBook Pro, while the other model we tried scored 4%. Here's what actually happened, with a working cost loop attached.
Qwen3.6-35B-A3B Q8 on a MacBook Pro M5 Max scored 62.2% on Aider Polyglot (n=225/225), beating Claude Sonnet 4 with 32k thinking, o1-high…
We ran Qwen3.6-27B on $800 of consumer GPUs, day one: llama.cpp vs vLLM
A Kubernetes-native bake-off on 2× RTX 5060 Ti, published 48 hours after Tongyi Lab dropped the model. vLLM wins throughput by 3-4× at high…
I Sent the Agents Loose on My Kubernetes Operator. Here's What They Shipped.
I pointed a fleet of coding agents at LLMKube and told them to audit the repo and close what they found. Six hours later, 17 PRs had landed…
Why LLMKube?
Local LLMs are great for prototyping. Scaling them for a team is where it gets hard.
The scaling problem
- × Silent failures with no alerts
- × Multi-GPU memory math by trial and error
- × Updates that break your setup
- × Docker Compose that doesn't scale
- × One person managing everything
- × Every machine set up by hand
With LLMKube
- Pluggable runtimes: vLLM, TGI, llama.cpp, or bring your own
- HPA autoscaling that responds to real inference metrics
- GPU layer offloading with custom sharding splits
- Infrastructure as code, not scripts and duct tape
- Grafana dashboards for inference metrics out of the box
- CUDA 13 and NVIDIA Blackwell GPU support
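As a sketch of what metric-driven autoscaling can look like, here is a standard HorizontalPodAutoscaler targeting a per-pod inference metric. The Deployment name, metric name, and target value are illustrative assumptions, not LLMKube defaults:

```yaml
# Illustrative sketch: scale an inference Deployment on a custom
# per-pod metric exposed through the Kubernetes custom metrics API.
# "phi-3-mini" and "inference_queue_depth" are assumed names.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: phi-3-mini-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: phi-3-mini
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Pods
      pods:
        metric:
          name: inference_queue_depth
        target:
          type: AverageValue
          averageValue: "5"
```

Scaling on queue depth or tokens-per-second, rather than CPU, is what lets replicas track real inference load.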
vLLM for speed. TGI for flexibility. llama.cpp for efficiency. LLMKube for all of them.
One operator, every runtime. The platform layer your inference stack is missing.
Deploy an LLM in seconds
Simple, declarative YAML that feels native to Kubernetes developers
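As a sketch of the deploy step, assuming the Model manifest below is saved as phi-3-mini.yaml (the `models` resource plural is an assumption about LLMKube's CRD, not confirmed here):

```shell
# Apply the Model manifest and let the operator reconcile it.
kubectl apply -f phi-3-mini.yaml

# "models" assumes the CRD registers that plural name.
kubectl get models
kubectl describe model phi-3-mini
```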
apiVersion: inference.llmkube.dev/v1alpha1
kind: Model
metadata:
  name: phi-3-mini
spec:
  source: https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf
  format: gguf
  quantization: Q4_K_M
  hardware:
    accelerator: cuda
    gpu:
      enabled: true
      count: 1
  resources:
    cpu: "2"
    memory: "4Gi"
Early Adopter Program
Help shape the future of LLMKube and get direct access to the maintainer.
What you get
- Private Discord with other early adopters
- Direct input on the roadmap
- Your logo on our website (when ready)
- Early access to new features
What we need
- Real-world feedback on your use case
- 30 minutes monthly for a feedback call
- Permission to share your story (anonymized if needed)
Apply to join
Ready to deploy your first LLM?
Join the community of developers deploying LLMs on Kubernetes.
Open source and free forever