The reference catalog for LLM cloud APIs.
Side-by-side pricing, context windows, and capabilities across 56 providers and hundreds of models. Estimate your monthly bill before you make a single API call.
All providers
Sorted by name · 56 total

Frequently asked questions
What is LLM Cloud Hub?
LLM Cloud Hub is a side-by-side comparison catalog for large language model APIs. It tracks pricing, context windows, capabilities, and benchmarks across hundreds of models from every major provider, refreshed nightly.
How often is the data updated?
Pricing and model metadata are refreshed every night from each provider's public catalog. Last-changed timestamps are shown on every page so you can verify the data is current before making a decision.
How do I compare LLM API prices?
Pick any two models and visit /compare/{providerA}/{modelA}/{providerB}/{modelB} for a side-by-side breakdown. The cost calculator at /cost lets you plug in your own monthly token volumes for a workload-specific estimate.
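The estimate behind the /cost calculator is plain per-million-token arithmetic. Here is a minimal sketch; the function name and the $0.15 / $0.60 prices are illustrative placeholders, not figures from the catalog:

```python
# Sketch of the per-token estimate a cost calculator like /cost performs.
# Prices here are illustrative placeholders, not real catalog figures.

def monthly_cost(input_tokens: int, output_tokens: int,
                 input_price_per_mtok: float, output_price_per_mtok: float) -> float:
    """Estimated monthly bill in USD, given token volumes and $/1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_mtok \
         + (output_tokens / 1_000_000) * output_price_per_mtok

# Example: 50M input + 10M output tokens at $0.15 / $0.60 per million tokens
estimate = monthly_cost(50_000_000, 10_000_000, 0.15, 0.60)
print(f"${estimate:.2f}")  # → $13.50
```

Plugging your own volumes into a formula like this reproduces what the workload-specific estimate on /cost does.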
Can I use the data in my own app?
Yes — the full catalog is available as JSON at /api/v1/models with no auth required. Please attribute LLM Cloud Hub if you re-publish.
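As a rough sketch of consuming that feed: the snippet below filters a catalog-style JSON list for the cheapest model by input price. The field names (`provider`, `model`, `input_price`, `output_price`) are assumptions about the response shape, not documented guarantees, and the sample payload stands in for a real fetch of /api/v1/models:

```python
# Hypothetical sketch of consuming the /api/v1/models catalog feed.
# Field names below are assumed, not a documented schema.
import json

# A real script would fetch https://<host>/api/v1/models; here we parse a
# hand-written sample payload instead of hitting the network.
sample = json.loads("""
[
  {"provider": "ExampleAI", "model": "example-mini", "input_price": 0.10, "output_price": 0.40},
  {"provider": "ExampleAI", "model": "example-pro",  "input_price": 2.50, "output_price": 10.00}
]
""")

cheapest = min(sample, key=lambda m: m["input_price"])
print(cheapest["model"])  # → example-mini
```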
Which is the cheapest LLM API right now?
The cheapest model depends on your workload mix. Sort the home-page table or the comparison table by input or output price — typical leaders include Google's Gemini Flash family and DeepSeek models for general chat workloads.
Where does the pricing data come from?
Prices, context windows, and capabilities are sourced nightly from OpenRouter's public catalog, which aggregates official figures from each LLM provider. Every model page shows the exact UTC timestamp of its last pricing snapshot under "Pricing freshness".