Cheapest GPU clouds · April 2026
Rent GPU compute from $0.03/h for inference, or $0.10/h for training. 13 budget GPU clouds ranked by raw price — with the trade-offs spelled out.
If your priority is squeezing maximum compute out of every dollar, four GPU clouds dominate the budget tier in 2026: Vast.ai, RunPod, Hetzner GPU, and Paperspace. Hyperscalers (AWS, GCP, Azure) are systematically 3–5× more expensive for raw GPU compute and only make sense if you need their proprietary ML services.
The cheapest GPU clouds use one or more of these tactics:
- Peer-to-peer marketplaces that resell idle hardware (Vast.ai, TensorDock)
- Consumer GPUs (RTX 3090/4090) instead of datacenter cards
- Interruptible community tiers with no uptime SLA
Reality check: the cheapest tier requires fault-tolerant code (checkpointing, retry logic). For always-on production inference, add 50–80% to the sticker price for "Secure" or "On-Demand" tiers.
| Provider | Starting price | Top GPUs (max VRAM) | Rating | CTA |
|---|---|---|---|---|
| Salad | from $0.03/h | RTX 3090, RTX 4090, RTX 3080 (≤24 GB) | ★★★★☆ | View pricing |
| Vast.ai (Editor's Choice) | from $0.10/h | RTX 3090, RTX 4090, A100 (≤80 GB) | ★★★★☆ | View pricing |
| Hyperstack | from $0.11/h | RTX A6000, A100 80GB, H100 (≤80 GB) | ★★★★☆ | View pricing |
| RunPod (Editor's Choice) | from $0.20/h | RTX 3090, RTX 4090, A100 80GB (≤80 GB) | ★★★★★ | View pricing |
| TensorDock | from $0.21/h | RTX 4090, RTX 3090, A100 80GB (≤80 GB) | ★★★★☆ | View pricing |
| Massed Compute | from $0.35/h | RTX A6000, A40, A100 80GB (≤80 GB) | ★★★★☆ | View pricing |
| Hetzner GPU | from €0.35/h | A100 PCIe, GTX 1080 (≤80 GB) | ★★★★☆ | View pricing |
| Jarvis Labs | from $0.39/h | RTX 6000 Ada, A100 40GB, A100 80GB (≤80 GB) | ★★★★☆ | View pricing |
| Lyceum (Editor's Choice) | from $0.39/h | A100 80GB, H100, H200 (≤141 GB) | ★★★★☆ | View pricing |
| Crusoe | from $0.40/h | H100, H200, B200 (≤192 GB) | ★★★★☆ | View pricing |
| Paperspace | from $0.45/h | A100, A6000, RTX 4000 (≤80 GB) | ★★★★☆ | View pricing |
| OVH GPU | from €0.54/h | T4, V100, A100 (≤80 GB) | ★★★★☆ | View pricing |
| Scaleway | from €0.83/h | L4, L40S, H100 (≤80 GB) | ★★★★☆ | View pricing |
- **Salad:** Distributed inference cloud — RTX 3090/4090 from $0.03/h, massive horizontal scale (1000+ nodes)
- **Vast.ai:** Cheapest GPU cloud — peer-to-peer marketplace for budget training
- **Hyperstack:** Global GPU cloud specialist — H100, A100 80GB and L40 from $0.11/h
- **RunPod:** Best-value GPU cloud — huge selection, community + secure cloud
- **TensorDock:** Marketplace GPU cloud — RTX 4090 from $0.21/h, H100 from $1.99/h
- **Massed Compute:** Workstation-grade GPUs for AI/ML/VFX — A100 from $1.79/h
**What is the cheapest GPU cloud?**
Salad starts at $0.03/h on distributed consumer GPUs (RTX 3090/4090), but it is built for stateless inference workloads only, not training. For cheap but reliable training, Hyperstack (RTX A6000 from $0.11/h), Vast.ai (community RTX 3090 from $0.10/h, interruptible), TensorDock (RTX 4090 from $0.21/h), or RunPod Community ($0.20/h) are all stronger options. The right pick depends on whether you need persistent state.
**Are the cheapest tiers safe for production?**
No, not the marketplace/community tiers. Use them for batch training with checkpoints, hobby projects, hyperparameter sweeps, and batch inference. For production APIs, use RunPod Secure ($0.59/h+), Lambda Labs, or Hetzner GPU: still cheap, but with uptime SLAs.
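The "batch training with checkpoints" pattern that makes interruptible tiers viable can be sketched in a few lines. This is a minimal illustration, not any provider's API; the file path, state dict, and step counter are placeholders:

```python
import json
import os
import tempfile

CKPT_PATH = "checkpoint.json"  # placeholder; put this on a persistent volume

def save_checkpoint(step, state, path=CKPT_PATH):
    # Write to a temp file, then atomically rename, so a preemption
    # mid-write never leaves a corrupt checkpoint behind.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, path)

def load_checkpoint(path=CKPT_PATH):
    # Resume from the last saved step, or start fresh.
    if os.path.exists(path):
        with open(path) as f:
            ckpt = json.load(f)
        return ckpt["step"], ckpt["state"]
    return 0, {}

def train(total_steps=100, ckpt_every=10):
    step, state = load_checkpoint()
    while step < total_steps:
        state = {"loss": 1.0 / (step + 1)}  # stand-in for one real training step
        step += 1
        if step % ckpt_every == 0:
            save_checkpoint(step, state)
    return step, state
```

If the node is reclaimed mid-run, relaunching `train()` on a fresh instance (with the checkpoint file on persistent storage) loses at most `ckpt_every` steps of work.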
**Why is AWS so much more expensive?**
AWS bundles its GPU compute with proprietary services (SageMaker, IAM, VPC, support tiers) and prices for enterprise customers who value the ecosystem. For pure compute, you pay 3–5× more. Specialist clouds skip this overhead. Use AWS only when you need its ecosystem.
**What's the cheapest GPU for fine-tuning Llama 3 8B?**
Vast.ai community RTX 4090 at $0.34/h or RunPod Community 4090 at $0.39/h. Both fit a Llama 3 8B QLoRA run in 24 GB. Total cost for a typical ~12-hour fine-tune: $4–5. Compare AWS at $3.06/h, or about $37 for the same job.
**What hidden costs should I watch for?**
Persistent storage ($0.10–0.20/GB/month), egress data transfer ($0.05–0.12/GB), static IPs ($3–10/month), and idle-time charges (some providers bill for stopped pods that retain storage). RunPod and Vast.ai are the most transparent; hyperscalers have the worst reputation for hidden costs.
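Those line items add up, and a back-of-the-envelope estimator makes the gap between sticker price and effective price concrete. The function and default rates below are illustrative assumptions (mid-range of the figures quoted above), not any provider's actual billing formula:

```python
def monthly_cost(gpu_rate, hours, storage_gb=0, egress_gb=0,
                 storage_rate=0.15, egress_rate=0.08, static_ip=0.0):
    """Effective bill: GPU hours plus the add-ons sticker prices omit."""
    compute = gpu_rate * hours           # advertised hourly rate x runtime
    storage = storage_gb * storage_rate  # persistent volume, $/GB/month
    egress = egress_gb * egress_rate     # data transferred out, $/GB
    return round(compute + storage + egress + static_ip, 2)

# A 12-hour fine-tune on a $0.34/h RTX 4090, no extras:
print(monthly_cost(0.34, 12))            # → 4.08

# The same GPU always-on for a month (720 h) with a 200 GB
# volume and 50 GB of egress:
print(monthly_cost(0.34, 720, 200, 50))  # → 278.8
```

In the second scenario the add-ons contribute $34 of a $278.80 bill — roughly the 10–15% overhead that sticker-price comparisons quietly ignore.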