Vast.ai
Cheapest GPU cloud — peer-to-peer marketplace for budget training
- Absolute cheapest GPU compute available
- Widest GPU variety including consumer cards
A100 cloud comparison · April 2026
The cost-effective workhorse for ML — 18 clouds with A100 40GB and 80GB compared. From $1.09/h to $3.67/h for identical hardware.
The NVIDIA A100 remains the workhorse GPU for ML in 2026. The H100 offers higher raw throughput and FP8 support, but the A100 wins on cost-effectiveness for fine-tuning models up to 70B parameters, embedding generation, image-model training, and most research workloads.
18 clouds offer the A100 on-demand. Prices for identical hardware vary 3.4× — from $1.09/h (Vast.ai community, 80GB) to $3.67/h (AWS on-demand). Availability is significantly better than for the H100; stockouts are rare except on hyperscalers during peak hours.
Best value picks: RunPod and Lambda Labs for production reliability, Vast.ai for batch training, Hetzner GPU for EU/GDPR compliance.
| Provider | Starting Price | Top GPUs | Max VRAM | Highlights | Rating | CTA |
|---|---|---|---|---|---|---|
| Vast.ai (Editor's Choice) | from $0.10/h | RTX 3090, RTX 4090, A100 | ≤80 GB | Cheapest GPU cloud — peer-to-peer marketplace for budget training | ★★★★☆ | View pricing |
| Hyperstack | from $0.11/h | RTX A6000, A100 80GB, H100 | ≤80 GB | Global GPU cloud specialist — H100, A100 80GB and L40 from $0.11/h | ★★★★☆ | View pricing |
| RunPod (Editor's Choice) | from $0.20/h | RTX 3090, RTX 4090, A100 80GB | ≤80 GB | Best value GPU cloud — huge selection, community + secure cloud | ★★★★★ | View pricing |
| TensorDock | from $0.21/h | RTX 4090, RTX 3090, A100 80GB | ≤80 GB | Marketplace GPU cloud — RTX 4090 from $0.21/h, H100 from $1.99/h | ★★★★☆ | View pricing |
| Hetzner GPU | from €0.35/h | A100 PCIe, GTX 1080 | ≤80 GB | Affordable EU GPU servers — A100 at European prices | ★★★★☆ | View pricing |
| Massed Compute | from $0.35/h | RTX A6000, A40, A100 80GB | ≤80 GB | Workstation-grade GPUs for AI/ML/VFX — A100 from $1.79/h | ★★★★☆ | View pricing |
| Jarvis Labs | from $0.39/h | RTX 6000 Ada, A100 40GB, A100 80GB | ≤80 GB | | ★★★★☆ | View pricing |
| Lyceum (Editor's Choice) | from $0.39/h | A100 80GB, H100, H200 | ≤141 GB | | ★★★★☆ | View pricing |
| Crusoe | from $0.40/h | H100, H200, B200 | ≤192 GB | | ★★★★☆ | View pricing |
| Paperspace | from $0.45/h | A100, A6000, RTX 4000 | ≤80 GB | | ★★★★☆ | View pricing |
| OVH GPU | from €0.54/h | T4, V100, A100 | ≤80 GB | | ★★★★☆ | View pricing |
| Lambda Labs (Editor's Choice) | from $1.10/h | A100 40GB, A100 80GB, H100 | ≤80 GB | | ★★★★★ | View pricing |
| Together AI | from $1.49/h | H100, H200, A100 80GB | ≤141 GB | | ★★★★☆ | View pricing |
| Nebius (Editor's Choice) | from $1.55/h | H100, H200, B200 | ≤192 GB | | ★★★★★ | View pricing |
| CoreWeave | from $2.06/h | H100 SXM, A100 SXM, A40 | ≤80 GB | | ★★★★☆ | View pricing |
| Google Cloud GPU | from $2.48/h | A100 40GB, A100 80GB, H100 | ≤80 GB | | ★★★★☆ | View pricing |
| Azure GPU (NCv3/NDA) | from $2.94/h | A100, H100, V100 | ≤80 GB | | ★★★★☆ | View pricing |
| AWS GPU (EC2) | from $3.06/h | A100, H100, V100 | ≤80 GB | | ★★★★☆ | View pricing |
A100 40GB vs 80GB: which do you need?
For Llama-3 8B / Mistral 7B / Stable Diffusion training: 40GB is sufficient and ~30% cheaper. For Llama-3 70B QLoRA, video AI, or batch sizes >32 on 13B models: 80GB. FlashAttention-3 and 8-bit optimizers cut memory use further, letting the 40GB card cover even more workloads.
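As a rough sanity check on these sizing rules, the VRAM footprint of a QLoRA fine-tune can be sketched in a few lines. The bytes-per-parameter figures, the ~1% trainable-adapter fraction, and the flat activation allowance below are rule-of-thumb assumptions for illustration, not measured values:

```python
# Rough VRAM estimate for QLoRA fine-tuning. All constants are
# rule-of-thumb assumptions: 4-bit base weights, ~1% of parameters
# trainable as fp16 LoRA adapters, fp32 Adam states on the adapters,
# plus a flat allowance for activations and CUDA overhead.
def qlora_vram_gb(params_b: float, activation_gb: float = 6.0) -> float:
    base = params_b * 0.5           # 4-bit quantized weights: ~0.5 bytes/param
    lora = params_b * 0.01 * 2      # ~1% trainable params in fp16 (2 bytes each)
    optim = params_b * 0.01 * 8     # Adam m+v states in fp32 for the LoRA params
    return base + lora + optim + activation_gb

for size_b in (7, 13, 70):
    print(f"{size_b}B model -> ~{qlora_vram_gb(size_b):.0f} GB")
```

Under these assumptions a 7B or 13B QLoRA run fits comfortably in 40 GB, while a 70B run lands near ~48 GB and needs the 80 GB card — consistent with the guidance above.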
When is the A100 the right choice over the H100?
Choose the A100 for: fine-tuning models ≤70B with QLoRA, ML research, image generation, RAG/embedding pipelines, and any workload not bottlenecked by FP8/Transformer Engine. The H100 is only worth the premium for full fine-tunes of large models, multi-node training, or production inference of >70B models.
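The underlying break-even logic is simple: cost per unit of work is hourly price divided by relative throughput, so the H100 only wins when its speedup on your workload exceeds its price ratio over the A100. The $2.99/h H100 rate below is an assumed figure for illustration; the A100 rate is this article's RunPod Secure price:

```python
# Sketch of the A100-vs-H100 break-even: cost per unit of work is
# price / throughput, so the pricier GPU wins only when its speedup
# exceeds its price premium. Prices are illustrative assumptions.
def cheaper_gpu(a100_price: float, h100_price: float, h100_speedup: float) -> str:
    # h100_speedup: H100 throughput relative to A100 on YOUR workload.
    return "H100" if h100_price / h100_speedup < a100_price else "A100"

print(cheaper_gpu(1.79, 2.99, 1.5))  # modest speedup (no FP8 benefit) -> A100
print(cheaper_gpu(1.79, 2.99, 2.5))  # large speedup (FP8-heavy training) -> H100
```

At these prices the crossover sits at a ~1.67× speedup — memory-bound fine-tuning rarely clears it, FP8-heavy training often does.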
Which provider has the cheapest A100?
Vast.ai community at $1.09/h (interruptible) is the absolute cheapest. For reliable production: RunPod Secure at $1.79/h or Lambda Labs at $1.79/h. AWS p4d on-demand is $3.67/h — avoid unless you need SageMaker integration.
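Whether the interruptible discount is worth it depends on how much time preemptions cost you. A minimal sketch using this article's prices, with the 20% restart overhead for interruptible capacity as an assumption rather than a measured figure:

```python
# Sketch: total cost of a 40-hour fine-tuning job on interruptible
# marketplace capacity vs reliable on-demand capacity. The 20% restart
# overhead (re-queueing, checkpoint reloads) is an assumed figure.
def job_cost(hours: float, rate: float, restart_overhead: float = 0.0) -> float:
    return hours * (1 + restart_overhead) * rate

vast_interruptible = job_cost(40, 1.09, restart_overhead=0.20)  # $52.32
runpod_secure = job_cost(40, 1.79)                              # $71.60
print(f"Vast.ai interruptible: ${vast_interruptible:.2f}")
print(f"RunPod Secure:         ${runpod_secure:.2f}")
```

Even with a sizable overhead penalty the interruptible run comes out cheaper here — provided the job checkpoints frequently enough that a preemption costs hours, not the whole run.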
Is the A100 still worth it in 2026?
Yes — the A100 is the cost-effective default for most ML workloads. The H100 dominates only large-scale training (≥70B full fine-tune, multi-node) and high-throughput FP8 inference. For ~80% of practical ML, the A100 80GB is the better buy.
Where can I rent an A100 in Europe?
Hetzner GPU offers A100 PCIe in Germany and Finland at €0.35/h — the cheapest A100 in Europe. OVH (FR/DE) offers the A100 at €0.54/h. Both are GDPR-compliant. CoreWeave also has EU H100/A100 capacity.