Independent comparison · Updated April 2026 · 20 GPU providers tested · Real hourly pricing


Best H100 Cloud Providers 2026

Where to actually get NVIDIA H100 capacity — 16 clouds compared on on-demand price, availability and cluster size. From $1.99/h.

The H100 market in April 2026

The NVIDIA H100 is the dominant accelerator for serious LLM training and high-throughput inference in 2026. Compared to the A100, it delivers ~3× FP16 throughput and ~6× FP8 throughput thanks to the Transformer Engine — but availability is the bottleneck, not performance.
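To make the FP8 claim concrete: on an H100, FP8 matmuls are exposed through NVIDIA's Transformer Engine library. The snippet below is a minimal sketch, assuming the transformer-engine PyTorch package is installed and a Hopper-class GPU is present; the layer and batch sizes are illustrative, not taken from this comparison.

```python
# Minimal FP8 sketch using NVIDIA Transformer Engine (assumes a Hopper GPU
# and `pip install transformer-engine`). Sizes are illustrative only.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe: E4M3 for forward, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

layer = te.Linear(4096, 4096, bias=True, params_dtype=torch.bfloat16).cuda()
x = torch.randn(2048, 4096, device="cuda", dtype=torch.bfloat16)

# Matmuls inside this context run on the H100's FP8 tensor cores; A100-class
# GPUs have no FP8 path, which is where most of the throughput gap comes from.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(x)

print(out.shape, out.dtype)
```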

Across the 16 GPU clouds with on-demand H100s, hourly pricing spans $1.99/h to $4.10/h for identical hardware. The choice is rarely just price — it's where you can actually get H100 capacity right now.

Specialist clouds win on price. RunPod, Lambda Labs and CoreWeave dominate on-demand H100 availability and cost 40–60% less than AWS p5 / GCP A3 / Azure ND H100 v5 for equivalent compute.

| Provider | Starting price | Top GPUs | Max VRAM | Highlights | Rating |
|---|---|---|---|---|---|
| Hyperstack | from $0.11/h | RTX A6000, A100 80GB, H100 | ≤80GB | Outstanding entry pricing for A6000; full networking stack (VPC, firewall, NAT) | ★★★★☆ 4.3 |
| TensorDock | from $0.21/h | RTX 4090, RTX 3090, A100 80GB | ≤80GB | Among the cheapest H100 access in 2026; wide host network = better availability | ★★★★☆ 4.2 |
| Massed Compute | from $0.35/h | RTX A6000, A40, A100 80GB | ≤80GB | Strong A6000 / A40 lineup at moderate price; pre-built VFX and AI templates | ★★★★☆ 4.1 |
| Jarvis Labs | from $0.39/h | RTX 6000 Ada, A100 40GB, A100 80GB | ≤80GB | Excellent pricing for H100; RTX 6000 Ada (48GB) at moderate cost | ★★★★☆ 4.3 |
| Crusoe | from $0.40/h | H100, H200, B200 | ≤192GB | Among the cheapest H200 access (from $2.10/h); B200 availability while most clouds wait-list | ★★★★☆ 4.4 |
| Scaleway | from €0.83/h | L4, L40S, H100 | ≤80GB | Strong EU presence (Paris + Amsterdam); mature cloud platform (S3, k8s, networking) | ★★★★☆ 4.0 |
| Together AI | from $1.49/h | H100, H200, A100 80GB | ≤141GB | Best-in-class inference performance; excellent open-source model coverage | ★★★★☆ 4.4 |
| CoreWeave | from $2.06/h | H100 SXM, A100 SXM, A40 | ≤80GB | Best multi-node GPU cluster performance; high-speed InfiniBand interconnects | ★★★★☆ 4.4 |
| Google Cloud GPU | from $2.48/h | A100 40GB, A100 80GB, H100 | ≤80GB | Best TPU availability for TF workloads; deep Vertex AI + BigQuery integration | ★★★★☆ 4.3 |
| Azure GPU (NCv3/ND) | from $2.94/h | A100, H100, V100 | ≤80GB | Deep OpenAI / Azure OpenAI integration; best choice for Microsoft-stack enterprises | ★★★★☆ 4.1 |
| AWS GPU (EC2) | from $3.06/h | A100, H100, V100 | ≤80GB | Most comprehensive ML toolchain (SageMaker); spot instances for massive cost savings | ★★★★☆ 4.2 |
#1 Vast.ai · from $0.10/h · ★ 4.1
Cheapest GPU cloud — peer-to-peer marketplace for budget training
  • Absolute cheapest GPU compute available
  • Widest GPU variety including consumer cards

#2 Hyperstack · from $0.11/h · ★ 4.3
Global GPU cloud specialist — H100, A100 80GB and L40 from $0.11/h
  • Outstanding entry pricing for A6000
  • Full networking stack (VPC, firewall, NAT)

#3 RunPod · from $0.20/h · ★ 4.6
Best value GPU cloud — huge selection, community + secure cloud
  • Cheapest community GPUs from $0.20/h
  • Massive GPU variety including H100

#4 TensorDock · from $0.21/h · ★ 4.2
Marketplace GPU cloud — RTX 4090 from $0.21/h, H100 from $1.99/h
  • Among the cheapest H100 access in 2026
  • Wide host network = better availability

#5 Massed Compute · from $0.35/h · ★ 4.1
Workstation-grade GPUs for AI/ML/VFX — A100 from $1.79/h
  • Strong A6000 / A40 lineup at moderate price
  • Pre-built VFX and AI templates

#6 Jarvis Labs · from $0.39/h · ★ 4.3
On-demand H100 / A100 / RTX 6000 Ada from $0.39/h
  • Excellent pricing for H100
  • RTX 6000 Ada — 48GB at moderate cost

Frequently Asked Questions

Which cloud has the cheapest H100 in 2026?

RunPod Secure Cloud at $1.99/h is the cheapest on-demand H100 80GB. RunPod Community can be cheaper but is interruptible. For reserved/long-term commits, Lambda Labs and CoreWeave can quote significantly lower than the $1.99/h on-demand rate.

Why are H100s often unavailable on AWS?

AWS p5 (8× H100) instances are concentrated in select regions (us-east-1, us-west-2, eu-west-1) and are heavily reserved by enterprise customers. On-demand stockouts are common during US working hours. Specialist clouds like RunPod and CoreWeave have larger free-pool inventories.
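If you want to check support yourself before chasing capacity, the sketch below uses boto3's describe_instance_type_offerings call. An offering only means an availability zone supports p5.48xlarge, not that on-demand capacity is free at that moment; the region and instance type come from the answer above, the rest is illustrative.

```python
# Sketch: list which us-east-1 availability zones offer p5.48xlarge (8x H100).
# Requires boto3 and AWS credentials. An offering means the AZ supports the
# instance type, not that on-demand capacity is available right now.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_type_offerings(
    LocationType="availability-zone",
    Filters=[{"Name": "instance-type", "Values": ["p5.48xlarge"]}],
)
zones = sorted(o["Location"] for o in resp["InstanceTypeOfferings"])
print("p5.48xlarge offered in:", zones or "no AZ in this region")
```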

H100 vs A100 — which should I rent?

For Llama-3 70B fine-tuning or large-scale training, the H100 is 2–3× faster and, despite costing more per hour, often cheaper per training run. For inference of <13B models or research workloads, the A100 80GB is more cost-effective.
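A quick back-of-the-envelope check makes the per-run argument concrete. The hourly rates and the 2.5× speedup below are hypothetical placeholders, not quotes from any provider in this comparison; substitute your own numbers.

```python
# Back-of-the-envelope cost per training run. The hourly rates and the
# 2.5x speedup are hypothetical placeholders; plug in real quotes.
a100_rate, h100_rate = 1.79, 2.99        # $/GPU-hour (illustrative)
a100_hours = 100.0                       # wall-clock hours for the job on A100
h100_hours = a100_hours / 2.5            # assume the H100 finishes ~2.5x faster

a100_cost = a100_rate * a100_hours       # $179
h100_cost = h100_rate * h100_hours       # $119.60
print(f"A100 run: ${a100_cost:.0f}  H100 run: ${h100_cost:.0f}")
```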

How many H100s do I need to fine-tune Llama-3 70B?

For full fine-tuning: 8× H100 (one DGX-equivalent node) for ~12–24 hours per epoch with 100K samples. For QLoRA: 1× H100 80GB suffices for ~6–8 hours. CoreWeave and Lambda Labs are best for multi-node H100 jobs (InfiniBand interconnect).
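For a rough sense of why a single 80GB card is enough for QLoRA, here is a back-of-the-envelope memory estimate. Every figure in it is a rule-of-thumb assumption (4-bit base weights, a few hundred million trainable LoRA parameters, a flat activation budget), not a measurement.

```python
# Back-of-the-envelope memory estimate for QLoRA on a 70B model.
# Every figure here is a rule-of-thumb assumption, not a measurement.
GIB = 2**30
base_params = 70e9
lora_params = 0.4e9                      # assumed trainable LoRA parameters

base_4bit   = base_params * 0.5 / GIB    # 4-bit quantized base weights
lora_bf16   = lora_params * 2 / GIB      # LoRA adapters in bf16
lora_adam   = lora_params * 8 / GIB      # fp32 Adam moments for LoRA only
activations = 8.0                        # GiB; depends on batch size / seq len

total = base_4bit + lora_bf16 + lora_adam + activations
print(f"~{total:.0f} GiB estimated vs 80 GiB on a single H100")   # ~44 GiB
```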

H100 SXM vs PCIe — what is the difference?

H100 SXM (used by CoreWeave, AWS p5, GCP A3) has NVLink at up to 900 GB/s for multi-GPU jobs, while H100 PCIe (RunPod, Lambda) is limited to PCIe Gen5 at ~128 GB/s but is typically ~10–15% cheaper. SXM is essential for ≥4-GPU training; PCIe is fine for single-GPU inference and ≤2-GPU training.
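If you are not sure which variant a rented instance actually carries, you can check from inside the VM. The sketch below uses the NVML Python bindings (assumed installed as nvidia-ml-py) and simply counts active NVLink links on GPU 0; SXM parts report several, PCIe parts typically report none.

```python
# Sketch: report whether GPU 0 exposes active NVLink links (typical of
# H100 SXM) or none (typical of H100 PCIe). Assumes the NVML Python
# bindings are installed (`pip install nvidia-ml-py`).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
name = pynvml.nvmlDeviceGetName(handle)

active_links = 0
for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
    try:
        if pynvml.nvmlDeviceGetNvLinkState(handle, link) == pynvml.NVML_FEATURE_ENABLED:
            active_links += 1
    except pynvml.NVMLError:
        break  # link index not supported on this board (e.g. PCIe cards)

print(f"{name}: {active_links} active NVLink links "
      f"({'SXM-style part' if active_links else 'likely PCIe part'})")
pynvml.nvmlShutdown()
```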