Independent comparison · Updated April 2026 · 20 GPU providers tested · Real hourly pricing

Transparency · Research Process

How We Research GPU Cloud Providers

Behind every price, rating, and availability status on GPUHosted is a repeatable process. Here's exactly how we collect data, rate providers, and run benchmarks — so you can judge the quality of our comparisons yourself.

Pricing Data Collection

All prices on this site are collected via weekly manual checks. We maintain test accounts at each provider and compare the publicly advertised rate against the actual price shown at provisioning time — these sometimes differ due to promotional pricing, regional availability, or billing quirks.

What our prices represent

  • On-demand hourly rates for a single GPU instance (no multi-year reservations, no volume discounts unless noted)
  • No spot/preemptible pricing unless explicitly labelled "Community" or "Spot"
  • EUR → USD conversion at 1 EUR = 1.08 USD (approximate 2026 rate; recalculated monthly)
  • Verified: April 2026. Prices change frequently — always confirm at the provider before purchasing
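
The fixed-rate currency conversion above is simple enough to sketch. This is an illustrative helper, not site code; the `eur_to_usd` name is ours, and the rate is the approximate figure stated in the list.

```python
# Illustrative sketch of the EUR -> USD conversion described above.
# The rate is the approximate April 2026 figure and is recalculated monthly.
EUR_TO_USD = 1.08  # assumed rate from the list above

def eur_to_usd(eur_price: float) -> float:
    """Convert a provider's advertised EUR hourly rate to the USD figure shown."""
    return round(eur_price * EUR_TO_USD, 2)

# Example: a 2.50 EUR/hr advertised rate
print(eur_to_usd(2.50))  # 2.7
```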

We do not scrape provider APIs or rely on third-party aggregators. Every price you see was checked by a human who clicked through to the provisioning screen.

Availability Tracking

The GPU Availability Tracker is updated daily at approximately 09:00 UTC. We attempt to provision each listed GPU type at each provider and record what we see.

| Status | Meaning | ETA shown |
|---|---|---|
| ✓ Available | GPU provisioned successfully within 5 minutes | n/a |
| ⚠ Limited | GPU available but small remaining pool; may take up to 30 min | Hours |
| ⌛ Waitlist | Provider shows a capacity queue or reported ETA > 1 hour | 1–3 days |
| ✗ Stockout | No on-demand capacity; reservation or long wait required | 1–2 weeks+ |

Marketplace providers (Vast.ai, RunPod Community Cloud): status reflects the size of the active host pool, not a guarantee that any single host is available. Actual availability fluctuates minute-by-minute.

Rating Methodology

Each provider receives an overall rating from 1.0 to 5.0, computed as a weighted average of 5 factors. Ratings are reviewed quarterly or when a provider makes significant changes.

| Factor | Weight | What we measure |
|---|---|---|
| Price competitiveness | 30% | On-demand price vs. median for each GPU category |
| Reliability & uptime | 25% | Historical uptime, incident frequency, spot eviction rates |
| Developer experience | 20% | CLI quality, API documentation, container/SSH workflow, UI |
| GPU variety & availability | 15% | Range of GPU types offered, on-demand vs. reservation-only access |
| Support quality | 10% | Response time, community resources, documentation completeness |

Each factor is scored 1–5 by our editorial team based on direct use, published benchmarks, and community feedback. The overall score is rounded to one decimal place.
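
Concretely, the weighted average works like this. Weights come from the table above; the factor scores in the example are made up for illustration.

```python
# Sketch of the 5-factor weighted overall rating described above.
WEIGHTS = {
    "price_competitiveness": 0.30,
    "reliability_uptime": 0.25,
    "developer_experience": 0.20,
    "gpu_variety_availability": 0.15,
    "support_quality": 0.10,
}

def overall_rating(scores: dict[str, float]) -> float:
    """Weighted average of 1-5 factor scores, rounded to one decimal place."""
    return round(sum(WEIGHTS[f] * scores[f] for f in WEIGHTS), 1)

# Hypothetical provider scores (not real ratings):
example = {
    "price_competitiveness": 4.5,
    "reliability_uptime": 4.0,
    "developer_experience": 5.0,
    "gpu_variety_availability": 3.5,
    "support_quality": 4.0,
}
print(overall_rating(example))
```

Because the weights sum to 1.0, a provider scoring the same on every factor gets exactly that score overall.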

Benchmark: Llama 3 8B INT4 Throughput

We run at least one standardized inference benchmark per provider to give you a concrete performance data point beyond marketing claims. Below are our most recent results.

Test configuration

  • Model: Meta-Llama-3-8B-Instruct.Q4_K_M.gguf (4-bit quantized, ~4.9 GB)
  • Runtime: llama.cpp server build b3713, -ngl 40 -c 2048
  • Workload: 512-token prompt, 256-token generation, 5 runs, mean reported
  • Metric: Generation tokens per second (TTFT excluded)
  • Test date: April 2026

| Provider | Instance | GPU | Tok/sec |
|---|---|---|---|
| RunPod Secure | A100-SXM4-80G | A100 80GB SXM | 178 |
| Vast.ai (community) | A100 PCIe community host | A100 80GB PCIe | 165 |
| Lambda Labs | gpu_1x_a100 | A100 40GB PCIe | 152 |
| Hetzner GPU | GX2-44 | A100 40GB PCIe | 144 |

The SXM A100 has higher memory bandwidth than its PCIe counterpart, which explains RunPod's lead. NVLink configuration varied across the Vast.ai community hosts we tested. Results are representative but not a guarantee — hardware from the same provider varies by individual host.
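
The reported metric — mean generation tokens per second over five runs, with time-to-first-token excluded — can be computed as follows. The function names and the per-run timings are invented for illustration.

```python
def generation_tput(gen_tokens: int, total_seconds: float,
                    ttft_seconds: float) -> float:
    """Generation tokens/sec with time-to-first-token (TTFT) excluded."""
    return gen_tokens / (total_seconds - ttft_seconds)

def mean_tput(runs: list[tuple[int, float, float]]) -> float:
    """Mean across benchmark runs; each run is (tokens, total_s, ttft_s)."""
    rates = [generation_tput(*run) for run in runs]
    return sum(rates) / len(rates)

# Invented timings: 256 generated tokens per run, 5 runs as in the config above.
runs = [(256, 1.70, 0.26), (256, 1.72, 0.28), (256, 1.69, 0.25),
        (256, 1.71, 0.27), (256, 1.70, 0.26)]
print(round(mean_tput(runs), 1))  # 177.8
```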

We plan to expand benchmarks to H100 instances and vLLM-based throughput. If you have benchmark data from a provider we haven't covered, send it over.

Affiliate Disclosure

Some provider links on this site earn us a referral commission when you click through and sign up. This revenue covers our testing infrastructure, hosting, and editorial time.

Commissions do not influence ratings

We rate providers based on our 5-factor methodology regardless of affiliate relationships. Lambda Labs and CoreWeave — two of our highest-rated providers — offer limited or no affiliate programs. We have downgraded providers in our rankings despite having commercial relationships with them.

Links to providers are marked with → on comparison pages. All pricing and availability data is collected independently of affiliate status.

Report an Error

GPU cloud prices and availability change constantly. If you spot a price discrepancy, a stale availability status, or any other factual error, please let us know.

Email us at [email protected]

Include the following so we can update quickly:

  • Provider name and GPU type
  • Date and time you checked
  • What you actually saw (price, availability status)
  • Screenshot if possible

We typically review and update within 24 hours on weekdays.