Best H100 cloud GPU providers 2026 — RunPod vs Vast.ai vs Lambda vs TensorDock
Renting an H100 in 2026 is a $1.20–$2.99/hour decision — but the spread between the cheapest and the most reliable is wider than the headline numbers suggest. This is the unfiltered 4-way comparison: pricing, reliability, setup time, hidden costs, and which provider actually wins for what use case.
The 30-second answer
| Provider | H100 PCIe | H100 SXM | H200 | Free credit | Best for |
|---|---|---|---|---|---|
| Vast.ai | $1.20–$1.80 | $1.50–$2.10 | $2.40–$3.20 | — | Cheapest sustained mining |
| RunPod | $1.99–$2.49 | $2.20–$2.99 | $2.99–$3.99 | $5–10 | Easy first-time setup |
| Lambda Labs | $2.49 | $2.99 | $3.49 | — | Enterprise reliability |
| TensorDock | $1.49–$1.99 | $1.79–$2.49 | $2.79–$3.49 | — | Mid-range price + decent reliability |
Headline: Vast.ai for sustained mining (cheapest), RunPod for first-time setup (easiest UX + free credit), Lambda Labs if reliability matters more than price (enterprise SLAs), TensorDock as a middle-ground alternative.
Vast.ai — the price leader
Vast.ai is a peer-to-peer GPU marketplace. Independent hosts list their idle GPUs and you bid for compute. The result: 30–50% cheaper than enterprise providers for the same H100.
What Vast.ai gets right
- Price. H100 PCIe from $1.20/hr, H100 SXM from $1.50/hr. No other provider matches this consistently.
- Direct SSH. Every pod gets a real SSH endpoint with IP + port. No proxy quirks.
- Custom Docker images. Drop in any image URL, deploy in seconds.
- Per-second billing. No 1-hour minimums.
- Crypto payments. USDT, USDC, BTC accepted.
The trade-offs
- Variable reliability. Each host has a reliability score (95–99%+). Cheap hosts often have lower scores. Always filter for ≥99%.
- Manual host selection. You browse a marketplace and pick a host. ~5 extra minutes vs RunPod's preset list.
- Storage cost when stopped. $0.10/GB/mo for storage on stopped pods. Adds up if you keep volumes long-term.
Best for
Sustained 24/7 mining or inference workloads where every dollar saved compounds. A single H100 PCIe at $1.20/hr saves you $300–$600/month vs RunPod over a 30-day window.
Sign up: Vast.ai (our affiliate link — no extra cost).
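The compounding effect is easy to check. A minimal sketch using the article's quoted on-demand rates and a 720-hour (30-day) billing window — substitute your own rates:

```python
# Monthly savings from a cheaper hourly rate, assuming 24/7 utilization.
HOURS_PER_MONTH = 24 * 30  # 720 hours in a 30-day billing window

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Total cost of running one GPU nonstop for a month."""
    return round(hourly_rate * hours, 2)

vast = monthly_cost(1.20)    # H100 PCIe on Vast.ai (cheapest listing)
runpod = monthly_cost(1.99)  # H100 PCIe on RunPod Secure Cloud

print(f"Vast.ai:  ${vast:,.2f}/mo")
print(f"RunPod:   ${runpod:,.2f}/mo")
print(f"Savings:  ${runpod - vast:,.2f}/mo")  # ~$569/month per GPU
```

At the higher end of each provider's range the gap narrows, which is where the $300–$600/month bracket comes from.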
RunPod — the easiest first run
RunPod runs its own data centers (plus a "community cloud" of independent hosts). Polished UI, predictable billing, real customer support, $5–10 free credit on signup.
What RunPod gets right
- Setup speed. H100 PCIe pod online in <90 seconds. Templates + custom images both work.
- Free credit. $5–10 covers ~2–5 hours of H100 time — enough to validate any setup.
- Reliability. 99.9%+ uptime on Secure Cloud (their own DCs).
- Persistent network volumes. $0.07/GB/mo storage that survives pod terminations.
- Real support. Live chat reaches actual humans within minutes.
The trade-offs
- Higher prices. 30–50% more expensive than Vast.ai for the same H100 SKU.
- SSH proxy by default. The default SSH method is a proxy that doesn't support non-interactive commands (no `ssh host 'cmd'`). Direct TCP SSH works but requires a base image with sshd installed.
- Limited GPU SKU variety. Mostly H100/H200; less diverse than Vast.ai's marketplace.
Best for
First-time GPU rental, validating a new workload before committing to long-term spend, or production workloads that need predictable infrastructure. Use the free credit to validate, then move to Vast.ai for sustained operations.
Sign up: RunPod (claim $5–10 free credit).
Lambda Labs — enterprise grade
Lambda Labs targets the AI training market — research labs, well-funded startups, enterprise teams. Their pricing reflects that.
What Lambda Labs gets right
- Reliability. 99.99% SLA, redundant power + cooling, owned data centers.
- Networking. 8x H100 nodes with 3.2 Tbps InfiniBand for distributed training.
- Pre-installed stack. Lambda Stack includes CUDA, PyTorch, TensorFlow ready to go.
- Reservation pricing. 1-month and 1-year contracts at significant discount.
The trade-offs
- Capacity-constrained. H100 SXM availability is intermittent; you'll sometimes see "out of stock" and need to reserve in advance for guaranteed access.
- Higher floor price. $2.49/hr H100 PCIe; $2.99/hr SXM. ~40% more than Vast.ai.
- Less flexible setup. Lambda Stack is opinionated; custom Docker images are second-class citizens.
Best for
Production AI training where reliability and InfiniBand networking matter more than per-hour cost. Not optimal for crypto mining (cost-sensitive workloads burn margin on Lambda's pricing).
TensorDock — the middle-ground alternative
TensorDock is a younger entrant — peer-to-peer marketplace similar to Vast.ai but with stricter host vetting. Mid-range pricing, decent reliability, growing inventory.
What TensorDock gets right
- Vetted hosts only. No reliability score to filter on, because every host must meet minimum standards before listing.
- Cleaner UX than Vast.ai. Less marketplace noise, faster decisions.
- Reasonable pricing. H100 PCIe from $1.49/hr — between Vast and RunPod.
- API for automation. Spin up/down pods programmatically.
The trade-offs
- Smaller inventory. Fewer hosts = sometimes capacity-limited at peak times.
- Less battle-tested. Newer platform; some workflows still rough at the edges.
- Pricing rises in popular regions. US-East listings often spike during work hours.
Best for
Users who want Vast.ai prices without the manual host-vetting workflow. Good middle path if Vast.ai feels too DIY and RunPod feels too expensive.
Per-use-case recommendations
| Use case | Recommended provider | Why |
|---|---|---|
| PEARL / PoUW mining (24/7) | Vast.ai | Margin-sensitive workload. Every $0.50/hr saved = $360/month per GPU. |
| First-time experiment / validation | RunPod | Free credit covers validation. UI doesn't get in the way. |
| Multi-node distributed training | Lambda Labs | Only one with reliable InfiniBand at 8-GPU+ scale. |
| Production inference (latency-critical) | RunPod or Lambda Labs | Reliability matters more than $/hr. SLAs justify the premium. |
| Quick experiments, hobby projects | Vast.ai or TensorDock | Cheapest. If host dies mid-experiment, just relaunch. |
Pricing math — a worked example
Say you commit to a single H100 PCIe for 30 days, 24/7:
| Provider | Hourly | Monthly cost | Vs cheapest |
|---|---|---|---|
| Vast.ai (community) | $1.20 | $864 | baseline |
| TensorDock | $1.49 | $1,073 | +$209/mo |
| RunPod (Secure Cloud) | $1.99 | $1,433 | +$569/mo |
| Lambda Labs | $2.49 | $1,793 | +$929/mo |
Over a 6-month commitment, the spread is $5,574 — meaningful money for any single-developer or small-team operation.
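The table above can be reproduced in a few lines. A sketch using the quoted hourly rates (720 billable hours per 30-day month; the 6-month figure rounds the monthly delta first, as the text does):

```python
# Reproduce the 30-day cost comparison and the 6-month spread.
HOURS = 24 * 30  # 720 billable hours in a 30-day month

rates = {
    "Vast.ai (community)": 1.20,
    "TensorDock": 1.49,
    "RunPod (Secure Cloud)": 1.99,
    "Lambda Labs": 2.49,
}

cheapest = min(rates.values())
for provider, rate in rates.items():
    monthly = rate * HOURS
    delta = (rate - cheapest) * HOURS
    print(f"{provider:<24} ${monthly:>8,.0f}/mo  (+${delta:,.0f}/mo vs cheapest)")

# Spread over a 6-month commitment (monthly delta rounded first):
spread_6mo = round((max(rates.values()) - cheapest) * HOURS) * 6
print(f"6-month spread: ${spread_6mo:,}")  # $5,574
```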
Hidden costs to factor in
- Egress bandwidth. Most providers charge $0.05–$0.15 per GB outbound. For PEARL mining (low data egress) this is negligible. For inference serving (high outbound) this adds up — could be $50+/month.
- Storage. $0.07–$0.20 per GB per month. A 250GB volume = $17–$50/month additional.
- Spot vs on-demand. Spot instances are 50–80% cheaper but can be terminated mid-workload. Mining survives termination (just relaunch); training does not.
- Region pricing. US-West is usually 10–20% pricier than EU-Central or Asia-Pacific for the same SKU.
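These line items can be folded into a single all-in estimate. A rough sketch, assuming mid-range storage and egress rates from the list above (actual rates vary by provider and region):

```python
# All-in monthly estimate: compute + storage + egress.
HOURS = 24 * 30  # 720 hours in a 30-day month

def all_in_monthly(gpu_hourly: float,
                   storage_gb: float = 0.0,
                   storage_rate: float = 0.10,  # $/GB/month (mid-range assumption)
                   egress_gb: float = 0.0,
                   egress_rate: float = 0.10) -> float:  # $/GB outbound (mid-range)
    """Estimated total monthly spend for one GPU running 24/7."""
    compute = gpu_hourly * HOURS
    storage = storage_gb * storage_rate
    egress = egress_gb * egress_rate
    return round(compute + storage + egress, 2)

# Mining profile: 250 GB volume, negligible egress
print(all_in_monthly(1.20, storage_gb=250, egress_gb=5))     # ~$889.50
# Inference profile: 250 GB volume, 500 GB/month outbound
print(all_in_monthly(1.99, storage_gb=250, egress_gb=500))   # ~$1,507.80
```

The takeaway: for low-egress mining the hourly rate dominates; for high-egress inference, bandwidth can erase part of a cheap provider's advantage.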
Setup time comparison (first H100 to running workload)
| Step | Vast.ai | RunPod | Lambda | TensorDock |
|---|---|---|---|---|
| Sign up + KYC (if needed) | 5 min | 3 min | 10 min (sometimes manual review) | 3 min |
| Add payment | 3 min | 3 min | 5 min | 3 min |
| Find + select GPU | 5–10 min | 2 min | 2 min | 2 min |
| Launch pod | 2 min | 2 min | 2 min | 2 min |
| SSH + first commands | 1 min | 1 min (after image config) | 1 min | 1 min |
| Total | ~16 min | ~11 min | ~20 min | ~11 min |
FAQ
Which provider has the cheapest H100 PCIe in 2026?
Vast.ai's community marketplace consistently — H100 PCIe from $1.20/hr from individual hosts. TensorDock is second-cheapest at $1.49–$1.99. RunPod and Lambda are roughly $2.00–$2.50.
Does Vast.ai's lower reliability matter for crypto mining?
Marginally, if you filter correctly. A 95% reliability host is down ~1.2 hours/day. At $0.50/hr of mining output, that's about $0.60/day, or ~$18/month in lost revenue. Filter for ≥99% reliability hosts on Vast.ai and the loss drops to under $4/month, well below the price savings.
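The downtime math generalizes to whatever your per-hour mining output actually is. A small sketch (the $0.50/hr output figure is an assumption; plug in your own):

```python
# Lost mining revenue as a function of host reliability, over a 30-day month.
def monthly_downtime_loss(reliability: float, output_per_hour: float) -> float:
    """Revenue lost to host downtime: downtime hours x per-hour output."""
    downtime_hours = (1 - reliability) * 24 * 30
    return round(downtime_hours * output_per_hour, 2)

# Assuming $0.50/hr of mining output (your rate will differ):
print(monthly_downtime_loss(0.95, 0.50))  # ~$18/month at 95% reliability
print(monthly_downtime_loss(0.99, 0.50))  # ~$3.60/month at 99%
```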
Can I use the same Docker image across all 4 providers?
Yes. All four support custom Docker images. The same image (e.g., ghcr.io/terrapin88/pearl-miner-docker:latest for PEARL mining) works identically on all of them.
Do any of these accept crypto for payment?
Vast.ai and RunPod accept USDT/USDC. TensorDock accepts crypto via partner. Lambda Labs is fiat-only (corporate accounts mostly).
What about other providers — Paperspace, AWS, GCP?
AWS p4d.24xlarge (8x A100) is ~$32/hr — uncompetitive on $/H100. GCP A3 instances similar. Paperspace's H100 pricing has been creeping up; ~$2.49/hr now, no advantage over RunPod. CoreWeave is enterprise-focused with custom contracts. The 4 providers above cover 95% of solo/small-team use cases.
Is reservation pricing worth it?
Lambda Labs offers 1-month and 1-year reservations at 30–40% off on-demand. For a long-term sustained workload (e.g., 6+ months of mining), the math is simple: at a 35% discount, a reservation beats on-demand as soon as you use more than ~65% of the committed hours. A 24/7 mining workload clears that threshold trivially, so a reservation usually wins on any commitment you're confident you'll run to the end.
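The break-even logic can be sketched in a few lines, assuming a 35% discount and on-demand as the alternative (discount levels and rates are illustrative):

```python
# When does a reserved contract beat pure on-demand usage?
def breakeven_utilization(discount: float) -> float:
    """Fraction of committed hours you must actually use before
    the reservation becomes cheaper than paying on-demand."""
    return round(1 - discount, 2)

def reserved_vs_ondemand(ondemand_hourly: float, discount: float,
                         committed_hours: int, used_hours: int) -> tuple:
    """Total cost of the reservation vs paying on-demand for used hours."""
    reserved = ondemand_hourly * (1 - discount) * committed_hours
    ondemand = ondemand_hourly * used_hours
    return round(reserved, 2), round(ondemand, 2)

# 6-month H100 PCIe commitment at $2.49/hr on-demand, 35% off reserved:
print(breakeven_utilization(0.35))  # use >65% of committed hours to win
print(reserved_vs_ondemand(2.49, 0.35, committed_hours=6 * 720, used_hours=6 * 720))
```

At full utilization the reservation saves ~$3,765 over six months; the risk is paying for committed hours you never use.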
Bottom line
For PEARL mining and most cost-sensitive AI workloads in 2026: Vast.ai is the answer. Cheapest sustained pricing, direct SSH, custom Docker support. Filter for ≥99% reliability hosts and the math is unbeatable.
For first-time setup or production reliability: RunPod with the $5–10 free credit. Then migrate to Vast.ai once you've validated the workflow.
For distributed training at 8+ GPU scale: Lambda Labs (only realistic option for InfiniBand at scale).
For PEARL mining specifically — full setup walkthrough at our 30-minute Docker quickstart. Per-GPU profitability math at our PEARL ROI calculator.
Live network conditions (which determine your daily PRL share regardless of provider): /stats.
Affiliate disclosure: Lord Of Pearls earns commission when you sign up via the linked providers. No extra cost to you. Helps keep the explorer free + ad-light.