Mine PEARL in 30 minutes — the Docker one-command quickstart

Two months ago, getting a PEARL miner running from source took 3+ hours and required wrestling with CUDA versions, vLLM compilation flags, missing inference workers, and 5 separate gotchas. Today it's a Docker image and two environment variables. This is the under-30-minute setup.

What you need before starting

  1. A PEARL wallet address (starts with prl1...). If you don't have one yet → 5-minute wallet setup.
  2. A HuggingFace account + access token. Sign up at huggingface.co/join, generate a Read token at settings/tokens, and accept the Llama license at meta-llama/Llama-3.3-70B-Instruct. Important: Meta approval takes 1–24 hours. Submit the license request now, work on the rest while you wait.
  3. A cloud GPU account — RunPod or Vast.ai. Pick one based on our comparison.

Pick your provider — sign up via our affiliate links

First-time miners: RunPod (🎁 $5–10 free credit)

Easier UI, predictable billing, and the credit covers your first ~4 hours of testing. Best for first-time setup.

Sign up + claim credit →

Sustained mining: Vast.ai (💰 30–50% cheaper)

Marketplace pricing — H100 PCIe from $1.20/h. Best $/PRL once you've validated the setup works.

Sign up + browse GPUs →

Both links are affiliate — costs you nothing extra, supports this explorer. More on how the project is funded.

The 30-minute setup

Step 1 — Spin up an H100 SXM pod (5 min)

On either provider: select an H100 SXM instance, point it at the recommended Docker image, and set the two environment variables (your HuggingFace token and your prl1... wallet address). Then click Deploy. The container starts immediately.
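For reference, the equivalent docker run invocation looks roughly like this. The image name and the exact variable names here are illustrative placeholders (check the terrapin88 repo for the real ones); the two values you supply are your HuggingFace token and your payout wallet:

```shell
# Image and env var names are placeholders -- use the ones documented
# in the terrapin88 repo. Only two values are yours to fill in:
# the HuggingFace token and the payout wallet.
docker run -d --gpus all \
  -e HF_TOKEN=hf_xxxxxxxxxxxxxxxx \
  -e WALLET_ADDRESS=prl1yourwalletaddress \
  --name pearl-miner \
  terrapin88/pearl-miner:latest
```

Both providers' UIs are just form fields wrapping this same invocation, so the env var section of the deploy form is where those two values go.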

Step 2 — Wait for the chain to sync (~2 hours, hands-off)

The container's entrypoint walks through:

  1. Start pearld (Pearl full node)
  2. Wait for chain to fully sync from genesis (~2 hours on first boot)
  3. Start pearl-gateway (the mining bridge)
  4. Start vllm serve with the Llama 3.3 70B model (downloads ~140 GB on first run; ~30 min)
  5. Start pearl_worker.py. This is the critical piece: it sends 32 concurrent inference requests to vLLM in a loop. Without it, vLLM sits idle and nothing mines. The official Pearl repo doesn't include this worker; the Docker image we recommend does.
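The sequence above can be sketched as a small supervisor script. This is a simplified illustration, not the image's actual entrypoint; the pearld sync check is an assumed CLI shape, while vLLM's /health endpoint is real:

```shell
# Simplified sketch of the boot sequence -- NOT the image's real entrypoint.
# The pearld sync check below is an assumed CLI shape.
pearld start &                                     # 1. full node
until pearld status 2>/dev/null | grep -q 'synced'; do
  sleep 30                                         # 2. block until chain sync completes
done
pearl-gateway &                                    # 3. mining bridge
vllm serve meta-llama/Llama-3.3-70B-Instruct &     # 4. inference server (~140 GB download)
until curl -sf http://127.0.0.1:8000/health >/dev/null; do
  sleep 10
done
echo "vLLM is ready! Starting mining worker"
python pearl_worker.py                             # 5. keeps vLLM busy -- this is what mines
```

The ordering is the whole point: vLLM must not start until the chain is synced, and the worker must not start until vLLM answers its health check.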

You don't need to watch this happen. Tail the pod's logs once or twice if you want, but honestly — go cook dinner.

Step 3 — Verify mining is happening (5 min, after sync)

Once the container logs "vLLM is ready! Starting mining worker":

  1. Open lordofpearls.xyz and paste your prl1... wallet in the search bar.
  2. Wait 30–90 minutes after vLLM starts. You should see your first share submitted.
  3. First block (i.e. first PRL credited to your wallet) typically arrives within 1.5–3 days at current network hashrate. Variance is huge — don't panic if it takes longer.

Step 4 — Get Telegram alerts on your wallet (1 min)

While waiting for the first block, set up automatic notifications. Open @LordOfPearlsAlertsBOT, send /start, then /subscribe prl1... with your address. You'll get a DM the moment any block is mined to your wallet — much better than refreshing the explorer every hour.

The math after setup is done

At current network conditions (May 2026), a single H100 PCIe rented at $1.20/hr mines at a daily profit; the exact PRL/day moves with network hashrate, so we keep the live numbers in the ROI article.

Full breakdown + risk scenarios → PEARL mining ROI in 2026.

What if something breaks

Container won't start / "container is not running"

Check the logs (RunPod's Logs tab, or docker logs <container_id> on Vast.ai). Most common: missing HF_TOKEN, or you didn't accept the Llama license. The container exits early with a clear message.

Logs say "downloading blocks" forever

Normal during initial sync. Pearl's chain takes ~2 hours to sync from genesis. If you've been at "downloading blocks" for >3 hours, the pod might have low bandwidth — check the Vast.ai host's network speed score (≥ 500 Mbps preferred, since the same link handles the ~140 GB model download).

vLLM crashes immediately on startup

Almost always: chain sync wasn't complete when vLLM tried to fetch its first block template. The terrapin88 image's entrypoint guards against this by waiting for sync before starting vLLM. If you wrote your own setup, that's the gotcha.

"Shares submitted: 0" after several hours

Check that pearl_worker.py is running (the Docker image has it baked in; if you're rolling your own setup, you need this). Without inference requests, vLLM is idle and the NoisyGEMM kernel never finds blocks. This is the #1 reason DIY setups produce zero PRL.
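If you're rolling your own setup, the worker's job is simple to approximate. Here is a minimal stand-in for pearl_worker.py, assuming vLLM's OpenAI-compatible API on its default port; the endpoint and payload shape are standard vLLM, and the prompt content is a placeholder:

```shell
# Minimal stand-in for pearl_worker.py -- assumes vLLM's OpenAI-compatible
# server on its default port 8000. Keeps 32 requests in flight at a time.
while true; do
  for i in $(seq 1 32); do
    curl -s http://127.0.0.1:8000/v1/completions \
      -H 'Content-Type: application/json' \
      -d '{"model": "meta-llama/Llama-3.3-70B-Instruct",
           "prompt": "placeholder work item",
           "max_tokens": 64}' >/dev/null &
  done
  wait   # resubmit the next batch as soon as all 32 complete
done
```

The real worker presumably adds prompt rotation and error handling, but even a loop like this is enough to keep the GPU from idling.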

Pod gets terminated mid-mining (Vast.ai)

Pick a host with a Reliability score ≥ 99% and a Verified badge. Cheap-but-flaky hosts will eat your mining time.

What this Docker image actually contains

Full credit to terrapin88 — they packaged the official Pearl miner with the missing pieces: the pearl_worker.py inference loop, an entrypoint that waits for chain sync before starting vLLM, and pearl-gateway pre-wired to the node.

The image is MIT licensed, public on GitHub, and audited (clean code, no exfiltration, no hardcoded wallet addresses; the only addresses used are the ones you set via env vars).

FAQ

Can I mine PEARL on a consumer RTX card?

No. PEARL's mining kernel requires NVIDIA Hopper (SM 9.0): H100, H200, GH200. The RTX 4090 and 5090 won't work, and neither will the A100 (Ampere, SM 8.0). See best GPU for PEARL mining.

Do I need a mining pool?

No public PEARL pools exist yet. All current miners run solo with this same setup. Pools may emerge as the network grows.

How do I cash out the PEARL I mine?

Currently OTC only — community Discord, or peer marketplaces like pearl-otc.com. No CEX listings yet. Most miners hold or trade for USDT via OTC.

What's the minimum budget to try this?

$10–15 with RunPod's free credit. The free credit covers ~3–5 hours of H100 time; add $5–10 to get through the chain sync and validate one full day of mining. After that, mining runs at a daily profit.

Can I run this on multiple GPUs in one pod?

Yes — deploy the pod with multiple GPUs attached; the entrypoint auto-detects them and configures vLLM with --data-parallel-size $GPU_COUNT. Two H100s in one pod give roughly 2× the daily PRL output. Beyond 4 GPUs per pod, you'll hit diminishing returns from vLLM batching limits.
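Under the hood, the auto-detection amounts to something like the following sketch (the real entrypoint script may differ):

```shell
# Sketch of multi-GPU auto-detection -- the real entrypoint may differ.
GPU_COUNT=$(nvidia-smi -L | wc -l)   # nvidia-smi -L prints one line per visible GPU
vllm serve meta-llama/Llama-3.3-70B-Instruct \
  --data-parallel-size "$GPU_COUNT"
```

Data parallelism runs one model replica per GPU, which is why throughput scales near-linearly up to the batching limit.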

Will my mining continue when I close my laptop?

Yes — the pod runs in the cloud, not on your machine. You can close your laptop, sleep your Mac, fly across an ocean. The pod keeps mining 24/7 until you stop it or run out of credit.

Bottom line

What used to take 3+ hours and required deep CUDA debugging is now 15 minutes of clicks + 2 hours of unattended sync. The Docker image solves every gotcha we accumulated over weeks of trial and error. Sign up to RunPod for the free credit if it's your first time, then move to Vast.ai for sustained operations. Drop your wallet into our explorer, set Telegram alerts, and check back in 2 days.

If you hit a wall, the GPU breakdown and ROI deep-dive cover most edge cases. The rest is just math + patience.