Product

Confidence through configuration

ReserveGrid OS is a policy-driven Stratum V2 (SV2) gateway. Configure your requirements, deploy your way, observe the results.


Features

Built for operators

Six pillars that make ReserveGrid OS production-ready from day one.

Transport
Channel setup, frame parsing, protocol violations (4 codes)
Framing
Message size, frame sequence, encoding issues (7 codes)
Auth
Invalid credentials, worker ID format, allowlist mismatch (3 codes)
Channel
Unknown channel_id, channel not open, subscriber role violation (4 codes)
Job
Invalid job_id, unknown job, stale prevhash reference (5 codes)
Share
Difficulty below target, ntime out of range, duplicate nonce (11 codes)
Event 1: share_accepted
share_id, event_id, timestamp, miner_id, job_id, reason_code, sv2_response
Event 2: share_forward_result
share_id, event_id, timestamp, forwarded, upstream_accepted, upstream_http_status, error_detail
Join key
Both events carry the same share_id for 1:1 correlation. The adapter blocks on channel capacity rather than dropping events.
Invariant
Every share_accepted with sv2_response = "success" produces exactly one share_forward_result.
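The invariant above can be checked offline against an exported event stream. This is a minimal sketch, assuming each NDJSON record names its event type in an `event` field (the actual field name may differ in the real schema):

```python
import json

def check_forward_invariant(ndjson_lines):
    """Return share_ids that violate the invariant: every share_accepted
    with sv2_response == "success" must have exactly one forward result."""
    accepted = set()
    forwards = {}
    for line in ndjson_lines:
        ev = json.loads(line)
        kind = ev.get("event")  # assumed field carrying the event type
        if kind == "share_accepted" and ev.get("sv2_response") == "success":
            accepted.add(ev["share_id"])
        elif kind == "share_forward_result":
            forwards[ev["share_id"]] = forwards.get(ev["share_id"], 0) + 1
    # Zero or multiple forward results for an accepted share is a violation
    return sorted(sid for sid in accepted if forwards.get(sid, 0) != 1)
```

Because the adapter blocks rather than drops, a non-empty result from a complete export indicates a real correlation gap, not backpressure loss.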
Shadow
Templates verified internally, results logged, no miner impact. Zero-risk audit mode.
Observe
Miners connected, all templates pass, rejections logged but not enforced. Full share lifecycle active.
Inline
Full enforcement. No unverified job reaches a miner. Prevhash switching is fail-closed.
Migration
Switch modes by changing one TOML key. No binary rebuild, no schema change, no downtime.
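The one-key migration looks like this (the mode key and its three values are shown in the configuration section):

```toml
[gateway]
# was: mode = "observe"
mode = "inline"   # shadow → observe → inline, no rebuild, no downtime
```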
Happy path
Verdict arrives within 50ms. New prevhash + job broadcast immediately. Target: p50 < 50ms, p95 < 150ms, p99 < 300ms.
Timeout path
No verdict within 50ms. Miners hold stale job for up to 5s. After 5s, deterministic disconnect with prevhash_switch_timeout.
Risk model
Zero stale work distributed. Gateway blocks on miner demand rather than dropping shares.
Escape hatch
VELDRA_ALLOW_NO_SHARE_UPSTREAM_READY_INLINE relaxes the upstream-ready check during initial deployment.
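The timing rules above can be sketched as a pure decision function. The state names, and the assumption that the 5 s stale hold starts when the verdict times out, are illustrative only, not the gateway's actual internals:

```python
# Illustrative constants mirroring the TOML timing keys
VERDICT_TIMEOUT_MS = 50    # prevhash_verdict_timeout_ms
STALE_HOLD_MS = 5000       # prevhash_stale_hold_ms

def prevhash_action(elapsed_ms, verdict=None):
    """Return the gateway's action `elapsed_ms` after a new prevhash arrives.
    `verdict` is "success", "reject", or None while still pending."""
    if verdict == "success":
        return "broadcast_new_job"      # happy path: switch immediately
    if verdict == "reject":
        return "withhold_job"           # fail-closed: unverified work never ships
    if elapsed_ms < VERDICT_TIMEOUT_MS:
        return "wait"                   # still inside the 50 ms verdict window
    if elapsed_ms < VERDICT_TIMEOUT_MS + STALE_HOLD_MS:
        return "hold_stale_job"         # miners keep the old job for up to 5 s
    return "disconnect:prevhash_switch_timeout"  # deterministic disconnect
```

Every branch is a function of elapsed time and verdict alone, which is what makes the timeout path deterministic and auditable.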
Core
mode, listen_addr, verifier_addr, upstream_url, upstream_failure_policy, http_api_key
Noise
privkey, pubkey, authorized_keys, handshake_timeout_ms
Timing
prevhash_verdict_timeout_ms, prevhash_stale_hold_ms, upstream_poll_interval_ms, job_broadcast_interval_ms
Share
share_upstream_url, share_upstream_secret, share_forward_queue_size, share_forward_max_in_flight
Auth
mode (allowlist/prefix_map/open), allowlist, prefix_map, max_worker_id_bytes, max_workers_per_channel
Escape hatches
Six VELDRA_ environment variables for relaxing constraints during development and initial deployment
Prometheus
/metrics on HTTP API. 30+ metrics covering connections, jobs, shares, verdicts, prevhash latencies.
Event stream
/gateway/events NDJSON endpoint. Real-time share_accepted, share_forward_result, template_verdict events.
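A consumer of /gateway/events might filter the stream with a sketch like this. It assumes each NDJSON record names its type in an `event` field, which may differ from the actual schema; in practice the lines would come from an authenticated HTTP response rather than a list:

```python
import json

WANTED = ("share_accepted", "share_forward_result", "template_verdict")

def iter_events(lines, wanted=WANTED):
    """Yield parsed events of interest from an NDJSON stream,
    one JSON object per non-empty line."""
    for raw in lines:
        raw = raw.strip()
        if not raw:
            continue  # tolerate keep-alive blank lines
        ev = json.loads(raw)
        if ev.get("event") in wanted:
            yield ev
```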
CSV export
Share history export via HTTP API. Bulk analysis, compliance audits, performance reporting.
Grafana dashboards
Pre-built dashboards for miner connections, job distribution, prevhash switching health, upstream responsiveness.

Deployment modes

Three ways to deploy

Shadow Mode

Audit templates without blocking miners

Gates templates
No
Prevhash fail-closed
N/A
Share forwarding
No
Config required
verifier_addr, policy context
Risk
Zero (no miner impact)

Observe Mode

Log verdicts, forward all shares upstream

Gates templates
No
Prevhash fail-closed
N/A
Share forwarding
Yes
Config required
listen_addr, upstream_url, share_upstream_url, verifier_addr
Risk
Low (unverified templates may reach miners)

Inline Mode

Enforce policy with fail-closed safety

Gates templates
Yes
Prevhash fail-closed
Yes (50ms timeout + 5s hold)
Share forwarding
Yes
Config required
All keys, especially timing, gates, and disconnect policy
Risk
Production (fail-closed protects integrity)

Configuration

51 TOML keys, zero magic

ReserveGrid OS is configured via TOML with environment variable overrides. No hidden defaults obscure pool behavior.

[gateway]
mode = "inline"                          # shadow, observe, or inline
listen_addr = "0.0.0.0:3333"             # Miner connection address
verifier_addr = "127.0.0.1:5001"         # pool-verifier TCP address
upstream_url = "http://bitcoind:8332"    # Block template source
upstream_failure_policy = "fail_closed"  # fail_closed or permit_all
http_api_key = "your-secret-here"        # Authentication for metrics/events

[timing]
# Prevhash switching
prevhash_verdict_timeout_ms = 50         # Wait time for verdict before timeout
prevhash_stale_hold_ms = 5000            # Hold stale job after timeout

# Polling and broadcast
upstream_poll_interval_ms = 1000         # Check for new templates every 1s
job_broadcast_interval_ms = 100          # Aggregate job broadcasts

# Message timeouts
channel_idle_timeout_ms = 3600000        # Close idle channels after 1 hour
nonce_expire_secs = 60                   # Expire old nonces from dedup filter

[share]
share_upstream_url = "http://pool-backend:9090/shares"
share_upstream_secret = "hmac-key-here"  # HMAC-SHA256 signing key
share_forward_queue_size = 10000         # Max pending shares in queue
share_forward_max_in_flight = 100        # Max concurrent upstream requests
share_batch_interval_ms = 100            # Batch shares before sending
share_forward_timeout_ms = 5000          # Upstream request timeout

[auth]
mode = "prefix_map"                      # allowlist, prefix_map, or open

# Allowlist mode: explicit list of authorized credentials
allowlist = [
  { username = "miner1", password = "secret1" },
  { username = "miner2", password = "secret2" }
]

# Prefix map mode: pattern-based authorization
prefix_map = [
  { prefix = "farm1_", secret = "farm1-key" },
  { prefix = "farm2_", secret = "farm2-key" }
]

max_worker_id_bytes = 512                # Limit worker_name field length
max_workers_per_channel = 1000           # Limit concurrent workers per channel
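As a sketch of how share_upstream_secret might be applied, here is an HMAC-SHA256 signature over a forwarded share body. The canonicalisation (sorted keys, compact JSON) and how the signature is attached to the request are assumptions for illustration, not the gateway's defined wire format:

```python
import hashlib
import hmac
import json

def sign_share(payload: dict, secret: str) -> str:
    """Hex-encoded HMAC-SHA256 over a canonical JSON body.
    Canonical form here (sorted keys, compact separators) is an assumption."""
    body = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
```

The receiving pool backend recomputes the digest over the raw body with the shared secret and compares it in constant time (e.g. hmac.compare_digest) before accepting the share.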

Security

Operator first

Encrypted transport

Noise NX with fixed keys. No certificate infrastructure, no external dependencies.

Fail-closed logic

Timeouts are deterministic. Stale work is held. Invalid templates trigger rejections with reason codes.

Auditability

Every event is logged with context. Operators can reconstruct decisions offline.

Demo

Try it now

Experience ReserveGrid OS against a live SV2 testnet. Start in observe mode: it uses the same configuration as inline mode but logs all verdicts without gating. No production impact, full visibility.

Get started

Ready to deploy SV2 with confidence?

Shadow mode costs nothing. Observe mode builds trust. Inline mode protects revenue.

Read the docs
Observe