## Defaults
| Bucket | Window | Limit (per key) | Routes covered |
|---|---|---|---|
| prepare | 60s | 600 | All POST /v1/.../prepare endpoints |
| submit | 60s | 600 | POST /v1/submit |
| receipts.write | 60s | 1200 | POST /v1/receipts/{agent} |
| receipts.read | 60s | 600 | GET /v1/receipts/* |
| events.read | 60s | 600 | GET /v1/events/* |
| agents.read | 60s | 300 | GET /v1/agents/*, treasury balances |
| indexer.read | 60s | 60 | GET /v1/indexer/status |
| meta | 60s | 60 | GET /v1/health, GET /v1/version |
`lsh_live_*` keys ship with the same defaults. We raise per-account ceilings on request; talk to us in Discord if your workload is steady-state above the defaults.
## Headers on every response
A 429 Too Many Requests body always includes `retry_after_ms` (the same delay is also sent as a `Retry-After` header, in seconds, on every 429). Combined with an `Idempotency-Key`, you can treat a 429 as “try again in N ms” without worrying about double spends.
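A minimal retry loop built on the fields above might look like this. The `retry_after_ms` body field, `Retry-After` header, and `Idempotency-Key` are from this doc; `send` is a stand-in for whatever HTTP client you use, and the 60s fallback is an assumption (one full window):

```python
import time
import uuid

def retry_delay_ms(status, headers, body):
    """Pick the wait before retrying: prefer retry_after_ms from the
    JSON body, fall back to the Retry-After header (seconds)."""
    if status != 429:
        return None  # not rate limited; no retry needed
    if "retry_after_ms" in body:
        return int(body["retry_after_ms"])
    if "Retry-After" in headers:
        return int(headers["Retry-After"]) * 1000
    return 60_000  # assumption: worst case, wait out a full window

def submit_with_retry(send, payload, max_attempts=5):
    """Replay the same request until it is no longer 429.
    `send` is any callable returning (status, headers, body)."""
    key = str(uuid.uuid4())  # one key for all attempts -> no double spend
    for _ in range(max_attempts):
        status, headers, body = send(payload, idempotency_key=key)
        delay = retry_delay_ms(status, headers, body)
        if delay is None:
            return status, body
        time.sleep(delay / 1000)
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```

Because every attempt reuses the same `Idempotency-Key`, a retry that races a slow first attempt cannot double-submit.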
## Burst behaviour
The window is sliding, not fixed. The limiter records every hit with a millisecond timestamp and counts hits in the trailing 60s. This means:

- A burst of 600 in 100ms is allowed once, then the key has no remaining quota for ~60s, until those hits age out of the window.
- A steady 10/s never trips.
- A spike to 700/s gets throttled at request 601.
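The behaviour above can be sketched with a deque of timestamps. This is illustrative only, not the server's actual implementation; the names and structure are assumptions:

```python
from collections import deque

class SlidingWindowLimiter:
    """Sliding-window counter: only hits inside the trailing
    window count against the limit."""

    def __init__(self, limit=600, window_ms=60_000):
        self.limit = limit
        self.window_ms = window_ms
        self.hits = deque()  # millisecond timestamps of accepted hits

    def allow(self, now_ms):
        # Drop hits that have aged out of the trailing window.
        while self.hits and self.hits[0] <= now_ms - self.window_ms:
            self.hits.popleft()
        if len(self.hits) >= self.limit:
            return False  # 429: limit reached within the last window
        self.hits.append(now_ms)
        return True
```

With these defaults, a burst of 600 hits inside 100ms all pass, hit 601 is rejected, and capacity only returns as the burst's timestamps slide out of the 60s window.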
## Free dev tier
`lsh_test_*` keys use the same defaults. Devnet RPC has its own external rate limiting on top: if you saturate the test bucket and get 429s from the API, the next layer down may also start rejecting. For sustained load testing, tell us and we’ll point you at a higher tier or a dedicated devnet RPC.
