# Rate Limits

Request rate limits per endpoint and how to handle them.
AnakinScraper applies rate limits per API key to ensure reliable performance for all users. Limits are enforced using a sliding window algorithm: the window counts requests made within the last N seconds, and new requests are rejected once the limit is reached.
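To make the behavior concrete, the sliding window can be sketched as a deque of request timestamps. This is an illustration of the algorithm only, not AnakinScraper's actual server-side implementation:

```python
import time
from collections import deque

def make_sliding_window_limiter(limit, window_seconds):
    """Return a callable that reports whether a new request is allowed."""
    timestamps = deque()

    def allow():
        now = time.monotonic()
        # Drop timestamps that have fallen out of the window.
        while timestamps and now - timestamps[0] >= window_seconds:
            timestamps.popleft()
        if len(timestamps) < limit:
            timestamps.append(now)
            return True
        return False

    return allow

# Mirrors the 60 requests/min limit on the Scraping bucket.
allow = make_sliding_window_limiter(limit=60, window_seconds=60)
```

Running a client-side limiter like this before each submit call lets you avoid 429 responses entirely instead of reacting to them.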
## Limits by endpoint
| Endpoint | Rate limit | Bucket |
|---|---|---|
| `POST /v1/url-scraper` | 60 requests/min | Scraping |
| `POST /v1/url-scraper/batch` | 60 requests/min | Scraping |
| `POST /v1/web-scraper` | 60 requests/min | Scraping |
| `POST /v1/search` | 30 requests/min | Search |
| `POST /v1/agentic-search` | 10 requests/min | Agentic Search |
| `GET /v1/url-scraper/{id}` | No limit | — |
| `GET /v1/web-scraper/{id}` | No limit | — |
| `GET /v1/agentic-search/{id}` | No limit | — |
| `POST /v1/holocron/task` | 20 requests/min | Holocron |
| `GET /v1/holocron/jobs/{id}` | No limit | — |
Polling endpoints are not rate-limited. You can poll for job results as frequently as you need without hitting a limit.
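Since polling is free of rate limits, a simple wait loop is enough to collect a result. The sketch below assumes illustrative response fields (a `status` value such as `completed` or `failed`); check the job-status endpoint's actual schema before relying on these names:

```python
import time
import requests

def wait_for_job(job_id, api_key, interval=2, timeout=120):
    """Poll the job-status endpoint until the job finishes or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = requests.get(
            f"https://api.anakin.io/v1/url-scraper/{job_id}",
            headers={"X-API-Key": api_key},
        )
        response.raise_for_status()
        job = response.json()
        # "status" and its values are assumptions for this sketch.
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(interval)  # not rate-limited, but a short pause is still polite
    raise TimeoutError(f"Job {job_id} did not finish within {timeout}s")
```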
## Rate limit response
When you exceed a rate limit, the API returns a `429 Too Many Requests` response:

```json
{
  "status": "error",
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Too many requests. Please try again later."
  }
}
```

## Handling rate limits
### Retry with exponential backoff
The recommended approach is to wait and retry with exponential backoff. Start with a short delay and double it on each retry.
```python
import requests
import time

def scrape_with_retry(url, api_key, max_retries=3):
    """Submit a scrape job with automatic retry on rate limit."""
    delay = 2
    for attempt in range(max_retries + 1):
        response = requests.post(
            "https://api.anakin.io/v1/url-scraper",
            headers={"X-API-Key": api_key},
            json={"url": url}
        )
        if response.status_code == 429:
            if attempt == max_retries:
                raise Exception("Rate limit exceeded after retries")
            print(f"Rate limited, retrying in {delay}s...")
            time.sleep(delay)
            delay *= 2
            continue
        response.raise_for_status()
        return response.json()

result = scrape_with_retry("https://example.com", "ak-your-key-here")
print(result["jobId"])
```

```javascript
async function scrapeWithRetry(url, apiKey, maxRetries = 3) {
  let delay = 2000;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch("https://api.anakin.io/v1/url-scraper", {
      method: "POST",
      headers: {
        "X-API-Key": apiKey,
        "Content-Type": "application/json"
      },
      body: JSON.stringify({ url })
    });
    if (response.status === 429) {
      if (attempt === maxRetries) {
        throw new Error("Rate limit exceeded after retries");
      }
      console.log(`Rate limited, retrying in ${delay / 1000}s...`);
      await new Promise(r => setTimeout(r, delay));
      delay *= 2;
      continue;
    }
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return await response.json();
  }
}

const result = await scrapeWithRetry("https://example.com", "ak-your-key-here");
console.log(result.jobId);
```

### Use batch endpoints
If you're scraping multiple URLs, use the batch endpoint instead of submitting individual requests. A single batch request can include up to 10 URLs and only counts as one request against the rate limit.
```shell
# Bad: 10 requests, 10 against the rate limit
for url in url1 url2 ... url10; do
  curl -X POST .../v1/url-scraper -d "{\"url\": \"$url\"}"
done

# Good: 1 request, 1 against the rate limit
curl -X POST https://api.anakin.io/v1/url-scraper/batch \
  -H "X-API-Key: ak-your-key-here" \
  -H "Content-Type: application/json" \
  -d '{"urls": ["url1", "url2", "...", "url10"]}'
```

### Spread requests over time
If you have a large list of URLs, pace your submissions rather than sending them all at once. A simple approach is to add a short delay between requests:
```python
import time

urls = ["https://example.com/1", "https://example.com/2", ...]
job_ids = []

for url in urls:
    result = scrape_with_retry(url, api_key)
    job_ids.append(result["jobId"])
    time.sleep(1)  # ~60 requests/min stays within the limit
```

## Tips
- Rate limits apply to submit endpoints only. Poll as often as you like — GET endpoints for checking job status are not rate-limited.
- Batch when possible. A single batch request with 10 URLs uses 1 rate-limit slot, not 10.
- Cache results. AnakinScraper caches responses for 24 hours. Repeat requests for the same URL return instantly and cost zero credits, but they still count against rate limits.
- Use the CLI for simple workloads. The Anakin CLI handles rate limiting and retries automatically.
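The batching tip can be sketched as a helper that splits a large URL list into chunks of 10 (the documented batch maximum) and submits each chunk with a single request. The `jobId` field in the batch response is an assumption for this sketch; check the batch endpoint's actual response schema:

```python
import requests

def chunked(items, size):
    """Yield successive slices of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def submit_in_batches(urls, api_key):
    """Submit URLs via the batch endpoint, 10 per request."""
    job_ids = []
    for batch in chunked(urls, 10):
        response = requests.post(
            "https://api.anakin.io/v1/url-scraper/batch",
            headers={"X-API-Key": api_key},
            json={"urls": batch},
        )
        response.raise_for_status()
        # "jobId" is assumed here; adjust to the real batch response shape.
        job_ids.append(response.json()["jobId"])
    return job_ids
```

With this approach, 25 URLs cost 3 rate-limit slots instead of 25.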
## Increasing your limits
If you need higher rate limits for your use case, contact us:
- Email — support@anakin.io
- Enterprise plan — includes custom rate limits. Talk to sales.