POST Scrape URL

Submit Scrape Job

POST https://api.anakin.io/v1/url-scraper

Submit a single URL for scraping. The job is processed asynchronously; use the returned jobId to poll for results.
Request Body
```json
{
  "url": "https://example.com",
  "country": "us",
  "useBrowser": false,
  "generateJson": false
}
```

| Parameter | Type | Description |
|---|---|---|
| url (required) | string | The URL to scrape. Must be valid HTTP/HTTPS. |
| country | string | Country code for proxy routing. Default "us". See Supported Countries (207 locations). |
| useBrowser | boolean | Use headless Chrome with Playwright. Default false. Best for JS-heavy sites. |
| generateJson | boolean | AI-extract structured JSON from the content. Default false. |
| sessionId | string | Browser session ID for scraping authenticated pages. See Browser Sessions. |
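As a sketch of how the parameters above combine into a request body (the helper name is illustrative, and the choice to force useBrowser when a sessionId is supplied is our assumption, not something this reference states):

```python
def scrape_request(url, *, country="us", use_browser=False,
                   generate_json=False, session_id=None):
    """Build the JSON body for POST /v1/url-scraper from the parameter table."""
    body = {
        "url": url,
        "country": country,
        "useBrowser": use_browser,
        "generateJson": generate_json,
    }
    if session_id is not None:
        body["sessionId"] = session_id
        # Assumption: authenticated sessions only make sense with a real browser.
        body["useBrowser"] = True
    return body
```

Keeping the defaults in one place like this avoids re-typing the body in every example below.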
Response
202 Accepted

```json
{
  "jobId": "job_abc123xyz",
  "status": "pending"
}
```

The job is processed asynchronously. Use the jobId with GET /v1/url-scraper/{id} to check status and retrieve results.
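Since results arrive asynchronously, a small polling loop around the GET endpoint is the usual pattern. A minimal sketch (only "pending" is documented here, so the terminal status names are assumptions; the helper name is illustrative):

```python
import time

def poll_until_done(fetch_status, interval=2.0, max_attempts=30):
    """Call fetch_status() until the job's status is no longer "pending"."""
    for _ in range(max_attempts):
        job = fetch_status()
        if job.get("status") != "pending":
            return job  # reached a terminal state (exact names are API-specific)
        time.sleep(interval)
    raise TimeoutError("job still pending after polling")

# Usage with requests (sketch):
# result = poll_until_done(lambda: requests.get(
#     f"https://api.anakin.io/v1/url-scraper/{job_id}",
#     headers={"X-API-Key": "your_api_key"},
# ).json())
```

Passing the fetch as a callable keeps the retry logic separate from the HTTP client, so the same loop works with any client library.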
Code Examples
cURL

```shell
curl -X POST https://api.anakin.io/v1/url-scraper \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "country": "us",
    "useBrowser": false,
    "generateJson": false
  }'
```

Python

```python
import requests

response = requests.post(
    'https://api.anakin.io/v1/url-scraper',
    headers={'X-API-Key': 'your_api_key'},
    json={
        'url': 'https://example.com',
        'country': 'us',
        'useBrowser': False,
        'generateJson': True
    }
)
data = response.json()
print(f"Job submitted: {data['jobId']}")
```

JavaScript

```javascript
const response = await fetch('https://api.anakin.io/v1/url-scraper', {
  method: 'POST',
  headers: {
    'X-API-Key': 'your_api_key',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://example.com',
    country: 'us',
    useBrowser: false,
    generateJson: true
  })
});
const data = await response.json();
console.log(data.jobId);
```