Scrape behind the login wall.
Most valuable data is gated behind authentication. Browser Sessions gives you persistent, managed browser contexts. Log in once, and every subsequent scrape runs as that authenticated user.
The problem
Authentication shouldn't stop your pipeline.
The most valuable data lives behind login screens. Managing cookies, sessions, and IP consistency manually is brittle and time-consuming. We handle all of it.
The old way
- Manually handle cookies per request
- Re-authenticate on every session
- Manage localStorage yourself
- IP changes break sessions
- No way to share sessions
- Store credentials in code
- Handle 2FA manually
- Debug auth failures blind
With Browser Sessions
- Log in once via browser UI
- Session persists automatically
- Cookies & localStorage preserved
- Proxy pinned to session IP
- Save & share named sessions
- Credentials stored securely
- Works with SSO & 2FA
- Full session replay in dashboard
Session active · linkedin.com
authenticated · proxy pinned · 29d remaining
How it works
Log in once. Scrape forever.
Create a session
Open an interactive browser window and log into the target site. The session state, including cookies, localStorage, and auth tokens, is captured and encrypted.
Reference it in requests
Pass the session ID with any scrape request. We restore the full browser context, including cookies and local storage, before loading the page.
Scrape as that user
Every request runs as the authenticated user. The session is automatically refreshed before expiry. No re-authentication needed.
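Concretely, the second step amounts to attaching the session ID to an otherwise ordinary scrape payload. A minimal sketch follows; the helper is illustrative only, and nothing beyond the `url`, `session_id`, and `format` fields shown in the quick start below is part of the documented API:

```python
# Illustrative helper (not part of any official client): builds the body
# for a scrape request that runs inside a restored browser session.
def build_scrape_request(url: str, session_id: str, fmt: str = "markdown") -> dict:
    return {
        "url": url,                # page to load inside the restored context
        "session_id": session_id,  # which saved browser session to restore
        "format": fmt,             # desired output format
    }

payload = build_scrape_request("https://linkedin.com/feed", "sess_abc123")
```

Because the session ID is the only authentication detail in the payload, no cookies or credentials ever appear in request code.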
Session lifecycle
One login. Unlimited authenticated scrapes.
Session created
Interactive login · linkedin.com
2FA completed
Auth tokens captured + encrypted
Proxy pinned
Exit node: US-West · consistent IP
Request #1,204
linkedin.com/feed → 200 OK · 1.2s
Auto-refreshed
Session extended · 30d remaining
Request #9,847
linkedin.com/connections → 200 OK · 0.9s
Session fidelity: cookies + localStorage preserved
Re-auths needed after initial login: none
Session lifetime: auto-refreshed
Quick start
Authenticated scraping in minutes.
Create a session, log in once via the browser UI, then reference it in any scrape request. No credentials in your code.
import requests

API_KEY = "your_api_key"

# Create a named session
session = requests.post(
    "https://api.anakin.io/v1/sessions",
    headers={"X-API-Key": API_KEY},
    json={"name": "linkedin-arun"},
).json()

# Scrape an authenticated page
result = requests.post(
    "https://api.anakin.io/v1/scrape",
    headers={"X-API-Key": API_KEY},
    json={
        "url": "https://linkedin.com/feed",
        "session_id": session["id"],
        "format": "markdown",
    },
).json()
print(result["content"])

FAQ
Common questions
Log in once.
Scrape it all.
Stop re-authenticating on every request. Persistent sessions unlock the data behind every login wall.