Browser Sessions

Scrape behind the login wall.

Most valuable data is gated behind authentication. Browser Sessions gives you persistent, managed browser contexts. Log in once, and every subsequent scrape runs as that authenticated user.

  • Session state persists across requests
  • Proxy pinned to session for IP consistency
  • Works with any login method

The problem

Authentication shouldn't stop your pipeline.

The most valuable data lives behind login screens. Managing cookies, sessions, and IP consistency manually is brittle and time-consuming. We handle all of it.

The old way

  • Manually handle cookies per request
  • Re-authenticate on every session
  • Manage localStorage yourself
  • IP changes break sessions
  • No way to share sessions
  • Store credentials in code
  • Handle 2FA manually
  • Debug auth failures blind

With Browser Sessions

  • Log in once via browser UI
  • Session persists automatically
  • Cookies & localStorage preserved
  • Proxy pinned to session IP
  • Save & share named sessions
  • Credentials stored securely
  • Works with SSO & 2FA
  • Full session replay in dashboard


How it works

Log in once. Scrape forever.

01

Create a session

Open an interactive browser window and log into the target site. The session state, including cookies, localStorage, and auth tokens, is captured and encrypted.
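Session creation can also be driven from the API, using the `POST /v1/sessions` endpoint shown in the quick start. A minimal sketch; `create_session` is an illustrative helper, not part of any client library, and the interactive login itself still happens in the browser window opened from the dashboard:

```python
import requests

API_KEY = "your_api_key"  # from the dashboard

def create_session(name: str) -> dict:
    """Create a named session (illustrative helper).

    The response carries the session ID. The interactive login that
    populates the session state happens in the browser window.
    """
    resp = requests.post(
        "https://api.anakin.io/v1/sessions",
        headers={"X-API-Key": API_KEY},
        json={"name": name},
    )
    resp.raise_for_status()
    return resp.json()
```

Keep the returned session ID; it is what every later scrape request references.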

02

Reference it in requests

Pass the session ID with any scrape request. We restore the full browser context, including cookies and local storage, before loading the page.
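In code, referencing a session is one extra field on the scrape request. A sketch using the `POST /v1/scrape` endpoint and parameters from the quick start; `scrape_with_session` is an illustrative wrapper:

```python
import requests

API_KEY = "your_api_key"  # from the dashboard

def scrape_with_session(url: str, session_id: str, fmt: str = "markdown") -> dict:
    """Scrape `url` inside the restored browser context of `session_id`."""
    resp = requests.post(
        "https://api.anakin.io/v1/scrape",
        headers={"X-API-Key": API_KEY},
        json={"url": url, "session_id": session_id, "format": fmt},
    )
    resp.raise_for_status()
    return resp.json()
```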

03

Scrape as that user

Every request runs as the authenticated user. The session is refreshed automatically before it expires, so no re-authentication is needed.
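If you want to surface the refresh state in your own monitoring, you can poll the session. Note the `GET /v1/sessions/{id}` route below is an assumption for illustration, not a documented endpoint; check the API reference for the actual session-status route:

```python
import requests

API_KEY = "your_api_key"  # from the dashboard

def session_status(session_id: str) -> dict:
    """Fetch session metadata (ASSUMED endpoint: GET /v1/sessions/{id})."""
    resp = requests.get(
        f"https://api.anakin.io/v1/sessions/{session_id}",
        headers={"X-API-Key": API_KEY},
    )
    resp.raise_for_status()
    return resp.json()
```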

Session lifecycle

One login. Unlimited authenticated scrapes.

session: linkedin-arun · sess_a3f7c2d9e1b8

  • Jan 12 09:14 · Session created · interactive login, LinkedIn.com
  • Jan 12 09:15 · 2FA completed · auth tokens captured and encrypted
  • Jan 12 09:16 · Proxy pinned · exit node US-West, consistent IP
  • Jan 15 14:02 · Request #1,204 · linkedin.com/feed → 200 OK · 1.2s
  • Feb 04 11:30 · Auto-refreshed · session extended, 30d remaining
  • Feb 11 08:00 · Request #9,847 · linkedin.com/connections → 200 OK · 0.9s
  • Session fidelity: cookies + localStorage preserved
  • Re-auths needed after initial login: 0
  • Session lifetime: 30d, auto-refreshed

Quick start

Authenticated scraping in minutes.

Create a session, log in once via the browser UI, then reference it in any scrape request. No credentials in your code.

1
Sign up and get your API key
2
Create a named session from the dashboard
3
Reference the session ID in your scrape requests
import requests

API_KEY = "your_api_key"

# Create a named session
session = requests.post(
    "https://api.anakin.io/v1/sessions",
    headers={"X-API-Key": API_KEY},
    json={"name": "linkedin-arun"},
).json()

# Scrape an authenticated page
result = requests.post(
    "https://api.anakin.io/v1/scrape",
    headers={"X-API-Key": API_KEY},
    json={
        "url": "https://linkedin.com/feed",
        "session_id": session["id"],
        "format": "markdown",
    },
).json()
print(result["content"])
Authenticate with the X-API-Key header.

FAQ

Common questions

Log in once. Scrape it all.

Stop re-authenticating on every request. Persistent sessions unlock the data behind every login wall.