Browser Sessions

Scrape authenticated content by saving and reusing login sessions

Tip:

  • Session data is protected using AES-256-GCM encryption with complete user isolation.
  • The system does not collect, store, or retain passwords, authentication secrets, or credentials at any time.
  • Session data is permanently and irreversibly deleted upon user-initiated session removal.

What are Browser Sessions?

Browser sessions allow you to scrape content that requires authentication. Instead of handling complex login flows programmatically, you log in once through a real browser, and we save your session for future API requests.

This is useful for scraping:

  • Account dashboards and order history
  • Subscription-based content
  • Social media profiles
  • Any page that requires a login

How It Works

There are two ways to create a session:

Option A: Interactive (Dashboard)

  1. Create — From your dashboard, click Create Session to launch an interactive browser
  2. Log in — Navigate to the website and log in with your credentials. Complete any 2FA or CAPTCHA challenges as needed
  3. Save — Click Save Session to encrypt and store your cookies and localStorage

Option B: Programmatic (Browser Connect)

Save sessions directly from Browser Connect by adding ?save_session=my-name to the connection URL. The session is saved automatically when you disconnect:

# `p` is an async_playwright() instance (see the full example further below)
browser = await p.chromium.connect_over_cdp(
    "wss://api.anakin.io/v1/browser-connect?save_session=my-amazon-login&save_url=https://amazon.com",
    headers={"X-API-Key": "your_api_key"},
)
# ... automate login with Playwright ...
await browser.close()  # session is auto-saved on disconnect
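If you build the connection URL dynamically, the query string can be assembled with the standard library rather than concatenated by hand. A minimal sketch using urllib.parse, reusing the save_session and save_url parameter names from the example above (the session name and site URL are placeholders):

```python
from urllib.parse import urlencode

BASE = "wss://api.anakin.io/v1/browser-connect"
params = {
    "save_session": "my-amazon-login",  # name the session will be stored under
    "save_url": "https://amazon.com",   # site the session belongs to
}
# urlencode percent-encodes the values, which is safe in a query string
connect_url = f"{BASE}?{urlencode(params)}"
print(connect_url)
# pass connect_url to p.chromium.connect_over_cdp(connect_url, headers={"X-API-Key": ...})
```

Percent-encoding the save_url value is equivalent to the literal URL shown above; servers decode it transparently.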

Use in API Requests

Include the sessionId in your scrape requests. The API will use your saved session to access authenticated pages.


Using Sessions with the API

Add the sessionId parameter to your URL Scraper request:

{
  "url": "https://amazon.com/your-orders",
  "sessionId": "session_abc123xyz",
  "country": "us"
}

When using a session, browser-based scraping is automatically enabled since sessions require a full browser environment.

curl -X POST https://api.anakin.io/v1/url-scraper \
  -H "X-API-Key: your_api_key" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://amazon.com/your-orders",
    "sessionId": "session_abc123xyz",
    "country": "us"
  }'
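The same request can be made from Python with only the standard library. This sketch mirrors the curl call above; the function is illustrative and the API key is a placeholder (uncomment the final line to actually send the request):

```python
import json
import urllib.request

API_URL = "https://api.anakin.io/v1/url-scraper"
payload = {
    "url": "https://amazon.com/your-orders",
    "sessionId": "session_abc123xyz",  # from your dashboard or Browser Connect
    "country": "us",
}

def scrape_with_session(api_key: str) -> dict:
    """POST the scrape request; the saved session handles authentication."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())

# result = scrape_with_session("your_api_key")
```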

Using Sessions with Browser Connect

You can also load saved sessions into Browser Connect for full programmatic control of an authenticated browser. Pass ?session_id or ?session_name when connecting:

import asyncio
from playwright.async_api import async_playwright

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.connect_over_cdp(
            "wss://api.anakin.io/v1/browser-connect?session_id=session_abc123xyz",
            headers={"X-API-Key": "your_api_key"},
        )
        page = browser.contexts[0].pages[0]

        # Cookies are pre-loaded — navigate directly to authenticated pages
        await page.goto("https://amazon.com/your-orders")
        title = await page.evaluate("document.title")
        print("Page title:", title)

        await browser.close()

asyncio.run(main())

This is useful when:

  • You need to interact with authenticated pages (click, scroll, fill forms)
  • The URL Scraper API doesn't give you enough control
  • You want to combine session auth with custom Playwright/Puppeteer automation

See the Browser Connect docs for full details.


Managing Sessions

You can manage your sessions from the dashboard:

  • View all saved sessions and their details
  • Check when a session was last used
  • Delete sessions you no longer need