.NET
Scrape your first page from C# / .NET using the built-in HttpClient — works with any .NET 6+ project.
Submit a scrape, poll for the result, and handle transient errors — using the standard library's HttpClient and System.Text.Json.
Authentication
Set your API key as an environment variable. Get a key from the Dashboard.
export ANAKIN_API_KEY=ak-your-key-here

The base URL is https://api.anakin.io/v1. Every request authenticates via the X-API-Key header.
Install
No NuGet packages needed — HttpClient, System.Net.Http.Json, and System.Text.Json are all in the .NET 6+ standard library. Create a console project:
dotnet new console -o quickstart
cd quickstart

Scrape a page
Replace Program.cs with:
using System.Net.Http.Json;
using System.Text.Json.Nodes;
const string Base = "https://api.anakin.io/v1";
var apiKey = Environment.GetEnvironmentVariable("ANAKIN_API_KEY")
    ?? throw new InvalidOperationException("ANAKIN_API_KEY is not set");
using var http = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
http.DefaultRequestHeaders.Add("X-API-Key", apiKey);
async Task<JsonNode?> Request(HttpMethod method, string path, object? body = null)
{
    var req = new HttpRequestMessage(method, Base + path);
    if (body != null) req.Content = JsonContent.Create(body);
    try
    {
        var resp = await http.SendAsync(req);
        resp.EnsureSuccessStatusCode(); // treat non-2xx like a transient network error
        return await resp.Content.ReadFromJsonAsync<JsonNode>();
    }
    catch (HttpRequestException) { return null; } // caller retries on null
}

var submitted = await Request(HttpMethod.Post, "/url-scraper",
        new { url = "https://example.com" })
    ?? throw new InvalidOperationException("submit failed; check your API key and network");
var jobId = submitted["jobId"]!.ToString();

for (int i = 0; i < 60; i++)
{
    var job = await Request(HttpMethod.Get, $"/url-scraper/{jobId}");
    if (job == null)
    {
        await Task.Delay(3000); // retry transient errors
        continue;
    }

    var status = job["status"]!.ToString();
    if (status == "completed")
    {
        Console.WriteLine(job["markdown"]);
        return;
    }
    if (status == "failed")
    {
        throw new Exception($"scrape failed: {job["error"]}");
    }
    await Task.Delay(3000);
}
throw new TimeoutException("timed out after 3 minutes");

Run it:

dotnet run

What this does
- Submits https://example.com to /url-scraper and gets back a jobId.
- Polls /url-scraper/{jobId} every 3 seconds (up to 60 attempts = 3 minutes).
- Retries transient HttpRequestException errors silently — only surfaces real failures.
- Prints the final markdown when the job completes.
Most jobs finish in 3–15 seconds.
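The fixed 3-second delay keeps the example simple. If you expect slower jobs or a flaky network, a capped exponential backoff spreads retries out instead of hammering the API at a steady rate. A minimal sketch; the Backoff helper is our own name, not part of any SDK:

```csharp
// Capped exponential backoff for polling: 3s, 6s, 12s, 24s, then 30s thereafter.
static TimeSpan Backoff(int attempt) =>
    TimeSpan.FromSeconds(Math.Min(3 * Math.Pow(2, attempt), 30));

// In the polling loop, replace the fixed delay with:
// await Task.Delay(Backoff(i));
```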
Go further
Extract structured JSON with AI
Add generateJson: true to the submit body to have AI return structured data:
var submitted = await Request(HttpMethod.Post, "/url-scraper", new {
    url = "https://news.ycombinator.com",
    generateJson = true
});

The completed response includes a generatedJson field with structured data inferred from the page.
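The generatedJson field is just another node on the completed job, so you can read it with the same JsonNode API used elsewhere in this guide. A sketch, assuming the AI infers a list of stories with a title field for this page; that shape is our assumption, so inspect generatedJson before binding to it:

```csharp
using System.Linq;
using System.Text.Json.Nodes;

// job is the completed JsonNode from the polling loop.
// "stories" and "title" are assumed field names, not guaranteed by the API.
string[] ReadTitles(JsonNode job) =>
    job["generatedJson"]?["stories"]?.AsArray()
        .Select(s => s?["title"]?.ToString() ?? "")
        .ToArray() ?? Array.Empty<string>();
```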
Scrape JavaScript-heavy sites
For SPAs and dynamically-loaded pages, add useBrowser: true:
var submitted = await Request(HttpMethod.Post, "/url-scraper", new {
    url = "https://example.com/spa",
    useBrowser = true
});

Only use browser mode when needed — standard scraping is faster and cheaper.
Use it from ASP.NET Core
Register a typed HttpClient via IHttpClientFactory to get connection pooling and DI for free, then call from a hosted background service (IHostedService) or a Hangfire/Quartz job — the polling loop awaits up to 3 minutes per URL, so background execution is the natural fit.
// Program.cs
builder.Services.AddHttpClient("anakin", c => {
    // End BaseAddress with "/" so relative request paths keep the /v1 segment.
    c.BaseAddress = new Uri("https://api.anakin.io/v1/");
    c.DefaultRequestHeaders.Add("X-API-Key",
        builder.Configuration["ANAKIN_API_KEY"]);
    c.Timeout = TimeSpan.FromSeconds(30);
});
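From there, a hosted service can resolve the named client and run the same submit/poll loop off the request path. A minimal sketch; ScrapeWorker is our own name, not part of any SDK, and note that HttpClient only resolves relative paths under /v1 when BaseAddress ends with a trailing slash:

```csharp
using System.Net.Http.Json;
using System.Text.Json.Nodes;
using Microsoft.Extensions.Hosting;

// Hypothetical worker; register with builder.Services.AddHostedService<ScrapeWorker>().
public sealed class ScrapeWorker : BackgroundService
{
    private readonly IHttpClientFactory _factory;
    public ScrapeWorker(IHttpClientFactory factory) => _factory = factory;

    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        // BaseAddress and X-API-Key were configured at registration.
        var http = _factory.CreateClient("anakin");
        var resp = await http.PostAsJsonAsync("url-scraper",
            new { url = "https://example.com" }, ct);
        var jobId = (await resp.Content.ReadFromJsonAsync<JsonNode>(ct))?["jobId"]?.ToString();
        // ...poll $"url-scraper/{jobId}" here, exactly as in the console example.
    }
}
```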