Install the CLI
128 KB. No code, no analysis, no proprietary IP runs on your machine. The agent lives server-side.
```
┌──── browser-recon · agentic scraping reconnaissance ─────────────────────┐
│  TARGET = $1   ARGS = $@                                                  │
│  → spawn(chrome) → observe(traffic) → validate() → report()              │
└───────────────────────────────────────────────────────────────────────────┘
```
browser-recon is an AI agent for scraping reconnaissance. You browse a target site like a human; the agent watches; it returns a production-grade scraping plan — library, headers, cookies, rate-limits, cost, and runnable starter code.
You don't write code. You browse the site like a normal user for two minutes. The agent reverse-engineers the rest.
Chrome opens. You click through the data you want — search, listings, reviews, anything. The agent captures every request behind the curtain.
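To make that capture step concrete, here is a minimal sketch of the kind of network observation involved, written against Playwright's request events. Playwright, the target URL, and the two-minute wait are assumptions for illustration only; browser-recon's own instrumentation runs server-side and is not shown here.

```python
# Illustration only: a minimal capture in the spirit of what the agent does,
# sketched with Playwright (an assumption, not browser-recon's actual stack).
from playwright.sync_api import sync_playwright

captured = []

def on_request(request):
    # Record method, URL, headers, and body for every request the page fires.
    captured.append({
        "method": request.method,
        "url": request.url,
        "headers": request.headers,
        "post_data": request.post_data,
    })

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()
    page.on("request", on_request)
    page.goto("https://www.walmart.com/")   # browse like a normal user
    page.wait_for_timeout(120_000)          # roughly two minutes of clicking
    browser.close()

print(f"captured {len(captured)} requests")
```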
The agent fires test requests through real proxies, validates what works, and returns the recommended library, headers, cookies, cost, and runnable starter code.
Most scraping advice is a guess. Ours is a measurement — every recommendation is grounded in a real test request, not in the LLM's priors.
Identifies Cloudflare, Akamai Bot Manager, PerimeterX, DataDome, Imperva. Knows what each one will do to a naive HTTP client.
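For a sense of how a vendor shows up in traffic, here is a simplified heuristic keyed on well-known cookies and headers (`__cf_bm`, `_abck`, `datadome`, and friends). The signature table below is an assumption for illustration, not the agent's detection logic.

```python
# Illustrative heuristic only: guess the bot-management vendor from the
# cookies and headers a naive probe gets back. Signature list is simplified.
VENDOR_SIGNATURES = {
    "Cloudflare":         ["__cf_bm", "cf-ray", "cf_clearance"],
    "Akamai Bot Manager": ["_abck", "bm_sz", "ak_bmsc"],
    "PerimeterX":         ["_px2", "_px3", "_pxhd"],
    "DataDome":           ["datadome", "x-datadome"],
    "Imperva":            ["incap_ses", "visid_incap", "x-iinfo"],
}

def identify_vendors(headers: dict, cookies: dict) -> list[str]:
    """Return vendors whose signature strings appear in the response."""
    haystack = " ".join(
        [*headers.keys(), *headers.values(), *cookies.keys()]
    ).lower()
    return [
        vendor for vendor, sigs in VENDOR_SIGNATURES.items()
        if any(sig.lower() in haystack for sig in sigs)
    ]
```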
Fires test traffic through the proxy tiers you'd use in production. Reports which library × proxy combination the target accepts.
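A toy version of that matrix probe is sketched below, assuming plain `requests` versus `curl_cffi` impersonation across two placeholder proxy URLs. The real validation checks response content and block pages, not just status codes.

```python
# Toy library × proxy matrix. Proxy URLs are placeholders (assumptions);
# production validation inspects the response body, not only the status.
import requests as plain_requests
from curl_cffi import requests as cffi_requests

TARGET = "https://www.walmart.com/"
PROXIES = {
    "datacenter":  "http://dc-proxy.example:8080",    # placeholder
    "residential": "http://resi-proxy.example:8080",  # placeholder
}

def probe(label: str, proxy_url: str) -> None:
    proxy = {"http": proxy_url, "https": proxy_url}
    try:
        r = plain_requests.get(TARGET, proxies=proxy, timeout=15)
        print(f"requests   × {label}: HTTP {r.status_code}")
    except Exception as exc:
        print(f"requests   × {label}: failed ({exc})")
    try:
        r = cffi_requests.get(TARGET, impersonate="chrome120",
                              proxies=proxy, timeout=15)
        print(f"curl_cffi  × {label}: HTTP {r.status_code}")
    except Exception as exc:
        print(f"curl_cffi  × {label}: failed ({exc})")

for label, url in PROXIES.items():
    probe(label, url)
```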
Measured bandwidth × your proxy rate. Tied to data the agent saw, not a vague band.
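The arithmetic is simple enough to check yourself. The figures below are assumptions for illustration; the report uses the bytes the agent actually measured and the proxy rate you configure.

```python
# Back-of-envelope cost model: measured bandwidth × proxy rate.
# Both numbers below are assumed for illustration, not measured values.
avg_response_kb = 350        # average response size per request (assumed)
proxy_rate_per_gb = 3.00     # residential proxy rate in $/GB (assumed)

gb_per_1k_requests = avg_response_kb * 1_000 / 1_000_000
cost_per_1k = gb_per_1k_requests * proxy_rate_per_gb
print(f"${cost_per_1k:.2f} per 1,000 requests")   # -> $1.05
```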
Every report ends with a runnable Python starter using the recommended library, headers, and timing. Drop it in your stack.
Transit encrypted with TLS 1.3. Captures stored encrypted at rest with KMS (AES-256). Cookie values, auth tokens, API keys, and JWT-shaped strings are scrubbed before long-term storage. Not end-to-end — see the user guide.
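As a rough sketch of what scrubbing means here, assume a small header deny-list and a JWT-shaped regex; the names and patterns below are illustrative assumptions, not the production pipeline.

```python
# Illustrative scrubbing pass: redact secret-shaped values before storage.
# Field names and patterns are simplified assumptions.
import re

JWT_RE = re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]*")
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie", "x-api-key"}

def scrub(capture: dict) -> dict:
    """Return a copy of a captured request with secret-shaped values redacted."""
    clean = dict(capture)
    clean["headers"] = {
        k: ("[scrubbed]" if k.lower() in SENSITIVE_HEADERS else v)
        for k, v in capture.get("headers", {}).items()
    }
    body = capture.get("post_data") or ""
    clean["post_data"] = JWT_RE.sub("[scrubbed-jwt]", body)
    return clean
```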
Every report stays live for the duration of your tier. Re-run from the dashboard or re-scan for one credit.
| domain | stack | status | match | cost / 1k req | report |
|---|---|---|---|---|---|
| walmart.com | curl_cffi · chrome120 · resi | ok | 7/9 | $0.84 | view → |
| staples.com | requests · datacenter | ok | 9/9 | $0.05 | view → |
| ticketmaster.com | curl_cffi · safari17 · resi | 22% | 2/9 | $1.21 | view → |
| airbnb.com | curl_cffi · chrome120 · resi | ok | 8/9 | $0.94 | view → |
Every report opens with the recommendation. Below the fold lives the proof — and the runnable starter code.
Cookie warmup required before the Reviews GraphQL endpoint will respond. Estimated cost: $0.40–$2.00 per 1,000 requests.
```python
# starter.py — generated by browser-recon
from curl_cffi import requests

session = requests.Session(impersonate="chrome120")
session.proxies = {"https": "http://resi-proxy:8080"}

# warmup — primes Akamai's _abck cookie
session.get("https://www.walmart.com/")

# product reviews — paginate via offset
# payload: the Reviews GraphQL body captured during your session (see the report)
r = session.post(
    "https://www.walmart.com/orchestra/snb/graphql/Reviews",
    json=payload,
)
```
Credits expire monthly. Reports stay live for the duration of your tier. Re-scan for one credit to refresh.
Drop your email, or post about us on X — we let in a batch every week.
queue → [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 247 devs