BETA · INVITE-ONLY · 247 ON THE LIST
v0.3.6 · measurement-first agent
┌──── browser-recon · agentic scraping reconnaissance ─────────────────────┐
│  TARGET = $1  ARGS = $@                                                  │
│  → spawn(chrome)  →  observe(traffic)  →  validate()  →  report()        │
└──────────────────────────────────────────────────────────────────────────┘

Stop reverse-engineering websites. Let an agent do it.

browser-recon is an AI agent for scraping reconnaissance. You browse a target site like a human; the agent watches; it returns a production-grade scraping plan — library, headers, cookies, rate-limits, cost, and runnable starter code.

recon_time            ~10min
recommendation_acc    88%
install_size          128KB
stacks_handled        6+ anti-bot vendors
// validated against the protections of
walmart.com
staples.com
target.com
airbnb.com
ticketmaster.com
coinmarketcap.com
how_it_works

Three commands. One report.

You don't write code. You browse the site like a normal user for two minutes. The agent reverse-engineers the rest.

[1] /usr/local/bin

Install the CLI

128 KB. No analysis, no scraping code, no proprietary IP runs on your machine. The agent lives server-side.

pipx install browser-recon
[2] ~/scan

Browse the target

Chrome opens. You click through the data you want — search, listings, reviews, anything. The agent captures every request behind the curtain.

recon scan https://target.com
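Roughly what one of those captured entries looks like (field names illustrative; the real capture schema may differ):

# one captured entry, roughly as the agent records it (illustrative only)
captured = {
    "method": "POST",
    "url": "https://target.com/api/search",
    "request_headers": {"accept": "application/json", "user-agent": "Mozilla/5.0 ..."},
    "cookies_sent": "<scrubbed before long-term storage>",
    "status": 200,
    "response_bytes": 48_213,
}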
[3] ~/report

Receive the verdict

The agent fires test requests through real proxies, validates what works, and returns the recommended library, headers, cookies, cost, and runnable starter code.

open browser-recon.com/r/uz06csw1y2jg
man recon

What the agent actually does.

Most scraping advice is a guess. Ours is a measurement — every recommendation is grounded in a real test request, not in the LLM's priors.

BROWSER-RECON(1) · User Commands · capability surface
--detect

anti-bot fingerprinting

Identifies Cloudflare, Akamai Bot Manager, PerimeterX, DataDome, Imperva. Knows what each one will do to a naive HTTP client.
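A toy version of that check, just to make the idea concrete (signal names are the well-known public ones; the real agent weighs far more than header and cookie names):

# naive vendor fingerprint from header and cookie names (illustrative only)
VENDOR_SIGNALS = {
    "cloudflare":  ["cf-ray", "__cf_bm"],
    "akamai":      ["_abck", "bm_sz"],
    "perimeterx":  ["_px3", "_pxhd"],
    "datadome":    ["datadome"],
    "imperva":     ["visid_incap", "incap_ses"],
}

def detect(header_names, cookie_names):
    haystack = " ".join(n.lower() for n in list(header_names) + list(cookie_names))
    return [vendor for vendor, sigs in VENDOR_SIGNALS.items()
            if any(sig in haystack for sig in sigs)]

print(detect(["CF-RAY", "content-type"], ["__cf_bm", "_abck"]))
# -> ['cloudflare', 'akamai']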

--validate

real proxies. real requests.

Fires test traffic through the proxy tiers you'd use in production. Reports which library × proxy combination the target accepts.
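The shape of that pass, sketched in Python (helper and tier names hypothetical, not the agent's actual API):

# sketch: try each client x proxy tier against a URL captured during your browse,
# and record whether the real data comes back
from itertools import product

CLIENTS = ["requests", "curl_cffi:chrome120", "curl_cffi:safari17"]
PROXY_TIERS = ["datacenter", "residential"]

def validate(fetch, captured_url, expected_marker):
    """fetch(url, client, proxy) -> response body; supplied by the caller."""
    results = {}
    for client, proxy in product(CLIENTS, PROXY_TIERS):
        body = fetch(captured_url, client, proxy)
        results[(client, proxy)] = expected_marker in body
    return results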

--cost

dollar projection per 1k requests

Measured bandwidth × your proxy rate. Tied to data the agent saw, not a vague band.
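The arithmetic is deliberately simple. With illustrative numbers (not from a real scan):

# cost_per_1k = measured bytes per request x 1,000 requests x your proxy's $/GB
avg_bytes_per_request = 70_000       # measured across the validation requests
proxy_rate_per_gb = 12.00            # $/GB residential, whatever your provider charges

gb_per_1k = avg_bytes_per_request * 1_000 / 1e9    # 0.07 GB
cost_per_1k = gb_per_1k * proxy_rate_per_gb        # $0.84
print(f"${cost_per_1k:.2f} per 1,000 requests")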

--starter

runnable python, not advice

Every report ends with a runnable Python starter using the recommended library, headers, and timing. Drop it in your stack.

--secure

tls 1.3 + aws kms

Transit encrypted with TLS 1.3. Captures stored encrypted at rest with KMS (AES-256). Cookie values, auth tokens, API keys, and JWT-shaped strings are scrubbed before long-term storage. Not end-to-end — see the user guide.
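What "JWT-shaped" means in practice, as a minimal sketch (pattern illustrative, not the production rule set):

import re

# three dot-separated base64url segments, header starting with "eyJ"
JWT_SHAPE = re.compile(r"eyJ[\w-]+\.[\w-]+\.[\w-]+")

def scrub(text):
    return JWT_SHAPE.sub("<redacted:jwt>", text)

print(scrub("authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiIxIn0.sig123"))
# -> authorization: Bearer <redacted:jwt>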

--archive

kept on your shelf

Every report stays live for the duration of your tier. Re-run from the dashboard or re-scan for one credit.

~/dashboard

Every scan. Archived.

Reports stay live for the duration of your plan. Re-scan a target for one credit to refresh.

browser-recon.com / dashboard
you@browser-recon

recent_scans

credits: 23 / 30 · renews: 2026-05-30
CREDITS             23/30
SCANS_THIS_MONTH    07
SUCCESS_RATE_30D    88%
domain              stack                           status   match   cost/1k
walmart.com         curl_cffi · chrome120 · resi    ok       7/9     $0.84    view →
staples.com         requests · datacenter           ok       9/9     $0.05    view →
ticketmaster.com    curl_cffi · safari17 · resi     22%      2/9     $1.21    view →
airbnb.com          curl_cffi · chrome120 · resi    ok       8/9     $0.94    view →
~/report

A verdict. The evidence. The code.

Every report opens with the recommendation. Below the fold lives the proof — and the runnable starter code.

/r/uz06csw1y2jg · walmart.com
scan_id: 9e602c15
2026-05-13 · confidence: 0.82

walmart.com runs dual Akamai + PerimeterX. curl_cffi + chrome120 + residential proxies reproduces the captured data.

Cookie warmup required before the Reviews GraphQL endpoint will respond. Estimated cost: $0.40–$2.00 per 1,000 requests.

library          curl_cffi
impersonation    chrome120
proxy_tier       residential
cost_per_1k      $0.84
# starter.py — generated by browser-recon
from curl_cffi import requests

session = requests.Session(impersonate="chrome120")
session.proxies = {"https": "http://resi-proxy:8080"}

# warmup — primes Akamai's _abck cookie
session.get("https://www.walmart.com/")

# product reviews — paginate via offset
# `payload` is the captured GraphQL body; the full report includes the exact
# query and variables. Placeholder shown here.
payload = {"query": "...", "variables": {}}
r = session.post("https://www.walmart.com/orchestra/snb/graphql/Reviews", json=payload)
print(r.status_code, len(r.content))
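Assuming curl_cffi is installed (pip install curl_cffi) and resi-proxy:8080 points at your residential gateway, the starter runs as-is once you swap the placeholder payload for the captured GraphQL body included in the full report.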
~/pricing

Pay per scan. Nothing else.

Credits expire monthly. Reports stay live for the duration of your tier. Re-scan for one credit to refresh.

first month only
tester
$5/mo
5 credits
  • 5 scans / month
  • reports 2 weeks
  • email support
  • one-time signup
join_waitlist
beginner
$10/mo
12 credits
  • 12 scans / month
  • reports 1 month
  • email support
join_waitlist
pro_max
$60/mo
100 credits
  • 100 scans / month
  • reports 3 months
  • priority queue
  • slack support
join_waitlist
// Higher volume, custom report shape, or starter code in your framework?   → talk_to_us(enterprise)
→ full pricing & faq
~/join

Get on the list. Skip it with a tweet.

Drop your email, or post about us on X — we let in a batch every week.

queue → [▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░░░] 247 dev(s)
247 developers in line · we let in a batch every week