Scrape Target.com with Python: 3 Methods That Actually Work

Last Updated on April 16, 2026

Target.com is one of those sites that looks simple to scrape — until you actually try it. If you've ever written a quick Python script with Requests and BeautifulSoup, fired it at a Target product page, and watched your price field come back as None, you're in very good company.

Having tested scraping approaches against most major retail sites, I can confirm: Target consistently ranks among the trickiest. It's a goldmine of product data — prices, ratings, inventory, reviews — but Target's combination of React-based client-side rendering and Akamai's bot detection means the naive approach fails almost immediately. Three Python methods actually work, though. I'll walk through each one, explain why the first attempt always breaks, and show you a no-code shortcut for when Python isn't worth the pain.

Why Your First Python Scrape of Target.com Returns None

Before solutions, the problem. This is the code most beginners write:

import requests
from bs4 import BeautifulSoup

url = "https://www.target.com/p/some-product/-/A-12345678"
response = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
soup = BeautifulSoup(response.text, "html.parser")
price = soup.select_one('[data-test="current-price"]')
print(price)  # None

The output? None. Every time.

This isn't a bug in your code. The HTML that requests.get() returns from Target is essentially a skeleton — a React shell that says "hey, load this JavaScript to render the actual page." Product prices, ratings, reviews, and availability are all injected by JavaScript after the initial page load. Since Python's Requests library doesn't execute JavaScript, those elements simply don't exist in the response.
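You can see the failure mode with a stdlib-only sketch. The HTML below is illustrative (not Target's actual markup), but it mirrors the kind of React shell requests.get() receives: the price element never appears in the static source, so no selector can find it.

```python
# Illustrative stand-in for the React shell a non-browser client receives
shell_html = (
    "<html><head><title>Target</title></head>"
    '<body><div id="__next"></div>'
    '<script src="/bundle.js"></script>'
    "</body></html>"
)

# The price element simply does not exist in the static markup
print('data-test="current-price"' in shell_html)  # False
```

The JavaScript bundle is what would create that element in a real browser; without executing it, there is nothing to select.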

Forum threads are full of developers hitting this wall. One developer puts it bluntly: "An element shows up as None because it is rendered with Javascript and requests can't pull HTML rendered with Javascript." Another confirms: "When you send an HTTP request to the Target URL, the HTML response lacks meaningful data."

And even if you solve the JavaScript problem, there's a second layer: Target's Akamai bot detection fingerprints your TLS handshake and flags Python's requests library before a single byte of HTML is exchanged. More on that shortly.

What Makes Target.com So Hard to Scrape with Python

Target isn't just "a website that uses JavaScript." It's a layered defense system — and understanding each layer is how you pick the right scraping method.

JavaScript-Rendered Product Data

Target.com is built on React. When you load a product or search page in a real browser, here's what happens:

  1. The server sends a minimal HTML shell
  2. JavaScript bundles load and execute
  3. The frontend calls Target's internal Redsky API
  4. Product data (prices, ratings, images, availability) renders into the DOM

If you skip steps 2–4 — which is exactly what requests.get() does — you get an empty page. In practice, static HTTP requests capture roughly 30% of the available data on Target. The other 70% requires JavaScript execution or API access.

Search result pages are even worse. Only a handful of products appear in the initial HTML; the rest load as you scroll.

Target's Anti-Bot Defenses: Beyond Generic "Use Proxies" Advice

Most scraping guides hand-wave past anti-bot measures with "just use proxies." Target's defenses deserve more specificity.

TLS Fingerprinting (the big one). During the HTTPS handshake, your client sends a "Client Hello" packet that reveals your TLS version, cipher suites, extensions, and elliptic curves. These get hashed into a JA3 fingerprint. Python's requests library produces a well-known JA3 hash (8d9f7747675e24454cd9b7ed35c58707) that anti-bot databases flag instantly. Chrome sends 16 carefully ordered cipher suites with GREASE values; Python sends 60+ in non-browser order. The block happens before any HTTP content is exchanged.
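For the curious, the JA3 mechanics can be sketched in a few lines: the five Client Hello field lists are dash-joined, comma-joined, and MD5-hashed. The field values below are illustrative placeholders, not a real browser's handshake.

```python
import hashlib

def ja3_hash(version, ciphers, extensions, curves, point_formats):
    """Hash the five Client Hello field lists into a JA3 fingerprint.

    Per the JA3 scheme: each list is dash-joined, the five fields are
    comma-joined, and the result is MD5-hashed.
    """
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Illustrative values, just to show the mechanics
fp = ja3_hash(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0])
print(fp)
```

Because the cipher and extension *order* feeds the hash, a Python client with the same ciphers in a different order still produces a different, flaggable fingerprint.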

IP reputation scoring. Akamai categorizes IPs into trust tiers. Datacenter IPs receive "significant negative trust scores as they are likely to be used by bots." Residential IPs get positive scores. On Target specifically, datacenter IP ranges are flagged immediately.

JavaScript fingerprinting. Akamai injects JavaScript that collects your JS engine specs, hardware capabilities, OS data, fonts, plugins, and behavioral data (typing speed, mouse movement, click timing). This generates the _abck cookie — a stateful fingerprint token. Without a valid _abck, requests are blocked.

Rate limiting. Target triggers 429 errors at roughly 30–60 requests per minute per IP. Some users report receiving HTTP 200 responses that actually contain the "Pardon Our Interruption" block page — which makes automated detection tricky.
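Since a block page can arrive with a 200 status, a scraper should inspect the body as well as the status code. A minimal detector might look like this (the helper name and logic are my own sketch, keyed off the block-page text quoted above):

```python
def is_blocked(status_code: int, body: str) -> bool:
    """Detect an Akamai block even when the HTTP status looks healthy.

    Target can serve its "Pardon Our Interruption" interstitial with a
    200 status, so checking the status code alone is not enough.
    """
    if status_code in (403, 429):
        return True
    return "Pardon Our Interruption" in body

print(is_blocked(200, "<html>Pardon Our Interruption...</html>"))  # True
print(is_blocked(200, "<html>bluetooth headphones results</html>"))  # False
```

Wiring a check like this into your retry logic lets you rotate proxies or back off as soon as a block appears, instead of silently parsing an error page.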

Taken together, these layers make Target one of the hardest mainstream retail sites to scrape, and the Akamai layer is the main reason why.

3 Methods to Scrape Target.com with Python (Side-by-Side)

No single article out there compares all three viable approaches in one place. Here they are, assessed honestly:

| Criteria | Requests + BS4 | Selenium / Playwright | Redsky API |
|---|---|---|---|
| Handles JS rendering | ❌ No | ✅ Yes | ✅ Yes (JSON) |
| Speed per item | ⚡ ~0.5–1s | 🐢 ~5–10s | ⚡ ~0.5–1s |
| Anti-bot risk | ⚠️ High (TLS fingerprint) | ⚠️ Medium | ⚠️ Medium (auth keys may change) |
| Setup complexity | Low | Medium | Medium-High (reverse-engineering) |
| Data completeness | ~30% (static HTML only) | ~95% (full page) | ~90% (structured JSON) |
| Best for | Static metadata, __TGT_DATA__ | Full product pages, reviews | Bulk product data at scale |

Now let's build each one.

Method 1: Scrape Target.com with Python Requests and BeautifulSoup

This method won't get you JavaScript-rendered prices on search pages. It's fast and lightweight, though, and extracts more than you'd expect — if you know where to look.

The trick: Target embeds some product data in <script> tags containing a __TGT_DATA__ variable with __PRELOADED_QUERIES__. This JSON blob includes product names, descriptions, features, and sometimes prices on individual product pages. You can also grab product titles and URLs from search result HTML.

Step 1: Set Up Your Python Environment

Create a project folder and install dependencies:

mkdir target-scraper && cd target-scraper
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install requests beautifulsoup4 curl_cffi

Use curl_cffi over standard requests here. It spoofs browser TLS fingerprints, which is the single biggest factor in avoiding blocks on Target. Reported success rates on protected sites are roughly 15x higher with curl_cffi than with standard requests.

Step 2: Scrape Target Search Results

Target's search URL format is straightforward: https://www.target.com/s?searchTerm={keyword}
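If you build these URLs programmatically, the keyword needs URL-encoding; a small stdlib helper (the function name is my own) keeps spaces and special characters correct:

```python
from urllib.parse import quote_plus

def target_search_url(keyword: str) -> str:
    """Build a Target search URL, encoding spaces as '+'."""
    return f"https://www.target.com/s?searchTerm={quote_plus(keyword)}"

print(target_search_url("bluetooth headphones"))
# https://www.target.com/s?searchTerm=bluetooth+headphones
```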

from curl_cffi import requests as cureq
from bs4 import BeautifulSoup
import time, random

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

url = "https://www.target.com/s?searchTerm=bluetooth+headphones"
resp = cureq.get(url, headers=headers, impersonate="chrome124")
soup = BeautifulSoup(resp.text, "html.parser")

# Product cards use this data-test attribute
cards = soup.find_all("div", {"data-test": "@web/site-top-of-funnel/ProductCardWrapper"})
for card in cards:
    link_tag = card.find("a")
    title = link_tag.get_text(strip=True) if link_tag else "N/A"
    href = "https://www.target.com" + link_tag["href"] if link_tag and link_tag.get("href") else "N/A"
    print(f"{title} — {href}")

You'll get product names and URLs. Prices? Probably not from this HTML. That's expected.

Step 3: Extract Embedded JSON Data from Product Pages

Individual product pages embed richer data in the __TGT_DATA__ script tag:

import re, json

product_url = "https://www.target.com/p/some-product/-/A-12345678"
resp = cureq.get(product_url, headers=headers, impersonate="chrome124")
soup = BeautifulSoup(resp.text, "html.parser")

# Find the __TGT_DATA__ script
scripts = soup.find_all("script")
for script in scripts:
    if script.string and "__TGT_DATA__" in script.string:
        # Extract JSON from the script content
        match = re.search(r'__TGT_DATA__\s*=\s*({.*?});?\s*$', script.string, re.DOTALL)
        if match:
            tgt_data = json.loads(match.group(1))
            # Navigate the JSON structure for product details
            queries = tgt_data.get("__PRELOADED_QUERIES__", {})
            # Product data is nested inside — structure varies by page
            print(json.dumps(queries, indent=2)[:500])  # Preview the structure

The JSON structure inside __TGT_DATA__ contains product names, descriptions, features, and often pricing data. The exact nesting varies, so you'll need to inspect the output and navigate accordingly.

Step 4: Handle Pagination

Target's search pagination uses the Nao parameter. Page 1 is Nao=0, page 2 is Nao=24, page 3 is Nao=48, and so on (incrementing by 24):

for page in range(0, 120, 24):  # First 5 pages
    paginated_url = f"https://www.target.com/s?searchTerm=bluetooth+headphones&Nao={page}"
    resp = cureq.get(paginated_url, headers=headers, impersonate="chrome124")
    # Parse and extract...
    time.sleep(random.uniform(2, 5))  # Be polite
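The offset arithmetic is easy to get off by one; a tiny helper (my own sketch) makes the page-to-Nao mapping explicit:

```python
def nao_offset(page: int, page_size: int = 24) -> int:
    """Translate a 1-indexed page number into Target's Nao offset.

    Page 1 -> Nao=0, page 2 -> Nao=24, page 3 -> Nao=48, and so on.
    """
    if page < 1:
        raise ValueError("page numbers start at 1")
    return (page - 1) * page_size

for page in (1, 2, 3):
    print(page, nao_offset(page))
```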

Step 5: Store Your Scraped Data

import csv

# products: the list of dicts built while parsing (title, url, price, description)
with open("target_products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["title", "url", "price", "description"])
    writer.writeheader()
    for product in products:
        writer.writerow(product)

What you'll get: Product titles, URLs, descriptions, and embedded metadata. What you won't get reliably: Dynamic prices and ratings from search result pages. For those, you need Method 2 or 3.

Method 2: Scrape Target.com with Selenium or Playwright

A headless browser renders JavaScript, loads dynamic content, and simulates real user behavior. This is the method that gets you prices, ratings, and reviews.

On the Selenium vs. Playwright question: Playwright has become the stronger choice in 2026, and benchmarks show it is substantially faster (11s vs. 28s for 20 pages). I'll show Selenium here since it has a larger community and more tutorials, but Playwright is the better choice if you're starting fresh.

Step 1: Install Selenium and ChromeDriver

pip install selenium webdriver-manager

webdriver-manager handles ChromeDriver versioning automatically — no more "ChromeDriver version mismatch" headaches:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

options = Options()
options.add_argument("--headless=new")
options.add_argument("--window-size=1920,1080")
options.add_argument("--disable-blink-features=AutomationControlled")
options.add_argument("--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36")

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options)

Step 2: Load Target Pages and Wait for Content

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver.get("https://www.target.com/s?searchTerm=bluetooth+headphones")

# Wait for product cards to render (explicit wait > time.sleep)
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, '[data-test="product-title"]'))
)

Explicit waits are critical. time.sleep(10) wastes time on fast loads and isn't long enough on slow ones — the worst of both worlds. WebDriverWait polls every 500ms until the element appears or the timeout expires.

Step 3: Scroll the Page to Load All Products

Target lazy-loads products as you scroll. Without scrolling, you'll get 4–5 products instead of the full page:

import time

# Scroll in small human-like steps; stop once we reach the bottom.
# (Comparing document heights after a 300px step would often exit on the
# first iteration, since the page height only grows when lazy content loads.)
for _ in range(10):
    driver.execute_script("window.scrollBy(0, 300);")
    time.sleep(1.5)
    at_bottom = driver.execute_script(
        "return window.innerHeight + window.scrollY >= document.body.scrollHeight;"
    )
    if at_bottom:
        break

Testing suggests that 10 scroll iterations with 1.5-second delays yields 8+ products vs. 4–5 without scrolling. Each scroll step should be 200–300px to mimic human behavior.

Step 4: Extract Product Data from the Rendered Page

from selenium.common.exceptions import NoSuchElementException

products = []
cards = driver.find_elements(By.CSS_SELECTOR, '[data-test="@web/site-top-of-funnel/ProductCardWrapper"]')
for card in cards:
    try:
        title = card.find_element(By.CSS_SELECTOR, '[data-test="product-title"]').text
    except NoSuchElementException:
        title = "N/A"
    try:
        price = card.find_element(By.CSS_SELECTOR, '[data-test="current-price"]').text
    except NoSuchElementException:
        price = "N/A"
    try:
        link = card.find_element(By.CSS_SELECTOR, 'a[href*="/p/"]').get_attribute("href")
    except NoSuchElementException:
        link = "N/A"
    products.append({"title": title, "price": price, "link": link})

for p in products:
    print(f'{p["title"]} — {p["price"]}')

Key data-test selectors for Target (verified 2026):

| Data Field | Selector |
|---|---|
| Product card | data-test="@web/site-top-of-funnel/ProductCardWrapper" |
| Product title | data-test="product-title" |
| Current price | data-test="current-price" |
| Rating value | data-test="rating-value" |
| Rating count | data-test="rating-count" |

Step 5: Scrape Product Reviews (Bonus)

Navigate to individual product pages, scroll to the reviews section, and extract review data:

from bs4 import BeautifulSoup

driver.get("https://www.target.com/p/some-product/-/A-12345678")

# Scroll down to load reviews
for _ in range(5):
    driver.execute_script("window.scrollBy(0, 500);")
    time.sleep(2)

soup = BeautifulSoup(driver.page_source, "html.parser")
reviews = soup.find_all("div", {"data-test": "review-card--text"})
for review in reviews:
    print(review.get_text(strip=True)[:100])

Reviews are loaded via a Bazaarvoice integration and support pagination (up to 51 pages), sorting by recency, and a photos-only filter. Expect roughly 5.1 seconds per item with Selenium.

Don't forget to close the browser when done:

driver.quit()

Method 3: Scrape Target.com Using the Redsky API

Target's frontend fetches everything from an internal API at redsky.target.com. You can call it directly with Python — no HTML parsing, no browser, no JavaScript rendering. The response is clean JSON with 40+ data fields covering pricing, ratings, reviews, images, availability, fulfillment, specs, and variants. For bulk product data, this is the fastest and most reliable method by a wide margin.

Step 1: Discover the Redsky API with Chrome DevTools

Most tutorials skip this part entirely. Here's how to find the API yourself:

  1. Open any Target product page in Chrome
  2. Open DevTools (F12) → Network tab
  3. Filter by Fetch/XHR
  4. Reload the page
  5. Look for requests to redsky.target.com
  6. Click one — examine the Request URL and Headers

You'll see something like:

https://redsky.target.com/redsky_aggregations/v1/web/pdp_fulfillment_v1?key=9f36aeafbe60771e321a7cc95a78140772ab3e96&tcin=12345678&store_id=2148&zip=55401

The key parameters:

  • key — API key (static, not rotating — different endpoints use different keys)
  • tcin — Target.com Item Number (the 8-digit product ID)
  • store_id — Target store location
  • zip — ZIP code for fulfillment data

Extract the API key from the request URL; it's embedded as the key query parameter, not hidden in the headers.
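Once you have those parameters, assembling the URL with urllib.parse keeps the query-string encoding correct. A sketch (the function name is mine; the key value is a placeholder you'd pull from DevTools):

```python
from urllib.parse import urlencode

def redsky_url(endpoint: str, key: str, tcin: str,
               store_id: str = "2148", zip_code: str = "55401") -> str:
    """Assemble a Redsky aggregation URL from its query parameters."""
    base = f"https://redsky.target.com/redsky_aggregations/v1/web/{endpoint}"
    params = {"key": key, "tcin": tcin, "store_id": store_id, "zip": zip_code}
    return f"{base}?{urlencode(params)}"

print(redsky_url("pdp_fulfillment_v1", "YOUR_KEY_FROM_DEVTOOLS", "12345678"))
```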

Step 2: Make a Direct Python Request to the Redsky API

from curl_cffi import requests as cureq
import json

API_KEY = "9f36aeafbe60771e321a7cc95a78140772ab3e96"  # Extract from DevTools
TCIN = "12345678"
url = f"https://redsky.target.com/redsky_aggregations/v1/web/pdp_fulfillment_v1?key={API_KEY}&tcin={TCIN}&store_id=2148&zip=55401"

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Accept": "application/json",
    "Origin": "https://www.target.com",
    "Referer": "https://www.target.com/",
    "Sec-Fetch-Site": "same-site",
    "Sec-Fetch-Mode": "cors",
    "Sec-Fetch-Dest": "empty",
}

resp = cureq.get(url, headers=headers, impersonate="chrome124")
data = resp.json()

# Extract product details from the JSON response
product = data.get("data", {}).get("product", {})
title = product.get("item", {}).get("product_description", {}).get("title", "N/A")
price = product.get("price", {}).get("formatted_current_price", "N/A")
rating = product.get("ratings_and_reviews", {}).get("statistics", {}).get("rating", {}).get("average", "N/A")
print(f"{title} — {price} — Rating: {rating}")

No HTML parsing needed. The response is structured, clean, and fast.

Step 3: Scrape Product Search Results via the API

The product_summary_with_fulfillment_v1 endpoint accepts multiple TCINs at once:

tcins = ["12345678", "23456789", "34567890"]
tcin_str = ",".join(tcins)
search_url = f"https://redsky.target.com/redsky_aggregations/v1/web/product_summary_with_fulfillment_v1?key={API_KEY}&tcins={tcin_str}&store_id=2148&zip=55401"

resp = cureq.get(search_url, headers=headers, impersonate="chrome124")
results = resp.json()
for item in results.get("data", {}).get("product_summaries", []):
    title = item.get("title", "N/A")
    price = item.get("price", {}).get("formatted_current_price", "N/A")
    print(f"{title} — {price}")

To get TCINs, you can either extract them from search page HTML (they appear in product URLs as /A-XXXXXXXX) or from the __TGT_DATA__ embedded JSON.
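Pulling a TCIN out of a product URL is a one-line regex. A small sketch (the helper name is mine; the 8-digit pattern follows the article's description of TCINs):

```python
import re

def extract_tcin(url: str):
    """Extract the 8-digit TCIN from a Target product URL (/A-XXXXXXXX)."""
    m = re.search(r"/A-(\d{8})\b", url)
    return m.group(1) if m else None

print(extract_tcin("https://www.target.com/p/some-product/-/A-12345678"))
# 12345678
```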

Step 4: Scale Up with Concurrent Requests

from concurrent.futures import ThreadPoolExecutor
import time, random

def fetch_product(tcin):
    url = f"https://redsky.target.com/redsky_aggregations/v1/web/pdp_fulfillment_v1?key={API_KEY}&tcin={tcin}&store_id=2148&zip=55401"
    time.sleep(random.uniform(2, 5))  # Stagger requests to stay under the rate limit
    resp = cureq.get(url, headers=headers, impersonate="chrome124")
    return resp.json()

tcin_list = ["12345678", "23456789", "34567890", "45678901"]
with ThreadPoolExecutor(max_workers=3) as executor:
    results = list(executor.map(fetch_product, tcin_list))

Keep concurrency conservative: 3–5 threads with 2–5 second random delays. Target's rate limit sits around 30–60 requests per minute per IP.
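One way to enforce that pacing is a small throttle object that guarantees a randomized minimum gap between calls. This is my own sketch; the demo uses tiny gaps so it runs quickly, but in practice you'd pass the 2–5 second bounds discussed above.

```python
import random
import time

class Throttle:
    """Enforce a randomized minimum gap between successive requests."""

    def __init__(self, min_gap: float, max_gap: float):
        self.min_gap, self.max_gap = min_gap, max_gap
        self._last = 0.0

    def wait(self):
        gap = random.uniform(self.min_gap, self.max_gap)
        elapsed_since_last = time.monotonic() - self._last
        if elapsed_since_last < gap:
            time.sleep(gap - elapsed_since_last)
        self._last = time.monotonic()

# Demo with tiny gaps; use Throttle(2, 5) for real scraping
throttle = Throttle(0.05, 0.1)
start = time.monotonic()
for _ in range(3):
    throttle.wait()
elapsed = time.monotonic() - start
print(f"elapsed: {elapsed:.3f}s")  # at least two enforced gaps
```

Call throttle.wait() immediately before each request; the randomness avoids the fixed-interval pattern that rate limiters look for.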

Important Caveats About the Redsky API

Before you build a production pipeline on this, a few caveats:

  • API keys are static but endpoint-specific. Different Redsky endpoints use different keys. They don't rotate frequently, but Target could change them at any time.
  • This is an undocumented internal API. Target's engineering team has acknowledged it is intentionally public-facing, which reduces legal risk, but it's not a supported public API with SLAs.
  • Product variants (colors, sizes) each have unique TCINs. You need to query each variant separately.
  • Missing Sec-Fetch-* headers cause immediate blocks. This is a common gotcha — always include Sec-Fetch-Site, Sec-Fetch-Mode, and Sec-Fetch-Dest.

Tips for Scraping Target.com at Scale Without Getting Blocked

These practices apply at production scale, regardless of method.

Rotate Residential Proxies (Not Datacenter)

Target's Akamai implementation flags datacenter IP ranges on sight. Residential proxies are mandatory for sustained scraping. Pricing varies widely by provider, scaling down to $3–4/GB at volume.

Rotate IPs every 50–100 requests or on every request if your proxy pool supports it.
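A minimal rotation scheme is just a cycle over your pool, handing each request a fresh proxies dict in the shape curl_cffi and requests both accept. The proxy URLs below are hypothetical placeholders for your provider's endpoints.

```python
from itertools import cycle

# Hypothetical residential proxy endpoints; substitute your provider's pool
proxy_pool = cycle([
    "http://user:pass@res-proxy-1.example.com:8000",
    "http://user:pass@res-proxy-2.example.com:8000",
    "http://user:pass@res-proxy-3.example.com:8000",
])

def next_proxies() -> dict:
    """Return a per-request proxies dict, advancing through the pool."""
    proxy = next(proxy_pool)
    return {"http": proxy, "https": proxy}

for _ in range(4):
    print(next_proxies()["https"])
```

You would then pass the result per request, e.g. cureq.get(url, headers=headers, proxies=next_proxies(), impersonate="chrome124").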

Spoof TLS Fingerprints with curl_cffi

This is the single highest-impact change you can make. Drop-in replacement for requests:

from curl_cffi import requests as cureq

# Standard requests — 12% success rate on protected sites
# resp = requests.get(url, headers=headers)

# curl_cffi — 92% success rate
resp = cureq.get(url, headers=headers, impersonate="chrome124")

curl_cffi (8,200+ GitHub stars) supports Chrome versions from chrome99 through chrome146, plus Safari, Edge, and mobile variants. It's faster than tls_client in synchronous mode.

Set Realistic Request Pacing and Headers

  • Random delays: 2–7 seconds between requests (not a fixed interval — randomness matters)
  • User-Agent rotation: Maintain a pool of 5–10 real browser User-Agent strings and rotate
  • Session warmup: Visit target.com homepage before hitting product pages to establish cookies
  • Header consistency: Your Sec-Ch-Ua must match your claimed User-Agent browser version. Your Sec-Ch-Ua-Platform must match your claimed OS. Inconsistencies are a dead giveaway.
  • Session persistence: Maintain cookies across requests within a session. Users report 48-hour session stability with rotating residential proxies.
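The header-consistency point is easy to automate: before sending, verify that the Chrome major version claimed in User-Agent also appears in Sec-Ch-Ua. This is my own sketch of such a sanity check.

```python
import re

def headers_consistent(headers: dict) -> bool:
    """Check that Sec-Ch-Ua carries the same Chrome major version
    that the User-Agent string claims."""
    ua = headers.get("User-Agent", "")
    ch = headers.get("Sec-Ch-Ua", "")
    ua_ver = re.search(r"Chrome/(\d+)", ua)
    return bool(ua_ver) and f'v="{ua_ver.group(1)}"' in ch

good = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/124.0.0.0 Safari/537.36",
    "Sec-Ch-Ua": '"Chromium";v="124", "Google Chrome";v="124", "Not-A.Brand";v="99"',
}
bad = dict(good, **{"Sec-Ch-Ua": '"Chromium";v="120", "Google Chrome";v="120"'})
print(headers_consistent(good), headers_consistent(bad))  # True False
```

A similar check for Sec-Ch-Ua-Platform against the OS named in the User-Agent catches the other common mismatch.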

Skip the Code: Scrape Target.com with Thunderbit (No-Code Alternative)

Target.com is, genuinely, one of the harder retail sites to scrape programmatically. JavaScript rendering, Akamai's TLS fingerprinting, datacenter proxy detection, ChromeDriver version headaches — it's a lot of moving parts. If you're learning Python, that's a great exercise. If you need Target product data for actual work, the cost-benefit math often doesn't add up.

For readers who need the data without the engineering project, Thunderbit handles the hard parts automatically.

How Thunderbit Handles Target.com's Challenges

Thunderbit's AI Web Scraper runs in your browser, which means it naturally renders JavaScript — no Selenium setup, no headless browser configuration, no ChromeDriver versioning. The browser is the scraper.

Here's the workflow:

  1. Install the Thunderbit Chrome extension and navigate to a Target product or search page
  2. Click "AI Suggest Fields" — Thunderbit reads the page and proposes column names (Product Title, Price, Rating, Image URL, etc.)
  3. Click "Scrape" — data extracts in seconds, directly from the rendered page

No proxies to configure. No TLS fingerprints to spoof. No None results.

Scrape Target Product Listings and Detail Pages

The multi-page workflow is where things get interesting. Scrape a Target search results page to get a list of products, then use Subpage Scraping to automatically visit each product URL and enrich your table with detail-page data — descriptions, full reviews, specifications — without writing pagination code or managing browser sessions.

Export directly to Excel, Google Sheets, Airtable, or Notion. No csv.writer boilerplate, no file encoding issues.

Automate Recurring Target.com Scrapes

For ongoing price monitoring or inventory tracking, Thunderbit's Scheduled Scraper lets you describe the schedule in plain language (e.g., "every Monday at 9am"). No cron jobs, no server setup, no keeping a Python script alive on a VPS. This is particularly useful for ecommerce teams tracking competitor prices and inventory.

When to Use Which Method to Scrape Target.com with Python

Here's a quick decision framework:

| Your Situation | Recommended Method |
|---|---|
| Learning Python, small project | Method 1: Requests + BS4 (for static data and __TGT_DATA__) |
| Need full product pages with prices and reviews | Method 2: Selenium / Playwright |
| Bulk product data extraction at scale | Method 3: Redsky API |
| Need data fast without writing code | Thunderbit (no-code) |
| Recurring price monitoring | Thunderbit Scheduled Scraper or Redsky API + cron |
| One-time research project, non-technical team | Thunderbit — honestly the fastest path |

If you're building a production data pipeline, Method 3 (Redsky API) gives you the best speed and reliability. If you're doing one-off research or your team doesn't have Python expertise, Thunderbit saves hours. And if you're learning web scraping, Method 1 → Method 2 → Method 3 is a natural progression that teaches you something real at each step.

Is It Legal to Scrape Target.com?

Legality is worth covering briefly. Target's robots.txt has roughly 120 Disallow paths but notably does not block /p/ (products) or /c/ (categories) — product and category pages are explicitly permitted for crawling. Cart, account, and checkout pages are restricted.

Target's Terms of Service do prohibit automated access. However, the Redsky API being intentionally public-facing (confirmed by Target engineering) reduces legal risk for API-based data collection.

Key legal precedents to be aware of:

  • hiQ Labs v. LinkedIn (Ninth Circuit, 2022): scraping publicly available data does not violate the CFAA
  • Meta v. Bright Data (2024): Meta lost — the court found no CFAA violation for public data scraping

For large-scale commercial scraping, consult legal counsel. For market research, price comparison, and personal projects using publicly available data, you're on solid ground. Always respect rate limits and don't overload Target's servers.

Conclusion and Key Takeaways

Target.com earns its difficulty rating. The naive Requests + BeautifulSoup approach fails because Target renders product data via JavaScript and Akamai fingerprints your TLS handshake before you even get a response. With the right method, though, extraction is straightforward.

The three methods, ranked by reliability:

  1. Redsky API — fastest, most reliable for bulk data, returns clean JSON. Requires reverse-engineering the API endpoints via DevTools.
  2. Selenium / Playwright — handles JavaScript rendering, gets you everything on the page. Slower but comprehensive.
  3. Requests + BeautifulSoup — limited to static HTML and embedded __TGT_DATA__ JSON. Fast but incomplete.

The biggest technical wins:

  • Use curl_cffi instead of standard requests for a major improvement in anti-bot evasion
  • Residential proxies are mandatory — datacenter IPs are flagged immediately
  • Include Sec-Fetch-* headers on every request — missing them causes instant blocks
  • Session warmup (visiting the homepage first) significantly improves success rates

And if Python isn't worth the pain for your use case, Thunderbit handles JavaScript rendering, anti-bot measures, and data export automatically. Try the Chrome extension and see if it gets you what you need in minutes instead of hours.

For more scraping guides and data extraction tips, check out the Thunderbit blog.

FAQs

Can I scrape Target.com with just Python Requests and BeautifulSoup?

Partially. You can extract product titles, URLs, and some embedded JSON data from the __TGT_DATA__ script tags on product pages. But prices, ratings, reviews, and availability on search result pages are JavaScript-rendered and won't appear with static HTTP requests. For complete data, use Selenium/Playwright or the Redsky API.

Why does my Target.com scraper return None for prices?

Target loads pricing data via JavaScript after the initial page load. When you use requests.get(), you receive the pre-rendered HTML shell — before JavaScript executes and injects product data into the DOM. The price elements literally don't exist in the response. Use a headless browser (Selenium or Playwright) that renders JavaScript, call the Redsky API directly for JSON data, or use a tool like Thunderbit that scrapes from the rendered browser page.

Is it legal to scrape Target.com?

Scraping publicly available data is generally permitted under current US case law (hiQ v. LinkedIn, Meta v. Bright Data). Target's robots.txt allows crawling of product and category pages. However, Target's Terms of Service prohibit automated access, so there's a gray area. For market research and price comparison using public data, you're on reasonable legal ground. For large-scale commercial operations, consult a lawyer.

What is Target's Redsky API and how do I access it?

Redsky is Target's internal API that powers their frontend product data. It's not a public API with documentation and API keys you sign up for — it's the backend their React app calls to render product pages. You can discover its endpoints by opening Chrome DevTools, filtering the Network tab by XHR/Fetch, and looking for requests to redsky.target.com. The API key is embedded in the request URL as a query parameter. Target engineering has confirmed the API is intentionally public-facing.

How do I avoid getting blocked when scraping Target.com?

The most impactful single change is using curl_cffi instead of standard Python requests to spoof browser TLS fingerprints; this alone dramatically raises success rates on protected sites. Beyond that: use residential proxies (not datacenter), rotate User-Agent strings, add random 2–7 second delays between requests, include all Sec-Fetch-* headers, and warm up sessions by visiting the homepage first. Alternatively, use a tool like Thunderbit that handles anti-bot measures automatically without any configuration.

Ke
CTO @ Thunderbit. Ke is the person everyone pings when data gets messy. He's spent his career turning tedious, repetitive work into quiet little automations that just run. If you've ever wished a spreadsheet could fill itself in, Ke has probably already built the thing that does it.