Python Pinterest Scraping: "Pins, Boards & Infinite Scroll"

Last Updated on April 17, 2026

A few months ago, one of our engineers showed me a Python script he'd written over the weekend. It was supposed to pull product inspiration images from Pinterest for a market research project. He ran it, and the result was… 16 pins. Out of a board with over 2,000. He stared at the screen, then at me, and said, "I think Pinterest is mocking me."

He's not alone. This is hands-down the most common frustration I see from developers trying to scrape Pinterest with Python. You fire up requests and BeautifulSoup, hit a Pinterest URL, and get back either a handful of items or a blank HTML shell. The reason? Pinterest is a fully JavaScript-rendered single-page app — your static HTTP request never sees the real content. In this guide, I'll walk you through why that happens, the approaches that actually work (Playwright, internal API intercept, and no-code tools like Thunderbit), and give you step-by-step code for scraping pins, boards, user profiles, infinite scroll, and full-resolution images. Whether you want to build a production-grade scraper or just grab some data fast, this article has you covered.

What Is Pinterest Scraping?

Pinterest scraping is the process of programmatically extracting data from Pinterest — things like pin images, titles, descriptions, board names, follower counts, and URLs. Instead of manually browsing and saving pins one by one, you use code (or a tool) to collect structured data from search results, boards, or user profiles at scale.

With hundreds of millions of active users and billions of saved pins as of late 2025, Pinterest is one of the richest visual data sources on the web. For businesses, that data is gold — whether you're tracking product trends, benchmarking competitor content, or building influencer outreach lists.

Why Scrape Pinterest with Python?

Pinterest isn't just a mood board for wedding planners anymore. It's a serious business intelligence platform — many users have purchased something based on brand Pins, and the vast majority of Pinterest searches are unbranded, meaning users arrive with intent but without brand loyalty. That's a massive opportunity for discovery — and it explains why so many teams want structured Pinterest data.

Here's how that breaks down by team:

| Team | Data Needed | Business Value |
|---|---|---|
| Ecommerce Operations | Product images, prices, trending aesthetics | Competitive pricing, trend-informed inventory |
| Marketing | Board performance, pin engagement, competitor content | Content strategy, campaign benchmarking |
| Sales / Lead Gen | Creator profiles, follower counts, contact info | Influencer outreach, partnership targeting |
| Real Estate | Home staging pins, decor trends, room layouts | Listing photography, staging guidance |
| Content Creators | Trending topics, popular formats, seasonal themes | Content calendar, visual style research |

And here's the kicker: Pinterest's official API is limited. It requires a business account, approval (including a video demo of your app), and only gives you access to your own account's data. If you want to browse public boards, search results, or competitor profiles, scraping is the practical alternative. That's why so many teams turn to Python — or to no-code tools like Thunderbit when they want results without the setup.

Why BeautifulSoup Alone Fails on Pinterest (and What Actually Works)

If you've tried scraping Pinterest with requests + BeautifulSoup and gotten back 16 items or an empty page, you're not imagining things. Pinterest is built with React and renders 100% of its content via JavaScript. When you fetch a Pinterest URL with a plain HTTP request, the server hands you a minimal HTML skeleton — a few <link> and <script> tags, and an empty <div> where React mounts the app. All the pin cards, images, titles, and grid layouts are injected after JavaScript executes in the browser.

No JavaScript execution = no pins.
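The skeleton problem is easy to demonstrate offline. The snippet below is a minimal sketch (the HTML here is an illustrative stand-in for the shell a static request returns, not Pinterest's real markup):

```python
import re

# Illustrative stand-in for the HTML shell a static request returns:
# script tags plus an empty mount point, and no pin markup at all.
static_shell = """
<html>
  <head><script src="/bundle.js"></script></head>
  <body><div id="app-root"></div></body>
</html>
"""

# Pin cards only exist after JavaScript runs, so a static parse finds nothing.
pin_count = len(re.findall(r"data-test-id=['\"]pinWrapper['\"]", static_shell))
print(f"Pins found in static HTML: {pin_count}")  # → 0
```

Any parser you point at that shell, BeautifulSoup included, sees the same empty mount point.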

So what does work? Here's how the main approaches compare:

| Approach | Handles JS? | Gets Full Data? | Complexity | Best For |
|---|---|---|---|---|
| requests + BeautifulSoup | No | ~0–16 items | Low | Not suitable for Pinterest |
| Selenium / Playwright | Yes | Yes, with scroll logic | Medium | Full control, Python pipelines |
| Pinterest internal API intercept | Yes | Yes, paginated JSON | High | Maximum data, no browser needed |
| Third-party Scraper API | Yes | Varies | Low | Scale without infrastructure |
| No-code tool (Thunderbit) | Yes | AI-structured | Very Low | Non-technical users, fast results |

For this tutorial, I recommend Playwright as the Python approach. It renders JavaScript, supports scroll simulation, is actively maintained, and consistently performs well in speed benchmarks against Selenium. If you want the no-code route, I'll cover that too.

Pinterest Official API vs. Python Scraping vs. No-Code: Which Route to Choose

Before you start writing code, it's worth asking: do you even need to? Here's a decision framework:

| Criteria | Pinterest API | Python Scraping | Thunderbit (No-Code) |
|---|---|---|---|
| Approval required | Business account + video demo | None | None |
| Access to public pins/boards | Limited (own data only) | Full | Full |
| Full-res image download | Varies | Yes, with URL parsing | Yes, via image extraction |
| Handles infinite scroll | N/A | Yes, with code | Automatic |
| Maintenance needed | Low | High (selectors break) | None (AI adapts) |
| Export to Sheets/Airtable | Manual | Custom code | Built-in |
| Setup time | Hours–days | 30–60 min | 2 minutes |

If you're a marketer, ecommerce ops person, or anyone who just wants Pinterest data in a spreadsheet without writing or maintaining Python scripts, Thunderbit is the fast lane. You open any Pinterest page, click "AI Suggest Fields," hit "Scrape," and export directly to Google Sheets, Excel, Airtable, or Notion. Its subpage scraping feature can even follow individual pin links to enrich data automatically. I've watched team members who've never written a line of code pull 500+ pins into a Google Sheet in under three minutes.

For readers who want full control, want to integrate scraping into a Python pipeline, or just enjoy the craft of building things — read on.

Setting Up Your Python Environment for Pinterest Scraping

  • Difficulty: Intermediate
  • Time Required: ~30–60 minutes (including coding and testing)
  • What You'll Need: Python 3.9+, Chrome browser (for testing), terminal/command line access

Install Playwright and Dependencies

First, create a project folder and set up a virtual environment:

mkdir pinterest-scraper
cd pinterest-scraper
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

Install Playwright and download the Chromium browser binary:

pip install playwright
playwright install chromium

You'll also use Python's built-in json, os, and csv modules for data export. No extra installs needed for those.

Project Folder Structure

I recommend keeping things organized from the start:

pinterest-scraper/
├── scraper.py
├── config.py
├── output/
│   ├── pins.json
│   └── pins.csv
└── images/
    ├── board-name-1/
    └── board-name-2/

In config.py, set your user agent string. Pinterest blocks default headless browser signatures, so use a realistic one:

USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36"

Step 1: Build the Pinterest Search URL

Construct the search URL by inserting your query into the template:

query = "mid century modern furniture"
url = f"https://www.pinterest.com/search/pins/?q={query.replace(' ', '%20')}&rs=typed"

You can parameterize this for any search term. The rs=typed parameter tells Pinterest the query was typed (not suggested), which sometimes affects result relevance.
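If your queries may contain characters beyond spaces (ampersands, hash signs, accented letters), the standard library's urllib.parse.quote handles the encoding more robustly than a manual replace. A small sketch:

```python
from urllib.parse import quote

def build_search_url(query: str) -> str:
    """Build a Pinterest search URL with a fully percent-encoded query."""
    return f"https://www.pinterest.com/search/pins/?q={quote(query)}&rs=typed"

print(build_search_url("mid century modern furniture"))
# → https://www.pinterest.com/search/pins/?q=mid%20century%20modern%20furniture&rs=typed
```

This also safely encodes a query like "black & white decor", where a raw "&" would otherwise be read as a new URL parameter.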

Step 2: Launch a Headless Browser and Load the Page

Here's the core Playwright setup. Note the custom user agent — without it, Pinterest will likely block you or serve a login wall.

import asyncio
from playwright.async_api import async_playwright
from config import USER_AGENT

async def scrape_search(query, max_pins=100):
    url = f"https://www.pinterest.com/search/pins/?q={query.replace(' ', '%20')}&rs=typed"
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page(
            user_agent=USER_AGENT,
            viewport={"width": 1920, "height": 1080}
        )
        await page.goto(url)
        await asyncio.sleep(3)  # Wait for JS to render initial pins

After this runs, the page should have the initial batch of pins loaded — typically 25–50.

Step 3: Extract Pin Data from the Page

Pinterest wraps each pin in a div with data-test-id='pinWrapper'. Inside, you'll find a link (<a>) with the pin URL and title (via aria-label), and an <img> with the thumbnail URL.

        results = []
        pins = await page.query_selector_all("div[data-test-id='pinWrapper']")
        for pin in pins:
            link = await pin.query_selector("a")
            if not link:
                continue
            title = await link.get_attribute("aria-label") or ""
            href = await link.get_attribute("href") or ""
            img = await link.query_selector("img")
            src = await img.get_attribute("src") if img else ""
            results.append({
                "title": title,
                "url": f"https://www.pinterest.com{href}" if href.startswith("/") else href,
                "image_url": src
            })

At this point, results contains the pins visible in the initial viewport. To get more, you need to scroll — which brings us to the most important section.

Step 4: Save Results to JSON or CSV

After extraction, write your data to files for easy use:

import json
import csv

def save_json(data, filepath="output/pins.json"):
    with open(filepath, "w", encoding="utf-8") as f:
        json.dump(data, f, ensure_ascii=False, indent=2)

def save_csv(data, filepath="output/pins.csv"):
    if not data:
        return
    with open(filepath, "w", newline="", encoding="utf-8-sig") as f:
        writer = csv.DictWriter(f, fieldnames=data[0].keys())
        writer.writeheader()
        writer.writerows(data)

Use utf-8-sig encoding for CSV if you plan to open it in Excel — it prevents garbled characters.
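To see why utf-8-sig matters: it prefixes the file with a byte-order mark that tells Excel to decode the file as UTF-8. A quick demonstration (the file path is just a temp-file placeholder):

```python
import codecs
import csv
import os
import tempfile

# Write one row containing a non-ASCII character with utf-8-sig encoding.
path = os.path.join(tempfile.gettempdir(), "bom_demo.csv")
with open(path, "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerow(["décor pins"])

# utf-8-sig prefixes the file with the UTF-8 BOM (EF BB BF),
# which is Excel's cue to decode the file as UTF-8 instead of a legacy codepage.
with open(path, "rb") as f:
    first_bytes = f.read(3)
print(first_bytes == codecs.BOM_UTF8)  # → True
```

Plain utf-8 omits that three-byte prefix, which is why accented pin titles come out garbled when the CSV is opened in Excel.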

Scraping Entire Pinterest Boards and User Profiles

This is a major content gap in existing tutorials. I couldn't find a single competing guide that covers board or profile scraping in depth — yet it's one of the most requested features in forums. Users want to download all pins from a board, organize images into per-board folders, and pull profile-level data like follower counts and board lists.

Scrape All Pins from a Board URL

Board URLs follow the pattern https://www.pinterest.com/{username}/{board-name}/. The DOM structure is similar to search results — pins are wrapped in div[data-test-id='pinWrapper'] — but you need to scroll to load all of them.

async def scrape_board(board_url, max_pins=500):
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page(user_agent=USER_AGENT, viewport={"width": 1920, "height": 1080})
        await page.goto(board_url)
        await asyncio.sleep(3)
        seen_ids = set()
        all_pins = []
        for scroll_round in range(100):  # Safety limit
            pins = await page.query_selector_all("div[data-test-id='pinWrapper']")
            new_count = 0
            for pin in pins:
                link = await pin.query_selector("a")
                if not link:
                    continue
                href = await link.get_attribute("href") or ""
                if href in seen_ids:
                    continue
                seen_ids.add(href)
                new_count += 1
                title = await link.get_attribute("aria-label") or ""
                img = await link.query_selector("img")
                src = await img.get_attribute("src") if img else ""
                all_pins.append({
                    "title": title,
                    "url": f"https://www.pinterest.com{href}" if href.startswith("/") else href,
                    "image_url": src
                })
            print(f"Scroll {scroll_round + 1}: {len(all_pins)} unique pins collected")
            if new_count == 0 or len(all_pins) >= max_pins:
                break
            prev_height = await page.evaluate("document.body.scrollHeight")
            await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
            await asyncio.sleep(2.5)
            curr_height = await page.evaluate("document.body.scrollHeight")
            if curr_height == prev_height:
                break  # No more content
        await browser.close()
        return all_pins

One thing to watch for: board pages sometimes have a "More Ideas" tab that separates saved pins from algorithmic recommendations. If you only want the user's actual saved pins, stop scrolling when you hit that divider.

Scrape a User Profile: Boards, Follower Count, and Pins

Profile URLs look like https://www.pinterest.com/{username}/. From a profile page, you can extract:

  • Follower/following counts: Look for div[data-test-id='follower-count']
  • Board list: Each board is a card linking to /{username}/{board-name}/
  • Total pin count: Sometimes shown in the profile header
async def scrape_profile(username):
    url = f"https://www.pinterest.com/{username}/"
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page(user_agent=USER_AGENT, viewport={"width": 1920, "height": 1080})
        await page.goto(url)
        await asyncio.sleep(3)
        # Extract follower count
        follower_el = await page.query_selector("div[data-test-id='follower-count']")
        followers = await follower_el.inner_text() if follower_el else "N/A"
        # Extract board links
        board_links = await page.query_selector_all(f"a[href*='/{username}/']")
        boards = []
        for bl in board_links:
            href = await bl.get_attribute("href") or ""
            if not href:
                continue  # Skip anchors without an href before parsing the path
            name = await bl.get_attribute("aria-label") or href.split("/")[-2]
            if href.count("/") >= 3 and href != f"/{username}/":
                boards.append({"name": name, "url": f"https://www.pinterest.com{href}"})
        await browser.close()
        return {"username": username, "followers": followers, "boards": boards}

To scrape all pins across every board on a profile, iterate through the board list and call scrape_board() for each one. You can organize downloaded images into per-board folders automatically.
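The per-board folder naming can be made deterministic with a small helper. The orchestration in the trailing comment assumes the scrape_profile(), scrape_board(), and download_images() functions defined elsewhere in this guide:

```python
from urllib.parse import urlparse

def board_folder(board_url: str, base: str = "images") -> str:
    """Derive a per-board image folder like 'images/username/board-name' from a board URL."""
    parts = [p for p in urlparse(board_url).path.split("/") if p]
    # Board URLs follow /{username}/{board-name}/ — fall back gracefully otherwise.
    if len(parts) >= 2:
        return f"{base}/{parts[0]}/{parts[1]}"
    return f"{base}/unknown"

print(board_folder("https://www.pinterest.com/alice/mid-century-decor/"))
# → images/alice/mid-century-decor

# Orchestration sketch (uses the functions built earlier in this guide):
#   profile = await scrape_profile("alice")
#   for board in profile["boards"]:
#       pins = await scrape_board(board["url"])
#       download_images(pins, folder=board_folder(board["url"]))
```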

Building a Production-Ready Infinite Scroll Handler

This is the section that separates a toy scraper from a real one. The #1 pain point — and I've seen it in at least a dozen forum threads — is that scrapers only return 16–25 items because they don't scroll far enough, or they use a fixed scroll count like for i in range(5): scroll() and hope for the best.

That approach is unreliable. Pinterest loads new content in batches of ~25 pins, triggered by scroll events. If you scroll five times, you might get 125 pins — or you might get 75 if the network was slow, or 150 if the batches were small. You need a smarter pattern.

The Scroll-Until-No-New-Content Pattern

Here's a robust scroll function that tracks unique pin IDs, uses a configurable timeout, includes retry logic, and prints progress:

import asyncio
import random

async def scroll_and_collect(page, max_pins=1000, max_scrolls=200, scroll_pause=2.5):
    seen_ids = set()
    all_pins = []
    no_new_count = 0
    for i in range(max_scrolls):
        pins = await page.query_selector_all("div[data-test-id='pinWrapper']")
        new_this_round = 0
        for pin in pins:
            link = await pin.query_selector("a")
            if not link:
                continue
            href = await link.get_attribute("href") or ""
            if href in seen_ids:
                continue
            seen_ids.add(href)
            new_this_round += 1
            title = await link.get_attribute("aria-label") or ""
            img = await link.query_selector("img")
            src = await img.get_attribute("src") if img else ""
            all_pins.append({
                "title": title,
                "url": f"https://www.pinterest.com{href}" if href.startswith("/") else href,
                "image_url": src
            })
        print(f"  Scroll {i+1}: {new_this_round} new pins | {len(all_pins)} total unique pins")
        if len(all_pins) >= max_pins:
            print(f"  Reached max_pins limit ({max_pins}). Stopping.")
            break
        if new_this_round == 0:
            no_new_count += 1
            if no_new_count >= 3:
                print("  No new pins after 3 consecutive scrolls. End of content.")
                break
        else:
            no_new_count = 0
        prev_height = await page.evaluate("document.body.scrollHeight")
        await page.evaluate("window.scrollTo(0, document.body.scrollHeight)")
        await asyncio.sleep(scroll_pause + random.uniform(0.5, 1.5))
        curr_height = await page.evaluate("document.body.scrollHeight")
        if curr_height == prev_height and new_this_round == 0:
            print("  Page height unchanged and no new pins. Likely end of feed.")
            break
    return all_pins

Why this design works:

  • Deduplication by href: Each pin's URL is unique, so we use it as the ID. This prevents counting the same pin twice when the DOM re-renders during scrolling.
  • Three-strike rule: If three consecutive scrolls yield zero new pins, we stop. This handles the case where the page is still loading but no new content exists.
  • Random jitter on pause: Adding 0.5–1.5 seconds of random delay between scrolls makes the behavior look more human and reduces the chance of triggering anti-bot measures.
  • Max scrolls safety limit: Prevents infinite loops if something goes wrong.

Handling Edge Cases

  • "More Ideas" break: On board pages, Pinterest sometimes inserts a "More Ideas" section. If you only want the board's actual pins, you can check for this element and stop scrolling when it appears.
  • Rate limiting during long sessions: If you're scrolling through a board with thousands of pins, Pinterest may start throttling responses. If you notice scrolls producing zero new pins intermittently (not three in a row), increase your scroll pause to 5+ seconds.
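One way to implement that back-off is a helper that stretches the pause as empty scrolls accumulate. This is a sketch; the base and cap values are assumptions to tune for your setup:

```python
def next_scroll_pause(empty_streak: int, base: float = 2.5, cap: float = 15.0) -> float:
    """Double the scroll pause for each consecutive empty scroll, capped at `cap` seconds."""
    return min(base * (2 ** empty_streak), cap)

# No empty scrolls → normal pace; repeated empty scrolls → progressively slower.
print([next_scroll_pause(n) for n in range(4)])  # → [2.5, 5.0, 10.0, 15.0]
```

Inside the scroll loop, you would pass the no-new-pins streak counter in place of a fixed scroll_pause.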

Getting Full-Resolution Pinterest Images (Not Thumbnails)

This one drives people nuts. You scrape a bunch of pins, download the images, and they're all tiny 236px thumbnails. Users on forums describe it as "trash quality, like too small size." The fix is understanding Pinterest's image URL structure.

Understanding Pinterest Image URL Paths

All Pinterest images are served from https://i.pinimg.com/{size}/{hash}.jpg. The {size} segment controls the resolution:

| Size Path | Dimensions | Usage |
|---|---|---|
| /236x/ | 236px wide | Default grid view (what you get by default) |
| /474x/ | 474px wide | Medium resolution |
| /736x/ | 736px wide | Pin detail/expanded view |
| /originals/ | Original upload dimensions | Full resolution |
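In practice, upgrading an image is just a substitution on the size segment of the path. For example (a pure string operation; the hash here is made up):

```python
# A typical grid thumbnail URL (hash is illustrative, not a real pin).
thumb = "https://i.pinimg.com/236x/ab/cd/abcdef1234567890.jpg"

# Swap the size segment to request the 736px rendition of the same image.
full = thumb.replace("/236x/", "/736x/", 1)
print(full)  # → https://i.pinimg.com/736x/ab/cd/abcdef1234567890.jpg
```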

Utility Function: Upgrade Any Pinterest Image URL to Full Resolution

Here's a function that rewrites any Pinterest image URL to the highest available quality, with fallback logic:

import requests as req

def upgrade_image_url(url, preferred_size="originals"):
    """Rewrite a Pinterest image URL to the highest available resolution."""
    sizes = ["originals", "736x", "474x", "236x"]
    if preferred_size not in sizes:
        preferred_size = "originals"
    for size in sizes[sizes.index(preferred_size):]:
        upgraded = url
        for s in sizes:
            upgraded = upgraded.replace(f"/{s}/", f"/{size}/")
        try:
            resp = req.head(upgraded, timeout=5, allow_redirects=True)
            if resp.status_code == 200:
                return upgraded
        except Exception:
            continue
    return url  # Return original if all fail

Important note (as of 2025): The /originals/ path increasingly returns HTTP 403 Forbidden errors. Multiple user reports confirm this behavior as of mid-2025. The reliable maximum is /736x/. My function tries /originals/ first, then falls back to /736x/ automatically.

Downloading Images to Organized Folders

import os
import random
import time
import requests as req

def download_images(pins, folder="images/default", delay=1.5):
    os.makedirs(folder, exist_ok=True)
    for i, pin in enumerate(pins):
        img_url = upgrade_image_url(pin.get("image_url", ""), preferred_size="736x")
        if not img_url:
            continue
        filename = f"pin_{i+1}.jpg"
        filepath = os.path.join(folder, filename)
        try:
            resp = req.get(img_url, timeout=15)
            if resp.status_code == 200:
                with open(filepath, "wb") as f:
                    f.write(resp.content)
                print(f"  Downloaded {filename} ({len(resp.content) // 1024} KB)")
            else:
                print(f"  Failed {filename}: HTTP {resp.status_code}")
        except Exception as e:
            print(f"  Error downloading {filename}: {e}")
        time.sleep(delay + random.uniform(0.3, 0.8))

Add a rate-limiting delay between downloads. I use 1.5–2.3 seconds with jitter. Without it, Pinterest will block your IP after a few hundred requests.

Exporting Your Scraped Pinterest Data

Export to CSV or JSON

We covered the basics earlier. For larger datasets (10,000+ pins), consider JSON Lines format — one JSON object per line — which is easier to stream and process:

import json

def save_jsonl(data, filepath="output/pins.jsonl"):
    with open(filepath, "w", encoding="utf-8") as f:
        for item in data:
            f.write(json.dumps(item, ensure_ascii=False) + "\n")
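Reading JSON Lines back is just as simple, and a generator lets you process records one at a time without loading the whole file. A round-trip sketch (the temp-file path is a placeholder):

```python
import json
import os
import tempfile

def load_jsonl(filepath):
    """Stream records from a JSON Lines file, one object per line."""
    with open(filepath, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

# Round trip: write two records one-per-line, then stream them back.
path = os.path.join(tempfile.gettempdir(), "pins_demo.jsonl")
pins = [{"title": "Pin A"}, {"title": "Pin B"}]
with open(path, "w", encoding="utf-8") as f:
    for p in pins:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")

print(list(load_jsonl(path)) == pins)  # → True
```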

Export to Google Sheets, Airtable, or Notion

If you want to push data directly from Python to Google Sheets, you'll need the gspread library and a Google Cloud service account. For Airtable, use pyairtable. For Notion, notion-client. Each requires API key setup and adds real complexity to your pipeline.

Or — and I'm biased here, but it's genuinely the fastest path — you can use Thunderbit to scrape Pinterest and export to any of these destinations in one click. No API keys, no service accounts, no extra code. The extension handles the export natively.

Tips to Avoid Getting Blocked While Scraping Pinterest

Pinterest's anti-bot system is rated as moderately difficult by ScrapeOps — not trivial, but not the hardest target either. It uses browser fingerprinting, behavioral analysis, and IP-based rate limiting. Here's what works:

  • Rotate user agents: Use a pool of real Chrome user agent strings and pick one randomly per session.
  • Add random delays: 2–5 seconds between scrolls and requests, with jitter. For sessions without proxies, increase to 10–15 seconds.
  • Use a realistic viewport: Set viewport={"width": 1920, "height": 1080} — don't use tiny or unusual dimensions.
  • Consider proxies for scale: If you're scraping thousands of pins, rotate residential proxies. Without them, expect IP blocks after a few hundred requests.
  • Respect robots.txt: Pinterest's robots.txt disallows most automated crawling. Keep this in mind for compliance.
  • Avoid logged-in scraping: Stick to publicly visible content while logged out. Scraping behind a login raises both legal and technical risks.
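Putting the first few tips together, a per-session setup might look like the sketch below. The user-agent strings are examples; keep a pool of current, real Chrome strings in practice:

```python
import random

# Example pool — keep these in sync with real, current Chrome releases.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36",
]

def session_settings(use_proxies: bool = False) -> dict:
    """Pick a random UA, a realistic viewport, and a jittered delay for this session."""
    base_delay = 2.0 if use_proxies else 10.0  # slow way down without proxies
    return {
        "user_agent": random.choice(USER_AGENTS),
        "viewport": {"width": 1920, "height": 1080},
        "delay": base_delay + random.uniform(0, 3.0),
    }
```

You would then pass these values into browser.new_page() and your scroll pauses at the start of each run.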

Thunderbit handles anti-bot and CAPTCHA challenges automatically via its AI engine — one less thing to maintain if you go the no-code route.

Is Scraping Pinterest Legal?

I'll keep this brief because it's not the focus of the article, but it matters.

Pinterest's Terms of Service (Section 2a) state that you agree not to "scrape, collect, search, copy or otherwise access data or content from Pinterest in unauthorized ways, such as by using automated means (without our express prior permission)." That said, courts have generally held that scraping publicly available data does not violate the Computer Fraud and Abuse Act — see hiQ v. LinkedIn and Meta v. Bright Data (Jan 2024), where the court ruled that scraping publicly visible data while logged out is legal.

A few ground rules:

  • Only scrape publicly visible content while logged out
  • Don't use scraped data for spam or to impersonate users
  • Respect copyright on images — extract metadata where possible, and avoid redistributing copyrighted images commercially without permission
  • If you plan to use scraped data for commercial purposes, talk to a lawyer

For a deeper look at the legal landscape, see our guide on the legality of web scraping.

Wrapping Up: What You've Learned and Where to Go Next

You now know why static scraping fails on Pinterest (it's a React SPA — no JavaScript, no data), how to use Playwright to scrape search results, boards, and user profiles, how to build a production-ready infinite scroll handler that doesn't quit after 16 pins, and how to get full-resolution images instead of tiny thumbnails.

Quick recap of what matters:

  • requests + BeautifulSoup won't work on Pinterest. Don't waste your time.
  • Playwright is the best Python tool for the job — fast, well-supported, and handles JS rendering natively.
  • Infinite scroll requires a deduplication-based scroll loop, not a fixed count.
  • Full-res images need URL path rewriting — target /736x/ (since /originals/ often returns 403).
  • Board and profile scraping are underserved in existing tutorials but straightforward with the right selectors.
  • For non-coders or teams who want speed, Thunderbit lets you scrape Pinterest in 2 clicks and export to Google Sheets, Excel, Airtable, or Notion — no Python required. Try it free from the Chrome Web Store.

If you're building a Python pipeline, the code in this guide gives you a solid foundation. If you just need the data, Thunderbit is the shortcut. Either way, you're no longer stuck with 16 pins and a blank stare.

For more on scraping and data extraction, check out the other guides on the Thunderbit blog, or watch tutorials on the Thunderbit YouTube channel.

FAQs

1. Can you scrape Pinterest with BeautifulSoup?

Not effectively on its own. Pinterest renders all content via JavaScript, so requests + BeautifulSoup only sees an empty HTML shell. You need a headless browser like Playwright or Selenium to render the page first, or you can use a no-code tool like Thunderbit that handles JS rendering automatically.

2. How many pins can you scrape from Pinterest in one session?

It depends on your scroll logic and anti-bot handling. With the production-ready infinite scroll handler in this guide (deduplication, timeout, retry logic), you can reliably scrape hundreds to thousands of pins per board or search query. For very large boards, expect to spend several minutes scrolling and collecting.

3. Why do my scraped Pinterest images come out tiny?

By default, Pinterest serves /236x/ thumbnails in the grid view. To get higher resolution, rewrite the image URL path to /736x/ or /originals/. Note that /originals/ increasingly returns 403 errors as of 2025, so /736x/ is the reliable maximum.

4. Is it legal to scrape Pinterest?

Scraping publicly available data is generally accepted under recent court rulings (e.g., hiQ v. LinkedIn, Meta v. Bright Data), but Pinterest's Terms of Service prohibit unauthorized automated access. Stick to public content, don't use scraped data for spam, respect copyright, and consult legal counsel for commercial use cases.

5. What is the best no-code alternative to scrape Pinterest?

Thunderbit can extract Pinterest pin data — titles, images, URLs, descriptions — in 2 clicks, with built-in export to Google Sheets, Excel, Airtable, or Notion. It handles JavaScript rendering, infinite scroll, and anti-bot challenges automatically, so you don't need to write or maintain any code.


Shuai Guan
Co-founder/CEO @ Thunderbit. Passionate about the intersection of AI and automation, and a big advocate of making automation more accessible to everyone. Beyond tech, he channels his creativity through photography, capturing stories one picture at a time.