Scrapi Reddit

Scrapi Reddit is a zero-auth toolkit for scraping public Reddit listings. Use the CLI for quick data pulls, or import the library to integrate pagination, comment harvesting, and CSV exports into your own workflows. All data is fetched from Reddit's public JSON endpoints, so no API key or OAuth credentials are required.

Features

  • Listing coverage: Scrape subreddit posts (with their comments), the front page, r/popular (geo-aware), r/all, user activity, or custom listing URLs without OAuth.
  • Search mode: Run keyword searches (site-wide or scoped to a subreddit) with custom type filters, sort orders, and time windows.
  • Comment controls: Toggle comment collection per post with resumable runs that reuse cached JSON and persist to CSV.
  • Post deep dives: Target individual posts to download full comment trees on demand.
  • Resilient fetching: Automatic pagination, exponential backoff for rate limits, and structured logging with adjustable verbosity.
  • Media archiving: Optional media capture downloads linked images, GIFs, and videos alongside post metadata.
  • Media filters: Keep only the assets you need (e.g., videos only or static images only).
  • Flexible exports: Save outputs as JSON and optionally flatten posts/comments into CSV for downstream analysis.
  • Scriptable tooling: Configurable CLI (config files + wizard) alongside a Python API for scripting and integration.

Important Notes

  • Respect Reddit's User Agreement and local laws. Scraped data may have legal or ethical constraints.
  • Heavy scraping can trigger rate limits or temporary IP bans. Keep delays reasonable; a delay of 3–4 seconds between requests is recommended.

Dependencies

  • Python 3.9+
  • requests (runtime)
  • pytest (tests, optional)

Installation

pip install scrapi-reddit

After installation, the console entry point scrapi-reddit is available on your PATH.

Quick Start (CLI)

scrapi-reddit python --limit 200 --fetch-comments --output-format both

This command downloads up to 200 posts from r/python, fetches comments (up to 500 per post), and writes JSON + CSV outputs under ./scrapi_reddit_data.

Common CLI Options

  • --fetch-comments Enable post-level comment requests (off by default).
  • --comment-limit 0 Request the maximum 500 comments per post.
  • --continue Resume a previous run by reusing cached post JSON files and skipping previously downloaded media.
  • --media-filter video,gif Restrict downloads to specific categories or extensions (video, image, animated, or extensions such as mp4, jpg, gif).
  • --search "python asyncio" --search-types post,comment --search-sort top --search-time week Query Reddit search.json with flexible filters (types: post/link, comment, sr, user, media).
  • --download-media Save linked images/GIFs/videos under each target's media directory.
  • --popular --popular-geo <region-code> Pull popular listings with geo filters.
  • --user <name> Scrape user overview/submitted/comments sections.
  • --config scrape.toml Load defaults from a TOML file (see examples/ for ready-made templates; CLI flags override values inside the file). An override example follows this list.
  • --wizard Launch an interactive prompt that writes reusable configs or runs immediately.
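
Command-line flags always take precedence over config-file values, so a config can hold your defaults while a one-off run tweaks a single option. A minimal sketch, assuming a scrape.toml created with --wizard or copied from examples/; here the explicit --limit overrides whatever limit the file defines:

scrapi-reddit python --config scrape.toml --limit 50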

Advanced CLI Examples

Fetch multiple subreddits with varied sorts and time windows, downloading all fetched media:

scrapi-reddit python typescript --subreddit-sorts top,hot --subreddit-top-times day,all --limit 500 --output-format both --download-media

Resume a long run after interruption:

scrapi-reddit python --fetch-comments --continue --limit 1000 --log-level INFO

Download a single post (JSON + CSV):

scrapi-reddit --post-url https://www.reddit.com/r/python/comments/xyz789/example_post/

Fetch top search results for the keyword "python asyncio", including the comments for each fetched post, and download all media:

scrapi-reddit --search "python asyncio" --search-types post,comment --search-sort top --search-time week --limit 200 --output-format both --fetch-comments --download-media
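
Scrape a user's activity or a geo-filtered popular listing. These sketches use only the flags listed above; the username and region code are placeholders:

scrapi-reddit --user spez --limit 100 --output-format both

scrapi-reddit --popular --popular-geo GB --limit 100 --download-media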

Python API

Import the library when you need finer control inside Python scripts.

Step 1 – Configure a session

from scrapi_reddit import build_session

session = build_session("your-app-name/0.1", verify=True)

Step 2 – Define scrape options

from pathlib import Path
from scrapi_reddit import ScrapeOptions

options = ScrapeOptions(
    output_root=Path("./scrapes"),
    listing_limit=250,
    comment_limit=0,      # auto-expand to 500
    delay=3.0,
    time_filter="day",
    output_formats={"json", "csv"},
    fetch_comments=True,
    resume=True,          # reuse cached JSON/media on reruns
    download_media=True,
    media_filters={"video", ".mp4"},
)

Step 3 – Scrape a listing or search

from scrapi_reddit import ListingTarget, build_search_target, process_listing

target = ListingTarget(
    label="r/python top (day)",
    output_segments=("subreddits", "python", "top_day"),
    url="https://www.reddit.com/r/python/top/.json",
    params={"t": "day"},
    context="python",
)

process_listing(target, session=session, options=options)

search_target = build_search_target(
    "python asyncio",
    search_types=["comment"],
    sort="new",
    time_filter="day",
)

process_listing(search_target, session=session, options=options)

Step 4 – Scrape a single post

from scrapi_reddit import PostTarget, process_post

post_target = PostTarget(
    label="Example post",
    output_segments=("posts", "python", "xyz789"),
    url="https://www.reddit.com/r/python/comments/xyz789/example_post/.json",
)

process_post(post_target, session=session, options=options)

Both helpers write JSON/CSV to the configured output directory and emit progress via logging. When download_media=True (or --download-media on the CLI), any discoverable images, GIFs, and videos are saved under a media/ directory per target. Media is organized by the item that produced it: media/posts/<format>/ for post attachments and, when comment scraping is enabled, media/comments/<format>/ for comment attachments. Formats include mp4, webm, gif, jpg, and png; other extensions fall back to an other/ directory. Reddit preview URLs occasionally expire, so you may see warning logs for 404 responses when older links have been removed.
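
To check what a run actually captured, you can walk the media tree afterwards. A minimal sketch in plain Python, assuming only the directory layout described above and the output_root from the ScrapeOptions example:

from collections import Counter
from pathlib import Path

output_root = Path("./scrapes")  # same root as the ScrapeOptions example above

# Count downloaded files per bucket, e.g. posts/mp4 or comments/jpg.
counts = Counter()
for media_dir in output_root.rglob("media"):
    if not media_dir.is_dir():
        continue
    for file in media_dir.rglob("*"):
        if file.is_file():
            # Layout: <media>/<posts|comments>/<format>/<file>
            counts[f"{file.parent.parent.name}/{file.parent.name}"] += 1

for bucket, total in sorted(counts.items()):
    print(f"{bucket}: {total}")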

Documentation

Contributing

Bug reports and pull requests are welcome. For feature requests or questions, please open an issue. When contributing, add tests that cover new behavior and ensure python -m pytest passes before submitting a PR.

License

Released under the MIT License. You may use, modify, and distribute this project with attribution and a copy of the license. Use at your own risk.
