Scrapi Reddit is a zero-auth toolkit for scraping public Reddit listings. Use the CLI for quick data pulls or import the library to integrate pagination, comment harvesting, and CSV exports into your own workflows. This scraper fetches data from Reddit's Public API and does not require any API key.
- Listing coverage: Scrape subreddit posts (with their comments), the front page, r/popular (geo-aware), r/all, user activity, or custom listing URLs without OAuth.
- Search mode: Run keyword searches (site-wide or scoped to a subreddit) with custom type filters, sort orders, and time windows.
- Comment controls: Toggle comment collection per post with resumable runs that reuse cached JSON and persist to CSV.
- Post deep dives: Target individual posts to download full comment trees on demand.
- Resilient fetching: Automatic pagination, exponential backoff for rate limits, and structured logging with adjustable verbosity.
- Media archiving: Optional media capture downloads linked images, GIFs, and videos alongside post metadata.
- Media filters: Keep only the assets you need (e.g., videos only or static images only).
- Flexible exports: Save outputs as JSON and optionally flatten posts/comments into CSV for downstream analysis.
- Scriptable tooling: Configurable CLI (config files + wizard) alongside a Python API for scripting and integration.
- Respect Reddit's User Agreement and local laws. Scraped data may have legal or ethical constraints.
- Heavy scraping can trigger rate limits or temporary IP bans. Keep delays reasonable (I recommend a 3–4 second delay).
- Python 3.9+
- requests (runtime)
- pytest (tests, optional)
```bash
pip install scrapi-reddit
```
After installation the console entry point `scrapi-reddit` is available on your PATH.
```bash
scrapi-reddit python --limit 200 --fetch-comments --output-format both
```
This command downloads up to 200 posts from r/python, fetches comments (up to 500 per post), and writes JSON + CSV outputs under `./scrapi_reddit_data`.
- `--fetch-comments`: Enable post-level comment requests (defaults off).
- `--comment-limit 0`: Request the maximum 500 comments per post.
- `--continue`: Resume a previous run by reusing cached post JSON files and skipping previously downloaded media.
- `--media-filter video,gif`: Restrict downloads to specific categories or extensions (`video`, `image`, `animated`, or extensions such as `mp4`, `jpg`, `gif`).
- `--search "python asyncio" --search-types post,comment --search-sort top --search-time week`: Query Reddit search.json with flexible filters (types: post/link, comment, sr, user, media).
- `--download-media`: Save linked images/GIFs/videos under each target's media directory.
- `--popular --popular-geo <region-code>`: Pull popular listings with geo filters.
- `--user <name>`: Scrape a user's overview/submitted/comments sections.
- `--config scrape.toml`: Load defaults from a TOML file (see `examples/` for ready-made templates; CLI flags override values inside the file).
- `--wizard`: Launch an interactive prompt that writes reusable configs or runs immediately.
Fetch multiple subreddits with varied sorts and time windows, downloading all fetched media:
```bash
scrapi-reddit python typescript --subreddit-sorts top,hot --subreddit-top-times day,all --limit 500 --output-format both --download-media
```
Resume a long run after interruption:
```bash
scrapi-reddit python --fetch-comments --continue --limit 1000 --log-level INFO
```
Download a single post (JSON + CSV):
```bash
scrapi-reddit --post-url https://www.reddit.com/r/python/comments/xyz789/example_post/
```
Fetch top search results for the keyword "python asyncio", including the comments for each fetched post, and download all media:
```bash
scrapi-reddit --search "python asyncio" --search-types post,comment --search-sort top --search-time week --limit 200 --output-format both --fetch-comments --download-media
```
Import the library when you need finer control inside Python scripts.
```python
from scrapi_reddit import build_session

session = build_session("your-app-name/0.1", verify=True)
```
```python
from pathlib import Path

from scrapi_reddit import ScrapeOptions

options = ScrapeOptions(
    output_root=Path("./scrapes"),
    listing_limit=250,
    comment_limit=0,  # auto-expand to 500
    delay=3.0,
    time_filter="day",
    output_formats={"json", "csv"},
    fetch_comments=True,
    resume=True,  # reuse cached JSON/media on reruns
    download_media=True,
    media_filters={"video", ".mp4"},
)
```
```python
from scrapi_reddit import ListingTarget, build_search_target, process_listing

target = ListingTarget(
    label="r/python top (day)",
    output_segments=("subreddits", "python", "top_day"),
    url="https://www.reddit.com/r/python/top/.json",
    params={"t": "day"},
    context="python",
)
process_listing(target, session=session, options=options)

search_target = build_search_target(
    "python asyncio",
    search_types=["comment"],
    sort="new",
    time_filter="day",
)
process_listing(search_target, session=session, options=options)
```
```python
from scrapi_reddit import PostTarget, process_post

post_target = PostTarget(
    label="Example post",
    output_segments=("posts", "python", "xyz789"),
    url="https://www.reddit.com/r/python/comments/xyz789/example_post/.json",
)
process_post(post_target, session=session, options=options)
```
Both helpers write JSON/CSV to the configured output directory and emit progress via logging.
When `download_media=True` (or `--download-media` on the CLI), any discoverable images, GIFs, and videos are saved under a `media/` directory per target. Media is organized by the item that produced it: `media/posts/<format>/` for post attachments and (when comment scraping is enabled) `media/comments/<format>/` for comment attachments. Formats include `mp4`, `webm`, `gif`, `jpg`, and `png`; additional extensions fall back to an `other/` directory. Reddit preview URLs occasionally expire, so you may see warning logs for 404 responses when older links have been removed.
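As a rough illustration of that layout (the directory structure below follows the description above, with `./scrapes` assumed as the output root), you could tally the downloaded files per source and format:

```python
from collections import Counter
from pathlib import Path

# Count files under <target>/media/{posts,comments}/<format>/ as described above.
counts = Counter()
for media_file in Path("./scrapes").rglob("media/*/*/*"):
    if media_file.is_file():
        source, fmt = media_file.parts[-3], media_file.parts[-2]  # e.g. "posts", "mp4"
        counts[(source, fmt)] += 1

for (source, fmt), total in sorted(counts.items()):
    print(f"{source}/{fmt}: {total} file(s)")
```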
- API Reference
- Configuration & Wizard Guide
- Error Handling & Edge Cases
- Sample Workflows
- Example Configs
Bug reports and pull requests are welcome. For feature requests or questions, please open an issue. When contributing, add tests that cover new behavior and ensure `python -m pytest` passes before submitting a PR.
Released under the MIT License. You may use, modify, and distribute this project with attribution and a copy of the license. Use at your own risk.