An open API service providing repository metadata for many open source software ecosystems.

GitHub / simonpierreboucher / Crawler

A robust, modular web crawler built in Python for extracting and saving content from websites. The crawler is specifically designed to extract text content from both HTML and PDF files and save it in a structured format with accompanying metadata.
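
The repository's own module layout and function names are not shown on this page; the following is a minimal sketch of the kind of HTML/PDF text extraction and metadata storage the description refers to, assuming the third-party packages requests, beautifulsoup4, and pypdf. All names and the output layout are illustrative, not the project's actual API.

```python
"""Minimal sketch: fetch a URL, extract plain text from HTML or PDF,
and save the text alongside a small JSON metadata record."""
import io
import json
from datetime import datetime, timezone
from pathlib import Path

import requests
from bs4 import BeautifulSoup
from pypdf import PdfReader


def extract_text(url: str) -> tuple[str, str]:
    """Return (content_kind, extracted_text) for a fetched URL."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    content_type = response.headers.get("Content-Type", "")

    if "application/pdf" in content_type or url.lower().endswith(".pdf"):
        reader = PdfReader(io.BytesIO(response.content))
        text = "\n".join(page.extract_text() or "" for page in reader.pages)
        return "pdf", text

    soup = BeautifulSoup(response.text, "html.parser")
    return "html", soup.get_text(separator="\n", strip=True)


def save_with_metadata(url: str, out_dir: Path) -> Path:
    """Write the extracted text and a JSON metadata file; return the JSON path."""
    kind, text = extract_text(url)
    out_dir.mkdir(parents=True, exist_ok=True)

    record = {
        "url": url,
        "type": kind,
        "fetched_at": datetime.now(timezone.utc).isoformat(),
        "characters": len(text),
    }
    stem = str(abs(hash(url)))  # illustrative file-naming scheme only
    (out_dir / f"{stem}.txt").write_text(text, encoding="utf-8")
    out_path = out_dir / f"{stem}.json"
    out_path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return out_path


if __name__ == "__main__":
    save_with_metadata("https://example.com", Path("crawl_output"))
```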

JSON API: http://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/simonpierreboucher%2FCrawler
PURL: pkg:github/simonpierreboucher/Crawler

Stars: 0
Forks: 0
Open issues: 0

License: None
Language: Python
Size: 87.9 KB
Dependencies parsed at: Pending

Created at: 9 months ago
Updated at: 9 months ago
Pushed at: 9 months ago
Last synced at: 4 months ago

Topics: concurrent-crawling, content-extraction, data-collection, data-extraction-pipeline, data-preservation-and-recovery, data-scraping, error-handling, html-parsing, http-requests, metadata-storage, modular-design, pdf-text-extraction, python-crawler, rate-limiting, structured-data-storage, text-processing, url-normalization, web-crawling, yaml-configuration
