A collection of resources for building low-latency, scalable web crawlers on Apache Storm®
Apache StormCrawler (Incubating) is an open source SDK for building distributed web crawlers based on Apache Storm®. The project is licensed under the Apache License v2 and consists of a collection of reusable resources and components, written mostly in Java.
The aim of Apache StormCrawler (Incubating) is to help build web crawlers that are:
- scalable
- resilient
- low latency
- easy to extend
- polite yet efficient
Apache StormCrawler (Incubating) is a library and collection of resources that developers can leverage to build their own crawlers. The good news is that doing so can be pretty straightforward! Have a look at the Getting Started section for more details.
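As a rough illustration of what building your own crawler can look like, here is a minimal sketch of a topology wired by hand with the plain Storm API. It is a sketch only: the spout and bolt class names (MemorySpout, URLPartitionerBolt, FetcherBolt, JSoupParserBolt, StdOutIndexer) and the com.digitalpebble.stormcrawler packages follow the pre-incubation releases and may be relocated in current Apache releases, and a real crawler would load its settings from a crawler-conf.yaml rather than hard-coding them.

```java
// Minimal sketch of a crawl topology wired by hand with the core Storm API.
// Class/package names follow the pre-incubation releases
// (com.digitalpebble.stormcrawler); current Apache releases may relocate them.
// Real crawlers load their settings (user agent, politeness, parse filters...)
// from a crawler-conf.yaml instead of hard-coding them as done here.
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

import com.digitalpebble.stormcrawler.bolt.FetcherBolt;
import com.digitalpebble.stormcrawler.bolt.JSoupParserBolt;
import com.digitalpebble.stormcrawler.bolt.URLPartitionerBolt;
import com.digitalpebble.stormcrawler.indexing.StdOutIndexer;
import com.digitalpebble.stormcrawler.spout.MemorySpout;

public class MinimalCrawlTopology {

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Seed URLs kept in memory; a production crawler would use a spout
        // backed by a persistent frontier (OpenSearch, a queue, etc.).
        builder.setSpout("spout", new MemorySpout("https://storm.apache.org/"));

        // Group URLs by host so the fetcher can enforce politeness per host.
        builder.setBolt("partitioner", new URLPartitionerBolt())
               .shuffleGrouping("spout");

        // Fetch pages; fieldsGrouping on "key" keeps a given host on one task.
        builder.setBolt("fetch", new FetcherBolt())
               .fieldsGrouping("partitioner", new Fields("key"));

        // Parse HTML, extract text and outlinks.
        builder.setBolt("parse", new JSoupParserBolt())
               .localOrShuffleGrouping("fetch");

        // Print the parsed documents instead of indexing them for real.
        builder.setBolt("index", new StdOutIndexer())
               .localOrShuffleGrouping("parse");

        Config conf = new Config();
        conf.put("http.agent.name", "my-test-crawler"); // required by the fetcher

        // Storm 2.x LocalCluster is AutoCloseable; run locally for a minute.
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("crawl", conf, builder.createTopology());
            Thread.sleep(60_000);
        }
    }
}
```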
Apart from the core components, we provide some external resources that you can reuse in your project, such as our spout and bolts for OpenSearch® or a ParserBolt that uses Apache Tika® to parse various document formats.
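To give a flavour of how such external resources slot in, the hedged fragment below continues the sketch above by shunting documents the HTML parser cannot handle to a Tika-backed parser. The RedirectionBolt and ParserBolt classes and the "tika" stream name are assumptions based on the pre-incubation com.digitalpebble.stormcrawler.tika module and may differ in current releases.

```java
// Hedged continuation of the sketch above: documents the HTML parser cannot
// handle (PDF, Word, ...) are sent on a side stream to a bolt that parses
// them with Apache Tika. Class names are taken from the pre-incubation Tika
// module (com.digitalpebble.stormcrawler.tika) and may have moved since; the
// JSoup parser must also be configured not to treat non-HTML content as an
// error (e.g. via a setting such as jsoup.treat.non.html.as.error: false).
builder.setBolt("shunt", new com.digitalpebble.stormcrawler.tika.RedirectionBolt())
       .localOrShuffleGrouping("parse");
builder.setBolt("tika", new com.digitalpebble.stormcrawler.tika.ParserBolt())
       .localOrShuffleGrouping("shunt", "tika");
// Downstream bolts such as the indexer would then subscribe to both the
// shunt's default stream and the "tika" stream.
```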
Apache StormCrawler (Incubating) is perfectly suited to use cases where the URLs to fetch and parse come in as streams, but it is also an appropriate solution for large-scale recursive crawls, particularly where low latency is required. The project is used in production by many organisations and is actively developed and maintained.
The Presentations page contains links to some recent presentations made about this project.