Quickstart


NOTE: These instructions assume that you have Apache Maven® installed. You will also need to install Apache Storm® 2.6.4 to run the crawler.

Once Apache Storm® is installed, the easiest way to get started is to generate a brand new Apache StormCrawler (Incubating) project using:

mvn archetype:generate -DarchetypeGroupId=org.apache.stormcrawler -DarchetypeArtifactId=stormcrawler-archetype -DarchetypeVersion=3.1.0

You'll be asked to enter a groupId (e.g. com.mycompany.crawler), an artifactId (e.g. stormcrawler), a version, a package name and details about the user agent to use.

This will not only create a fully formed project containing a POM with the required StormCrawler dependencies but also the default resource files, a default CrawlTopology class and a configuration file. Enter the directory you just created (it is named after the artifactId you specified earlier, e.g. stormcrawler) and follow the instructions in the README file.
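
For orientation, the generated project follows the standard Maven layout. The sketch below is illustrative only; the exact set of files may differ between archetype versions:

stormcrawler/
├── pom.xml
├── README.md
├── crawler-conf.yaml
└── src/main/
    ├── java/com/mycompany/crawler/CrawlTopology.java
    └── resources/          (default resource files, e.g. URL and parse filter definitions)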

Alternatively, if you can't or don't want to use the Maven archetype, you can simply copy the files from archetype-resources.

Have a look at the code of the CrawlTopology class, the crawler-conf.yaml file and the files in src/main/resources/: they are all that is needed to run a crawl topology; all the other components come from the core module.

What this CrawlTopology does is very simple: it gets URLs to crawl from a URLFrontier instance and emits them on the topology. These URLs are then partitioned by hostname, so that politeness can be enforced, and fetched. The next bolt (SiteMapParserBolt) checks whether they are sitemap files and, if not, passes them on to an HTML parser. The parser extracts the text from the document and passes it to a dummy indexer, which simply prints a representation of the content to standard output. The last component of the topology gathers information about URLs newly discovered by the parsing bolts, as well as changes to the status of the URLs emitted by the spout (redirections, errors, successes), and sends these back to URLFrontier.
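
To make this data flow more concrete, here is a simplified sketch of how such a pipeline could be wired with Storm's TopologyBuilder. This is not the generated CrawlTopology itself: the spout, bolt and stream names below are assumptions based on the description above, and the real class gets its configuration from crawler-conf.yaml rather than an empty Config.

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

// NOTE: the StormCrawler class and package names below are assumptions for
// illustration only; check the generated project for the actual imports.
import org.apache.stormcrawler.bolt.FetcherBolt;
import org.apache.stormcrawler.bolt.JSoupParserBolt;
import org.apache.stormcrawler.bolt.SiteMapParserBolt;
import org.apache.stormcrawler.bolt.URLPartitionerBolt;
import org.apache.stormcrawler.indexing.StdOutIndexer;

public class SketchTopology {

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();

        // Spout pulling the URLs to crawl from a URLFrontier instance (assumed class name)
        builder.setSpout("frontier", new org.apache.stormcrawler.urlfrontier.Spout());

        // Assign a partition key (the hostname) to each URL
        builder.setBolt("partitioner", new URLPartitionerBolt())
                .shuffleGrouping("frontier");

        // Fetch the pages; grouping on the key keeps all URLs of a given host
        // on the same task, which is what makes politeness enforceable
        builder.setBolt("fetcher", new FetcherBolt())
                .fieldsGrouping("partitioner", new Fields("key"));

        // Detect sitemap files and expand them, pass other documents through
        builder.setBolt("sitemap", new SiteMapParserBolt())
                .localOrShuffleGrouping("fetcher");

        // Parse the HTML, extract the text and outlinks
        builder.setBolt("parser", new JSoupParserBolt())
                .localOrShuffleGrouping("sitemap");

        // Dummy indexer printing a representation of the content to standard output
        builder.setBolt("indexer", new StdOutIndexer())
                .localOrShuffleGrouping("parser");

        // Status updater collecting newly discovered URLs and status changes and
        // sending them back to URLFrontier ("status" stream name and class are assumptions)
        builder.setBolt("status", new org.apache.stormcrawler.urlfrontier.StatusUpdaterBolt())
                .localOrShuffleGrouping("fetcher", "status")
                .localOrShuffleGrouping("sitemap", "status")
                .localOrShuffleGrouping("parser", "status");

        // In the real project the configuration is loaded from crawler-conf.yaml
        Config conf = new Config();
        try (LocalCluster cluster = new LocalCluster()) {
            cluster.submitTopology("crawl", conf, builder.createTopology());
            Thread.sleep(60_000);
        }
    }
}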

Of course, this topology is very primitive and its purpose is merely to give you an idea of how Apache StormCrawler (Incubating) works. In reality, you'd use a different spout and index the documents into a proper backend. Look at the external modules to see what's already available. Another limitation of this topology is that it will only work in local mode or on a single worker.

You can run the topology in local mode with:

storm local target/_INSERTJARNAMEHERE_.jar CrawlTopology -conf crawler-conf.yaml
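
If you prefer to submit the same topology to a (single-worker) Storm cluster instead of running it locally, the equivalent invocation would typically use storm jar rather than storm local, e.g.:

storm jar target/_INSERTJARNAMEHERE_.jar CrawlTopology -conf crawler-conf.yaml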


The WIKI pages contain useful information on the components and configuration and should help you go further.