Tags: web-crawler, nutch, gora

Nutch 2.3 not storing crawl data correctly in Cassandra


I'm running a crawl with mostly default options in Nutch 2.3 with a Cassandra backend. The seed list is a file with 71 URLs, and I'm crawling with the following command:

bin/crawl ~/dev/urls/ crawlid1 5

The keys are stored in Cassandra and the f, p and sc column families are created. However, when I try to read the WebPage objects back, the content and text fields are empty, even though the output states that the fetch and parse jobs ran.

Furthermore, no new links are added to the link db, despite db.update.additions.allowed having its default value of true.

After completion, I try to read the crawl data with the code below, which shows only some fields being populated. Looking at the code in FetcherJob and ParserJob, I see no reason why the content or text fields should be empty. I'm probably missing some basic setting, but googling for my problem didn't yield anything. I also set breakpoints in the ParserMapper and FetcherMapper, and they do get executed.

Does anyone know how to store fetched/parsed content in Cassandra with Nutch 2?

import static java.nio.charset.StandardCharsets.UTF_8;

import java.io.Closeable;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.gora.query.Query;
import org.apache.gora.query.Result;
import org.apache.gora.store.DataStore;
import org.apache.gora.store.DataStoreFactory;
import org.apache.gora.util.GoraException;
import org.apache.hadoop.conf.Configuration;
import org.apache.nutch.storage.WebPage;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Reads the rows from a {@link DataStore} as a {@link WebPage}.
 * 
 * @author Jeroen Vlek, jv@datamantics.com Created: Feb 25, 2015
 *
 */
public class NutchWebPageReader implements Closeable {
    private static final Logger LOGGER = LoggerFactory.getLogger(NutchWebPageReader.class);

    DataStore<String, WebPage> dataStore;

    /**
     * Initializes the datastore field with the {@link Configuration} as defined
     * in gora.properties in the classpath.
     */
    public NutchWebPageReader() {
        try {
            dataStore = DataStoreFactory.getDataStore(String.class, WebPage.class, new Configuration());
        } catch (GoraException e) {
            throw new RuntimeException(e);
        }
    }

    /**
     * @param args
     */
    public static void main(String[] args) {
        Map<String, WebPage> pages = null;
        try (NutchWebPageReader pageReader = new NutchWebPageReader()) {
            pages = pageReader.getAllPages();
        } catch (IOException e) {
            LOGGER.error("Could not close page reader.", e);
        }
        LOGGER.info("Found {} results.", pages.size());

        for (Entry<String, WebPage> entry : pages.entrySet()) {
            String key = entry.getKey();
            WebPage page = entry.getValue();
            String content = "null";
            if (page.getContent() != null) {
                content = new String(page.getContent().array(), UTF_8);
            }
            LOGGER.info("{} with content {}", key, content);
        }
    }

    /**
     * @return all pages in the data store, keyed by their URL
     */
    public Map<String, WebPage> getAllPages() {
        Query<String, WebPage> query = dataStore.newQuery();
        Result<String, WebPage> result = query.execute();
        Map<String, WebPage> resultMap = new HashMap<>();
        try {
            while (result.next()) {
                resultMap.put(result.getKey(), dataStore.get(result.getKey()));
            }
        } catch (Exception e) {
            LOGGER.error("Something went wrong while processing the query result.", e);
        }

        return resultMap;
    }

    /*
     * (non-Javadoc)
     * 
     * @see java.io.Closeable#close()
     */
    @Override
    public void close() throws IOException {
        dataStore.close();
    }

}
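
As an aside, decoding the content field via getContent().array() can be lossy: the backing array may be larger than the buffer's readable window, and a direct buffer has no backing array at all. Below is a small self-contained sketch of a safer decode; the class and method names are mine for illustration, not part of Nutch.

```java
import java.nio.ByteBuffer;
import static java.nio.charset.StandardCharsets.UTF_8;

/** Standalone sketch of decoding a WebPage-style content buffer. */
public class ContentDecoder {

    /**
     * Decodes a content buffer to a UTF-8 string. Copying the remaining
     * bytes from a duplicate avoids touching the buffer's position and
     * sidesteps array()/arrayOffset() pitfalls.
     */
    public static String decode(ByteBuffer content) {
        if (content == null) {
            return "null";
        }
        byte[] bytes = new byte[content.remaining()];
        content.duplicate().get(bytes);
        return new String(bytes, UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer content = ByteBuffer.wrap("<html>hello</html>".getBytes(UTF_8));
        System.out.println(decode(content));
    }
}
```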

And here is my nutch-site.xml:

<property>
    <name>storage.data.store.class</name>
    <value>org.apache.gora.cassandra.store.CassandraStore</value>
    <description>Default class for storing data</description>
</property>
<property>
    <name>http.agent.name</name>
    <value>Nibbler</value>
</property>
<property>
    <name>fetcher.verbose</name>
    <value>true</value>
    <description>If true, fetcher will log more verbosely.</description>
</property>
<property>
    <name>fetcher.parse</name>
    <value>true</value>
    <description>If true, fetcher will parse content. NOTE: previous
        releases would default to true. Since 2.0 this is set to false
        as a safer default.</description>
</property>
<property>
    <name>http.content.limit</name>
    <value>999999999</value>
</property>
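
For completeness, the Cassandra backend also reads gora.properties from the classpath. A minimal sketch of what mine looks like (the host/port are placeholders for my local setup):

```properties
gora.datastore.default=org.apache.gora.cassandra.store.CassandraStore
gora.cassandrastore.servers=localhost:9160
```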

EDIT

I was using Cassandra 2.0.12, but I just tried it with 2.0.2 and that didn't resolve the issue, so the Cassandra version doesn't seem to be the cause.

Changing result.get() to dataStore.get(result.getKey()) in getAllPages() resulted in some fields actually being populated, but content and text are still empty.

Some output:

[jvlek@orochimaru nutch]$ runtime/local/bin/nutch inject ~/dev/urls/
InjectorJob: starting at 2015-03-02 18:34:29
InjectorJob: Injecting urlDir: /home/jvlek/dev/urls
InjectorJob: Using class org.apache.gora.cassandra.store.CassandraStore as the Gora storage class.
InjectorJob: total number of urls rejected by filters: 0
InjectorJob: total number of urls injected after normalization and filtering: 69
Injector: finished at 2015-03-02 18:34:32, elapsed: 00:00:02
[jvlek@orochimaru nutch]$ runtime/local/bin/nutch readdb -url http://www.wired.com/
key:    http://www.wired.com/
baseUrl:        null
status: 0 (null)
fetchTime:      1425317669727
prevFetchTime:  0
fetchInterval:  2592000
retriesSinceFetch:      0
modifiedTime:   0
prevModifiedTime:       0
protocolStatus: (null)
parseStatus:    (null)
title:  null
score:  1.0
marker _injmrk_ :       y
marker dist :   0
reprUrl:        null
metadata _csh_ :        ??

[jvlek@orochimaru nutch]$ runtime/local/bin/nutch generate -batchId 1
GeneratorJob: starting at 2015-03-02 18:34:50
GeneratorJob: Selecting best-scoring urls due for fetch.
GeneratorJob: starting
GeneratorJob: filtering: true
GeneratorJob: normalizing: true
GeneratorJob: finished at 2015-03-02 18:34:54, time elapsed: 00:00:03
GeneratorJob: generated batch id: 1 containing 66 URLs
[jvlek@orochimaru nutch]$ runtime/local/bin/nutch readdb -url http://www.wired.com/
key:    http://www.wired.com/
baseUrl:        null
status: 0 (null)
fetchTime:      1425317669727
prevFetchTime:  0
fetchInterval:  2592000
retriesSinceFetch:      0
modifiedTime:   0
prevModifiedTime:       0
protocolStatus: (null)
parseStatus:    (null)
title:  null
score:  1.0
marker _injmrk_ :       y
marker _gnmrk_ :        1
marker dist :   0
reprUrl:        null
batchId:        1
metadata _csh_ :        ??

Solution

  • It's a bug in Gora. A blocker ticket has been opened:

    http://mail-archives.apache.org/mod_mbox/gora-user/201503.mbox/%3CCAGaRif3NfKmvRE%3DBhLuFw8fmxUOLW1wJhNefp_%2Bk901kjJs2ig%40mail.gmail.com%3E

    https://issues.apache.org/jira/browse/GORA-416