elasticsearch · web-crawler · apache-storm · stormcrawler

How to filter StormCrawler data from Elasticsearch


I am using apache-storm 1.2.3 and elasticsearch 7.5.0. I have successfully extracted data from 3k news websites and visualized it in Grafana and Kibana. I am getting a lot of garbage (like advertisements) in the content field; I have attached a screenshot of it. Can anyone suggest how I can filter it out? I was thinking of feeding the HTML content from ES to some Python package. Am I on the right track? If not, please suggest a good solution. Thanks in advance.

This is my crawler-conf.yaml file:

config:
  topology.workers: 1
  topology.message.timeout.secs: 300
  topology.max.spout.pending: 100
  topology.debug: false

  fetcher.threads.number: 50

  # override the JVM parameters for the workers
  topology.worker.childopts: "-Xmx2g -Djava.net.preferIPv4Stack=true"

  # mandatory when using Flux
  topology.kryo.register:
    - com.digitalpebble.stormcrawler.Metadata

  # metadata to transfer to the outlinks
  # used by Fetcher for redirections, sitemapparser, etc...
  # these are also persisted for the parent document (see below)
  # metadata.transfer:
  # - customMetadataName

  # lists the metadata to persist to storage
  # these are not transfered to the outlinks
  metadata.persist:
    - _redirTo
    - error.source
    - isSitemap
    - isFeed

  http.agent.name: "Nitesh Singh"
  http.agent.version: "1.0"
  http.agent.description: "built with StormCrawler Elasticsearch Archetype 1.16"
  http.agent.url: "http://someorganization.com/"
  http.agent.email: "nite0sh@gmail.com"

  # The maximum number of bytes for returned HTTP response bodies.
  # The fetched page will be trimmed to 65KB in this case
  # Set -1 to disable the limit.
  http.content.limit: 65536

  # FetcherBolt queue dump => comment out to activate
  # if a file exists on the worker machine with the corresponding port number
  # the FetcherBolt will log the content of its internal queues to the logs
  # fetcherbolt.queue.debug.filepath: "/tmp/fetcher-dump-{port}"

  parsefilters.config.file: "parsefilters.json"
  urlfilters.config.file: "urlfilters.json"

  # revisit a page daily (value in minutes)
  # set it to -1 to never refetch a page
  fetchInterval.default: 1440

  # revisit a page with a fetch error after 2 hours (value in minutes)
  # set it to -1 to never refetch a page
  fetchInterval.fetch.error: 120
  fetchInterval.error: -1

  # text extraction for JSoupParserBolt
  textextractor.include.pattern:
   - DIV[id="maincontent"]
   - DIV[itemprop="articleBody"]
   - ARTICLE

  textextractor.exclude.tags:
   - STYLE
   - SCRIPT

  # custom fetch interval to be used when a document has the key/value in its metadata
  # and has been fetched successfully (value in minutes)
  # fetchInterval.FETCH_ERROR.isFeed=true: 30
  # fetchInterval.isFeed=true: 10

  # configuration for the classes extending AbstractIndexerBolt
  # indexer.md.filter: "someKey=aValue"
  indexer.url.fieldname: "url"
  indexer.text.fieldname: "content"
  indexer.canonical.name: "canonical"
  indexer.md.mapping:
  - parse.title=title
  - parse.keywords=keywords
  - parse.description=description
  - domain=domain

  # Metrics consumers:
  topology.metrics.consumer.register:
    - class: "org.apache.storm.metric.LoggingMetricsConsumer"
      parallelism.hint: 1


Solution

  • Did you configure the text extractor? e.g.

      # text extraction for JSoupParserBolt
      textextractor.include.pattern:
       - DIV[id="maincontent"]
       - DIV[itemprop="articleBody"]
       - ARTICLE
    
      textextractor.exclude.tags:
       - STYLE
       - SCRIPT
    

    This will restrict the text to the specified elements if they are found, and/or remove the ones listed in the exclusions.
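
    To see what the include / exclude settings achieve, here is a minimal standalone sketch in plain Python (not StormCrawler code; the `INCLUDE`/`EXCLUDE` sets are placeholders mirroring the config above). It keeps text only from the included elements and drops the excluded tags entirely:

    ```python
    # Illustrative sketch of textextractor.include.pattern / exclude.tags,
    # using only the Python standard library. Not the actual StormCrawler
    # implementation (which matches CSS-like patterns via JSoup).
    from html.parser import HTMLParser

    class MainContentExtractor(HTMLParser):
        INCLUDE = {"article"}          # stands in for textextractor.include.pattern
        EXCLUDE = {"style", "script"}  # stands in for textextractor.exclude.tags

        def __init__(self):
            super().__init__()
            self.include_depth = 0
            self.exclude_depth = 0
            self.chunks = []

        def handle_starttag(self, tag, attrs):
            if tag in self.INCLUDE:
                self.include_depth += 1
            if tag in self.EXCLUDE:
                self.exclude_depth += 1

        def handle_endtag(self, tag):
            if tag in self.INCLUDE and self.include_depth:
                self.include_depth -= 1
            if tag in self.EXCLUDE and self.exclude_depth:
                self.exclude_depth -= 1

        def handle_data(self, data):
            # Keep text only when inside an included element
            # and not inside an excluded one.
            if self.include_depth and not self.exclude_depth:
                text = data.strip()
                if text:
                    self.chunks.append(text)

    def extract_main_text(html: str) -> str:
        parser = MainContentExtractor()
        parser.feed(html)
        return " ".join(parser.chunks)

    html = """<html><body>
    <div class="ads">Buy now!</div>
    <article>Real news text.<script>track()</script></article>
    </body></html>"""
    print(extract_main_text(html))  # -> Real news text.
    ```

    The ad text outside `<article>` and the `<script>` body are both dropped, which is exactly the behaviour the two config keys give you at crawl time.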

    Most news sites use some form of tag to mark the main content.

    The example you gave contains such an element, for which you could add a pattern.

    There are various boilerplate-removal libraries you could embed in a ParseFilter, but their accuracy varies quite a lot.
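
    The core idea behind such libraries (e.g. boilerpipe or jusText) is a density heuristic: short or link-heavy blocks tend to be navigation, ads, or footers, while long text-dense blocks tend to be article body. A very rough standalone sketch of that idea, whether applied in a ParseFilter or in the Python post-processing the question mentions (the thresholds here are arbitrary assumptions; real libraries are far more sophisticated):

    ```python
    # Toy density heuristic for boilerplate detection: a block is judged
    # boilerplate if it is very short or if most of its words sit inside
    # links. Pure standard library; thresholds are illustrative only.
    import re

    def is_boilerplate(block: str, min_words: int = 10,
                       max_link_density: float = 0.3) -> bool:
        # Words that appear inside anchor tags.
        links = re.findall(r"<a\b[^>]*>(.*?)</a>", block, re.I | re.S)
        # All words, with markup stripped.
        text = re.sub(r"<[^>]+>", " ", block)
        words = text.split()
        if len(words) < min_words:
            return True
        link_words = sum(len(re.sub(r"<[^>]+>", " ", l).split()) for l in links)
        return link_words / len(words) > max_link_density

    blocks = [
        '<p><a href="/subscribe">Subscribe</a> | <a href="/login">Log in</a></p>',
        "<p>The city council voted on Tuesday to approve the new transit plan, "
        "which will add three bus lines and extend service hours.</p>",
    ]
    kept = [b for b in blocks if not is_boilerplate(b)]
    # kept contains only the article-like second block
    ```

    The markup-based `textextractor` settings above are cheaper and more predictable; a heuristic like this is mainly useful as a fallback for sites whose templates you have not mapped.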