java, web-crawler, crawler4j

How do I resume crawling from the last depth I reached when I restart my crawler?


Hello everyone. I am making a web application that crawls many pages from a specific website. I started my crawler4j job with unlimited depth and page count, but it suddenly stopped because of an internet connection failure. Now I want to continue crawling that website without re-fetching the URLs I visited before, given that I know the last depth I reached.

Note: I want a way that does not require checking my stored URLs against the URLs I am about to fetch, because I don't want to send too many requests to this site.

Thanks ☺


Solution

  • You can use resumable crawling with crawler4j by enabling this feature in your crawl configuration:

    crawlConfig.setResumableCrawling(true);
    

    With this flag enabled, crawler4j persists its crawl state (the URL frontier and the set of already-processed document IDs) to the crawl storage folder. Restarting the controller with the same storage folder then continues from where the previous run stopped instead of re-fetching the URLs it already visited. See the crawler4j documentation for details.
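To show where that one line fits, here is a minimal controller-setup sketch. It is not from the original answer: the storage folder path, the seed URL, the crawler count, and the `MyCrawler` class are placeholders you would replace with your own; only `setResumableCrawling(true)` and reusing the same storage folder are the essential parts.

```java
import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class ResumableCrawlLauncher {
    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();

        // Must point to the SAME folder across restarts: crawler4j keeps
        // its frontier and docID databases here. (Path is a placeholder.)
        config.setCrawlStorageFolder("/data/crawl/root");

        // Persist crawl state so a restart continues instead of starting over.
        config.setResumableCrawling(true);

        // -1 means unlimited, matching the original unlimited-depth run.
        config.setMaxDepthOfCrawling(-1);
        config.setMaxPagesToFetch(-1);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtServer robotstxtServer =
                new RobotstxtServer(new RobotstxtConfig(), pageFetcher);
        CrawlController controller =
                new CrawlController(config, pageFetcher, robotstxtServer);

        // Seeds already processed in a previous run are recognized via the
        // persisted docID database and are not fetched again.
        controller.addSeed("https://www.example.com/");

        // MyCrawler is your WebCrawler subclass (a placeholder name here).
        controller.start(MyCrawler.class, /* numberOfCrawlers = */ 4);
    }
}
```

Running this a second time with the same storage folder picks up the pending frontier from the interrupted run, which also addresses the note above: the duplicate check happens against the local database, not by re-requesting pages from the site.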