I am attempting to create a Perl web spider using WWW::Mechanize.
It should crawl the whole site at a URL entered by the user and extract all of the links from every page on the site.
What I have so far:
use strict;
use WWW::Mechanize;

my $mech = WWW::Mechanize->new();

my $urlToSpider = $ARGV[0];
$mech->get($urlToSpider);

print "\nThe url that will be spidered is $urlToSpider\n";
print "\nThe links found on the url's starting page\n";

my @foundLinks = $mech->find_all_links();

foreach my $linkList (@foundLinks) {
    unless ($linkList->[0] =~ /^http?:\/\//i || $linkList->[0] =~ /^https?:\/\//i) {
        $linkList->[0] = "$urlToSpider" . $linkList->[0];
    }
    print "$linkList->[0]";
    print "\n";
}
What it does:
1. At present it extracts and lists all of the links on the starting page.
2. If a link is in /contact-us or /help form, it prepends 'http://www.thestartingurl.com' so it becomes 'http://www.thestartingurl.com/contact-us'.
The problem:
At the moment it also finds links to external sites, which I do not want. For example, if I spider 'http://www.tree.com' it finds internal links such as http://www.tree.com/find-us, but it also picks up links to other sites like http://www.hotwire.com.
How do I stop it from finding these external URLs?
After finding all of the URLs on the page, I also want to save the internal-only links to a new array called @internalLinks, but I cannot get that working either.
This should do the trick:
my @internalLinks = $mech->find_all_links(url_abs_regex => qr/^\Q$urlToSpider\E/);
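That gives you the internal-only list in @internalLinks; each element is a WWW::Mechanize::Link object, so printing the absolute URLs is then just:

print $_->url_abs(), "\n" for @internalLinks;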
If you don't want CSS links (e.g. hits from <link rel="stylesheet"> tags), restrict the search to anchor tags:
my @internalLinks = $mech->find_all_links(url_abs_regex => qr/^\Q$urlToSpider\E/, tag => 'a');
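And since the goal is to spider the whole site rather than just the starting page, here is a rough sketch of how the same url_abs_regex filter could drive a simple crawl loop. The %seen hash, %found hash, @queue array, and the autocheck => 0 constructor argument are my own illustrative choices, not something your code already has; it also assumes every internal URL begins with exactly the $urlToSpider the user typed:

use strict;
use warnings;
use WWW::Mechanize;

my $urlToSpider = $ARGV[0];

# autocheck => 0 so a failed GET doesn't die and kill the crawl
my $mech = WWW::Mechanize->new( autocheck => 0 );

my %seen;                       # URLs already fetched
my %found;                      # internal links already recorded
my @queue = ($urlToSpider);     # pages still to visit
my @internalLinks;              # every unique internal link found

while ( my $url = shift @queue ) {
    next if $seen{$url}++;      # don't fetch the same page twice

    $mech->get($url);
    next unless $mech->success && $mech->is_html;

    # Only keep links whose absolute URL stays on the starting site
    my @links = $mech->find_all_links(
        url_abs_regex => qr/^\Q$urlToSpider\E/,
        tag           => 'a',
    );

    for my $link (@links) {
        my $abs = $link->url_abs()->as_string;
        push @internalLinks, $abs unless $found{$abs}++;
        push @queue, $abs;      # the %seen check above skips revisits
    }
}

print "$_\n" for @internalLinks;

It's deliberately naive: it treats URLs that differ only by a fragment or query string as separate pages, and it only follows <a> tags.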
Also, the regex check and string concatenation you're using to turn relative links into absolute ones can be replaced with:
print $linkList->url_abs();
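With that, the whole unless/concatenation block in your loop shrinks to something like this (same variable names as your code; url_abs() returns a URI object that stringifies to the absolute URL):

foreach my $linkList (@foundLinks) {
    print $linkList->url_abs(), "\n";
}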