I have just scraped a bunch of Google Buzz data, and I want to know which Buzz posts reference the same news articles. The problem is that many of the links in these posts have been modified by URL shorteners, so it could be the case that many distinct shortened URLs actually all point to the same news article.
Given that I have millions of posts, what is the most efficient way (preferably in Python) for me to resolve these shortened URLs to their final destinations?
Does anyone know if the URL shorteners impose strict request rate limits? If I keep this down to 100/second (all coming from the same IP address), do you think I'll run into trouble?
UPDATE & PRELIMINARY SOLUTION: The responses have led me to the following simple solution:
import urllib2
response = urllib2.urlopen("http://bit.ly/AoifeMcL_ID3") # Some shortened url
url_destination = response.url
That's it!
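For millions of posts it may be worth avoiding full GET requests and avoiding resolving the same short URL twice. Below is a minimal sketch of that idea (assuming Python 2 to match the snippets above; the resolve function and the in-memory _cache dictionary are names I made up, not part of any library): it issues HEAD requests, follows Location headers manually, and caches each result.

import httplib
import urlparse

_cache = {}  # short URL -> final URL, so duplicates are only resolved once

def resolve(url, max_hops=10):
    # Follow redirects with HEAD requests so no page bodies are downloaded.
    if url in _cache:
        return _cache[url]
    current = url
    for _ in range(max_hops):
        parts = urlparse.urlparse(current)
        conn_cls = httplib.HTTPSConnection if parts.scheme == 'https' else httplib.HTTPConnection
        conn = conn_cls(parts.netloc)
        path = parts.path or '/'
        if parts.query:
            path += '?' + parts.query
        conn.request('HEAD', path)
        resp = conn.getresponse()
        location = resp.getheader('location')
        conn.close()
        if resp.status in (301, 302, 303, 307) and location:
            current = urlparse.urljoin(current, location)  # next hop in the redirect chain
        else:
            break  # no further redirect: this is the destination
    _cache[url] = current
    return current

This is only a sketch; it skips timeouts and error handling, which you would want before running it over millions of links.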
The easiest way to get the destination of a shortened URL is with urllib. Given that the short URL is valid (response code 200), the final URL will be returned to you:
>>> import urllib
>>> resp = urllib.urlopen('http://bit.ly/bcFOko')
>>> resp.getcode()
200
>>> resp.url
'http://mrdoob.com/lab/javascript/harmony/'
And that's that!
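As a rough sketch of how this answers the original grouping question (the posts list here is made-up example data, not your real Buzz dump), you can map each resolved destination back to the posts that linked to it:

import urllib
from collections import defaultdict

posts = [(1, 'http://bit.ly/bcFOko'), (2, 'http://bit.ly/bcFOko')]  # hypothetical (post_id, url) pairs

by_article = defaultdict(list)
for post_id, short_url in posts:
    resp = urllib.urlopen(short_url)
    if resp.getcode() == 200:
        by_article[resp.url].append(post_id)

# post ids in the same list reference the same article
for article, ids in by_article.items():
    print article, ids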