I am trying to grab and parse multiple URLs using urllib and BeautifulSoup, but I get the following error:
AttributeError: 'list' object has no attribute 'timeout'
From what I understand, the parser is telling me that I submitted a list and it is looking for a single URL. How can I process multiple URLs?
Here is my code:
from bs4 import BeautifulSoup
from bs4.element import Comment
import urllib.request
def tag_visible(element):
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    return True
addresses = ["https://en.wikipedia.org", "https://stackoverflow.com", "https://techcrunch.com"]
def text_from_html(body):
    soup = BeautifulSoup(body, 'html.parser')
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)
    return u" ".join(t.strip() for t in visible_texts)

html = urllib.request.urlopen(addresses).read()
print(text_from_html(html))
Your error says it plainly: 'list' object has no attribute 'timeout'. That's because urlopen doesn't accept a list; it expects a single URL string. You should call it inside a loop, like this:
my_texts = []
for each in addresses:
    html = urllib.request.urlopen(each).read()
    print(text_from_html(html))  # or collect the results in a list:
    my_texts.append(text_from_html(html))
I would also suggest using a better HTTP library than urllib: use requests instead (import requests).
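Here is a minimal sketch of the same loop using requests instead of urllib (it assumes you have installed requests and beautifulsoup4 via pip; the timeout value and the two example URLs are arbitrary choices, not requirements):

```python
import requests
from bs4 import BeautifulSoup
from bs4.element import Comment

def tag_visible(element):
    # Filter out text that lives inside non-visible tags or HTML comments
    if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
        return False
    if isinstance(element, Comment):
        return False
    return True

def text_from_html(body):
    soup = BeautifulSoup(body, 'html.parser')
    texts = soup.findAll(text=True)
    visible_texts = filter(tag_visible, texts)
    return u" ".join(t.strip() for t in visible_texts)

def fetch_visible_texts(urls):
    # Fetch each URL one at a time -- requests.get, like urlopen,
    # takes a single URL string, not a list
    results = []
    for url in urls:
        response = requests.get(url, timeout=10)  # timeout so a dead server can't hang the loop
        response.raise_for_status()               # raise on 4xx/5xx instead of parsing an error page
        results.append(text_from_html(response.text))
    return results

if __name__ == "__main__":
    addresses = ["https://en.wikipedia.org", "https://stackoverflow.com"]
    for text in fetch_visible_texts(addresses):
        print(text[:200])  # print a preview of each page's visible text
```

requests also decodes the response body for you (response.text is already a string), so you don't need the .read() call that urlopen requires.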