python, web-scraping, beautifulsoup

beautifulsoup 4: Segmentation fault (core dumped)


I crawled the following page:

http://www.nasa.gov/topics/earth/features/plains-tornadoes-20120417.html

But I got Segmentation fault (core dumped) when calling BeautifulSoup(page_html), where page_html is the content returned by the requests library. Is this a bug in BeautifulSoup? Is there any way to work around it? Even an approach like try...except would help me keep my code running. Thanks in advance.

The code is as following:

import requests
from bs4 import BeautifulSoup

toy_url = 'http://www.nasa.gov/topics/earth/features/plains-tornadoes-20120417.html'
res = requests.get(toy_url, headers={"User-Agent": "Firefox/12.0"})
page = res.content
soup = BeautifulSoup(page)

Solution

  • This problem is caused by a bug in lxml, which is fixed in lxml 2.3.5. You can upgrade lxml, or use Beautiful Soup with the html5lib parser or the standard library's html.parser instead, as sketched below.
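
    A minimal sketch of the workaround, assuming html5lib is installed (pip install html5lib) if you choose that option; html.parser ships with Python and needs no extra install:

    import requests
    from bs4 import BeautifulSoup

    toy_url = 'http://www.nasa.gov/topics/earth/features/plains-tornadoes-20120417.html'
    res = requests.get(toy_url, headers={"User-Agent": "Firefox/12.0"})

    # Name the parser explicitly so Beautiful Soup does not fall back to the
    # buggy lxml version. "html.parser" uses Python's built-in parser;
    # "html5lib" is slower but very lenient with broken markup.
    soup = BeautifulSoup(res.content, "html.parser")
    # soup = BeautifulSoup(res.content, "html5lib")

    print(soup.title)

    Passing the parser name as the second argument also makes the behavior reproducible across machines, since otherwise Beautiful Soup silently picks whichever parser happens to be installed.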