Dear Stack Overflow community,
I would like to scrape news articles from the CNN RSS feed and get the link for each scraped article. This works very well with the Python Newspaper library, but unfortunately I am unable to get the output in a usable format, i.e. a list or a dictionary.
I want to add the scraped links to one SINGLE list, instead of many separate lists.
import feedparser as fp
import newspaper
from newspaper import Article

website = {"cnn": {"link": "http://edition.cnn.com/", "rss": "http://rss.cnn.com/rss/cnn_topstories.rss"}}

for source, value in website.items():
    if 'rss' in value:
        # if there is an RSS value for a company, it will be extracted into d
        d = fp.parse(value['rss'])
        for entry in d.entries:
            if hasattr(entry, 'published'):
                article = {}
                article['link'] = entry.link
                print(article['link'])
The output is as follows:
http://rss.cnn.com/~r/rss/cnn_topstories/~3/5aHaFHz2VtI/index.html
http://rss.cnn.com/~r/rss/cnn_topstories/~3/_O8rud1qEXA/joe-walsh-trump-gop-voters-sot-crn-vpx.cnn
http://rss.cnn.com/~r/rss/cnn_topstories/~3/xj-0PnZ_LwU/index.html
.......
I would like to have ONE list with all the links in it, i.e.:
list = [http://rss.cnn.com/~r/rss/cnn_topstories/~3/5aHaFHz2VtI/index.html, http://rss.cnn.com/~r/rss/cnn_topstories/~3/_O8rud1qEXA/joe-walsh-trump-gop-voters-sot-crn-vpx.cnn, http://rss.cnn.com/~r/rss/cnn_topstories/~3/xj-0PnZ_LwU/index.html, ...]
I tried appending the content via a for loop as follows:
for i in article['link']:
    article_list = []
    article_list.append(i)
    print(article_list)
But then the output is like this:
['h']
['t']
['t']
['p']
[':']
['/']
['/']
['r']
['s']
...
Does anyone know an alternative method to get the content into one list? Or, alternatively, into a dictionary like this:
dict = {'links': [link1, link2, link3]}
Thank you VERY much in advance for your help!!
The problem is that article['link'] is a string, so your loop iterates over it one character at a time, and article_list is also re-created on every iteration. Initialize the list once, before the loop over the feed entries, and append each link to it:
article_list = []

for entry in d.entries:
    if hasattr(entry, 'published'):
        article = {}
        article['link'] = entry.link
        article_list.append(article['link'])
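If you prefer the dictionary form mentioned in the question, here is a minimal self-contained sketch using only feedparser and the feed URL from your code; the names article_links and articles are just illustrative choices, not anything required by the library:

import feedparser as fp

website = {"cnn": {"link": "http://edition.cnn.com/", "rss": "http://rss.cnn.com/rss/cnn_topstories.rss"}}

# one single list that collects the links from every feed entry
article_links = []

for source, value in website.items():
    if 'rss' in value:
        d = fp.parse(value['rss'])
        for entry in d.entries:
            if hasattr(entry, 'published'):
                article_links.append(entry.link)

# optional dictionary form: {'links': [link1, link2, link3, ...]}
articles = {'links': article_links}

print(articles['links'])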