python, facebook-graph-api, python-3.x, facebook-graph-api-v2.2, graph-api-explorer

Facebook page info using Graph API 2.2


I have the unique IDs of 1000 Facebook pages in a Google spreadsheet. I want to crawl all the pages to get their info (likes, email, etc.). What should I do? Also, I cannot run the search query in my browser, and I don't know where to run a script. Please be as detailed as possible. Thank you :)

I tried to make a Python script for this, but it works for the first entry only.

import urllib as url2
import json


f=open('ids.txt')
for i in f:
        url="http://graph.facebook.com/"+str(int(i))+"?fields=likes"
        data = url2.urlopen(url).read()
        print data
        data2=json.loads(data)
        print "number of likes on page with id "+str(data2["id"])+" has "+str(data2["likes"])+" likes !"

f.close()

The ids.txt file contains the IDs of Facebook pages.

1 493343230696447
2 1767379894975
3 122116091270024
4 545044065615713

Solution

  • A file object is a line iterator, not a word iterator. So you need to change:

    for i in f:
        url="http://graph.facebook.com/"+str(int(i))+"?fields=likes"
    

    To:

    for i in f:
        # i holds the line, not the index
        index, page_id = i.strip().split()[:2]
        url="http://graph.facebook.com/"+page_id+"?fields=likes"
        # ...
    

    This way you split the line, after removing the line break ('\n'), into the index and the page_id separately.

    There is no need to cast the page_id string to an integer and back to a string; it is only used for string concatenation, as shown in the full sketch below.
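
  • Putting the fix back into the script, a minimal sketch of the whole loop might look like this (assuming Python 3, per the question's tags, and assuming the endpoint still answers without an access token, as in the question's code — newer Graph API versions may require one):

    import json
    import urllib.request  # Python 3 replacement for the Python 2 urllib used in the question

    with open('ids.txt') as f:
        for line in f:
            parts = line.strip().split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            index, page_id = parts[:2]
            url = "http://graph.facebook.com/" + page_id + "?fields=likes"
            data = urllib.request.urlopen(url).read()
            data2 = json.loads(data.decode('utf-8'))
            print("page with id " + str(data2.get("id")) + " has " + str(data2.get("likes")) + " likes")

    The with statement closes the file automatically, so the explicit f.close() is no longer needed.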