python, python-3.x, beautifulsoup, request

Not able to scrape all the reviews


I am trying to scrape this website to get the reviews, but I am facing an issue.


Solution

  • Looking at the website, the "Show more reviews" button makes an AJAX call that returns the additional reviews. All you have to do is find its URL and send a GET request to it (I've pulled the required product ID out of the page with some simple regex):

    import requests
    import re
    from bs4 import BeautifulSoup

    headers = {
        "user-agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) snap Chromium/74.0.3729.169 Chrome/74.0.3729.169 Safari/537.36"
    }
    url = "https://www.capterra.com/p/134048/HiMama-Preschool-Child-Care-App/#reviews"
    Data = []
    # Each page is equivalent to 50 comments; the main page counts as page 1
    MaximumCommentPages = 3

    with requests.Session() as session:
        session.headers.update(headers)
        info = session.get(url)
        # Get the product ID, needed for requesting more comments
        productID = re.search(r'"product_id":(\w*)', info.text).group(1)
        # Extract the reviews already present on the main page
        soup = BeautifulSoup(info.content, "html.parser")
        for x in soup.findAll("div", {"class": "review-comments"}):
            Data.append(x)
        # Get the additional pages; page 1 is the main page, which we already extracted
        params = {
            "page": "",
            "product_id": productID
        }
        for page in range(2, MaximumCommentPages + 1):
            params["page"] = str(page)
            additionalInfo = session.get("https://www.capterra.com/gdm_reviews", params=params)
            print(additionalInfo.url)
            # Extract the reviews from the AJAX response
            soup = BeautifulSoup(additionalInfo.content, "html.parser")
            for x in soup.findAll("div", {"class": "review-comments"}):
                Data.append(x)

    # Write the collected reviews to a file, the old-fashioned way
    with open('review.csv', 'w', encoding='utf-8') as f:
        for counter, one in enumerate(Data, 1):
            f.write(str(counter))
            f.write(one.text)
            f.write('\n')
    

    Notice how I'm using a session to preserve cookies for the AJAX call.
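
    For example, here's a minimal sketch (using httpbin.org as a stand-in server, which isn't part of the answer above) of what the session buys you: a cookie set by one response is automatically sent back on the next request, which two independent requests.get() calls would not do.

        import requests

        with requests.Session() as session:
            # The server sets a cookie via this endpoint (it redirects to /cookies)...
            session.get("https://httpbin.org/cookies/set/sessionid/abc123")
            # ...and the session sends it back automatically on later requests
            reply = session.get("https://httpbin.org/cookies")
            print(reply.json())  # {'cookies': {'sessionid': 'abc123'}}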

    Edit 1: You can reload the webpage multiple times and call the AJAX endpoint again to get even more data.
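
    If you don't know how many pages there are in advance, one option (a sketch; the empty-page stopping condition is an assumption, since the endpoint's behaviour for out-of-range pages isn't documented here) is to replace the fixed-page loop inside the session block with one that keeps going until a page comes back with no review-comments divs:

        page = 2
        while True:
            params["page"] = str(page)
            additionalInfo = session.get("https://www.capterra.com/gdm_reviews", params=params)
            soup = BeautifulSoup(additionalInfo.content, "html.parser")
            table = soup.findAll("div", {"class": "review-comments"})
            if not table:  # assumed: an empty result means no more reviews
                break
            Data.extend(table)
            page += 1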

    Edit 2: Save data using your own method.
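
    For instance, if you'd rather have a well-formed CSV than the plain writes above, one way (an illustrative sketch, not part of the original answer) is Python's standard csv module:

        import csv

        # Data is the list of review-comments tags collected above
        with open('review.csv', 'w', newline='', encoding='utf-8') as f:
            writer = csv.writer(f)
            writer.writerow(['number', 'review'])  # header row
            for counter, one in enumerate(Data, 1):
                writer.writerow([counter, one.text.strip()])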

    Edit 3: Changed some stuff; it now gets any number of pages for you and saves to a file with good ol' open().