Tags: r, web-scraping, rvest

Trouble scraping links from a web page with rvest


New to web scraping so forgive the basic question, but I'm trying to scrape film URLs from lists on Letterboxd and having some issues. Using this list as an example, I was able to find the link location in the HTML here:

[screenshot: the link element and its class in the page HTML]

However, I can't actually get the link out of this. I've tried two methods so far. First I tried grabbing all of the link elements, hoping to then filter out the ones I didn't need:

library(rvest)
link <- 'https://letterboxd.com/horrorville/list/horrorville-community-80s-video-store-horror/'
page <- read_html(link)

page %>%
  html_elements('a') %>%
  html_attr('href')

This did return URLs, but none of them were for the films in the list. I then tried selecting based on the class name. I'm not very familiar with HTML, but my understanding is that class="frame has-menu" indicates two separate classes, frame and has-menu, which can be combined in a CSS selector (and so in rvest) by prefixing each with a period, so I tried this:

page %>%
  html_elements('.frame.has-menu') %>%
  html_attr('href')

That didn't return anything at all.

I saw this other question, which sounded similar, so I examined the Network tab of my browser's (Firefox) devtools as the responder there suggested. I wasn't quite sure what to make of it, but it looked like the requests relating to the films were GET requests, whereas in the other question the responder said the issue was that rvest can't handle POST requests. What is my issue here?
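
Would something like the following be a reasonable way to check whether the film data is even in the HTML that rvest downloads, rather than being added later by JavaScript? I'm just grepping the raw document for the class I saw in devtools:

# Does the class seen in devtools appear anywhere in the HTML
# returned by the initial GET request?
grepl("frame has-menu", as.character(page), fixed = TRUE)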


Solution

  • Took a bit of trial and error to narrow down to the correct HTML portion that contains the individual URLs (perhaps someone else can add to this post if there is a quicker, more efficient way to get there). I used Chrome's Developer Tools (right-click anywhere on the page > Inspect) and looked through the markup to see where the movie URLs live.

    The list spans two pages, so we grab both page URLs and loop through them. The data-target-link attribute contains only a partial URL, so we prepend the site URL to it to get the full path.

    library(rvest)
    
    link <- 'https://letterboxd.com/horrorville/list/horrorville-community-80s-video-store-horror/'
    
    # The list spans two pages
    all_links <- c(link, paste0(link, "page/2/"))
    
    # Loop through the pages and extract the film URLs from each
    all_movies_urls <- lapply(
      X = all_links,
      FUN = function(lnk) {
        
        # Read the entire page's HTML
        page_html <- read_html(lnk)
        
        # Narrow down to the elements that carry the film URLs
        movie_urls <- page_html %>% 
          html_elements(".poster-container") %>% 
          html_elements(".really-lazy-load") %>% 
          html_attr("data-target-link")
        
        # Prepend the site URL to each partial film URL
        paste0("https://letterboxd.com", movie_urls)
      }
    )
    
    # Unlist to get vector of URLs
    unlist(all_movies_urls)
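
    As a design note, the page count is hardcoded above. Here is a sketch of how the number of pages could be derived from the pagination links instead; the ".paginate-page a" selector is my guess at Letterboxd's markup, so it may need adjusting after inspecting the page:

    # Read the first page and pull the page numbers from the pagination
    # links (selector is an assumption; adjust after inspecting the page)
    first_page <- read_html(link)
    page_numbers <- first_page %>%
      html_elements(".paginate-page a") %>%
      html_text2() %>%
      as.integer()
    
    # Fall back to a single page if no pagination links were found
    n_pages <- if (length(page_numbers) > 0) max(page_numbers, na.rm = TRUE) else 1
    
    # Page 1 is the list URL itself; later pages get a "page/N/" suffix
    all_links <- c(link, paste0(link, "page/", seq_len(n_pages)[-1], "/"))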