I've been attempting to scrape data from a website using Scrapy 2.11.2 along with the scrapy-playwright plugin (0.0.34). This is the website that I'm trying to scrape: https://disneycruise.disney.go.com/cruises-destinations/list/.
The problem is that the website doesn't seem to fully render the dynamic portions of its content. I do get an HTML response back, and I can see various JS scripts and other artifacts being downloaded in the logs. But when I examine the HTML response, the product-availability-root element is empty, e.g.:
<product-availability-root></product-availability-root>
whereas when I browse to it in a browser, that element has a lot of children/content in it.
So I've tried Scrapy with Playwright, in both headless and headed mode, without success in populating that element.
Here's what I'm using in my Scrapy request:
yield scrapy.Request(
    url,
    meta={
        "playwright": True,
        "playwright_include_page": True,
        "playwright_launch_options": {
            "headless": False,  # I've used both True and False
        },
        "playwright_page_methods": [
            PageMethod("wait_for_timeout", 30000),
            PageMethod("evaluate", "window.scrollBy(0, document.body.scrollHeight)"),
            PageMethod("screenshot", path="screenshot.png", full_page=True),
        ],
    },
    callback=self.parse,
    errback=self.errback,
    dont_filter=True,
)
The screenshot comes back as a blank page. I've tried the same code on a different website and I know the scroll-to-bottom works - that other site fully renders. So there's something tricky about this website. I've put in a large timeout to give the browser plenty of time to render, and I've also tried Playwright with both Chromium and Firefox, with no difference in results.
I've also tried setting DOM breakpoints on the product-availability-root element in Chrome to debug this. However, the combination of obfuscated code and a huge call stack makes it hard to figure out what's going on.
Any hints or suggestions?
You need to ensure that the JavaScript that populates the <product-availability-root> tag has run. I suspect that you're trying to extract the content before the page is fully rendered. The spider below waits for that content explicitly:
import scrapy
from bs4 import BeautifulSoup


class DocsSpider(scrapy.Spider):
    name = "docs"
    allowed_domains = ["disneycruise.disney.go.com"]

    def start_requests(self):
        url = "https://disneycruise.disney.go.com/cruises-destinations/list/"
        yield scrapy.Request(
            url,
            meta=dict(
                playwright=True,
                playwright_include_page=True,
            ),
        )

    async def parse(self, response):
        page = response.meta["playwright_page"]
        # Wait for the custom element to be present in the DOM.
        await page.wait_for_selector("product-availability-root")
        # Wait until something has actually been rendered into it.
        await page.wait_for_function(
            """
            () => {
                const element = document.querySelector("product-availability-root");
                return element && element.innerHTML.trim().length > 0;
            }
            """
        )
        # Belt and braces: wait for the product card list itself.
        await page.wait_for_selector(".product-card-list", timeout=30000)
        html = await page.content()
        await page.close()
        soup = BeautifulSoup(html, "html.parser")
        for title in soup.find_all("h2", class_="product-card-content__name"):
            print(title.get_text())
The parse() method attacks this in a few ways (and there's some redundancy here, which can be pruned down!). It waits for <product-availability-root> to be available in the DOM. This seems to be a static part of the document, so it should be immediate. It then waits for some content to be dynamically rendered into the <product-availability-root> node. This should be sufficient. But, just to be super sure that everything we need is there, I also wait for one or more elements with the .product-card-list class to be present.
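One caveat with playwright_include_page=True: you become responsible for closing the page. parse() closes it on the happy path, but if a request fails the page leaks, so the scrapy-playwright docs recommend also closing it from an errback. A minimal sketch (the method name is illustrative; wire it up with errback=self.errback_close_page on the request):

async def errback_close_page(self, failure):
    # The page object travels with the failed request's meta; close it
    # so failed requests don't leak browser pages.
    page = failure.request.meta["playwright_page"]
    await page.close()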
Just to illustrate that the content is indeed retrieved, I extract the <h2> tags using BeautifulSoup and then print the titles. The output should look something like this:
2-Night Disney Magic at Sea Cruise from Sydney ending in Brisbane
2-Night Disney Magic at Sea Cruise from Auckland
2-Night Disney Magic at Sea Cruise from Melbourne
2-Night Disney Magic at Sea Cruise from Sydney
3-Night Bahamian Cruise from Fort Lauderdale ending in San Juan
Of course, you can loop over the div.product-card-wrapper elements and extract whatever content you require. Presumably you'll want to grab the other fields from each card as well.
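As a minimal sketch of that loop, reusing the soup object from above - note that product-card-wrapper and product-card-content__name are the only class names taken from the page, so any additional fields need selectors you verify against the live page yourself:

# Iterate over the product cards and pull out the pieces you need.
for card in soup.find_all("div", class_="product-card-wrapper"):
    name = card.find("h2", class_="product-card-content__name")
    if name is not None:
        print(name.get_text(strip=True))
    # Additional fields (prices, dates, links) would be extracted here,
    # using selectors checked against the rendered page.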