Page I'm scraping: https://data.nordpoolgroup.com/auction/day-ahead/prices?deliveryDate=2025-01-15&currency=EUR&aggregation=DailyAggregate&deliveryAreas=AT,FR
I'm trying to scrape the table at the bottom of the page, which has delivery dates and corresponding prices. I'm able to scrape the data loaded on the page, but as you can see this is a scrollable table which reveals more rows as you scroll down.
Could someone please point me in the right direction as to how to scroll down the table and continue iterating through the rows?
In general you need to execute JavaScript on the client to scroll an element, for example: driver.execute_script("window.scrollTo(0, 500);")
However, your case is a bit more complicated because this is a virtualized table: rows are added and removed from the DOM as you scroll. So we need to find the data table, scroll it step by step, and collect the visible rows along the way. We can't just scroll all the way down and then collect everything, because rows are removed once they leave the viewport.
When we scroll little by little there's necessarily some overlap between the rows loaded on each scroll step, so we need to take care to remove duplicates.
I've built a simple example that works for your page:
import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome(service=Service())
driver.get("https://data.nordpoolgroup.com/auction/day-ahead/prices?deliveryDate=2025-01-15&currency=EUR&aggregation=DailyAggregate&deliveryAreas=AT,FR")

# close the cookie banner
cookie_button = WebDriverWait(driver, 5).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, '#cdk-overlay-0 .btn'))
)
cookie_button.click()

# make sure the data grid is visible
WebDriverWait(driver, 5).until(
    EC.visibility_of_element_located((By.ID, 'dailyAggregateGrid'))
)

# we scroll incrementally until we reach the full scroll height
scroll_max_height = driver.execute_script(
    "return document.querySelector('#dailyAggregateGrid .dx-scrollable-container').scrollHeight"
)
print(f"Scroll height is: {scroll_max_height}")

scroll_increment = 200
scroll_height = 0
while scroll_height < scroll_max_height:
    scroll_height += scroll_increment
    print(f"Scrolling to {scroll_height}")
    driver.execute_script(
        f"document.querySelector('#dailyAggregateGrid .dx-scrollable-container').scrollTo(0, {scroll_height})"
    )
    # wait some time for the new rows to render. Alternatively, you can watch for DOM changes
    time.sleep(1)
    # TODO: read the elements here. Make sure to handle duplicates as there's some overlap
    item_dates = driver.find_elements(By.CSS_SELECTOR, "#dailyAggregateGrid .dx-datagrid-first-header")
    print([i.text for i in item_dates])
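For the deduplication part, one simple option is to merge each scroll step's batch of rows into a dict keyed by a field that uniquely identifies the row (here the delivery date). This is a minimal sketch with simulated batch data, not the actual scraped output; the row structure and values are assumptions you'd adapt to what you extract from the grid:

```python
# Consecutive scroll steps overlap, so merge each batch into a dict
# keyed by a stable field; later duplicates overwrite the identical
# earlier entry, and dicts preserve insertion (i.e. scroll) order.
def merge_batches(batches):
    rows_by_date = {}
    for batch in batches:
        for row in batch:
            rows_by_date[row["date"]] = row
    return list(rows_by_date.values())

# Simulated output of three scroll steps with overlapping rows
batches = [
    [{"date": "2025-01-13", "price": 91.2}, {"date": "2025-01-14", "price": 88.5}],
    [{"date": "2025-01-14", "price": 88.5}, {"date": "2025-01-15", "price": 95.1}],
    [{"date": "2025-01-15", "price": 95.1}, {"date": "2025-01-16", "price": 90.0}],
]

rows = merge_batches(batches)
print([r["date"] for r in rows])
# ['2025-01-13', '2025-01-14', '2025-01-15', '2025-01-16']
```

In the scraper above you'd call the merge inside the while loop, feeding it the rows read at each scroll position, and read the final list once the loop finishes.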
Hope this helps!