python · selenium · web-crawler

Python Selenium click google "I agree" button


I am trying to scrape some Google data, but first I want to click the 'I agree' button that Google pops up. This is the script I use to do that:

import time
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

search_question = input("Ask a question: ")

driver = webdriver.Chrome("*Your Webdriver location*")
driver.wait = WebDriverWait(driver, 5)

driver.get("https://google.com")

time.sleep(1)
agree = driver.wait.until(EC.presence_of_element_located((By.XPATH, '//*[@id="introAgreeButton"]/span/span')))
agree.click()
# time.sleep(0.2)

search = driver.find_element_by_class_name("gLFyf")
search.send_keys(search_question)
search.send_keys(Keys.ENTER)

The problem is that Selenium doesn't seem to locate the button, so I get a timeout error. (I have also tried find_element_by_xpath, and it still doesn't work.)


Solution

  • If you scroll up in the devtools inspector, you'll notice that your element is inside an iframe (screenshot: iframe in devtools).

    You need to switch to that frame first, click your button, and then switch back to the default content (the main page):

    
    driver.get("https://google.com")
    
    # switch to the iframe and click the agree button
    WebDriverWait(driver, 10).until(EC.frame_to_be_available_and_switch_to_it((By.XPATH, "//iframe")))
    agree = WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.XPATH, '//*[@id="introAgreeButton"]/span/span'))) 
    agree.click()
    
    # back to the main page
    driver.switch_to.default_content()
    

    That works for me.

    FYI: there's only one iframe on the page, which is why the XPath //iframe works. If there were multiple, you'd need to identify it more precisely (see the sketch below).
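
    A minimal sketch of what that could look like, reusing the imports from the question above. It assumes the consent iframe can be recognised by its src attribute; the "consent" substring is an assumption, so inspect the page to confirm the actual attribute value before relying on it:

    
    # hypothetical locator: match the iframe whose src contains "consent"
    # (check the real src/id/name in devtools and adjust as needed)
    WebDriverWait(driver, 10).until(
        EC.frame_to_be_available_and_switch_to_it(
            (By.XPATH, "//iframe[contains(@src, 'consent')]")
        )
    )
    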