I have written a Capybara script that runs either in headless mode or in browser mode. It tries to create a page in a wiki by clicking 'Add' in the menu bar and then 'Page' in the dropdown that opens.
This works in browser mode, but in headless mode (capybara-webkit), after the 'Page' link is clicked, an empty page is returned. Can anyone tell me why this happens?
In my code, I have this:
click_link 'Add'   # open the 'Add' dropdown in the menu bar
if ENV['BROWSER'] == 'headless'
  wait_for_ajax
  verbose("headless add page", 3)
  p page.html   # debug: dump the page after opening the dropdown
  lnk = all(:css, "#createPageLink").first
  p lnk         # debug: show the link node that was found
  wait_for_ajax
  lnk.click
else
  verbose("click Page", 3)
  click_link 'Add a Page'
  lnk = all(:css, "#createPageLink").first
  p lnk
  lnk.click
end
wait_for_ajax
p page.html     # debug: dump the page after the click
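
(Note: wait_for_ajax is not a Capybara built-in but a custom helper in my suite; a typical implementation, assuming the application loads jQuery, looks roughly like this:)

require 'timeout'

# Wait until jQuery reports no AJAX requests in flight, giving up after
# Capybara's default wait time.
def wait_for_ajax
  Timeout.timeout(Capybara.default_max_wait_time) do
    sleep 0.1 until page.evaluate_script('jQuery.active').zero?
  end
end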
Running the script in headless mode produces this output:
PAGEHTML</div></body></html>"
#<Capybara::Node::Element tag="a" path="/html/body[@id='com-atlassian-confluence']/div[@id='full-height-container']/div[@id='splitter']/div[@id='splitter-content']/div[@id='main']/div[@id='main-header']/div[@id='navigation']/ul/li[3]/div/ul[@id='add-menu-link-space']/li[1]/a[@id='createPageLink']">
""
So the link is found (p lnk shows a Capybara::Node::Element), but clicking it returns an empty page, whereas when the link is clicked in the real browser, I get the HTML behind the link.
I hope you can tell me what I am overlooking.
Ruud
Capybara-webkit is obsolete and is essentially equivalent to an 8-year-old browser. Most likely it no longer supports the JS and CSS used on the page you are interacting with. If you need headless support, you are going to be much better off not using capybara-webkit, and instead using Selenium with Chrome in headless mode, or one of the direct-to-Chrome CDP drivers (such as Apparition), for interacting with any site written or updated in the last few years.
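For example, here is a minimal sketch of registering a headless Chrome driver with Capybara (the driver names and the ENV['BROWSER'] switch are placeholders chosen to match the question's setup, not anything Capybara requires):

require 'capybara'
require 'selenium-webdriver'

# Headless Chrome via Selenium, registered under a hypothetical name.
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument('--headless')
  options.add_argument('--disable-gpu')
  options.add_argument('--window-size=1400,1000')
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

# Plain (headed) Chrome for browser mode.
Capybara.register_driver :chrome do |app|
  Capybara::Selenium::Driver.new(app, browser: :chrome)
end

# Select the driver the same way the script already selects its mode.
Capybara.default_driver = ENV['BROWSER'] == 'headless' ? :headless_chrome : :chrome

With something like that in place, both branches of the script run against a modern browser engine, and the headless-only difference in behaviour should disappear.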