cucumber · capybara · capybara-webkit

Capybara cannot match an XML page


I have a problem matching response text on an XML page with Capybara.

When I use page.should(have_content(arg1)), Capybara raises an error saying there is no html element (and there shouldn't be one, since it's XML).

When I use page.should(have_xpath(arg1)), it raises Element at 40 no longer present in the DOM (Capybara::Webkit::NodeNotAttachedError).

What is the correct way to test XML?


Solution

  • When using capybara-webkit, the driver uses the browser's HTML DOM to look for elements. That doesn't work here, because an XML response isn't an HTML document.

    One workaround is to fall back to Capybara's string implementation:

    xml = Capybara.string(page.body)
    expect(xml).to have_xpath(arg1)
    expect(xml).to have_content(arg1)
    

    Assuming your page returns a content type of text/xml, capybara-webkit won't modify the response body at all, so you can pass it straight to Capybara.string (or directly to Nokogiri if you prefer).
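
    If you'd rather not rely on Capybara's matchers at all, you can parse page.body yourself. Here is a minimal sketch using Ruby's stdlib REXML, with a hypothetical XML body and XPath standing in for page.body and arg1 (Nokogiri would work the same way if it's already in your Gemfile):

    require "rexml/document"

    # Hypothetical XML response body, standing in for page.body.
    body = <<~XML
      <?xml version="1.0"?>
      <orders>
        <order id="1"><status>shipped</status></order>
        <order id="2"><status>pending</status></order>
      </orders>
    XML

    doc = REXML::Document.new(body)

    # Equivalent of have_xpath: assert that the node exists.
    node = REXML::XPath.first(doc, "//order[@id='1']/status")
    raise "missing node" if node.nil?

    # Equivalent of have_content: assert on the node's text.
    raise "unexpected status" unless node.text == "shipped"

    This sidesteps the driver's DOM entirely, so there is no html element requirement and no NodeNotAttachedError.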