Alright, I'm switching from a working Hpricot setup to libxml-ruby, partly for speed and partly because of the disappearance of _why. I looked at Nokogiri for a second, but settled on libxml-ruby for speed and longevity. I must be missing something basic, because what I'm trying to do isn't working. Here's my XML string:
<?xml version="1.0" encoding="utf-8" ?>
<feed>
  <title type="xhtml"></title>
  <entry xmlns="http://www.w3.org/2005/Atom">
    <id>urn:publicid:xx.xxx:xxxxxx</id>
    <title>US--xxx-xxxxx</title>
    <updated>2009-08-19T15:49:51.103Z</updated>
    <published>2009-08-19T15:44:48Z</published>
    <author>
      <name>XX</name>
    </author>
    <rights>blehh</rights>
    <content type="text/xml">
      <nitf>
        <head>
          <docdata>
            <doc-id regsrc="XX" />
            <date.issue norm="20090819T154448Z" />
            <ed-msg info="Eds:" />
            <doc.rights owner="xx" agent="hxx" type="none" />
            <doc.copyright holder="xx" year="2009" />
          </docdata>
        </head>
        <body>
          <body.head>
            <hedline>
              <hl1 id="headline">headline</hl1>
              <hl2 id="originalHeadline">blah blah</hl2>
            </hedline>
            <byline>john doe<byttl>staffer</byttl></byline>
            <distributor>xyz</distributor>
            <dateline>
              <location>foo</location>
            </dateline>
          </body.head>
          <body.content>
            <block id="Main">
              story content here
            </block>
          </body.content>
          <body.end />
        </body>
      </nitf>
    </content>
  </entry>
</feed>
There are about 150 such entries in the complete feed.
I just want to loop through the 150 entries and pull out content and attributes, but I'm having a hell of a time with libxml-ruby; I had it working fine with Hpricot.
This little snippet shows that I'm not even getting the entries:
require 'xml'  # libxml-ruby

parser = XML::Parser.string(file)
doc = parser.parse
entries = doc.find('//entry')
puts entries.size

entries.each do |node|
  puts node.inspect
end
Any ideas? I looked through the docs and couldn't find a simple "here's an XML file, and here's how to get x, y, and z out of it" example. This should be pretty simple.
Nokogiri has proven speed and longevity of its own, so here are some samples of how to deal with the namespaces in your sample XML. I used Nokogiri for a big RDF/RSS/Atom aggregator that processed thousands of feeds daily, using something similar to this to grab the fields I wanted before pushing them into a backend database.
require 'nokogiri'

doc = Nokogiri::XML(file)
namespace = { 'xmlns' => 'http://www.w3.org/2005/Atom' }
entries = []

doc.search('//xmlns:entry', namespace).each do |_entry|
  entry_hash = {}

  # Note the './/' prefix: a bare '//' would search from the document
  # root instead of within the current entry.
  %w[title updated published author].each do |_attr|
    entry_hash[_attr.to_sym] = _entry.at('.//xmlns:' << _attr, namespace).text.strip
  end

  entry_hash[:headlines] = _entry.search('xmlns|hedline > hl1, xmlns|hedline > hl2', namespace).map { |n| n.text.strip }
  entry_hash[:body] = _entry.at('.//xmlns:body.content', namespace).text.strip

  entries << entry_hash
end

require 'pp'
pp entries
# >> [{:title=>"US--xxx-xxxxx",
# >> :updated=>"2009-08-19T15:49:51.103Z",
# >> :published=>"2009-08-19T15:44:48Z",
# >> :author=>"XX",
# >> :headlines=>["headline", "blah blah"],
# >> :body=>"story content here"}]
Both CSS and XPath in Nokogiri can handle namespaces. Nokogiri simplifies working with them by automatically grabbing all namespaces defined on the root node, but in this XML sample the namespace is defined on the entry node, so we have to declare it manually.
I switched to CSS notation for the headlines, just to show how to do it. For convenience, Nokogiri normally allows a wildcard namespace in CSS when it has been able to find the namespace declaration, which would have simplified the selector to '|hedline > hl1' for the hl1 node.