I have an array of hashes (parsed from JSON) that contains duplicated entries, and I want to delete the duplicates:
[ {
"code" : "32F",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
},{
"code" : "32F",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
},{
"code" : "90X",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
}]
This is the desired output:
[{
"code" : "32F",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
},{
"code" : "90X",
"lon" : 0.963335,
"fint" : "2022-05-03T13:00:00",
"prec" : 0.0,
}]
Any ideas?
Thanks!
First, your syntax is not valid Ruby:
"code" : "32F",
You can't have whitespace before the colon there. The correct form is "code": "32F",
You don't even need the quotes; code: "32F", works as well (both spellings produce the symbol key :code).
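For example, in IRB (assuming Ruby 2.2+, where the quoted-label syntax is available) both spellings build the same hash:
{ "code": "32F" }  # => {:code=>"32F"}
{ code: "32F" }    # => {:code=>"32F"}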
To delete duplicates from an array, use uniq!
But be careful: uniq! returns nil when there are no duplicates to remove.
ary = [1, 1]
ary.uniq! # => [1]
ary # => [1]
ary = [1, 2]
ary.uniq! # => nil
ary # => [1, 2]
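If you want an expression that always evaluates to the deduplicated array (just a common workaround for the nil return, not something your code necessarily needs), you can fall back to the receiver:
ary = [1, 2]
ary.uniq! || ary # => [1, 2]  (uniq! returned nil, so ary itself is used)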
Or use uniq without the bang to return a new array:
ary = [1, 1]
ary.uniq # => [1]
ary # => [1, 1]
ary = [1, 2]
ary.uniq # => [1, 2]
ary # => [1, 2]
In your case
ary =
[{
code: "32F",
lon: 0.963335,
fint: "2022-05-03T13:00:00",
prec: 0.0,
},{
code: "32F",
lon: 0.963335,
fint: "2022-05-03T13:00:00",
prec: 0.0,
},{
code: "90X",
lon: 0.963335,
fint: "2022-05-03T13:00:00",
prec: 0.0,
}]
ary.uniq!
# => [{:code=>"32F", :lon=>0.963335, :fint=>"2022-05-03T13:00:00", :prec=>0.0}, {:code=>"90X", :lon=>0.963335, :fint=>"2022-05-03T13:00:00", :prec=>0.0}]
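And if you ever need to treat records as duplicates based on a single field (say :code, which is just an assumption about your data), uniq also accepts a block:
ary.uniq { |h| h[:code] }
# keeps the first hash for each distinct :code value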