I have two simple JSON files:
info.json
[
  {
    "filename": "drilldown.json",
    "line": 14,
    "column": "43-49"
  },
  {
    "filename": "drilldown.json",
    "line": 10,
    "column": "38-44"
  }
]
keywords.json
[
  {
    "keyword": " `master1` "
  },
  {
    "keyword": " `Master2` "
  }
]
I would like the result to be:
results.json
[
  {
    "filename": "drilldown.json",
    "line": 14,
    "column": "43-49",
    "keyword": " `master1` "
  },
  {
    "filename": "drilldown.json",
    "line": 10,
    "column": "38-44",
    "keyword": " `Master2` "
  }
]
In my case there is no relation between the two JSON files, but I would like them merged index-wise, as shown above.
Any pointers would be much appreciated. Thanks.
I tried these two solutions, which were the closest I found, but they assume some relation between the two JSON files:
Using `jq` to add key/value to a json file using another json file as a source
You can merge (`add`) the items based on their index in their respective arrays using `--slurp` (or `-s`) and `transpose`:
jq -s 'transpose | map(add)' info.json keywords.json
For a deep merge of two inputs, use `first * last` instead of `add`. For more than two inputs, use `reduce .[1:][] as $i (first; . * $i)` instead.
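The effect of `transpose | map(add)` can be sketched in Python (illustrative only, not the jq implementation itself; it assumes both arrays have the same length, as in the question):

```python
import json

# Shallow, index-wise merge: the Python analogue of jq's
# `transpose | map(add)`. Data copied from the question.
info = [
    {"filename": "drilldown.json", "line": 14, "column": "43-49"},
    {"filename": "drilldown.json", "line": 10, "column": "38-44"},
]
keywords = [
    {"keyword": " `master1` "},
    {"keyword": " `Master2` "},
]

# zip pairs items by index; {**a, **b} merges each pair shallowly.
merged = [{**a, **b} for a, b in zip(info, keywords)]
print(json.dumps(merged, indent=2))
```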
An iterative approach could skim through all the items using `to_entries` to access the indices, then add them successively using `+=`:
jq 'reduce (inputs | to_entries)[] as {$key, $value} (.; .[$key] += $value)' info.json keywords.json
Again, use `*=` instead of `+=` for a deep merge.
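The same reduce-by-index idea can be sketched in Python for illustration: start from the first file's items and fold the second file's items in by their index.

```python
# Python analogue of jq's
# `reduce (inputs | to_entries)[] as {$key, $value} (.; .[$key] += $value)`.
# Data copied from the question; this is a sketch, not the jq program.
result = [
    {"filename": "drilldown.json", "line": 14, "column": "43-49"},
    {"filename": "drilldown.json", "line": 10, "column": "38-44"},
]
keywords = [
    {"keyword": " `master1` "},
    {"keyword": " `Master2` "},
]

for idx, item in enumerate(keywords):
    result[idx].update(item)  # shallow merge, like `+=` on jq objects
```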
You could also build up the entire result from scratch using `setpath` based on the `--stream` representation of the inputs:
jq --stream -n 'reduce (inputs | select(has(1))) as $i (.; setpath($i[0]; $i[1]))' info.json keywords.json
This approach always performs a deep merge.
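Conceptually, `--stream` turns each input into (path, leaf-value) pairs, and `setpath` replays them into a fresh document. A rough Python analogy (the events are hand-written for this example, and the simplified `setpath` below only handles the two-level `[index, key]` paths that occur here):

```python
import json

# Rough analogy of jq's --stream + setpath: replaying (path, leaf)
# events rebuilds the merged document from scratch. Because later
# events for the same index just add keys, this is a deep merge.
events = [
    ([0, "filename"], "drilldown.json"),
    ([0, "line"], 14),
    ([0, "column"], "43-49"),
    ([1, "filename"], "drilldown.json"),
    ([1, "line"], 10),
    ([1, "column"], "38-44"),
    ([0, "keyword"], " `master1` "),
    ([1, "keyword"], " `Master2` "),
]

def setpath(root, path, value):
    # Simplified: only supports [list-index, object-key] paths.
    idx, key = path
    while len(root) <= idx:   # grow the list as needed
        root.append({})
    root[idx][key] = value    # set the leaf at the given path

result = []
for path, leaf in events:
    setpath(result, path, leaf)

print(json.dumps(result, indent=2))
```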
Output:
[
  {
    "filename": "drilldown.json",
    "line": 14,
    "column": "43-49",
    "keyword": " `master1` "
  },
  {
    "filename": "drilldown.json",
    "line": 10,
    "column": "38-44",
    "keyword": " `Master2` "
  }
]