I have a document with an order array that contains ids. Sometimes I want to set specific indexes in that array:
db.getCollection('myCollection').update(
  {
    _id: 'someId',
  },
  {
    $set: {
      'order.1': 'foo2',
    },
  },
);
This works correctly and updates the second array element. If the array was empty beforehand, MongoDB pads it, so element 0 ends up as null.
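For illustration, assuming the document starts with an empty array, the update above pads the gap:

// before
{ _id: 'someId', order: [] }
// after setting 'order.1'
{ _id: 'someId', order: [ null, 'foo2' ] }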
Trying the same with bulkWrite leads to really weird behavior:
db.getCollection('myCollection').bulkWrite([
  {
    updateOne: {
      filter: { _id: 'eager' },
      update: [
        {
          $set: {
            'order.1': 'foo3',
          },
        },
      ],
    },
  },
]);
If the array in the db is empty, this does nothing. However, if there are elements in the array, it replaces every array element with an object of the form { '1': 'foo3' }.
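To make it concrete, with a hypothetical starting document the result looks like this:

// before
{ _id: 'eager', order: [ 'a', 'b' ] }
// after the pipeline-style bulkWrite above
{ _id: 'eager', order: [ { '1': 'foo3' }, { '1': 'foo3' } ] }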
Does anyone have any idea why this happens and how to make bulkWrite behave? :D
In the single update query, you're using a regular update document with update operators like $set (Mongo Playground).
But in the bulkWrite example, you're wrapping the update in [], which is the aggregation pipeline format for updateOne. The pipeline stage $set interprets the dotted field notation differently: 'order.1' means "the field named 1 inside order", and since order is an array, the expression is applied to each element, which is why every element turns into { '1': 'foo3' }.
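Conceptually, that pipeline stage behaves much like mapping over the array; this is an illustrative sketch for scalar elements, not the server's exact implementation:

// roughly what the pipeline-form $set does to each element
{
  $set: {
    order: {
      $map: {
        input: '$order',
        // every element becomes a document with a field named '1'
        in: { '1': 'foo3' },
      },
    },
  },
}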
Remove the [] so that it is applied as a regular update:
db.getCollection('myCollection').bulkWrite([
  {
    updateOne: {
      filter: { _id: 'eager' },
      update: { $set: { 'order.1': 'foo3' } },
    },
  },
]);
And with regard to "I used an array because I wanted to update multiple fields (with a $set each)": you can use a single $set with multiple fields specified. Example from the docs:
{ $set: { "a.2": <new value>, "a.10": <new value> } }
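In bulkWrite form, a minimal sketch (reusing the collection and filter from above, with illustrative values) could look like:

db.getCollection('myCollection').bulkWrite([
  {
    updateOne: {
      filter: { _id: 'eager' },
      // one update operator document, several target indexes
      update: { $set: { 'order.0': 'foo1', 'order.1': 'foo2' } },
    },
  },
]);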
However, if you need to reference the value of a field in the update, then you will need the pipeline notation, with the $set expression rewritten to work as an aggregation stage.
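For example, here is a minimal pipeline-form sketch, assuming a hypothetical label field whose current value should be appended to order (and that order already exists as an array):

db.getCollection('myCollection').bulkWrite([
  {
    updateOne: {
      filter: { _id: 'eager' },
      // pipeline form: the $set value is an aggregation expression,
      // so it can reference other fields like '$label'
      update: [
        {
          $set: {
            order: { $concatArrays: ['$order', ['$label']] },
          },
        },
      ],
    },
  },
]);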