I am using MongoDB version 3.0.14 (with WiredTiger) as a 4-member replica set.
I am facing a strange issue in production where, suddenly, most queries start to stall waiting for the Global intent shared lock (very high Global.timeAcquiringMicros.r).
This happens on a secondary mongod, where the Java client is sending normal read queries with the secondaryPreferred read preference. Two example slow-query log lines:
2020-06-09T12:30:23.959+0000 I COMMAND [conn210676] query <db>.<collection> query: { $query: { _id: { $gte: "<value>", $lt: "<value>" } }, $orderby: { _id: -1 }, $maxTimeMS: 16000 } planSummary: IXSCAN { _id: 1 } ntoreturn:0 ntoskip:0 nscanned:11 nscannedObjects:11 keyUpdates:0 writeConflicts:0 numYields:6 nreturned:11 reslen:270641 locks:{ Global: { acquireCount: { r: 14 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 4580895 } }, Database: { acquireCount: { r: 7 } }, Collection: { acquireCount: { r: 7 } } } 1706ms
2020-06-09T12:30:25.887+0000 I COMMAND [conn210607] query <db>.<collection> query: { $query: { _id: { $gte: "<value1>", $lt: "<value2>" } }, $orderby: { _id: -1 }, $maxTimeMS: 16000 } planSummary: IXSCAN { _id: 1 } cursorid:76676946055 ntoreturn:0 ntoskip:0 nscanned:40 nscannedObjects:40 keyUpdates:0 writeConflicts:0 numYields:12 nreturned:40 reslen:1062302 locks:{ Global: { acquireCount: { r: 26 }, acquireWaitCount: { r: 1 }, timeAcquiringMicros: { r: 21622755 } }, Database: { acquireCount: { r: 13 } }, Collection: { acquireCount: { r: 13 } } } 3639ms
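For readability, the $query/$orderby form in those log lines is the legacy wrapper for roughly this shell query (placeholders kept exactly as in the log; "<collection>" stands for the real collection name):

// Approximate shell equivalent of the logged operations.
db.getCollection("<collection>")
    .find({ _id: { $gte: "<value1>", $lt: "<value2>" } })
    .sort({ _id: -1 })
    .maxTimeMS(16000)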
These queries take the intent shared lock (r) on the Global resource and had to wait 4580895 and 21622755 microseconds respectively to acquire it (according to timeAcquiringMicros). I have the following questions:

1. These are simple range queries that use the _id index (see the explain output below), so why do they spend so long waiting for the Global lock?
2. Since Global.timeAcquiringMicros.r is high, I assume some other operation is holding an 'R' or 'W' lock on the Global resource. What is the way to capture such operations? I have tried db.currentOp() but couldn't find anything.

Explain result of the 2nd query (the one with Global.timeAcquiringMicros.r = 21622755):
{
    "queryPlanner" : {
        "plannerVersion" : 1,
        "namespace" : "<db>.<coll>",
        "indexFilterSet" : false,
        "parsedQuery" : {
            "$and" : [
                {
                    "_id" : {
                        "$lt" : "<stop>"
                    }
                },
                {
                    "_id" : {
                        "$gte" : "<start>"
                    }
                }
            ]
        },
        "winningPlan" : {
            "stage" : "FETCH",
            "inputStage" : {
                "stage" : "IXSCAN",
                "keyPattern" : {
                    "_id" : 1
                },
                "indexName" : "_id_",
                "isMultiKey" : false,
                "direction" : "backward",
                "indexBounds" : {
                    "_id" : [
                        "(\"<stop>\", \"<start>\"]"
                    ]
                }
            }
        },
        "rejectedPlans" : [ ]
    },
    "executionStats" : {
        "executionSuccess" : true,
        "nReturned" : 69,
        "executionTimeMillis" : 57,
        "totalKeysExamined" : 69,
        "totalDocsExamined" : 69,
        "executionStages" : {
            "stage" : "FETCH",
            "nReturned" : 69,
            "executionTimeMillisEstimate" : 50,
            "works" : 70,
            "advanced" : 69,
            "needTime" : 0,
            "needFetch" : 0,
            "saveState" : 2,
            "restoreState" : 2,
            "isEOF" : 1,
            "invalidates" : 0,
            "docsExamined" : 69,
            "alreadyHasObj" : 0,
            "inputStage" : {
                "stage" : "IXSCAN",
                "nReturned" : 69,
                "executionTimeMillisEstimate" : 0,
                "works" : 70,
                "advanced" : 69,
                "needTime" : 0,
                "needFetch" : 0,
                "saveState" : 2,
                "restoreState" : 2,
                "isEOF" : 1,
                "invalidates" : 0,
                "keyPattern" : {
                    "_id" : 1
                },
                "indexName" : "_id_",
                "isMultiKey" : false,
                "direction" : "backward",
                "indexBounds" : {
                    "_id" : [
                        "(\"<stop>\", \"<start>\"]"
                    ]
                },
                "keysExamined" : 69,
                "dupsTested" : 0,
                "dupsDropped" : 0,
                "seenInvalidated" : 0,
                "matchTested" : 0
            }
        }
    },
    "serverInfo" : {
        "host" : "<host>",
        "port" : 27017,
        "version" : "3.0.14",
        "gitVersion" : "08352afcca24bfc145240a0fac9d28b978ab77f3"
    },
    "ok" : 1
}
I was able to capture one of the currentOp() entries while this kind of behaviour was happening:
{
    "desc" : "conn225729",
    "threadId" : "0x1321c1400",
    "connectionId" : 225729,
    "opid" : 189970948,
    "active" : false,
    "op" : "getmore",
    "ns" : "<db>.<coll>",
    "query" : {
    },
    "client" : "<client-ip>:55596",
    "numYields" : 0,
    "locks" : {
        "Global" : "r"
    },
    "waitingForLock" : true,
    "lockStats" : {
        "Global" : {
            "acquireCount" : {
                "r" : NumberLong(1)
            },
            "acquireWaitCount" : {
                "r" : NumberLong(1)
            },
            "timeAcquiringMicros" : {
                "r" : NumberLong(7500907)
            }
        }
    }
}
This getmore operation is waiting for the Global intent shared (r) lock ("waitingForLock" : true), and its Global.timeAcquiringMicros.r is already 7500907 microseconds. Please help, and point me to MongoDB docs that explain this issue. Also, let me know if any other logs are required.
A MongoDB secondary node retrieves operation log events from the primary in batches. When it applies a batch of oplog events, it takes a global exclusive write lock (W).
A read intent lock (r) is mutually exclusive with the W lock.
This means that writes and reads must interleave on the secondary nodes, so heavy writes can block reads, and heavy reads can delay replication.
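One way to see this while it is happening is to sample the global lock queues from serverStatus() on the secondary; if the number of queued readers spikes each time an oplog batch is applied, your reads are queueing behind that exclusive lock. A rough sketch (field names as in the serverStatus() output of that version family):

// Sample the global lock queue on the secondary a few times.
// A spike in currentQueue.readers while oplog batches apply means reads are
// waiting behind the replication batch's exclusive lock.
for (var i = 0; i < 10; i++) {
    var gl = db.serverStatus().globalLock;
    print(new Date().toISOString() +
          "  queued r/w: " + gl.currentQueue.readers + "/" + gl.currentQueue.writers +
          "  active r/w: " + gl.activeClients.readers + "/" + gl.activeClients.writers);
    sleep(500);
}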
Non-blocking secondary reads were a major feature of MongoDB 4.0 a couple of years ago.
If you can upgrade, that specific lock contention should not happen anymore.