Write Concern Problem when rotating MongoDB nodes

Hello everyone,

We are facing odd behaviour in our MongoDB cluster that is causing slowness in the database.
We have a cluster running Percona Server for MongoDB 7.0.28-15, and it consists of 4 instances:

1 Primary node
2 Secondary nodes
1 Hidden Secondary

We have them on GCP under instance groups, so we can rotate/recreate an instance whenever we need to. However, during the rotation process we see an increase in "Slow query" log entries that spend longer than usual waiting for the majority write concern.

Our rotation process consists of issuing db.getSiblingDB("admin").shutdownServer() on the secondary and then recreating the instance, which automatically rejoins the cluster; for the primary, we issue a stepDown before the shutdown.
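Roughly, the per-node commands look like this in mongosh (simplified; the rs.stepDown() arguments are illustrative, and our automation handles the instance recreation):

// On a secondary being rotated: clean shutdown, then recreate the instance.
db.getSiblingDB("admin").shutdownServer()

// On the primary: step down first so an election completes before the
// shutdown (step down for 60s, allow 10s for a secondary to catch up).
rs.stepDown(60, 10)
db.getSiblingDB("admin").shutdownServer()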

Could you help us understand what is happening? See one of the full log entries below; we get tons of them when rotating:

{
  "t": {
    "$date": "2026-01-08T14:28:24.589+00:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn145",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "applicationStorage.$cmd",
    "appName": "opgcore",
    "command": {
      "update": "###",
      "ordered": "###",
      "writeConcern": {
        "w": "###",
        "wtimeout": "###",
        "j": "###"
      },
      "txnNumber": "###",
      "$db": "###",
      "$clusterTime": {
        "clusterTime": "###",
        "signature": {
          "hash": "###",
          "keyId": "###"
        }
      },
      "lsid": {
        "id": "###"
      }
    },
    "numYields": 0,
    "reslen": 245,
    "locks": {
      "ParallelBatchWriterMode": {
        "acquireCount": {
          "r": 2
        }
      },
      "FeatureCompatibilityVersion": {
        "acquireCount": {
          "w": 2
        }
      },
      "ReplicationStateTransition": {
        "acquireCount": {
          "w": 3
        }
      },
      "Global": {
        "acquireCount": {
          "w": 2
        }
      },
      "Database": {
        "acquireCount": {
          "w": 2
        }
      },
      "Collection": {
        "acquireCount": {
          "w": 2
        }
      }
    },
    "flowControl": {
      "acquireCount": 1
    },
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "writeConcern": {
      "w": "majority",
      "j": false,
      "wtimeout": 10000,
      "provenance": "clientSupplied"
    },
    "waitForWriteConcernDurationMillis": 100,
    "storage": {},
    "cpuNanos": 471426,
    "remote": "172.20.0.100:37476",
    "protocol": "op_msg",
    "durationMillis": 100
  }
}

How many nodes are you recreating at the same time? With a 4-node cluster, your majority write concern is 3, so if more than 1 node is offline or becomes unresponsive at the same time, you won't be able to acknowledge majority writes.
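The arithmetic (ignoring arbiters) is simply a majority of the voting members; a quick mongosh check:

// Majority of N voting members: floor(N / 2) + 1. With 4 voters that is 3.
const voters = rs.conf().members.filter(m => m.votes > 0).length;
print(`voting members: ${voters}, write majority: ${Math.floor(voters / 2) + 1}`);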

We are recreating only one node at a time, and we wait until that one is back before starting the next. It is a monitored process, so we guarantee communication between the other nodes.

Also, with one of the 4 nodes being a hidden node, wouldn’t the majority write concern be 2 instead of 3?

A hidden node is still a voter by default, so it counts towards the quorum (unless you also set it with votes: 0). If you are recreating one node at a time, there should be no issue with the majority write concern.
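You can confirm what the server itself counts; on recent versions replSetGetStatus exposes these fields directly:

// How the replica set counts votes and write majority right now.
const s = db.adminCommand({ replSetGetStatus: 1 });
print(`majorityVoteCount: ${s.majorityVoteCount}, writeMajorityCount: ${s.writeMajorityCount}`);
// Per-member voting configuration:
rs.conf().members.forEach(m => print(`${m.host} votes=${m.votes} hidden=${m.hidden}`));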

We set it as a non-voter. To give more details, this is our configuration right now:

Truncated rs.status() output showing the majority-related fields:
{
  "set" : "appstore",
  "date" : ISODate("2026-01-09T13:18:30.255Z"),
  "myState" : 2,
  "term" : NumberLong(81),
  "syncSourceHost" : "10.100.64.39:27017",
  "syncSourceId" : 8,
  "heartbeatIntervalMillis" : NumberLong(2000),
  "majorityVoteCount" : 2,
  "writeMajorityCount" : 2,
  "votingMembersCount" : 3,
  "writableVotingMembersCount" : 3,
  "optimes" : {
    "lastCommittedOpTime" : {
      "ts" : Timestamp(1767964710, 10),
      "t" : NumberLong(81)
    }
  ...

The rs.conf() for the nodes:

appstore [direct: other] test> rs.conf()
{
  _id: 'appstore',
  version: 189865,
  term: 81,
  members: [
    {
      _id: 5,
      host: '10.100.71.214:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: true,
      priority: 0,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 0
    },
    {
      _id: 7,
      host: '10.100.64.38:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 8,
      host: '10.100.64.39:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    },
    {
      _id: 9,
      host: '10.100.64.43:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('5dab1192b6e40c0d078581cf')
  }
}
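For completeness, this is roughly how we made that member a hidden non-voter (a sketch; the member array index of the hidden node is assumed to be 0 here, matching _id: 5 above):

// Mark the hidden member as a non-voting, non-electable secondary.
cfg = rs.conf()
cfg.members[0].hidden = true
cfg.members[0].priority = 0   // hidden members must have priority 0
cfg.members[0].votes = 0      // removes it from the voting quorum
rs.reconfig(cfg)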