Agent failed to start

Hi! I have a MongoDB 5.0 cluster, so my URI is:

PBM_MONGODB_URI="mongodb://pbmagent:password@ip-10-100-10-240.ap-southeast-1.compute.internal:27017,ip-10-100-10-28.ap-southeast-1.compute.internal:27017,ip-10-100-10-31.ap-southeast-1.compute.internal:27017/admin?replicaSet=myreplica"

agent restart fails with error:

pbm-agent[29828]: 2022/04/17 21:48:02 connect to the node: connect: create mongo client: a direct connection cannot be made if multiple hosts are specified

Hi, the pbm-agent connection string on each node should point to the local mongod process; do not use the replica set connection string in this case.
For example:

PBM_MONGODB_URI="mongodb://pbagent:password@localhost:27017"
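
The original error is the driver refusing a multi-host URI for what the agent opens as a direct, single-node connection. As a minimal sketch (plain Python with stdlib URL parsing; the helper name and the example hosts are made up), these are the two things worth checking in a URI before restarting the agent:

```python
# Hypothetical sanity check for a pbm-agent URI: the agent connects directly to
# one mongod, so the URI must list exactly one host and no replicaSet option.
from urllib.parse import urlsplit, parse_qs

def check_agent_uri(uri: str) -> list:
    problems = []
    parts = urlsplit(uri)
    hosts = parts.netloc.rsplit("@", 1)[-1]  # drop the user:password@ prefix, if any
    if "," in hosts:
        problems.append("URI lists multiple hosts; the agent needs a single node")
    if "replicaset" in {k.lower() for k in parse_qs(parts.query)}:
        problems.append("remove the replicaSet option from the agent URI")
    return problems

bad = "mongodb://pbmagent:pw@host-a:27017,host-b:27017/admin?replicaSet=myreplica"
good = "mongodb://pbmagent:pw@localhost:27017/?authSource=admin"
print(check_agent_uri(bad))   # both checks fail
print(check_agent_uri(good))  # []
```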

Hi.
Thanks for the reply, that was my #1 idea and I did it, but now pbm backs up only the admin DB and ignores mine. Could it be an auth/permission issue? I applied the pbmuser creation script from the PBM guide and I am able to connect to Mongo as pbmuser and list all the DBs. Any ideas?
On my test cluster of the same version (5.0) PBM works fine, but Mongo auth is turned off there, so that is the only thing I suspect is involved.

That sounds like a permissions issue. Can you check that you created the backup role and user on each replica set (including the config server replica set)?

db.getSiblingDB("admin").createRole({
    "role": "pbmAnyAction",
    "privileges": [
        { "resource": { "anyResource": true },
          "actions": [ "anyAction" ] }
    ],
    "roles": []
});
db.getSiblingDB("admin").createUser({
    "user": "pbmuser",
    "pwd": "secretpwd",
    "roles": [
        { "db": "admin", "role": "readWrite", "collection": "" },
        { "db": "admin", "role": "backup" },
        { "db": "admin", "role": "clusterMonitor" },
        { "db": "admin", "role": "restore" },
        { "db": "admin", "role": "pbmAnyAction" }
    ]
});
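
To rule out a permissions gap quickly, you can compare what db.getSiblingDB("admin").getUser("pbmuser") reports against the roles the script grants. A small sketch of that comparison (plain Python; the helper name is made up, and user_doc just mirrors the shape of the getUser output):

```python
# Hypothetical check: given the document returned by getUser("pbmuser"),
# report any of the roles the PBM setup grants that are missing.
REQUIRED = {"readWrite", "backup", "clusterMonitor", "restore", "pbmAnyAction"}

def missing_roles(user_doc: dict) -> set:
    granted = {r["role"] for r in user_doc.get("roles", []) if r.get("db") == "admin"}
    return REQUIRED - granted

user_doc = {  # shape of db.getSiblingDB("admin").getUser("pbmuser") output
    "user": "pbmuser",
    "roles": [{"db": "admin", "role": r}
              for r in ("readWrite", "backup", "clusterMonitor",
                        "restore", "pbmAnyAction")],
}
print(missing_roles(user_doc))  # set() -> nothing missing
```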

Hi Ivan,
applying it on the SECONDARYs failed with the error:

uncaught exception: Error: not master

I tried connecting to /admin, /?replicaSet=xxxx, and /?replSetName=xxxx, all with the same result.
Of course I used only one host name in the URI, not the whole cluster list.

What failed to apply? Can you describe exactly the steps you are taking and where you see this error?

Hi Ivan,

I ssh to a SECONDARY, connect to the Mongo node (not the replica set), and apply the script for role creation:

root@ip-10-100-10-145:/home/ubuntu# mongo "mongodb://pbmuser:password@localhost:27017"
MongoDB shell version v5.0.7
connecting to: mongodb://localhost:27017/?compressors=disabled&gssapiServiceName=mongodb
MongoDB server version: 5.0.7
mybd:SECONDARY> db.getSiblingDB("admin").createRole({ "role": "pbmAnyAction",
...       "privileges": [
...          { "resource": { "anyResource": true },
...            "actions": [ "anyAction" ]
...          }
...       ],
...       "roles": []
...    });
uncaught exception: Error: not master :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype.createRole@src/mongo/shell/db.js:1656:15
@(shell):1:1
mybd:SECONDARY>

How can I apply the script and create the pbm role/user on the secondary nodes?

Hello, you only need to create the user on the primaries. It will be propagated via replication.

Hi Ivan,

OK, so why does pbm-agent back up only the admin DB now? How can I debug this? Logically, having access to the admin DB, the agent should be able to access all the others.

If you have already confirmed the privileges are fine, I suggest checking the pbm-agent logs on all nodes for clues.

Hi Ivan,

Thanks, I found the issue:

"The index specification for index 'hostname_1' contains invalid field names. The field 'safe' is not valid for an index specification. Specification: { v: 1, unique: true, key: { hostname: 1 }, name: \"hostname_1\", ns: \"mymongo.hostowner\", safe: null }. Run the 'collMod' command on the collection without any arguments to remove the invalid index options"

I applied collMod to all the problematic collections and the backup works now.
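
For reference, the server's complaint is about a leftover legacy option (safe, an old write-concern flag) stored inside the index spec, and collMod with no arguments strips such fields. The same check can be sketched in plain Python (hypothetical helper; the list of valid option names is illustrative, not exhaustive):

```python
# Hypothetical helper mirroring the server's check: flag fields in a stored
# index spec that are not recognized index options. 'safe' (a legacy
# write-concern flag) is the offender in the error above.
VALID_INDEX_OPTIONS = {
    "v", "key", "name", "ns", "unique", "background", "sparse",
    "expireAfterSeconds", "partialFilterExpression", "collation", "weights",
    "default_language", "language_override", "textIndexVersion",
    "2dsphereIndexVersion", "bits", "min", "max", "storageEngine",
    "wildcardProjection", "hidden",
}

def invalid_index_fields(spec: dict) -> set:
    return set(spec) - VALID_INDEX_OPTIONS

# The spec from the error message above:
spec = {"v": 1, "unique": True, "key": {"hostname": 1},
        "name": "hostname_1", "ns": "mymongo.hostowner", "safe": None}
print(invalid_index_fields(spec))  # {'safe'}
```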

Glad you sorted it out. Thanks for letting us know what the problem was.