Some observations from working with PetSets, introduced in Kubernetes 1.3, to build a MongoDB replica set.
Here’s my PetSet YAML:
# mypetset.yml
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
  name: a-mongo
  labels:
    name: a-mongo
    tier: database
    type: mongo
spec:
  ports:
  - port: 27017
    name: mongo
  type: NodePort
  selector:
    tier: database
    name: a-mongo
---
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: a-mongo
spec:
  serviceName: "a-mongo"
  replicas: 3
  template:
    metadata:
      labels:
        name: a-mongo
        tier: database
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.0.12
        command:
        - mongod
        - "--replSet"
        - rs0
        imagePullPolicy: Always
        ports:
        - containerPort: 27017
          name: mongo
        resources:
          requests:
            cpu: "100m"
            memory: "64Mi"
          limits:
            cpu: "1"
            memory: "1Gi"
        volumeMounts:
        - name: data
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        volume.alpha.kubernetes.io/storage-class: fo
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
If you’re on AWS or GCE, your mongo instances are backed by persistent storage (Elastic Block Store, GCE Persistent Disk), which ensures that even if one of your pets dies, it will reuse the same storage when it’s revived.
So with your config file ready, let’s initialize your PetSet.
$ kubectl create -f mypetset.yml
Give it a while and you should see 3 pods appear.
Note: In PetSets, pods are started sequentially, one after another, so if one of your pets fails to start, any pets later in the order will not be started at all.
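You can see this ordered startup for yourself with kubectl get pods -w, which streams status changes as they happen; a-mongo-1 only starts once a-mongo-0 is up. The exact output below is illustrative:

$ kubectl get pods -w
NAME        READY     STATUS              RESTARTS   AGE
a-mongo-0   0/1       ContainerCreating   0          2s
a-mongo-0   1/1       Running             0          44s
a-mongo-1   0/1       Pending             0          0s
a-mongo-1   1/1       Running             0          39s
a-mongo-2   0/1       Pending             0          0s
a-mongo-2   1/1       Running             0          37s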
$ kubectl get pods
a-mongo-0   1/1       Running   0          22m
a-mongo-1   1/1       Running   0          21m
a-mongo-2   1/1       Running   0          21m
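Each pet also gets its own PersistentVolumeClaim, named <claim template name>-<pod name>, which is what makes the storage reuse described earlier possible. You can confirm the claims were created and bound; the volume names and ages here are illustrative:

$ kubectl get pvc
NAME             STATUS    VOLUME         CAPACITY   ACCESSMODES   AGE
data-a-mongo-0   Bound     pvc-0ee46f46   10Gi       RWO           22m
data-a-mongo-1   Bound     pvc-1a2b3c4d   10Gi       RWO           21m
data-a-mongo-2   Bound     pvc-5e6f7a8b   10Gi       RWO           21m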
OK, looks like we’re done! Let’s initiate the cluster.
First, let’s get the IPs of all the pods in our cluster.
$ kubectl describe pods a-mongo | grep IP
IP:    10.244.6.3
IP:    10.244.7.3
IP:    10.244.6.4
Awesome. Now keep this somewhere handy.
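If you prefer, kubectl get pods -o wide prints each pod’s IP in its own column, which is a bit easier to read (the node names here are illustrative):

$ kubectl get pods -o wide
NAME        READY     STATUS    RESTARTS   AGE       IP           NODE
a-mongo-0   1/1       Running   0          22m       10.244.6.3   node-1
a-mongo-1   1/1       Running   0          21m       10.244.7.3   node-2
a-mongo-2   1/1       Running   0          21m       10.244.6.4   node-3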
$ kubectl exec -it a-mongo-0 /bin/bash
root@a-mongo-0 $ mongo
> rs.initiate()
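It’s worth checking what rs.initiate() actually registered before going further. By default, mongod names the first member after the container’s hostname rather than its IP. The output below is abbreviated and illustrative:

> rs.conf()
{
    "_id" : "rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "a-mongo-0:27017",
            ...
        }
    ]
}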
One of the biggest caveats in setting up your mongo replica set on PetSets is that you need to perform an extra step before adding the rest of your members. We want to do this first so that we don’t run into an issue where we lose our PRIMARY node due to a configuration change.
> cfg = rs.conf()
> cfg.members[0].host = "10.244.6.3:27017"  // replace this with the cluster IP of your a-mongo-0 container
> rs.reconfig(cfg)
Now you can safely add your members.
Note: So what happened? Apparently MongoDB replica sets do not play nice when members are named inconsistently, i.e. if you’re using IP addresses, all of your members should use IP addresses. Since rs.initiate() registered the first member under its hostname, we rewrote it to use its IP before adding the others (see the references at the end).
rs0:PRIMARY> rs.add("10.244.7.3")
rs0:PRIMARY> rs.add("10.244.6.4")
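To double-check that all three members are now registered consistently by IP (per the note above), you can list the hosts straight out of the config; this is plain shell JavaScript over rs.conf():

rs0:PRIMARY> rs.conf().members.map(function (m) { return m.host })
[ "10.244.6.3:27017", "10.244.7.3:27017", "10.244.6.4:27017" ]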
If you do a rs.status(), you should see the other 2 members listed as SECONDARY.
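Abbreviated, the relevant part of rs.status() should look something like this (health, uptime, and timestamp fields elided):

rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "members" : [
        { "_id" : 0, "name" : "10.244.6.3:27017", "stateStr" : "PRIMARY", ... },
        { "_id" : 1, "name" : "10.244.7.3:27017", "stateStr" : "SECONDARY", ... },
        { "_id" : 2, "name" : "10.244.6.4:27017", "stateStr" : "SECONDARY", ... }
    ],
    "ok" : 1
}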
And now you have a full cluster!
References:
- Google Groups - Could not find member to sync from
- Mongo Secondaries Stuck at Startup State