How to change instance size of an existing Kubernetes Cluster

Just recently, I was trying to reverse my poor decision of starting my Kubernetes Cluster on AWS with very small t2.micro nodes.

If you’re on AWS, your nodes were probably created via an Auto Scaling group.

To change your instance size, you’ll need to:

  1. Copy the launch configuration that your Kubernetes nodes are currently running on.
  2. Change the instance size in the copied launch configuration.
  3. Update the Kubernetes Auto Scaling Group to use the new launch configuration (sketched just after this list).
  4. ???
  5. Profit
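
In awscli terms, steps 1 to 3 would look roughly like this. The names (k8s-nodes, k8s-nodes-v2, k8s-asg), the m4.large size and the AMI ID are placeholders for illustration, and the create-launch-configuration call would also need the security groups, key pair and the other settings copied over from the original:

$ aws autoscaling describe-launch-configurations \
    --launch-configuration-names k8s-nodes

$ aws autoscaling create-launch-configuration \
    --launch-configuration-name k8s-nodes-v2 \
    --image-id ami-xxxxxxxx \
    --instance-type m4.large

$ aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name k8s-asg \
    --launch-configuration-name k8s-nodes-v2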

Well, not that fast!

If you do just the above steps, you’ll probably notice that new instances launch with the new instance size. But wait, why is the new node not joining the cluster at all?

Getting it to work

Don’t fret; the only step you’re missing is replacing the user-data in the newly created launch configuration.

First, run this in your awscli:

$ aws autoscaling describe-launch-configurations

Look out for the original launch configuration JSON object. The name key should be ...kubernetes.

Once you’ve found that element, look for the user-data key and copy its value into the User-Data section of your newly created launch configuration. Remember to tick the Input is already base64 encoded checkbox.


This field is in the “Configure details” step of the Create new launch configuration page. You’ll see it once you expand the Advanced Details section.
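
If you’d rather grab the value with the awscli instead of fishing it out of the JSON by hand, a JMESPath query roughly like this should do it. The 'kubernetes' filter is just an assumption about how your launch configuration is named, so adjust it to whatever yours is called:

$ aws autoscaling describe-launch-configurations \
    --query "LaunchConfigurations[?contains(LaunchConfigurationName, 'kubernetes')].UserData" \
    --output text > user-data.b64

The API should return the value already base64 encoded, which is exactly what the Input is already base64 encoded checkbox expects.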


Save and you’re done!
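
Once the Auto Scaling Group launches instances with the new launch configuration, you can check that the bigger nodes actually register this time with something along these lines:

$ kubectl get nodes -o wide

$ aws autoscaling describe-auto-scaling-instances --output table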


When you’re copying the user-data, make sure your editor doesn’t add line breaks. You can use this nifty little tool http://www.textfixer.com/tools/remove-line-breaks.php to strip any line breaks that sneak in.
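
If you’d rather not paste the value into a website, any tool that strips newlines works just as well; assuming you saved the copied value to user-data.b64, something like this does the trick:

$ tr -d '\r\n' < user-data.b64 > user-data-oneline.b64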


References

How can we add AWS instances of a different size - Stack Overflow