Monday 12 June 2017

Increase Instance root volume size using EBS snapshots

In this article we'll explore how we can use EBS snapshots to increase the size of the root disk of a Linux instance. This is a straightforward process, but it does incur some downtime for the instance because we'll be detaching and re-attaching root disks.

So, let's get started.

Go to the Snapshots section under Elastic Block Store within the EC2 dashboard. I had already created a snapshot of an existing root volume.
Now I'll select the snapshot and, from the Actions menu, click on Create Volume. This will create a new volume from the existing snapshot.


Note that the volume from which I created the snapshot was 8 GB, but I'm setting the size of the volume to be created from the snapshot to 12 GB.
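For anyone who prefers the command line, the same two steps can be done with the AWS CLI. This is just a rough sketch; the volume ID, snapshot ID, and Availability Zone below are placeholders that you would replace with your own values.

# Snapshot the existing 8 GB root volume (vol-0123456789abcdef0 is a placeholder)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "root volume backup"

# Create a larger 12 GB volume from that snapshot, in the instance's Availability Zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 12 --availability-zone us-east-1a --volume-type gp2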

As soon as the volume creation process completes, we are taken to the Volumes dashboard, where we can see that the volume we just created is in the available state.
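If you're following along with the CLI instead of the console, you can check the new volume's state or simply wait for it to become available. The volume ID here is again a placeholder.

# Check the state of the newly created volume
aws ec2 describe-volumes --volume-ids vol-0abcdef1234567890 --query 'Volumes[0].State'

# Or block until the volume reaches the available state
aws ec2 wait volume-available --volume-ids vol-0abcdef1234567890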


With the instance powered down, I detached the volume that was originally attached to it. To do so, we just need to select the corresponding volume, go to Actions, and click on Detach Volume.
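The equivalent CLI steps, sketched with placeholder IDs, would be to stop the instance first and then detach its current root volume:

# Stop the instance before touching its root volume (i-0123456789abcdef0 is a placeholder)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Detach the original 8 GB root volume
aws ec2 detach-volume --volume-id vol-0123456789abcdef0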

To attach our new volume, we'll select it, go to Actions, and click on Attach Volume.
We'll then be prompted to enter the instance and the device name.


Note that the root volume must be the first device detected when the instance powers on, so it should be named appropriately.
Many online posts mention that the device name to be set for the root volume should be /dev/sda1, but in my experience the instance does not power on when the root volume is attached under that name. When I set the device name to /dev/xvda, the instance powered on without any issues.
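From the CLI, the attach step with the device name that worked for me would look roughly like this (both IDs are placeholders):

# Attach the new 12 GB volume as the root device /dev/xvda
aws ec2 attach-volume --volume-id vol-0abcdef1234567890 --instance-id i-0123456789abcdef0 --device /dev/xvda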

After entering the instance and device name, we click on Attach and power on the instance.
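Powering the instance back on from the CLI is a one-liner; the instance ID is again a placeholder.

# Start the instance and wait until it is running
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-running --instance-ids i-0123456789abcdef0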

And that is it. Our instance has powered on successfully.


Now if I log in to the instance via SSH, I should be able to see the size of the root (/) file system as 12 GB.

[user.DESKTOP-4NUE93O] ➤ ssh -i "sahil-ec2-test-keypair.pem" ec2-user@ec2-52-44-99-240.compute-1.amazonaws.com
X11 forwarding request failed on channel 0
Last login: Mon Jun 12 07:03:48 2017 from 117.197.123.65

       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-ami/2017.03-release-notes/
18 package(s) needed for security, out of 23 available
Run "sudo yum update" to apply all updates.
[ec2-user@ip-192-168-1-150 ~]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        488M   56K  488M   1% /dev
tmpfs           497M     0  497M   0% /dev/shm
/dev/xvda1       12G  976M   11G   9% /
[ec2-user@ip-192-168-1-150 ~]$ uptime
 13:42:11 up 0 min,  1 user,  load average: 0.04, 0.01, 0.00
[ec2-user@ip-192-168-1-150 ~]$
[ec2-user@ip-192-168-1-150 ~]$
[ec2-user@ip-192-168-1-150 ~]$ logout
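In my case the root file system already showed the new 12 GB size on first boot, since the Amazon Linux AMI grows the root partition and file system automatically via cloud-init. If your AMI doesn't do this and df -h still reports the old size, the partition and ext4 file system can usually be grown manually; the commands below are a sketch that assumes the same /dev/xvda layout shown above.

# Grow partition 1 of /dev/xvda to fill the new volume size (needs the cloud-utils growpart tool)
sudo growpart /dev/xvda 1

# Grow the ext4 file system on the root partition to match
sudo resize2fs /dev/xvda1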


I would like to mention an interesting point before concluding the article: I've observed that a volume created from a snapshot persists even after the snapshot itself is deleted.
Thank you for reading and I hope it helps.
