Increasing a KVM Virtual Machine Disk when using LVM and ext4

Scenario: I am running several virtual machine guests using QEMU/KVM. Both the host and guest operating systems are Ubuntu 12.04 LTS. The guests are raw images (as opposed to QEMU qcow2 images or disk partitions). My instructions for setting up the virtual machines are here. One of the guest images is stored on the host as:

sgordon@host:~$ ls -lhs /var/vm/
total 14G
14G -rwxr-xr-x 1 libvirt-qemu kvm 20G May 18 14:26 guest.img

Currently the image is allocated 20GB of space. But note that, as Linux ext4 file systems support sparse files, it only takes up 14GB on disk, because the guest is only using 14GB of the 20GB available. (The -s option of ls shows the actual size on disk.) In the guest virtual machine I am using LVM and ext4 filesystems.
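You can see sparseness in action with a scratch file (a quick demo; the filename is made up for illustration):

```shell
# Create a 20GB sparse file: the apparent size is 20GB, but no data
# blocks are allocated until something is actually written.
truncate -s 20G demo.img
ls -lhs demo.img   # first column (actual usage) is ~0; size column is 20G
du -h demo.img     # actual disk usage, also ~0
rm demo.img
```

This is exactly why the 20GB guest image above occupies only 14GB on the host disk.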

My goal is to increase the size of the guest's hard disk, i.e. increase the raw image size. In this case I will increase it from 20GB to 45GB. Mathew Branwell gives a detailed description of the steps involved. I've basically copied his instructions, with some minor changes to resizing the guest image. In summary, the steps are:

  1. On the host: Increase the size of the guest's image
  2. On the guest: Increase the disk partition
  3. On the guest: Increase the LVM physical and logical volumes
  4. On the guest: Increase the file system size

Note that these steps only apply to increasing the size. Decreasing the size is a bit different and has a greater potential for data loss.

Guest: Shut Down the Guest Virtual Machine

Before getting started, let's summarise some details about the guest, in particular the LVM Physical Volumes, Logical Volumes and filesystems. Note that in the examples, guest is shown in the prompt, indicating the commands are run in the guest (as opposed to the host).

sgordon@guest:~$ sudo pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vda5  vmit lvm2 a-   19.81g    0 
sgordon@guest:~$ sudo lvs
  LV   VG   Attr   LSize Origin Snap%  Move Log Copy%  Convert
  home vmit -wi-ao 5.84g                                      
  root vmit -wi-ao 4.66g                                      
  tmp  vmit -wi-ao 4.66g                                      
  var  vmit -wi-ao 4.66g  
sgordon@guest:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vmit-root  4.7G  1.8G  2.7G  40% /
udev                   728M  4.0K  728M   1% /dev
tmpfs                  293M  264K  293M   1% /run
none                   5.0M     0  5.0M   0% /run/lock
none                   732M     0  732M   0% /run/shm
/dev/vda1              189M   24M  155M  14% /boot
/dev/mapper/vmit-tmp   4.7G  198M  4.3G   5% /tmp
/dev/mapper/vmit-var   4.7G  1.3G  3.2G  28% /var
/dev/mapper/vmit-home  5.9G  2.5G  3.1G  45% /home

The first step is to shutdown the guest virtual machine:

sgordon@guest:~$ sudo shutdown -P now

Broadcast message from sgordon@guest
	(/dev/pts/0) at 14:51 ...

The system is going down for power off NOW!

Host: Increase the Virtual Machine Image Size

Now resize the image on the host machine:

sgordon@host:~$ sudo qemu-img resize /var/vm/guest.img +25G
Image resized.

This increases the guest image file by 25GB. To check:

sgordon@host:~$ ls -lhs /var/vm/
total 14G
14G -rwxr-xr-x 1 root root 45G May 18 14:53 guest.img
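Since a raw image is just a plain file, the growth can also be illustrated with truncate on a scratch file (just a sketch; on a real image qemu-img resize is the safer, format-aware way):

```shell
# Scratch demo: grow a raw "image" file by 25G without allocating blocks.
truncate -s 20G demo.img
truncate -s +25G demo.img    # apparent size is now 45G
ls -lhs demo.img             # actual usage still ~0; size column shows 45G
rm demo.img
```

Note the extra 25GB is sparse, which is why the real image above still only uses 14GB on disk after the resize.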

Now start and log in to the guest for the remaining steps:

sgordon@host$ sudo virsh start guest
Domain guest started
sgordon@host$ ssh guest
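Once logged in, it is worth confirming the guest kernel sees the enlarged disk before repartitioning. One hedged sketch (/dev/vda is this guest's virtio disk; yours may be named differently) converts the kernel's sector count to gigabytes:

```shell
# /sys/block/<disk>/size reports the disk size in 512-byte sectors.
sectors=$(cat /sys/block/vda/size)
echo "$(( sectors * 512 / 1024 / 1024 / 1024 )) GiB"
```

For this disk that is 94371840 sectors x 512 bytes, i.e. 45 GiB, matching the fdisk output that follows.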

Guest: Increase the Partition Size

First on the guest we need to increase the partition sizes using fdisk:

sgordon@guest:~$ sudo fdisk /dev/vda

Command (m for help): 

fdisk takes single-letter commands, as described in its help. Use the p command to print the current partitions:

Command (m for help): p

Disk /dev/vda: 48.3 GB, 48318382080 bytes
16 heads, 63 sectors/track, 93622 cylinders, total 94371840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00079e72

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      391167      194560   83  Linux
/dev/vda2          393214    41940991    20773889    5  Extended
/dev/vda5          393216    41940991    20773888   8e  Linux LVM

Note that my disk (/dev/vda) is 48.3GB, but my partitions only use about 20GB (20773888 1-KiB blocks).

Be careful! Your partitions will be different from mine. Use the devices/sizes for your partitions in the following examples. I have an initial boot partition on /dev/vda1 followed by an extended partition with Linux LVM.

I want to increase the size of the Extended and Linux LVM partitions by first deleting them and then creating new ones of the same type but different size. Note the device names and system/IDs in your partition table.

To delete partitions, use the d command and specify the partition number (2 and 5 in my case):

Command (m for help): d
Partition number (1-5): 5

Command (m for help): d
Partition number (1-5): 2

Command (m for help): p

Disk /dev/vda: 48.3 GB, 48318382080 bytes
16 heads, 63 sectors/track, 93622 cylinders, total 94371840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00079e72

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      391167      194560   83  Linux

Now let's create new partitions. I will use the same structure as before but use the entire disk. Use the n command:

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p): e
Partition number (1-4, default 2): 2
First sector (391168-94371839, default 391168): <ENTER> 
Using default value 391168
Last sector, +sectors or +size{K,M,G} (391168-94371839, default 94371839): <ENTER>
Using default value 94371839

I create a new extended partition, number 2, using the default first and last sectors. Next I create the new logical partition:

Command (m for help): n
Partition type:
   p   primary (1 primary, 1 extended, 2 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (393216-94371839, default 393216): <ENTER> 
Using default value 393216
Last sector, +sectors or +size{K,M,G} (393216-94371839, default 94371839): <ENTER>
Using default value 94371839

I also need to specify the type of the logical partition, i.e. Linux LVM. From the original partition table the Id of the Linux LVM system is 8e:

Command (m for help): t
Partition number (1-5): 5
Hex code (type L to list codes): 8e
Changed system type of partition 5 to 8e (Linux LVM)

The resulting partition table follows. Note that it is the same structure as the original table, but the sizes of the last two partitions (/dev/vda2 and /dev/vda5) have changed from about 20GB to about 45GB.

Command (m for help): p

Disk /dev/vda: 48.3 GB, 48318382080 bytes
16 heads, 63 sectors/track, 93622 cylinders, total 94371840 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00079e72

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048      391167      194560   83  Linux
/dev/vda2          391168    94371839    46990336    5  Extended
/dev/vda5          393216    94371839    46989312   8e  Linux LVM

If your partition table is correct then the last step is to write those changes:

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
sgordon@guest:~$

Note the warning: the kernel is still using the old partition table. Either run partprobe (or kpartx) as the message suggests, or reboot so the new partition table is read:

sgordon@guest:~$ sudo shutdown -r now
Broadcast message from sgordon@guest
	(/dev/pts/0) at 15:15 ...

The system is going down for reboot NOW!

Guest: Increase the LVM Physical and Logical Volume Sizes

Note that this step only applies if you are using LVM.

After rebooting and logging back in, we need to update LVM. First note the current size of the Physical Volume. On my guest virtual machine I have one physical volume:

sgordon@guest:~$ sudo pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/vda5  vmit lvm2 a-   19.81g    0 

We need to resize the physical volume:

sgordon@guest:~$ sudo pvresize /dev/vda5
  Physical volume "/dev/vda5" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
sgordon@guest:~$ sudo pvs
  PV         VG   Fmt  Attr PSize  PFree 
  /dev/vda5  vmit lvm2 a-   44.81g 25.00g

The physical volume now takes up the full disk space of about 45GB, and there is 25GB free. We will extend the Logical Volumes to use that extra 25GB. In my case I have four logical volumes: home, root, tmp and var (and the corresponding directories are mounted on them). I want to increase the size of the home and var volumes by 15GB and 10GB, respectively. I will use lvextend, which requires you to specify the logical volume relative to the volume group.

sgordon@guest:~$ sudo vgs
  VG   #PV #LV #SN Attr   VSize  VFree 
  vmit   1   4   0 wz--n- 44.81g 25.00g

The name of my volume group is vmit. So now let's extend the two logical volumes:

sgordon@guest:~$ sudo lvextend -L +15G /dev/vmit/home
  Extending logical volume home to 20.84 GiB
  Logical volume home successfully resized
sgordon@guest:~$ sudo lvextend -L +10G /dev/vmit/var
  Extending logical volume var to 14.66 GiB
  Logical volume var successfully resized

The result, as shown below, is that the volume group now has no free space and the home and var logical volumes are now about 21GB and 15GB, respectively.

sgordon@guest:~$ sudo vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  vmit   1   4   0 wz--n- 44.81g    0 
sgordon@guest:~$ sudo lvs
  LV   VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  home vmit -wi-ao 20.84g                                      
  root vmit -wi-ao  4.66g                                      
  tmp  vmit -wi-ao  4.66g                                      
  var  vmit -wi-ao 14.66g                                      

Guest: Increase the Filesystem Sizes

The last step is to grow the filesystems to use the entire logical volume. Currently the filesystem sizes are still the original sizes, as shown below.

sgordon@guest:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vmit-root  4.7G  1.8G  2.7G  40% /
udev                   728M  4.0K  728M   1% /dev
tmpfs                  293M  252K  293M   1% /run
none                   5.0M     0  5.0M   0% /run/lock
none                   732M     0  732M   0% /run/shm
/dev/vda1              189M   24M  155M  14% /boot
/dev/mapper/vmit-tmp   4.7G  198M  4.3G   5% /tmp
/dev/mapper/vmit-var   4.7G  1.3G  3.2G  28% /var
/dev/mapper/vmit-home  5.9G  2.5G  3.1G  45% /home

In my case I am using ext4 filesystems, and with resize2fs I can increase their size online, that is, while they are mounted. If you have a different filesystem (or possibly an older kernel), online resizing may not be possible; you may need to unmount the filesystems first.

sgordon@guest:~$ sudo resize2fs /dev/vmit/home
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/vmit/home is mounted on /home; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
Performing an on-line resize of /dev/vmit/home to 5463040 (4k) blocks.
The filesystem on /dev/vmit/home is now 5463040 blocks long.

sgordon@guest:~$ sudo resize2fs /dev/vmit/var
resize2fs 1.42 (29-Nov-2011)
Filesystem at /dev/vmit/var is mounted on /var; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vmit/var to 3842048 (4k) blocks.
The filesystem on /dev/vmit/var is now 3842048 blocks long.
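If you do have to resize offline (with the filesystem unmounted), resize2fs insists on a clean filesystem check first. A self-contained sketch on a scratch image file (no root needed; mkfs.ext4, e2fsck and resize2fs are from e2fsprogs; the filenames are made up):

```shell
# Build a small ext4 filesystem in a file, grow the file, then grow
# the filesystem offline.
truncate -s 100M fs.img
mkfs.ext4 -q -F fs.img     # -F: allow operating on a regular file
truncate -s 200M fs.img    # enlarge the backing "disk"
e2fsck -f -p fs.img        # resize2fs requires a recent clean check
resize2fs fs.img           # grow the filesystem to fill the file
rm fs.img
```

On a real logical volume, the equivalent would be unmounting, running e2fsck -f on the device, then resize2fs.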

Note that the resize takes some time. Finally, check the new sizes:

sgordon@guest:~$ df -h
Filesystem             Size  Used Avail Use% Mounted on
/dev/mapper/vmit-root  4.7G  1.8G  2.7G  40% /
udev                   728M  4.0K  728M   1% /dev
tmpfs                  293M  252K  293M   1% /run
none                   5.0M     0  5.0M   0% /run/lock
none                   732M     0  732M   0% /run/shm
/dev/vda1              189M   24M  155M  14% /boot
/dev/mapper/vmit-tmp   4.7G  198M  4.3G   5% /tmp
/dev/mapper/vmit-var    15G  1.3G   13G   9% /var
/dev/mapper/vmit-home   21G  2.5G   18G  13% /home

I now have a 15GB /var filesystem and a 21GB /home filesystem.