
CloudStack Attach Volumes On Ubuntu Not Working on KVM

Adding new volumes to an Ubuntu Lucid guest instance running under the KVM hypervisor does not appear to work automatically: the newly attached device is visible to the Ubuntu guest OS only after a reboot.

To replicate the issue…

  1. Create an Ubuntu Lucid guest instance on KVM
  2. Create a new data disk
  3. Attach the new data disk to the Ubuntu guest OS
  4. Run fdisk -l /dev/vdb on the Ubuntu KVM guest OS. No disk is found (a way to check this without rebooting is sketched after the list):
    shanu@ubuntu3:~$ sudo fdisk -l /dev/vdb|wc -l
    0
    
  5. Reboot and run fdisk again. The disk is now available as /dev/vdb:
    shanu@ubuntu3:~$ sudo fdisk -l /dev/vdb
    
    Disk /dev/vdb: 5368 MB, 5368709120 bytes
    16 heads, 63 sectors/track, 10402 cylinders, total 10485760 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000
    
    Disk /dev/vdb doesn't contain a valid partition table
    
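Before rebooting, it is worth checking whether the guest kernel saw the hot-plug event at all. A minimal diagnostic sketch, assuming the volume is presented as a virtio-blk device on the PCI bus and that the guest kernel exposes /sys/bus/pci/rescan (present in Lucid's 2.6.32 kernel):

    # Look for a hot-plug event in the kernel log right after attaching the disk
    dmesg | tail -n 20

    # Force a PCI bus rescan; virtio-blk disks are PCI devices, so if the
    # disk appears as /dev/vdb afterwards, the guest simply missed the
    # hot-plug notification
    echo 1 | sudo tee /sys/bus/pci/rescan
    sudo fdisk -l /dev/vdb

If the rescan makes the disk appear, the problem is the guest's PCI hot-plug handling rather than the attach operation itself.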

The tests were performed on Apache CloudStack 4.2.0 with CentOS 6.5 KVM hypervisor hosts and Ubuntu 12.04 as the guest operating system.

The same steps work correctly for guest operating systems on XenServer (tested with Ubuntu Linux and FreeBSD) and for CentOS guests on KVM. Since attaching volumes works without a reboot on CentOS/KVM, it did not seem to be a CloudStack issue per se.
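One guest-side workaround worth trying (an assumption on my part, not verified as the root cause here) is loading the acpiphp ACPI PCI hotplug module, which older Ubuntu releases do not load automatically and which a KVM guest needs in order to detect hot-plugged virtio disks:

    # Assumption: the missing piece is the ACPI PCI hotplug driver.
    # Load it immediately:
    sudo modprobe acpiphp

    # Persist it across reboots so future attachments are detected:
    echo acpiphp | sudo tee -a /etc/modules

If this is indeed the cause, newly attached data disks should appear as /dev/vdb immediately, without the reboot in step 5.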

Shanker Balan

Shanker Balan is a DevOps and infrastructure freelancer with over 14 years of industry experience in large-scale Internet systems. He is available for both short-term and long-term projects on contract. Please use the Contact Form for any enquiry.
