Adding a new volume to an Ubuntu Lucid guest instance running under the KVM hypervisor does not work automatically: the newly attached device becomes visible to the Ubuntu guest OS only after a reboot.
To reproduce the issue:
- Create an Ubuntu Lucid guest instance on KVM
- Create a new data disk
- Attach the new data disk to the Ubuntu guest OS
- Run fdisk -l /dev/vdb in the Ubuntu KVM guest OS. No disk is found:
    shanu@ubuntu3:~$ sudo fdisk -l /dev/vdb | wc -l
    0
- Reboot the guest and run fdisk again. The disk is now available as /dev/vdb:
    shanu@ubuntu3:~$ sudo fdisk -l /dev/vdb

    Disk /dev/vdb: 5368 MB, 5368709120 bytes
    16 heads, 63 sectors/track, 10402 cylinders, total 10485760 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Disk /dev/vdb doesn't contain a valid partition table
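As a side note, instead of re-running fdisk, the guest can check /proc/partitions directly to see whether the kernel has registered the device. A minimal sketch of that check (my own helper, not part of the original test; the device name vdb and the optional file argument, which exists only to make the helper testable, are assumptions):

```shell
#!/bin/sh
# Report whether a given block device is known to the guest kernel,
# using the standard /proc/partitions interface.

has_partition_entry() {
    # grep -w matches the whole device name, so "vd" won't match "vdb".
    # Second argument overrides the partitions file (useful for testing).
    grep -qw "$1" "${2:-/proc/partitions}"
}

if has_partition_entry vdb; then
    echo "kernel sees vdb"
else
    echo "kernel does not see vdb yet"
fi
```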
The tests were performed on Apache CloudStack 4.2.0 with CentOS 6.5 KVM hypervisor hosts and Ubuntu 12.04 Lucid as the guest operating system.
The same steps work correctly for guest operating systems on XenServer (tested with Ubuntu Linux and FreeBSD) and for CentOS guests on KVM. Since the disk shows up automatically, without a reboot, for CentOS on KVM, this did not seem to be a CloudStack issue per se.
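A possible reboot-free workaround, which I have not verified in the environment above, is to force the guest kernel to re-enumerate the PCI bus: hot-plugged virtio disks are PCI devices, so a rescan can make the new device appear without rebooting. A sketch, assuming root inside the guest (the rescan_pci_bus helper name and its optional path argument, used for testing, are mine; /sys/bus/pci/rescan is the standard sysfs trigger):

```shell
#!/bin/sh
# Ask the guest kernel to re-enumerate the PCI bus, so that devices
# hot-plugged since the last scan are probed and registered.

rescan_pci_bus() {
    # Writing "1" to /sys/bus/pci/rescan triggers the rescan; the
    # optional first argument overrides the path (useful for testing).
    target="${1:-/sys/bus/pci/rescan}"
    if [ -w "$target" ]; then
        echo 1 > "$target"
    else
        echo "cannot write $target (need root?)" >&2
        return 1
    fi
}

# Usage (as root in the guest, after attaching the disk):
#   rescan_pci_bus && fdisk -l /dev/vdb
```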