Creating oVirt ISO domain: Glusterised


oVirt ISO Storage Domain


In oVirt, version 4 introduced a new storage domain type: "POSIX Compliant File System Storage".

This means you can now use a standard Linux filesystem such as ext4 to map storage to the cluster.
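For illustration, such a domain is defined by the same parameters you would pass to the mount command; the values below are hypothetical (for example a shared LUN that every host can see, formatted as ext4):

Path: /dev/mapper/shared_lun
VFS Type: ext4
Mount Options: defaults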

This sounds a lot like vSphere now.
Remember how you can have a combination of vSAN and local datastores available to an ESXi host?

Does this mean we can now create a partition on an oVirt Node (KVM) and mount it as a storage domain for the cluster? Not so fast.
oVirt is built around High Availability, and every host in the cluster MUST be able to see a storage domain.

Even if you manage to add a local storage domain to oVirt, say an ISO Storage Domain (which would otherwise make perfect sense), you'll most likely end up with an orphaned cluster: every host unable to see the new storage domain gets kicked out of the cluster.

Now that we've seen that POSIX Compliant File System Storage does not make much sense unless you have a Linux fileserver lying around, let's go back to the more traditional options: an NFS, iSCSI, FC or Gluster storage domain.

In this article we'll use the oVirt nodes' local storage to create a new ISO domain.
This, again, takes us back to Gluster.
FC, NFS and iSCSI are all great $hoices, but when it comes to being efficient, robust and cost-effective, all roads lead to Gluster.

We have three hosts running in a cluster.
Each host received a 30GB SSD for the ISO Storage Domain.

Let's get down to business:




oVirt Node 1, 2 & 3

Create a new Gluster brick:

lsblk /dev/sdc

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 30G 0 disk

pvcreate /dev/sdc

vgcreate vg_iso /dev/sdc

lvcreate -l 100%FREE --name iso /dev/vg_iso

mkfs.xfs /dev/vg_iso/iso

mkdir -p /bricks/brick2
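A side note: the Gluster documentation recommends formatting bricks with a larger XFS inode size so Gluster's extended attributes fit inside the inode. If you are building this from scratch, it is worth considering:

mkfs.xfs -i size=512 /dev/vg_iso/iso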


Check that the newly created logical volume is present and looks as expected:

lvdisplay /dev/vg_iso
  --- Logical volume ---
  LV Path                /dev/vg_iso/iso
  LV Name                iso
  VG Name                vg_iso
  LV UUID                i1RTGr-tc2X-u7nx-MqZ0-RdN7-OQDX-Fz29Hr
  LV Write Access        read/write
  LV Creation host, time node01.infotron.com.au, 2017-06-01 21:06:43 +1000
  LV Status              available
  # open                 1
  LV Size                30.00 GiB
  Current LE             7679
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:14



Add the mount to /etc/fstab:

/dev/vg_iso/iso /bricks/brick2 xfs defaults 0 0


Mount the filesystem and create the brick directory inside it:

mount /bricks/brick2
mkdir -p /bricks/brick2/brick_iso


Check that it mounted successfully:
df -H 


Allow communication between the nodes (the addresses are the three oVirt nodes) and persist the rules so they survive a reboot (on CentOS/RHEL; iptables-save on its own only prints the rules to stdout):

iptables -I INPUT -p all -s 172.18.1.14 -j ACCEPT
iptables -I INPUT -p all -s 172.18.1.13 -j ACCEPT
iptables -I INPUT -p all -s 172.18.1.12 -j ACCEPT
iptables-save > /etc/sysconfig/iptables
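To double-check that the rules took effect, you can filter the INPUT chain for the node addresses (a quick sanity check, nothing more):

iptables -nL INPUT | grep 172.18.1.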




So far we have just been creating a pretty plain, standard Linux filesystem that can be used as the backing for the shared Gluster storage.





oVirt Node 1

Peer with the other Gluster nodes:

gluster peer probe node02gluster.infotron.com.au
gluster peer probe node03gluster.infotron.com.au



Check the peer status:

gluster peer status


Create the volume:

gluster volume create iso1 \
    node01gluster.infotron.com.au:/bricks/brick2/brick_iso \
    node02gluster.infotron.com.au:/bricks/brick2/brick_iso \
    node03gluster.infotron.com.au:/bricks/brick2/brick_iso
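A side note: the command above creates a distributed volume, so each ISO lives on exactly one brick, and losing a node makes the images on that brick unavailable. If availability matters more than capacity, a replicated variant of the same command is worth considering (same bricks, one third of the usable space):

gluster volume create iso1 replica 3 \
    node01gluster.infotron.com.au:/bricks/brick2/brick_iso \
    node02gluster.infotron.com.au:/bricks/brick2/brick_iso \
    node03gluster.infotron.com.au:/bricks/brick2/brick_iso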


Start the volume:
gluster volume start iso1


Verify the new volume:

gluster volume status iso1

Status of volume: iso1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node01gluster.infotron.com.au:/bricks
/brick2/brick_iso                           49154     0          Y       11987
Brick node02gluster.infotron.com.au:/bricks
/brick2/brick_iso                           49154     0          Y       26989
Brick node03gluster.infotron.com.au:/bricks
/brick2/brick_iso                           49154     0          Y       27953

Task Status of Volume iso1
------------------------------------------------------------------------------
There are no active volume tasks



gluster volume info iso1

Volume Name: iso1
Type: Distribute
Volume ID: 84a8b572-b7c5-4bc2-9269-1a0b284118d8
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01gluster.infotron.com.au:/bricks/brick2/brick_iso
Brick2: node02gluster.infotron.com.au:/bricks/brick2/brick_iso
Brick3: node03gluster.infotron.com.au:/bricks/brick2/brick_iso
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
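Before handing the volume over to oVirt, it's worth a quick sanity check that it actually mounts. A minimal test from any of the nodes (the mount point is arbitrary):

mount -t glusterfs node01gluster.infotron.com.au:/iso1 /mnt
touch /mnt/test && rm /mnt/test
umount /mnt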



At this point we are ready to take on oVirt.



oVirt 

Log in to oVirt and add the new ISO Domain:


If it fails with a permission error, change the ownership on the volume so oVirt (the vdsm user and kvm group, UID/GID 36) can access it:

gluster volume set iso1 storage.owner-uid 36
gluster volume set iso1 storage.owner-gid 36



Verify new ownership is active:

gluster volume info iso1

Volume Name: iso1
Type: Distribute
Volume ID: 84a8b572-b7c5-4bc2-9269-1a0b284118d8
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01gluster.infotron.com.au:/bricks/brick2/brick_iso
Brick2: node02gluster.infotron.com.au:/bricks/brick2/brick_iso
Brick3: node03gluster.infotron.com.au:/bricks/brick2/brick_iso
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on



Try adding it again, and... success!


Verify that you can upload an ISO image to the new storage domain (oVirt does not come with a graphical datastore browser like vSphere; image upload is done via CLI - cool!):

engine-iso-uploader list
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
ISO Storage Domain Name   | ISO Domain Status
iso1                      | ok


Then comes the image transfer.
The unfortunate truth is that the image transfer is way too complex for what it is.
First you need to move your images into a directory on the oVirt Engine server, and then use a dedicated command-line tool to upload them to a given ISO volume.

I usually use scp to transfer my images onto the oVirt Engine host.
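Assuming the engine host answers to engine.infotron.com.au (a hypothetical name for this example), that looks like:

scp Server2016_EN-US.ISO root@engine.infotron.com.au:/root/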


The upload goes like this:

engine-iso-uploader upload -i iso1 Server2016_EN-US.ISO
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):

Uploading, please wait...
ERROR: mount.nfs: requested NFS version or transport protocol is not supported


This failed because oVirt can ONLY upload to an ISO domain via NFS. A tricky little thing to know, which saves heaps of time!
To work around this we need to enable NFS on the Gluster ISO volume:

gluster volume set iso1 nfs.disable off
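Gluster's built-in NFS server speaks NFSv3, so if the NFS client utilities are installed you can also confirm the export is visible (an optional extra check):

showmount -e node01gluster.infotron.com.au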




Check if NFS is active:

gluster volume info iso1

Volume Name: iso1
Type: Distribute
Volume ID: 84a8b572-b7c5-4bc2-9269-1a0b284118d8
Status: Started
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node01gluster.infotron.com.au:/bricks/brick2/brick_iso
Brick2: node02gluster.infotron.com.au:/bricks/brick2/brick_iso
Brick3: node03gluster.infotron.com.au:/bricks/brick2/brick_iso
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: off



Try again:

engine-iso-uploader upload -i iso1 Server2016_EN-US.ISO
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
Uploading, please wait...
INFO: Start uploading Server2016_EN-US.ISO
Uploading: [#############                           ] 33%



Now, you might be in despair after reading this, but lose no hope: oVirt is improving.

There is a partially implemented feature which will give us a full-blown GUI for uploading images, most likely from the next oVirt release.

The feature is called Virt Image I/O and is discussed here:   Virt Image I/O 


The image is then available for creating VMs:



The final step is attaching the hard disk (the vmdk equivalent) to the VM, which was created in advance:



And creation:

That's it.

An ISO domain. A hell of a lot of work for an ISO domain :)

Nonetheless, we must be thankful to the community behind this project for providing such a capable datacenter platform for free.


Stay tuned for more.
