Posts Tagged ‘LVM’

Resize KVM .img QCOW Image file and Create new LVM partition and ext4 filesystem inside KVM Virtual Machine

Friday, November 10th, 2023

LVM-add-space-to-RHEL-Linux-on-KVM_Virtual_machine-howto

Part of a migration project for a customer I'm working on is the migration of a couple of KVM based Guest virtual machine servers. The old machines have a backup strategy using IBM's TSM, and the new machines should use the cheaper solution adopted by the customer company, the CommVault backup solution (an enterprise software that is used for data backup and recovery, not only to a local Tape Library / data blobs on central backup server infrastructure but also to Cloud infrastructure).

To install the CommVault software on the Redhat Linux hosts, the official install documentation (prepared by the team who built the CommVault infrastructure for the customer) recommends having a separate partition for the CommVault backups under the /opt directory (/opt/commvault), and the partition should be at least 10 gigabytes in size.

Unfortunately, on our newly prepared KVM VM guest machines it was forgotten to have the separate 10GB /opt prepared in advance, and we ended up with Virtual Machines that have a / (root directory) of 68GB and separate /var and /home LVM partitions. Thus, to correct the issue, it was required to find a way to add another separate LVM partition inside the KVM VirtualMachine.img (QCOW image file).

At first this seemed to be an easy task, as it should be possible with a simple .img partition mapping via the losetup command, kpartx and a simple lvreduce command, in some way such as:

# losetup /dev/loop0 /kvm/VM.img

# kpartx -a /dev/loop0
# kpartx -l /dev/loop0
# ls -al /dev/mapper/*

… 

# lvreduce 

etc. Unfortunately, however, kpartx, though not returning an error, did not provide the new /dev/mapper devices to be used with the LVM tools, and this approach seems to not be possible on RHEL 8.8, as kpartx couldn't list anything.

 

A colleague of mine, Mr. Paskalev, suggested that we could perhaps try to mount the partition with the default KVM tool for mounting .img partitions, which is guestmount, with a command like:

# guestmount -a /kvm/VM.img -i --rw /mnt/test/

Unfortunately this mounted the filesystem as a FUSE filesystem, and the LVM /dev/mapper devices of the VM couldn't be seen, so we decided to abandon this method.
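
As a side note, the libguestfs suite also provides the virt-filesystems tool, which can at least list the filesystems and logical volumes inside an image without mounting it. A possible quick check (assuming the libguestfs-tools package is installed), though it still does not expose the guest LVs as host block devices:

# virt-filesystems -a /kvm/VM.img --all --long -h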

After some pondering with Dimitar Paskalev and Dimitar Hristov, thanks to joint efforts we found the way to do it; below are the steps we followed to succeed in creating the new LVM ext4 partition required.
 

1. Check that enough space is available on the Hypervisor (HV) machine

 

The VMs are held under /kvm so in this case:

[root@hypervisor-host ~]# df -h|grep -i /kvm
/dev/mapper/vg00-vmprivate  206G   27G  169G  14% /kvm

 

2. Shutdown the running VM and make sure it is stopped
 

[root@hypervisor-host ~]# virsh shutdown vm-host

 

[root@hypervisor-host ~]# virsh list --all
 Id   Name       State
-----------------------------
 4    lpdkv01f   running
 5    vm-host    shut off

 

3. Check the current space status of the VM

 

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img       
image: /kvm/vm-host.img
file format: qcow2
virtual size: 100 GiB (107374182400 bytes)
disk size: 8.62 GiB
cluster_size: 65536
Format specific information:
    compat: 1.1
    compression type: zlib
    lazy refcounts: true
    refcount bits: 16
    corrupt: false
    extended l2: false

 

4. Resize (extend) the VM image with whatever size you want
 

[root@hypervisor-host ~]# qemu-img resize /kvm/vm-host.img +10G
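
To confirm the resize took effect, qemu-img info can be re-run; the virtual size should now report 110 GiB (the original 100 GiB plus the added 10G):

[root@hypervisor-host ~]# qemu-img info /kvm/vm-host.img | grep 'virtual size'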

 

5. Start VM    
 

[root@hypervisor-host ~]# virsh start vm-host


6. Check the LVM and block devices on the HV (not necessary but good for an overview)
 

[root@hypervisor-host ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree 
  /dev/sda2  vg00 lvm2 a--  277.87g 19.87g
  
[root@hypervisor-host ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree 
  vg00   1  11   0 wz--n- 277.87g 19.87g

 

[root@hypervisor-host ~]# lsblk 
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0 278.9G  0 disk 
├─sda1               8:1    0     1G  0 part /boot
└─sda2               8:2    0 277.9G  0 part 
  ├─vg00-root      253:0    0    15G  0 lvm  /
  ├─vg00-swap      253:1    0     1G  0 lvm  [SWAP]
  ├─vg00-var       253:2    0     5G  0 lvm  /var
  ├─vg00-spool     253:3    0     2G  0 lvm  /var/spool
  ├─vg00-audit     253:4    0     3G  0 lvm  /var/log/audit
  ├─vg00-opt       253:5    0     2G  0 lvm  /opt
  ├─vg00-home      253:6    0     5G  0 lvm  /home
  ├─vg00-tmp       253:7    0     5G  0 lvm  /tmp
  ├─vg00-log       253:8    0     5G  0 lvm  /var/log
  ├─vg00-cache     253:9    0     5G  0 lvm  /var/cache
  └─vg00-vmprivate 253:10   0   210G  0 lvm  /vmprivate

  
7. Check logical volumes on the Hypervisor host
 

[root@hypervisor-host ~]# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/vg00/swap
  LV Name                swap
  VG Name                vg00
  LV UUID                3tNa0n-HDVw-dLvl-EC06-c1Ex-9jlf-XAObKm
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 2
  LV Size                1.00 GiB
  Current LE             256
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/vg00/var
  LV Name                var
  VG Name                vg00
  LV UUID                JBerim-fxVv-jU10-nDmd-figw-4jVA-8IYdxU
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/vg00/spool
  LV Name                spool
  VG Name                vg00
  LV UUID                nFlmp2-iXg1-tFxc-FKaI-o1dA-PO70-5Ve0M9
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:45 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/vg00/audit
  LV Name                audit
  VG Name                vg00
  LV UUID                e6H2OC-vjKS-mPlp-JOmY-VqDZ-ITte-0M3npX
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                3.00 GiB
  Current LE             768
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:4
   
  --- Logical volume ---
  LV Path                /dev/vg00/opt
  LV Name                opt
  VG Name                vg00
  LV UUID                oqUR0e-MtT1-hwWd-MhhP-M2Y4-AbRo-Kx7yEG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:46 +0200
  LV Status              available
  # open                 1
  LV Size                2.00 GiB
  Current LE             512
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5
   
  --- Logical volume ---
  LV Path                /dev/vg00/home
  LV Name                home
  VG Name                vg00
  LV UUID                ehdsH7-okS3-gPGk-H1Mb-AlI7-JOEt-DmuKnN
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/vg00/tmp
  LV Name                tmp
  VG Name                vg00
  LV UUID                brntSX-IZcm-RKz2-CP5C-Pp00-1fA6-WlA7lD
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:7
   
  --- Logical volume ---
  LV Path                /dev/vg00/log
  LV Name                log
  VG Name                vg00
  LV UUID                ZerDyL-birP-Pwck-yvFj-yEpn-XKsn-sxpvWY
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:47 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:8
   
  --- Logical volume ---
  LV Path                /dev/vg00/cache
  LV Name                cache
  VG Name                vg00
  LV UUID                bPPfzQ-s4fH-4kdT-LPyp-5N20-JQTB-Y2PrAG
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:9
   
  --- Logical volume ---
  LV Path                /dev/vg00/root
  LV Name                root
  VG Name                vg00
  LV UUID                mZr3p3-52R3-JSr5-HgGh-oQX1-B8f5-cRmaIL
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-08-07 13:47:48 +0200
  LV Status              available
  # open                 1
  LV Size                15.00 GiB
  Current LE             3840
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                LxNRWV-le3h-KIng-pUFD-hc7M-39Gm-jhF2Aj
  LV Write Access        read/write
  LV Creation host, time hypervisor-host, 2023-09-18 11:54:19 +0200
  LV Status              available
  # open                 1
  LV Size                210.00 GiB
  Current LE             53760
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:10

8. Check the Hypervisor's existing partitions and space
 

[root@hypervisor-host ~]# fdisk -l
Disk /dev/sda: 278.9 GiB, 299439751168 bytes, 584843264 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0581e6e2

Device     Boot   Start       End   Sectors   Size Id Type
/dev/sda1  *       2048   2099199   2097152     1G 83 Linux
/dev/sda2       2099200 584843263 582744064 277.9G 8e Linux LVM


Disk /dev/mapper/vg00-root: 15 GiB, 16106127360 bytes, 31457280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-swap: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-var: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-spool: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-audit: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-opt: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-home: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-tmp: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-log: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-cache: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/vg00-vmprivate: 210 GiB, 225485783040 bytes, 440401920 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

9. List block devices on the VM
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:2    0   10G  0 lvm  /home
│ └─vg00-var       253:3    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 

 

 

10. Create new LVM partition with fdisk or cfdisk
 

If there is no cfdisk, the space newly resized with qemu-img could be set up with fdisk, though I personally always prefer to use cfdisk.

[root@vm-host ~]# fdisk /dev/vda
# > p (print the partition table)
# > m (show help)
# > n (create new partition)
# … follow the on-screen instructions to select start and end blocks
# > t (change partition type)
# > select the new partition and set type to 8e (Linux LVM)
# > w (write changes)

[root@vm-host ~]# cfdisk /dev/vda


Set up the new partition from the Free space as a [ primary ] partition and choose it to be of type LVM.
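
For a scriptable, non-interactive alternative, parted can create the same partition; this is a sketch only, assuming the added 10G region starts at sector 188743680 (as the fdisk -l output in the next step shows for /dev/vda4) and becomes partition number 4, so verify the start sector on your own system first:

# parted -s /dev/vda mkpart primary 188743680s 100%
# parted -s /dev/vda set 4 lvm on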


11. List partitions to make sure the new LVM partition is present
 

[root@vm-host ~]# fdisk -l
Disk /dev/vda: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe7b2d9fd

Device     Boot     Start       End   Sectors Size Id Type
/dev/vda1  *         2048   2099199   2097152   1G 83 Linux
/dev/vda2         2099200 186646527 184547328  88G 8e Linux LVM
/dev/vda3       186646528 188743679   2097152   1G 82 Linux swap / Solaris
/dev/vda4       188743680 209715199  20971520  10G 8e Linux LVM

The extra added 10 gigabytes are seen under /dev/vda4.

 

12. List LVM Physical Volumes
 

[root@vm-host ~]# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/vda2
  VG Name               vg00
  PV Size               <88.00 GiB / not usable 3.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              22527
  Free PE               0
  Allocated PE          22527
  PV UUID               i4UpGr-h9Cd-iKBu-KqEI-15vK-CGc1-DwRPj8

 


  
Notice that /dev/vda4 is not seen in the pvdisplay (Physical Volume display) output because it is not initialized yet, so let's create it.
 

13. Initialize the new Physical Volume to be available for use by LVM
 

[root@vm-host ~]# pvcreate /dev/vda4


14. Inform the OS of partition table changes
 

If partprobe is not available as a command on the host, the below obscure command should do the trick.
 

[root@vm-host ~]# echo "- - -" | tee /sys/class/scsi_host/host*/scan

However, it is usually better to use partprobe to inform the Operating System of partition table changes:

[root@vm-host ~]# partprobe


15. Use lsblk again to see the new /dev/vda4 LVM partition listed under the "vda" root block device
 

[root@vm-host ~]# 
[root@vm-host ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0            11:0    1 1024M  0 rom  
vda           252:0    0  100G  0 disk 
├─vda1        252:1    0    1G  0 part /boot
├─vda2        252:2    0   88G  0 part 
│ ├─vg00-root 253:0    0   68G  0 lvm  /
│ ├─vg00-home 253:1    0   10G  0 lvm  /home
│ └─vg00-var  253:2    0   10G  0 lvm  /var
├─vda3        252:3    0    1G  0 part [SWAP]
└─vda4        252:4    0   10G  0 part 
[root@vm-host ~]# 


16. Create a new Volume Group (VG) on the /dev/vda4 block device
 

Before creating the new VG, list the existing VGs on the machine to be sure the new name is not already taken.
 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Knwz-utO7-Aw8c60

Only vg00 exists, so we can use vg01 as the Volume Group name for the new volume group where the fresh 10GB LVM partition will lay.

[root@vm-host ~]# vgcreate vg01 /dev/vda4
  Volume group "vg01" successfully created

 

17. Create a new Logical Volume (LV) and extend it to occupy the full space available in Volume Group vg01

 

 

[root@vm-host ~]# lvcreate -n commvault -l 100%FREE vg01
  Logical volume "commvault" created.

An alternative way is to give an explicit size instead of using all free extents:

[root@vm-host ~]# lvcreate -n commvault -L 10G vg01

Note that this exact command would fail here: vg01 is in fact slightly under 10 GiB of usable space (2559 free physical extents), which is why -l 100%FREE is used above.


18. Relist block devices with lsblk to make sure the newly created Logical Volume commvault is really present and seen; in case it is missing, re-run the partprobe command
 

[root@vm-host ~]# lsblk 
NAME               MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sr0                 11:0    1 1024M  0 rom  
vda                252:0    0  100G  0 disk 
├─vda1             252:1    0    1G  0 part /boot
├─vda2             252:2    0   88G  0 part 
│ ├─vg00-root      253:0    0   68G  0 lvm  /
│ ├─vg00-home      253:1    0   10G  0 lvm  /home
│ └─vg00-var       253:2    0   10G  0 lvm  /var
├─vda3             252:3    0    1G  0 part [SWAP]
└─vda4             252:4    0   10G  0 part 
  └─vg01-commvault 253:3    0   10G  0 lvm  

 

As it is not mounted yet, the new LV will not show up in the df free space output, but it will be seen in the volume group listing with vgdisplay:
 

[root@vm-host ~]# df -h
Filesystem                  Size  Used Avail Use% Mounted on
devtmpfs                    2.8G     0  2.8G   0% /dev
tmpfs                       2.8G   33M  2.8G   2% /dev/shm
tmpfs                       2.8G   17M  2.8G   1% /run
tmpfs                       2.8G     0  2.8G   0% /sys/fs/cgroup
/dev/mapper/vg00-root        67G  2.4G   61G   4% /
/dev/mapper/vg00-var        9.8G 1021M  8.3G  11% /var
/dev/mapper/vg00-home       9.8G   24K  9.3G   1% /home
/dev/vda1                   974M  242M  665M  27% /boot
tmpfs                       569M     0  569M   0% /run/user/0

 

[root@vm-host ~]# vgdisplay 
  --- Volume group ---
  VG Name               vg01
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <10.00 GiB
  PE Size               4.00 MiB
  Total PE              2559
  Alloc PE / Size       2559 / <10.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               nYP0tv-IbFw-fBVT-slBB-H1hF-jD0h-pE3V0S
   
  --- Volume group ---
  VG Name               vg00
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <88.00 GiB
  PE Size               4.00 MiB
  Total PE              22527
  Alloc PE / Size       22527 / <88.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               oyo1oY-saSm-0IKk-gZnf-Knwz-utO7-Aw8c60
  


19. Create a new ext4 filesystem on the just created vg01-commvault
 

[root@vm-host ~]# mkfs.ext4 /dev/mapper/vg01-commvault
mke2fs 1.45.6 (20-Mar-2020)
Discarding device blocks: done                            
Creating filesystem with 2620416 4k blocks and 655360 inodes
Filesystem UUID: 1491d8b1-2497-40fe-bc40-5faa6a2b2644
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 


20. Mount vg01-commvault under the /opt directory
 

[root@vm-host ~]# mkdir -p /opt/

[root@vm-host ~]# mount /dev/mapper/vg01-commvault /opt/


21. Check the mount is present on the VM guest OS
 

[root@vm-host ~]# mount|grep -i /opt
/dev/mapper/vg01-commvault on /opt type ext4 (rw,relatime)
[root@vm-host ~]# 

[root@vm-host ~]# df -h|grep -i opt
/dev/mapper/vg01-commvault  9.8G   24K  9.3G   1% /opt
[root@vm-host ~]# 
 

22. Add vg01-commvault to /etc/fstab so it is auto-mounted on the next Virtual Machine reboot
 

[root@vm-host ~]# echo '/dev/mapper/vg01-commvault /opt         ext4            defaults        1        2' >> /etc/fstab
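
Optionally, before rebooting, one can sanity-check that the new fstab entry parses and mounts cleanly (note this briefly unmounts /opt, so do it before installing anything into it):

[root@vm-host ~]# umount /opt && mount -a && df -h /opt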

23. Install the CommVault backup client RPM into the new LVM volume mounted under /opt

[root@vm-host ~]# rpm -ivh commvault-fs.Instance001-11.0.0-80.240.0.3589820.240.4083067.el8.x86_64.rpm

[root@vm-host ~]# systemctl status commvault
● commvault.Instance001.service - commvault Service
   Loaded: loaded (/etc/systemd/system/commvault.Instance001.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-11-10 15:13:59 CET; 27s ago
  Process: 9972 ExecStart=/opt/commvault/Base/Galaxy start direct -focus Instance001 (code=exited, status=0/SUCCESS)
    Tasks: 54
   Memory: 155.5M
   CGroup: /system.slice/commvault.Instance001.service
           ├─10132 /opt/commvault/Base/cvlaunchd
           ├─10133 /opt/commvault/Base/cvd
           ├─10135 /opt/commvault/Base/cvfwd
           └─10137 /opt/commvault/Base/ClMgrS

Nov 10 15:13:57 vm-host.ffm.de.int.atosorigin.com systemd[1]: Starting commvault Service…
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Cleaning up /opt/commvault/Base/Temp …
Nov 10 15:13:58 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: Starting Commvault services for Instance001 …
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com Galaxy[9972]: [22B blob data]
Nov 10 15:13:59 vm-host.ffm.de.int.atosorigin.com systemd[1]: Started commvault Service.
[root@vm-host ~]# 

Simple bash shell script to easily and quickly automate deployment of RHEL Linux Virtual Machines on KVM Virtualization

Friday, November 3rd, 2023

how-to-install-a-kvm-guest-os-from-the-commandline-easily-with-script-virt-install-logo

Earlier I've blogged about how to build a KVM Virtual Machine RHEL 8.3 Linux install on a Redhat 8.3 Linux Hypervisor with a custom tailored kickstart.cfg. In one of the projects I was involved in during my work duty as a system administrator, we had the task to build KVM virtual machines and set up a High Availability Linux PCS Corosync / Pacemaker / haproxy cluster on them, to save the company some money on purchasing VMWare licenses.

At first glimpse the setup of a KVM Virtual machine is relatively simple, and I thought this could be done in just 2 / 3 days, but it turned out to take 2 weeks or so to properly prepare the kickstart file, learn a bit about the basic virt-install KVM options and experiment with them until we could produce normally working Virtual machines.

The original simple script that I developed to bring up new KVM virtual machines a bit more easily, virt-install-kvm.sh, looked like so:
 

#!/bin/sh
# Script to build a new VM based on a kickstart.cfg file template
# with virt-install
# Author Georgi Georgiev hip0
# hipo@Pc-freak.net

KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='70';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram=$RAM --vcpus=$CPUS --location="$ISO_LOCATION" --disk path=$VM_IMG_FILE_LOC,bus=virtio,size=$VM_IMG_SIZE --graphics none --initrd-inject=/root/$KS_FILE --extra-args "console=ttyS0 ks=file:/$KS_FILE"

 

The script basically did what it was aimed for, but modifying the kickstart ks.cfg every time with multiple additional parameters, like network configuration, as well as making some additional ks.cfg modifications for VM parameters, was annoying.

Recently, as we had to repeat the same task again in order to migrate an old customer setup containing Linux OpenVZ Virtual machines to a newer RHEL OS version with KVM virtual machines, I took some hours and scripted a small script to ease our task of building new KVM Virtual Machines from scratch, in case we have to repeat the Linux OS build operation again and again.

Before proceeding to use the script, of course, one has to use LVM and set up the partitions on the host which will be the Hypervisor where the KVM VMs will be installed. You also need to download an ISO image of Redhat Enterprise Linux / CentOS / Fedora or whatever kind of RPM based Linux you would like to set up, do the basic configuration regarding the emulated hardware "power" of the new Virtual Machine (CPU / Memory / Disk partition), and choose a proper meaningful VM name (preferably following a well crafted naming convention that tells a bit about what is inside the KVM VM container).

The script that automates the KVM VM installation a bit, shown below, can also be downloaded from here: virt-install.sh

 

#!/bin/bash
virt_install_path=$(which virt-install);
vm_host='vmname01';
boot_iso='/vmprivate/rhel-8.7-x86_64-dvd.iso';
os_version='rhel8.7';
vm_cpus_count='4';
vm_ram='6144';
vm_img_location='/vmprivate/NAME-of-VM.img';
vm_machine_description='Name Production system';
ks_file_location_template='/root/ks.cfg.templ';
ks_file_location='/root/ks.cfg';
ks_vm_read_loc=$(echo $ks_file_location |sed -e 's#root/##g');
vm_main_ip='192.168.23.52';
vm_netmask='255.255.255.192';
vm_gateway='192.168.23.33';
vm_nameserver='172.30.50.2';
#--ip=192.168.233.52 --netmask=255.255.255.192 --gateway=192.168.233.33 --nameserver=172.20.88.2

echo "Checking if defined $vm_host image $vm_img_location is not already present";
if [ ! -f $vm_img_location ]; then
echo
else
echo "Exiting, $vm_img_location is present";
echo "To destroy the existing VM image $vm_img_location run manually cmds:";
echo "virsh list; virsh destroy $vm_host; virsh undefine $vm_host --remove-all-storage";
exit 1;
fi

alias cp='cp -f';
/usr/bin/cp -rpf $ks_file_location_template $ks_file_location;

echo "Setting  VM Main IP: $vm_main_ip";
echo "Setting  VM netmask: $vm_netmask";
echo "Setting  VM Gateway: $vm_gateway";
echo "Setting  VM Nameserver $vm_nameserver";
/usr/bin/perl -pi -e "s/ip=a1.a2.a3.a4/ip=$vm_main_ip/" $ks_file_location
/usr/bin/perl -pi -e "s/netmask=b1.b2.b3.b4/netmask=$vm_netmask/" $ks_file_location
/usr/bin/perl -pi -e "s/gateway=c1.c2.c3.c4/gateway=$vm_gateway/" $ks_file_location
/usr/bin/perl -pi -e "s/nameserver=d1.d2.d3.d4/nameserver=$vm_nameserver/" $ks_file_location
/usr/bin/perl -pi -e "s/vm-hostname/$vm_host/" $ks_file_location

echo "Running VM install:";
echo $virt_install_path -n $vm_host --description "$vm_machine_description" --os-type=Linux --os-variant=$os_version --ram=$vm_ram --vcpus=$vm_cpus_count --location=$boot_iso --disk path=$vm_img_location,bus=virtio,size=90 --graphics none --initrd-inject=$ks_file_location --extra-args "console=ttyS0 ks=file:$ks_vm_read_loc"

$virt_install_path -n $vm_host --description "$vm_machine_description" --os-type=Linux --os-variant=$os_version --ram=$vm_ram --vcpus=$vm_cpus_count --location=$boot_iso --disk path=$vm_img_location,bus=virtio,size=90 --graphics none --initrd-inject=$ks_file_location --extra-args "console=ttyS0 ks=file:$ks_vm_read_loc"
 

 

Modify the script to insert the required parameters of the new VM in the script's header section; you will have to provide the below options:

 

vm_host='vmname01';
boot_iso='/vmprivate/rhel-8.7-x86_64-dvd.iso';
os_version='rhel8.7';
vm_cpus_count='4';
vm_ram='6144';
vm_img_location='/vmprivate/NAME-of-VM.img';
vm_machine_description='Name Production system';
ks_file_location_template='/root/ks.cfg.templ';
ks_file_location='/root/ks.cfg';
ks_vm_read_loc=$(echo $ks_file_location |sed -e 's#root/##g');
vm_main_ip='192.168.23.52';
vm_netmask='255.255.255.192';
vm_gateway='192.168.23.33';
vm_nameserver='172.30.50.2';

The automatic Virtual Machine build script is tested and works with Redhat Enterprise Linux. Of course it is pretty primitive, and there is so much available online that would do something similar, but still I like it because it works for me 🙂

You will also need the ks.cfg.templ file, which holds the basic kickstart configuration that will bring up the Virtual machines according to the predefined configuration.
Here are the ks.cfg.templ and ks.cfg files as well; you will have to place them under /root/ or some other directory.
Of course this script is just a basic one; it can easily be updated to accept its variable options as arguments if you need to bring up a multitude of virtual machines relatively quickly, as sketched below.
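
A minimal sketch of such a modification (keeping the variable names from the script above; the argument handling shown is illustrative only):

#!/bin/bash
# take the VM name and main IP as positional arguments, falling back to the old defaults
vm_host=${1:-vmname01};
vm_main_ip=${2:-192.168.23.52};
echo "Building VM $vm_host with IP $vm_main_ip";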

Hope the script is helpful to some sysadmin out there. If so don't forget to donate me for a beer in my Patreon account found in the widgets section 🙂

How to extend a full LVM partition to a bigger size on a Linux Virtual Machine Guest running in VMware vSphere

Tuesday, September 20th, 2022

lvm-filesystem-extend-on-linux-virtual-machine-vmware-physical-group-volume-group-logical-volume-partitions-picture

Let's say you have to resize a partition that was wrongly sized by some kind of automation like ansible or puppet,
because the Linux RHEL family OS template was prepared with a /home (or another partition of some very small size) on the VMware vSphere Hypervisor hosting the Guest Linux VM, and the partition quickly ran out of space.

To resolve it, the following questions come up for the sysadmin:

I. How to extend the LVM partition that ran out of space (without rebooting the VM Guest Linux Host)

II. How to add new disk space to the Linux guest from the vSphere hypervisor.

In the article below I'll shortly describe the trivial steps to take to achieve that. The article won't show anything new or original, but I wrote it
because I want to have it logged for myself in case I need to extend the LVM space of my own Virtual machines, and
because hopefully it might be of help to someone else from the Linux community that has to complete the same task.
 

I. Extending an LVM partition that ran out of space on a Linux Guest VM
 
1. Check the current partition size that you want to extend

[root@linux-hostname home]# df -h /home/
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/vg00-home
                       4.7G  4.5G     0 100% /home

2. Check the Virtualization platform

[root@vm-hostname ~]# lshw |head -3
linux-hostname
    description: Computer
    product: VMware Virtual Platform

3. Check the Linux OS type and version

In this specific case this is a somewhat old Redhat-like CentOS 6.9 Linux:
 

[root@vmware-host ~]# cat /etc/*release*
CentOS release 6.9 (Final)
CentOS release 6.9 (Final)
CentOS release 6.9 (Final)
cpe:/o:centos:linux:6:GA

4. Find out the type of the target filesystem: is it EXT3, EXT4, XFS etc.?

[root@vm-hostname ~]# grep home /proc/mounts
/dev/mapper/vg00-home /home ext3 rw,relatime,errors=continue,user_xattr,acl,barrier=1,data=ordered 0 0


The filesystem is handled by LVM, thus:

5. Check the size of the LVM partition we want to extend

[root@vm-hostname ~]# lvs |grep home
home vg00 -wi-ao---- 5.00g

6. Check whether free space is available in the volume group

[root@vm-hostname ~]# vgdisplay vg00
  --- Volume group ---
  VG Name               vg00
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  15
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                10
  Open LV               10
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               128.99 GiB
  PE Size               4.00 MiB
  Total PE              33022
  Alloc PE / Size       30976 / 121.00 GiB
  Free  PE / Size       2046 / 7.99 GiB
  VG UUID               1F89PB-nIP2-7Hgu-zEVR-5H0R-7GdB-Lfj7t4


Extend the VMware-configured additional hard disk on the Hypervisor (if necessary)

In order to extend the LVM you of course need a pre-existing additional hard drive attached to the VM (sdb, sdc etc.).

- If you need to extend on the VMware vSphere Hypervisor:
extend the additional hard drive by entering the new size and validate.

If you have previously extended the size of the Virtual Disk from VMware, then to make the Linux guest VM find out about the change
you have to re-run a rescan for the respective device that was grown on the HV.

7. Rescan on Linux VM host for changes in disk size from Hypervisor

Rescan disk for new size :

[root@vm-hostname ~]# echo 1 > /sys/block/sdX/device/rescan

(where sdX is the extended additional harddrive)
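
For example, if the disk that was grown on the Hypervisor is sdb:

[root@vm-hostname ~]# echo 1 > /sys/block/sdb/device/rescan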

8. Resize LVM physical volume

[root@vm-hostname ~]# pvresize /dev/sdX

9. Enlarge Logical Volume size 

[root@vm-hostname ~]# lvextend -L+5G /dev/mapper/vg00-home
     Extending logical volume LogVol00 to 10.77 GiB
     Logical volume LogVol00 successfully resized

10. Enlarge LVM hosted filesystem size

Filesystem is ext3 or ext4 :

[root@vm-hostname ~]# resize2fs /dev/mapper/vg00-home

- If the filesystem is not ext3 / ext4 but XFS, you have to use xfs_growfs to let the FS know about the change.

Filesystem is XFS :
 

[root@vm-hostname ~]# xfs_growfs /dev/mapper/vg00-home

11. Check that the additional file space is already active on the Linux Guest VM

[root@vm-hostname ~]# df -h /home/
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/mapper/vg_cloud-LogVol00
                        10G  4.2G  4.9G  48% /home


12. Verify the extension of the filesystem completed without errors


Check of system log:

[root@vm-hostname ~]# grep -i error /var/log/messages

Check if filesystem is writable.

[root@vm-hostname ~]# touch /home/test

[root@vm-hostname ~]# ls -al /home/test
-rw-r----- 1 root root 0 Sep 20 13:39 /home/test
[root@vm-hostname ~]# rm -f /home/test


II. How to add an additional drive (let's say sdb) to a Linux host from the vSphere HV


1. On the vSphere GUI interface

-> Select New hard drive and click Add

Enter the desired size for the new disk, then expand the disk parameters to choose Thin provision. Validate and apply the recommendations.

basic-lvm-create-volume_group-diagram-on-linux-explained

2. On the Linux VM guest OS, detect the newly added sdb space

Discover new disk :

[root@vm-hostname ~]# echo "- - -" > /sys/class/scsi_host/host2/scan && echo "- - -" > /sys/class/scsi_host/host1/scan && echo "- - -" > /sys/class/scsi_host/host0/scan

See if the discovered disk is found in /var/log/messages:

[…]
Nov 8 17:33:26 bict4004s kernel: scsi 2:0:2:0: Direct-Access VMware Virtual disk 1.0 PQ: 0 ANSI: 2
Nov 8 17:33:26 bict4004s kernel: scsi target2:0:2: Beginning Domain Validation
Nov 8 17:33:26 bict4004s kernel: scsi target2:0:2: Domain Validation skipping write tests
Nov 8 17:33:26 bict4004s kernel: scsi target2:0:2: Ending Domain Validation
Nov 8 17:33:26 bict4004s kernel: scsi target2:0:2: FAST-40 WIDE SCSI 80.0 MB/s ST (25 ns, offset 127)
Nov 8 17:33:26 bict4004s kernel: sd 2:0:2:0: Attached scsi generic sg3 type 0
Nov 8 17:33:26 bict4004s kernel: sd 2:0:2:0: [sdb] 2097152 512-byte logical blocks: (1.07 GB/1.00 GiB)
Nov 8 17:33:26 bict4004s kernel: sd 2:0:2:0: [sdb] Write Protect is off
Nov 8 17:33:26 bict4004s kernel: sd 2:0:2:0: [sdb] Cache data unavailable
Nov 8 17:33:26 bict4004s kernel: sd 2:0:2:0: [sdb] Assuming drive cache: write through
Nov 8 17:33:26 bict4004s kernel: sd 2:0:2:0: [sdb] Attached SCSI disk
[…]

3. Create new LVM Physical Volume

[root@vm-hostname ~]# pvcreate /dev/sdb

4. Extend the LVM Volume Group with the full available size of /dev/sdb

[root@vm-hostname ~]# vgextend vg00 /dev/sdb

Enlarge LVM Logical Volume

[root@vm-hostname ~]# lvextend -L+10G /dev/mapper/vg00-home

5. Enlarge the filesystem to the max size of the just extended LV

If Filesystem is ext3 or ext4 :

[root@vm-hostname ~]# resize2fs /dev/mapper/vg00-home


Again if we work with XFS additionally do:

[root@vm-hostname ~]# xfs_growfs /dev/mapper/vg00-home

6. Check the filesystem extension completed correctly

 [root@vm-hostname ~]# df -h /home


7. Check the filesystem is writable and no errors are produced in the logs

Check of system log:

[root@vm-hostname ~]# grep -i error /var/log/messages


Check if filesystem is writable.

[root@vm-hostname ~]# touch /home/test

How to mount an LVM partition volume on Linux

Wednesday, August 22nd, 2018

lvm-logical-volume-logical-volume-groups

LVM (Logical Volume Manager) is a device mapper framework offering logical volume management for the Linux kernel. Virtually all modern GNU / Linux distributions have support for it, and LVM is used by almost all Hosting Providers on (dedicated) backend physical and virtual XEN / VMWare etc. servers, because it provides the ability to merge a number of disks into virtual volumes. For example, you may have a number of SSD hard drives on a server that appear as separate /dev/sda1, /dev/sda2, /dev/sdb3, /dev/sdb4 etc., and you want all the HDDs to appear as a single file system; this is managed by Linux LVM.

Logical-volume-manager-linux-explained-diagram.svg

Picture source: Wikipedia

The use of LVM is somewhat similar to RAID 0 disk arrays; the good thing about it is that it allows the removal and addition of hard disks in real time, so broken hard disks on servers can be replaced without service downtime, and dynamic HDD volume resizing is possible. LVM also allows relatively easy encryption of multiple HDD volumes
with a single password.

Disks can be organized in volume groups: so, let's say, 2 of the server's attached conventional hard disks (SCSI or SSDs) can be attached to one volume group, and another 3 hard drives could be attached to a second volume group, etc.
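
A minimal illustrative sketch of that workflow (the /dev/sdb1, /dev/sdc1 devices and the vg10 / lv_data names are hypothetical):

server:~# pvcreate /dev/sdb1 /dev/sdc1
server:~# vgcreate vg10 /dev/sdb1 /dev/sdc1
server:~# lvcreate -n lv_data -l 100%FREE vg10
server:~# mkfs.ext4 /dev/vg10/lv_data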

LVM has been an integral part of Linux kernel since 1998.

lvm is available for install via apt, yum, dnf etc. under a package called lvm2; to install it on Debian / Ubuntu / Fedora Linux (if it is not already installed on the server):

 

– Install LVM2 On Debian / Ubuntu
 

debian:~# apt-get install --yes lvm2

 


– Install LVM2 on Fedora / CentOS (Redhat RPM based distros)

 

[root@centos ~]:# yum install -y lvm2


or

[root@fedora ~]:#  dnf install -y lvm2


I. Mounting LVM2 on a Linux server after changing a broken disk that is part of an LVM volume

For example, the HDD failed due to bad sectors or physical HDD head damage (the easiest way to figure that out is if the server is running smartd, or via a simple HDD test check from the BIOS). As the ROOT partition is on an LVM, the server fails to boot properly. You have changed the broken HDD with a brand new one and you need to remount the LVM, either physically on the server console or remotely via some kind of BIOS KVM interface.

In my experience working for Santrex this was a common sysadmin job, as many of the virtual client servers, as well as other irons situated in various DataCenters, were occasionally failing to boot; the monitoring system was reporting the issues and we had to promptly react and bring the servers up.

Here, shortly, is how we managed to re-mount the LVM after the SSDs / HDDs were substituted:

    1.1. Execute the fdisk, vgscan / lvdisplay commands (the corresponding command lines are shown after the screenshots below)

fdisk-lvm-screenshot

lvdisplay-screenshot

vgscan scans all supported LVM block devices in the system for VGs (Volume Groups)

vgscan-vgchange-screenshot-lvm
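
The screenshots above correspond to command lines along these lines:

server:~# fdisk -l
server:~# vgscan
server:~# lvdisplay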

    1.2. Next issue the vgchange command to activate the volume group(s)
 

vgchange -ay


   
    1.3. Type the lvs command to get information about logical volumes

 

lvs


   
    1.4. Create a mount point using the mkdir command

      (That's because we wanted to check the LVM will get properly mounted on the next server reboot.)
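
For example (the mount point path here is illustrative):

server:~# mkdir -p /mnt/lvm-volume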
   
     1.5. Mount an LVM volume using
 

server:~# mount /dev/mapper/DEVICE /path/to/mount_point

 

     1.6. To check the size of the LVM (mount points, mounted LVM /dev names, sizes and the amount of free space on each of them), use:
 

server:~# df -T

How to check /dev/ partition disk labeling in Debian GNU / Linux

Thursday, December 8th, 2011

The usual way one is supposed to check a certain partition's (let's say /dev/sda1) disk UUID (Universally Unique Identifier) label is through the command:
vol_id /dev/sda1

For some reason, however, Debian does not include the vol_id command. To check the UUID-assigned disk labels on Debian one should use another command called blkid (part of the util-linux deb package).

Run with no arguments, blkid will list the attributes of all block devices, so you don't specifically have to pass any partition as an argument (though you can; see the example at the end).
Here is an example output of blkid:

server:/root# blkid
/dev/sda1: UUID="cdb1836e-b7a2-4cc7-b666-8d2aa31b2da4" SEC_TYPE="ext2" TYPE="ext3"
/dev/sda5: UUID="c67d6d43-a48f-43ff-9d65-7c707a57dfe6" TYPE="swap"
/dev/sdb1: UUID="e324ec28-cf04-4e2e-8953-b6a8e6482425" TYPE="ext2"
/dev/sdb5: UUID="1DWe0F-Of9d-Sl1J-8pXW-PLpy-Wf9s-SsyZfQ" TYPE="LVM2_member"
/dev/mapper/computer-root: UUID="fbdfc19e-6ec8-4000-af8a-cde62926e395" TYPE="ext3"
/dev/mapper/computer-swap_1: UUID="e69100ab-9ef4-45df-a6aa-886a981e5f26" TYPE="swap"
/dev/mapper/computer-home: UUID="2fe446da-242d-4cca-8b2c-d23c76fa27ec" TYPE="ext3"
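
To query a single partition only, pass it as an argument; the output matches the corresponding line from the full listing above:

server:/root# blkid /dev/sda1
/dev/sda1: UUID="cdb1836e-b7a2-4cc7-b666-8d2aa31b2da4" SEC_TYPE="ext2" TYPE="ext3"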