Archive for the ‘System Administration’ Category

KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

kvm_virtualization-logo-redhat-8.3-install-howto-with-kickstart

If you haven't tried it yet, Red Hat, CentOS and other RPM-based Linux operating systems that use the anaconda installer generate a kickstart file under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg} immediately after the OS installation completes. Using this kickstart file as a template you can automate a Red Hat installation with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file into the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."


Kickstart files contain answers to all questions normally asked by the text / graphical installation program, such as what time zone you want the system to use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without the need for any intervention from the user. This is especially useful when deploying a Red Hat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and it is generally useful if you work in the field of so-called "DevOps" system administration and need to provision a certain OS to a multitude of physical servers, or to easily create and recreate virtual machines with a certain set of configuration.
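For example, a kickstart file can be fed to the installer straight from the boot prompt. Below is a minimal sketch, assuming the generated /root/original-ks.cfg has been copied to a web server you control (the URL is hypothetical):

# At the RHEL / CentOS installer boot menu press Tab (or 'e') and append to the kernel line:
inst.ks=http://192.168.0.5/ks/original-ks.cfg
# older anaconda versions also accept the shorter ks= form

Later in this article the same thing is done for a VM by injecting the file with virt-install's --initrd-inject and ks=file:/kickstart.cfg.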
 

1. Create /vmprivate storage directory where Virtual machines will reside

The first step on the Hypervisor host which will hold the future virtual machines is to create the storage location where they will reside:

[root@redhat ~]#  lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]#  mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view the situation with Logical Volumes and VG group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

 

  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time lpgblu01f.ffm.de.int.atosorigin.com, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB


Note that you'll need to have that size physically available on a SAS / SSD hard drive connected to the Hypervisor host.

To make the Virtual Machine storage directory permanently mounted, add it to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab
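A quick sanity check (just a sketch) to confirm the fstab entry is valid and the volume mounts cleanly before you continue:

[root@redhat ~]# mount -a
[root@redhat ~]# df -h /vmprivate
[root@redhat ~]# mount | grep -i vmprivate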

 

2. Install the following set of RPM packages on the Hypervisor hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd
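If lsmod prints nothing for kvm, the virtualization modules are not loaded (check that VT-x / AMD-V is enabled in the BIOS). A small verification sketch after enabling the service:

[root@redhat ~]# modprobe kvm_intel   # or kvm_amd on AMD CPUs
[root@redhat ~]# systemctl start libvirtd
[root@redhat ~]# systemctl status libvirtd
[root@redhat ~]# virsh nodeinfo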

4. Configure network bridging br0 interface on Hypervisor


In /etc/sysconfig/network-scripts/ifcfg-eth0 (or the respective interface file, e.g. ifcfg-eno3 as used below) you need to include:

NM_CONTROLLED=NO

Next use the nmcli Red Hat network configurator to create the bridge (you could use the ip command instead), but since nmcli is the Red Hat way to do it, let's do it their way:

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
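To verify the bridge came up as expected, a quick check sketch (interface names and IPs are the ones from the example above):

[root@redhat ~]# nmcli connection show
[root@redhat ~]# nmcli device status
[root@redhat ~]# ip addr show br0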

5. Prepare a working kickstart.cfg file for VM


Below is a sample kickstart file I've used to build a working, fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa).

#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda

# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS

# Use text mode install
text
#graphical

# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8

# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted

# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain

# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint

# skipx
skipx

# Firewall configuration
firewall --disabled

# System timezone
timezone Europe/Berlin

# Clear the Master Boot Record
##zerombr

# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos      --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos        --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream   --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS

# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"

# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug

# reboot automatically
reboot

###

%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end

%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improve overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf

# Disable kdump
systemctl disable kdump.service
%end

An important note to make here is the encrypted root password string in the (rootpw) line; in the example above it is a SHA-512 ($6$) crypt hash, and such a string can be generated with the openssl or mkpasswd commands:

Method 1: use openssl cmd to generate (md5, sha256, sha512) encrypted pass string

[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0

Note: passing -1 will generate an MD5 hash, -5 a SHA-256 and -6 a SHA-512 encrypted string (the latter logically recommended for better security).

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.
There is also a kickstart file generator web interface on Red Hat's site here, however I have never used it myself and instead use the above kickstart.cfg.
 

6. Install the new VM with virt-install cmd


To roll out the new preconfigured VM based on the above kickstart template file, use a one-liner command like the one below:
 

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation


For clarity and better automation, in case you plan to repeat the VM creation, you can prepare a tiny bash shell script:
 

#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VMNAME" –description "$VM_DESCR" –os-type=Linux –os-variant=rhel8.3 –ram=8192 –vcpus=8 –location="$ISO_LOCATION" –disk path=$VM_IMG_FILE,bus=virtio,size=$IMG_VM_SIZE –graphics none –initrd-inject=/root/$KS_FILE –extra-args "console=ttyS0 ks=file:/$KS_FILE"


A copy of virt-install.sh script can be downloaded here

Wait for the installation to finish; its progress will be displayed on the console, and if everything goes smoothly you should get a login prompt. Use the password generated with the openssl tool and test the login, then disconnect from the machine by pressing CTRL + ] and log back in via the console with:

[root@redhat ~]# virsh list --all
 Id   Name                     State
------------------------------------------
 2    RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


redhat8-login-prompt

One last thing: I recommend you check the official documentation on Kickstart2 from the CentOS official website.

In case you later need to stop and remove the VM, you can do it with:
 

[root@redhat ~]#  virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine
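If you want the backing disk image gone as well, instead of the plain undefine above virsh can also remove the associated storage, or you can delete the image manually; a sketch (double-check the image path before deleting anything):

[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine --remove-all-storage
# or remove the image file by hand
[root@redhat ~]#  rm -i /vmprivate/RHEL8_3-VirtualMachine.img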

Don't forget to celebrate the success and give this article some credit by sharing the tutorial with a friend or by placing a link to it from your blog 🙂

 

 

Enjoy !

Configure rsyslog buffering on Linux to avoid message loss to a Central Logging server

Wednesday, January 13th, 2021

rsyslog-Centralized-Logging-System-using-Rsyslog_logo

1. Rsyslog Buffering

One of the best practices in log management is to send syslog to a central server. However, a logging system should be capable of avoiding message loss in situations where the server is not reachable. To do so, unsent data needs to be buffered at the client while the central server is unavailable. You might have recently noticed that many servers forwarding log messages to a central server do not have buffering functionality activated. Thus I strongly advise you to have a look at this documentation to learn how to check your configuration: http://www.rsyslog.com/doc/rsyslog_reliable_forwarding.html

Rsyslog buffering with TCP/UDP configured

In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that "the action is not ready", which means the remote server is offline. This can be detected with plain TCP syslog and RELP, but not with UDP. So you need to use either of the two. In this howto, we use plain TCP syslog.

– Version requirement

Please note that we are using rsyslog-specific features. They are required on the client, but not on the server. So the client system must run rsyslog (at least version 3.12.0), while on the server another syslogd may be running, as long as it supports plain TCP syslog.

How To Setup rsyslog buffering on Linux

First, you need to create a working directory for rsyslog. This is where it stores its queue files (should the need arise). You may use any location on your local system. Next, you need to instruct rsyslog to use a disk queue and then configure your action. There is nothing else to do. With the following simple config file, you forward anything you receive to a remote server and have buffering applied automatically when it goes down. This must be done on the client machine.

# Example:
# $ModLoad imuxsock             # local message reception
# $WorkDirectory /rsyslog/work  # default location for work (spool) files
# $ActionQueueType LinkedList   # use asynchronous processing
# $ActionQueueFileName srvrfwd  # set file name, also enables disk mode
# $ActionResumeRetryCount -1    # infinite retries on insert failure
# $ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
# *.*       @@server:port
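Once the directives are in place (uncommented) in /etc/rsyslog.conf or in a file under /etc/rsyslog.d/, a minimal test sketch on the client, assuming the work directory and the @@server:port action from the example above:

# mkdir -p /rsyslog/work
# systemctl restart rsyslog
# logger -t buffertest "hello central syslog"   # should appear on the central server; if it is down, the message is queued under /rsyslog/work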

Short SSL cookbook commands cheat sheet: generate new and self-signed PEM certificates, view them, and convert to and from PKCS12 and Java key store

Tuesday, January 12th, 2021

OpenSSL-logo

Below is a short compilation of commonly used openssl commands (a kind of cookbook), helpful for sysadmins who have to deal with OpenSSL certificates regularly.

Let's say you have to generate a new certificate / key and PEM files, prepare self-signed certificates, show CSR / PEM or KEY SSL file contents, get information about a certificate such as its expiry date or encryption algorithm, sign a certificate with a self-signed authority, convert PEM to PKCS12, convert from PKCS12 file format to .PEM, convert an X509 certificate to a Java key store, or convert a Java key store certificate to PKCS12; then the commands below will be of use to you.

1. Generate Private RSA Key with 2048 bits

# openssl genrsa -out $(hostname -f).key 2048

2. Create CSR file

# openssl req -new -key $(hostname -f).key -out $(hostname -f).csr

3. Create a Self Certified Certificate:

# openssl x509 -req -days 30 -in $(hostname -f).csr -signkey $(hostname -f).key -out $(hostname -f).crt
Enter password:

# openssl rsa -in key.pem -out newkey.pem


4. Show CSR file content

# openssl req -in newcsr.csr -noout -text


5. Get Certificate version / serial number / signature algorithm / RSA key length / modulus / exponent etc.

# openssl x509 -in newcert.pem -noout -text


6. Sign a server certificate as a self-signed CA

# openssl ca -in newcert.csr -notext -out newcert.pem


7. Generate a certificate signing request based on an existing certificate

# openssl x509 -x509toreq -in certificate.crt -out CSR.csr -signkey privateKey.key


8. Convert .pem / .key / .crt file format to pkcs12 format
 

# openssl pkcs12 -export -in newcert.pem -inkey newkey.key -certfile ca.crt -out newcert.p12


9. Convert pkcs12 pfx to common .pem

# openssl pkcs12 -in mycert.pfx -out mycert.pem


10. Convert a DER format certificate (.cer) to PEM (.crt)

# openssl x509 -inform DER -in certificate.cer -out certificate.crt


11. Convert a PKCS#7 certificate into PEM format

# openssl pkcs7 -in cert.p7c -inform DER -outform PEM -out certificate.p7b
# openssl pkcs7 -print_certs -in certificate.p7b -out certificate.pem


12. Convert X509 to java keystore file

# java -cp not-yet-commons-ssl-0.3.11.jar org.apache.commons.ssl.KeyStoreBuilder pass_for_new_keystore key.key certificate.crt

13. Convert java keystore file to pkcs12

# keytool -importkeystore -srckeystore keystore.jks -destkeystore intermediate.p12 -deststoretype PKCS12
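To double-check that the conversions above went fine, both keytool and openssl can list the resulting PKCS12 content (a quick verification sketch using the file names from the examples):

# keytool -list -v -keystore intermediate.p12 -storetype PKCS12
# openssl pkcs12 -info -in intermediate.p12 -nokeys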

Listing installed RPMs by vendor on CentOS / RedHat Linux

Friday, January 8th, 2021

Listing installed RPMs by vendor installed on CentOS / RedHat Linux

Listing installed RPMs by vendor is useful sysadmin stuff if you have third-party software installed that is not part of the official CentOS / RedHat Linux repositories and you want to list only those packages. Here is how this is done:

 

[root@redhat ~]# rpm -qa --qf '%{NAME} %{VENDOR} %{PACKAGER} \n' | grep -v 'CentOS' | sort

criu Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
gskcrypt64 IBM IBM
gskssl64 IBM IBM
ipxe-roms-qemu Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libevent (none) (none)
libguestfs-appliance Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libguestfs-tools-c Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libguestfs Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlcommon Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlsdk-python Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlsdk Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlxmlmodel Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libtcmu Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvcmmd Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-client Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-config-nwfilter Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-interface Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-network Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-nodedev Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-nwfilter Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-qemu Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-storage-core Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-storage Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-kvm Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-libs Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-python Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvzctl Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvzevent Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
openvz-logos Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
p7zip-plugins Fedora Project Fedora Project
ploop-lib Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
ploop Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
prlctl Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
prl-disk-tool Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
prl-disp-service Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
python2-lockfile Fedora Project Fedora Project
python2-psutil Fedora Project Fedora Project
python-daemon Fedora Project Fedora Project
python-subprocess32 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qemu-img-vz Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qemu-kvm-common-vz Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qemu-kvm-vz Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qt Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
rkhunter Fedora Project Fedora Project
seabios-bin Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
seavgabios-bin Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
spfs Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
TIVsm-API64 IBM (none)
TIVsm-APIcit IBM (none)
TIVsm-BAcit IBM (none)
TIVsm-BA IBM (none)
vcmmd Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vmauth Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vzctl Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vzkernel Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vzkernel Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt_checker Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt_checker Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt-lib Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
zabbix-agent (none) (none)

 


That instructs rpm to output each package's name, vendor and packager, and then we exclude the distribution's own packages by grepping out 'CentOS' (the exact string CentOS conveniently puts in the "vendor" field of all the RPMs it packages; on a genuine RHEL system you would grep out 'Red Hat, Inc.' instead).

By default, rpm -qa uses the format '%{NAME}-%{VERSION}-%{RELEASE}', and it's nice to see version and release, and on 64-bit systems, it's also nice to see the architecture since both 32- and 64-bit packages are often installed. Here's how I did that:

[root@redhat ~]# rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH} %{VENDOR} %{PACKAGER} \n' | grep -v 'CentOS' | sort

criu-3.10.0.23-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
gskcrypt64-8.0-55.17.x86_64 IBM IBM
gskssl64-8.0-55.17.x86_64 IBM IBM
ipxe-roms-qemu-20170123-1.git4e85b27.1.vz7.5.noarch Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libevent-2.0.22-1.rhel7.x86_64 (none) (none)
libguestfs-1.36.10-6.2.vz7.12.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libguestfs-appliance-1.36.10-6.2.vz7.12.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libguestfs-tools-c-1.36.10-6.2.vz7.12.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlcommon-7.0.162-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlsdk-7.0.226-2.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlsdk-python-7.0.226-2.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libprlxmlmodel-7.0.80-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libtcmu-1.2.0-16.2.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvcmmd-7.0.22-3.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-client-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-config-nwfilter-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-interface-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-network-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-nodedev-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-nwfilter-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-qemu-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-storage-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-driver-storage-core-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-daemon-kvm-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-libs-3.9.0-14.vz7.38.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvirt-python-3.9.0-1.vz7.1.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvzctl-7.0.506-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
libvzevent-7.0.7-5.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
openvz-logos-70.0.13-1.vz7.noarch Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
p7zip-plugins-16.02-10.el7.x86_64 Fedora Project Fedora Project
ploop-7.0.137-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
ploop-lib-7.0.137-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
prlctl-7.0.164-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
prl-disk-tool-7.0.43-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
prl-disp-service-7.0.925-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
python2-lockfile-0.11.0-17.el7.noarch Fedora Project Fedora Project
python2-psutil-5.6.7-1.el7.x86_64 Fedora Project Fedora Project
python-daemon-1.6-4.el7.noarch Fedora Project Fedora Project
python-subprocess32-3.2.7-1.vz7.5.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qemu-img-vz-2.10.0-21.7.vz7.67.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qemu-kvm-common-vz-2.10.0-21.7.vz7.67.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qemu-kvm-vz-2.10.0-21.7.vz7.67.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
qt-4.8.7-2.vz7.2.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
rkhunter-1.4.6-2.el7.noarch Fedora Project Fedora Project
seabios-bin-1.10.2-3.1.vz7.3.noarch Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
seavgabios-bin-1.10.2-3.1.vz7.3.noarch Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
spfs-0.09.0010-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
TIVsm-API64-8.1.11-0.x86_64 IBM (none)
TIVsm-APIcit-8.1.11-0.x86_64 IBM (none)
TIVsm-BA-8.1.11-0.x86_64 IBM (none)
TIVsm-BAcit-8.1.11-0.x86_64 IBM (none)
vcmmd-7.0.160-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vmauth-7.0.10-2.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vzctl-7.0.194-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vzkernel-3.10.0-862.11.6.vz7.64.7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vzkernel-3.10.0-862.20.2.vz7.73.29.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt-7.0.63-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt_checker-7.0.2-1.vz7.i686 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt_checker-7.0.2-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
vztt-lib-7.0.63-1.vz7.x86_64 Virtuozzo Virtuozzo (http://www.virtuozzo.com/support/)
zabbix-agent-3.2.11-1.el7.x86_64 (none) (none)
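If you only need a quick per-vendor summary instead of the full package list, a simple aggregation sketch:

[root@redhat ~]# rpm -qa --qf '%{VENDOR}\n' | sort | uniq -c | sort -rn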

How to check if a Linux server power supply state is okay / How to find out if a Linux Power Supply is broken

Wednesday, January 6th, 2021

2U-power-supplies-get-status-if-Power-supply-broken-information-linux-ipmitool

If you're a sysadmin managing Linux servers remotely, every now and then when a machine hangs without a reason it is useful to check the server Power Supply state. I say that because often a machine hangs mysteriously and a standard Root Cause Analysis (RCA) on /var/log/messages, /var/log/dmesg, /var/log/boot etc. does not bring you to any conclusion. The next step, after you send a technician to reboot the machine, is to check on the Linux OS level whether the Power Supply Unit (PSU) hardware on the machine has some issues.
As blogged earlier on how to use ipmitool to manage remote iLO boards etc., ipmitool can also be used to check the status of server PSUs.

Below is example output from a server with 2 PSUs whose Power Supplies are functioning normally:
 

[root@linux-server ~]# ipmitool sdr type "Power Supply"

PS Heavy Load    | 2Bh | ok  | 19.1 | State Deasserted
Power Supply 1   | 70h | ok  | 10.1 | Presence detected
Power Supply 2   | 71h | ok  | 10.2 | Presence detected
PS Configuration | 72h | ok  | 19.1 |
PS 1 Therm Fault | 75h | ok  | 10.1 | Transition to OK
PS 2 Therm Fault | 76h | ok  | 10.2 | Transition to OK
PS1 12V OV Fault | 77h | ok  | 10.1 | Transition to OK
PS2 12V OV Fault | 78h | ok  | 10.2 | Transition to OK
PS1 12V UV Fault | 79h | ok  | 10.1 | Transition to OK
PS2 12V UV Fault | 7Ah | ok  | 10.2 | Transition to OK
PS1 12V OC Fault | 7Bh | ok  | 10.1 | Transition to OK
PS2 12V OC Fault | 7Ch | ok  | 10.2 | Transition to OK
PS1 12Vaux Fault | 7Dh | ok  | 10.1 | Transition to OK
PS2 12Vaux Fault | 7Eh | ok  | 10.2 | Transition to OK
Power Unit       | 7Fh | ok  | 19.1 | Fully Redundant

Now if you have a server, let's say an old ProLiant DL360e Gen8, whose Power Supply is damaged, you will get output from ipmitool similar to:

[root@linux-server  systemd]# ipmitool sdr type "Power Supply"
Power Supply 1   | 30h | ok  | 10.1 | 100 Watts, Presence detected
Power Supply 2   | 31h | ok  | 10.2 | 0 Watts, Presence detected, Failure detected, Power Supply AC lost
Power Supplies   | 33h | ok  | 10.3 | Redundancy Lost

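To catch such a failed PSU without reading the whole sensor table (for example from a cron job or a monitoring user parameter), a crude grep sketch over the same ipmitool output:

[root@linux-server ~]# ipmitool sdr type "Power Supply" | grep -Ei 'fail|lost|critical' && echo "PSU problem detected"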

If you don't have ipmitool installed due to security restrictions or whatever, but you have the hardware detection tool dmidecode, you can use it too to get the Power Supply state:

[root@linux-server  systemd]# dmidecode -t chassis
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.

 

Handle 0x0300, DMI type 3, 21 bytes
Chassis Information
        Manufacturer: HP
        Type: Rack Mount Chassis
        Lock: Not Present
        Version: Not Specified
        Serial Number: CZJ38201ZH
        Asset Tag:
        Boot-up State: Critical
        Power Supply State: Critical

        Thermal State: Safe
        Security Status: Unknown
        OEM Information: 0x00000000
        Height: 1 U
        Number Of Power Cords: 2
        Contained Elements: 0

To show only the Power Supply information on a server with dmidecode:

# dmidecode --type 39

monitoring-power-supply-hardware-information-linux-ipmitool

Plug between the power supply and the mainboard voltage / coms ATX specification

This can also be done on a normal Linux desktop PC, which usually has only one power supply; on many Ubuntu and other Linux desktops where lshw (list hardware information) is installed, you can get the machine's power status with lshw:

 root@ubuntu:~# lshw -c power
  *-battery               
       product: 45N1111
       vendor: SONY
       physical id: 1
       slot: Front
       capacity: 23200mWh
       configuration: voltage=11.1V


Finally, to get extensive information on the voltages of the Power Supply, you can use the good old lm_sensors:

# apt-get install lm-sensors
# sensors-detect 
# service kmod start

# sensors
# watch sensors


As manually monitoring Power Supplies and various other data is tedious, you might finally want to use some centralized monitoring. For one example of that, you might want to check my prior article on using Zabbix to monitor hardware hard drive temperature and disks with lm_sensors / smartd on Linux.

Hack: Using ssh / curl or wget to test TCP port connection state to remote SSH, DNS, SMTP, MySQL or any other listening service in PCI environment servers

Wednesday, December 30th, 2020

using-curl-ssh-wget-to-test-tcp-port-opened-or-closed-for-web-mysql-smtp-or-any-other-linstener-in-pci-linux-logo

If you work on PCI high-security environment servers in isolated local networks, where each package installed on the Linux / Unix system is of importance, it is pretty common that some basic tools are missing; in most cases it is even considered a security hole to have a simple telnet client installed on the system. I have experience with such environments myself, and it can be pretty daunting, so in the best case you can use something like a simple ssh client, if you're lucky and the CentOS / Redhat / SUSE or whatever distro has the openssh-client package installed.
If you're lucky enough to have ssh on board, you can use it in the same manner as telnet, netcat or the swiss army knife network mapper tool (nmap) to test whether a remote TCP service / port is open or not. This is often useful if you don't have access to the Cisco / Juniper or other network / firewall equipment that sets the boundaries and security port restrictions between networks and servers.

Below is an example of how to use the ssh client to test port connectivity to, let's say, the Internet, i.e. the Google / Yahoo search engines.
 

[root@pciserver: /home ]# ssh -oConnectTimeout=3 -v google.com -p 23
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 23.
debug1: connect to address 172.217.169.206 port 23: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 23.
debug1: connect to address 2a00:1450:4017:80b::200e port 23: Cannot assign requested address
ssh: connect to host google.com port 23: Cannot assign requested address
root@pcfreak:/var/www/images# ssh -oConnectTimeout=3 -v google.com -p 80
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:807::200e] port 80.
debug1: connect to address 2a00:1450:4017:807::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80
ssh_exchange_identification: Connection closed by remote host
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=3
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: connect to address 172.217.169.206 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80b::200e] port 80.
debug1: connect to address 2a00:1450:4017:80b::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [142.250.184.142] port 80.
debug1: connect to address 142.250.184.142 port 80: Connection timed out
debug1: Connecting to google.com [2a00:1450:4017:80c::200e] port 80.
debug1: connect to address 2a00:1450:4017:80c::200e port 80: Cannot assign requested address
ssh: connect to host google.com port 80: Cannot assign requested address
root@pcfreak:/var/www/images# ssh google.com -p 80 -v
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to google.com [172.217.169.206] port 80.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type 0
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: ssh_exchange_identification: HTTP/1.0 400 Bad Request

 


debug1: ssh_exchange_identification: Content-Type: text/html; charset=UTF-8


debug1: ssh_exchange_identification: Referrer-Policy: no-referrer


debug1: ssh_exchange_identification: Content-Length: 1555


debug1: ssh_exchange_identification: Date: Wed, 30 Dec 2020 14:13:25 GMT


debug1: ssh_exchange_identification:


debug1: ssh_exchange_identification: <!DOCTYPE html>

debug1: ssh_exchange_identification: <html lang=en>

debug1: ssh_exchange_identification:   <meta charset=utf-8>

debug1: ssh_exchange_identification:   <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">

debug1: ssh_exchange_identification:   <title>Error 400 (Bad Request)!!1</title>

debug1: ssh_exchange_identification:   <style>

debug1: ssh_exchange_identification:     *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 10
debug1: ssh_exchange_identification: 0% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.g
debug1: ssh_exchange_identification: oogle.com/images/branding/googlelogo/1x/googlelogo_color_150x54dp.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) no-repeat 0
debug1: ssh_exchange_identification: % 0%/100% 100%;-moz-border-image:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_color_150x54dp.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/branding/googlelogo/2x/googlelogo_
debug1: ssh_exchange_identification: color_150x54dp.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px}

debug1: ssh_exchange_identification:   </style>

debug1: ssh_exchange_identification:   <a href=//www.google.com/><span id=logo aria-label=Google></span></a>

debug1: ssh_exchange_identification:   <p><b>400.</b> <ins>That\342\200\231s an error.</ins>

debug1: ssh_exchange_identification:   <p>Your client has issued a malformed or illegal request.  <ins>That\342\200\231s all we know.</ins>

ssh_exchange_identification: Connection closed by remote host

 

Here is another example on how to test remote host whether a certain service such as DNS (bind) or telnetd is enabled and listening on remote local network  IP with ssh

[root@pciserver: /home ]# ssh 192.168.1.200 -p 53 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 53.
debug1: connect to address 192.168.1.200 port 53: Connection timed out
ssh: connect to host 192.168.1.200 port 53: Connection timed out

[root@server: /home ]# ssh 192.168.1.200 -p 23 -v -oConnectTimeout=5
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1g  21 Apr 2020
debug1: Connecting to 192.168.1.200 [192.168.1.200] port 23.
debug1: connect to address 192.168.1.200 port 23: Connection timed out
ssh: connect to host 192.168.1.200 port 23: Connection timed out


But what if the Linux server you have to work on is so paranoid that even the ssh client is absent? Well, you can use anything else that is capable of making a connection to a remote port, such as wget or curl. Web servers or application servers usually have wget or curl installed, as they are an integral part of local shell scripts doing various operations needed for the services to function properly, or simply for testing local or remote listener services. If that's the case, we can use curl to connect and get the output of a remote service, simulating a normal telnet connection like this:

host:~# curl -vv 'telnet://remote-server-host5:22'
* About to connect() to remote-server-host5 port 22 (#0)
*   Trying 10.52.67.21… connected
* Connected to aflpvz625 (10.52.67.21) port 22 (#0)
SSH-2.0-OpenSSH_5.3

Now let's test whether we can connect remotely to a local-network IP's Qmail mail server with curl's telnet simulation mode:

host:~#  curl -vv 'telnet://192.168.0.200:25'
* Expire in 0 ms for 6 (transfer 0x56066e5ab900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x56066e5ab900)
* Connected to 192.168.0.200 (192.168.0.200) port 25 (#0)
220 This is Mail Pc-Freak.NET ESMTP

Fine, it works. Let's now test whether a remote server that has a MySQL listener service on the standard MySQL TCP port 3306 is reachable with curl:

host:~#  curl -vv 'telnet://192.168.0.200:3306'
* Expire in 0 ms for 6 (transfer 0x5601fafae900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5601fafae900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Warning: <FILE>" to save to a file.
* Failed writing body (0 != 107)
* Closing connection 0

As you can see, the remote connection returns binary data which cannot be displayed on a standard terminal, thus to see the received output we need to pass curl the suggested --output argument:

host:~#  curl -vv 'telnet://192.168.0.200:3306' --output -
* Expire in 0 ms for 6 (transfer 0x55b205c02900)
*   Trying 192.168.0.200…
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x55b205c02900)
* Connected to 192.168.0.200 (192.168.0.200) port 3306 (#0)
g


The same curl trick can be used to troubleshoot a remote port on a remote host from a Windows OS host, which does not have telnet installed by default but may have curl instead.

Also, when troubleshooting vSphere Replication, it is often necessary to troubleshoot port connectivity while common Windows utilities are not available; curl, however, is available in the VMware vCenter Server Appliance command line interface.

On servers where curl is not there but wget is installed, you can use wget as well to test a remote port:

 

# wget -vv -O /dev/null http://google.com:554 --timeout=5
--2020-12-30 16:54:22--  http://google.com:554/
Resolving google.com (google.com)… 172.217.169.206, 2a00:1450:4017:80b::200e
Connecting to google.com (google.com)|172.217.169.206|:554… failed: Connection timed out.
Connecting to google.com (google.com)|2a00:1450:4017:80b::200e|:554… failed: Cannot assign requested address.
Retrying.

--2020-12-30 16:54:28--  (try: 2)  http://google.com:554/
Connecting to google.com (google.com)|172.217.169.206|:554… ^C

As evident from the output, port 554 is filtered on Google's side, which is pretty normal.

If curl or wget is not there either, as a final alternative you can use a small perl, ruby, python or bash script that opens a remote socket to the remote IP.
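For example, bash itself can open TCP sockets through its /dev/tcp pseudo-device, so even without any extra tools a port can be probed; a sketch (bash-only, will not work in plain POSIX sh):

host:~# timeout 3 bash -c 'cat < /dev/null > /dev/tcp/192.168.0.200/25' && echo "port 25 open" || echo "port 25 closed / filtered"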

How to set up dsmc client Tivoli ( TSM ) release version and process check monitoring with Zabbix

Thursday, December 17th, 2020

zabbix-monitor-dsmc-client-monitor-ibm-tsm-with-zabbix-howto

As part of monitoring IBM Spectrum Protect (the new name of IBM TSM), if you don't have the money to buy something like HP OpenView or another kind of paid monitoring system, but instead use the Zabbix open source solution as your main services and servers monitoring platform for your Linux infrastructure, you will want to monitor at least whether the Tivoli dsmc backup client runs fine on each server, i.e. whether its common /usr/bin/dsmc process that connects towards the remote IBM TSM server (where the actual data storage is kept) is running normally.

At first glimpse it might seem a weird kind of monitoring to have the TSM version frequently reported to a Zabbix server, but in reality it is quite useful, especially if you want a better overview of the actual IBM Spectrum Protect (Storage Manager) release across your multiple-server environment.
 
So the goal is to get the dsmc interactive storage manager version, as reported by:
 

[root@server ~]# dsmc

IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 11.0
  Client date/time: 12/17/2020 15:59:32
(c) Copyright by IBM Corporation and other(s) 1990, 2020. All Rights Reserved.

Node Name: P01VWEBTEVI1.FFM.DE.INT.ATOSORIGIN.COM
Session established with server TSM_SERVER: AIX
  Server Version 8, Release 1, Level 10.000
  Server date/time: 12/17/2020 15:59:34  Last access: 12/17/2020 13:28:01

 

reported into Zabbix, and to set up reports in case your sysadmins have changed the IBM TSM version to a newer one. Thus, for non-sysadmins and less technical persons such as Service Delivery Managers (SDMs), it is much easier to track Tivoli version changes across multiple servers.

Enough talk, let me next show you how to set up the required monitoring with a small UserParameter one-liner bash shell script.
 

1. Create TSM Userparameter script


With Userparameter key and content as below:

[root@server ~]# vim /etc/zabbix/zabbix_agentd.d/userparameter_TSM.conf

 

UserParameter=dsmc.version,cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13}'


The script output of TivSM version will be reported as so:

[root@server ~]# cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9 " " $10 " " $11 " " $12 " " $13}'
Client Version 8, Release 1, Level 11.0


 

If you want to get only a major version report from dsmc:

UserParameter=dsmc.version,cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9}'


The output as a major version you will get is

[root@server ~]# cat /var/tsm/sched.log | grep Clie | tail -n 1 | awk '{print $7 " " $8 " " $9}'
Client Version 8,

 

2. Restart the zabbix agent to load userparam script

To load the above configured UserParameter script we need to restart the zabbix-agent client:

[root@server ~]# systemctl restart zabbix-agent

[root@server ~]#  systemctl status zabbix-agent
● zabbix-agent.service – Zabbix Agent
   Loaded: loaded (/usr/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-07-22 16:17:17 CEST; 4 months 26 days ago
 Main PID: 7817 (zabbix_agentd)
   CGroup: /system.slice/zabbix-agent.service
           ├─7817 /usr/sbin/zabbix_agentd -c /etc/zabbix/zabbix_agentd.conf
           ├─7818 /usr/sbin/zabbix_agentd: collector [idle 1 sec]
           ├─7819 /usr/sbin/zabbix_agentd: listener #1 [waiting for connection]
           ├─7820 /usr/sbin/zabbix_agentd: listener #2 [waiting for connection]
           ├─7821 /usr/sbin/zabbix_agentd: listener #3 [waiting for connection]
           └─7822 /usr/sbin/zabbix_agentd: active checks #1 [idle 1 sec]

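Before building the template it is worth checking the new key straight from the Zabbix server or proxy with the zabbix_get utility (assuming it is installed there; the IP below is a placeholder for your monitored host):

[root@zabbix-server ~]# zabbix_get -s 192.168.1.50 -k dsmc.version
# should return the same "Client Version 8, Release 1, Level 11.0" string as the local test above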
 

3. Create template for TSM Service check and TSM Version


You will need to create 1 Trigger and 2 Items for the Service check and for TSM version reporting

tsm-service-version-screenshot-zabbix
As you see, the necessary names / keys to create are:

Name / Key: TSM - Service State   proc.num[dsmcad]

Name / Key: TSM version   dsmc.version

 

3.1 Create the trigger


Now lets create the trigger that will report the Service State

tsm-service-state-zabbix-screenshot

 

{Linux TSM:proc.num[dsmcad].last()}=0

 

3.2 Create the Items


zabbix-dsmc-proc-num-item-setting-screenshot-linux

 

Name: dsmcad
Key: proc.num[dsmcad]

 

tsm-version-item-zabbix-screenshot
 

Update interval: 1d
History Storage period: 90d
Applications: TSM


3.3 Create Zabbix Action

As usual, if you want to receive some email alerting or, let's say, send an SMS in case the trigger is matched, create the necessary Action with
instructions on how to solve the problem, if there is a Standard Operating Procedure (SOP), as it is often called in the corporate world, for that.

That's all folks ! 🙂

 

How to Configure Nginx as a Reverse Proxy Load Balancer on Debian, CentOS, RHEL Linux

Monday, December 14th, 2020

set-up-nginx-reverse-proxy-howto-linux-logo

What is reverse Proxy?

A Reverse Proxy (RP) is a proxy server which routes all incoming traffic to a secondary webserver situated behind the reverse proxy host.

All replies from the secondary webserver (which is not visible from the internet) then get routed back through the reverse proxy service. The result is that it looks like all incoming and outgoing HTTP requests are served by the reverse proxy host, while in reality the reverse proxy host just redirects traffic. The problem with reverse proxies is that they add one more point of failure; the good side is that they can protect the server located behind them and route only certain traffic to your webserver, shielding it from crackers' malicious HTTP requests.

The reverse proxy accepts all traffic and forwards it to a specific resource, like a server or container. Earlier I've blogged on how to create an Apache reverse proxy to Tomcat.
Having a reverse proxy with Apache is a classic scenario, however NGINX is slowly taking the lead and overtaking Apache thanks to its ease of configuration, high reliability and lower resource consumption.


One of the most common uses of a Reverse Proxy is as a software Load Balancer for traffic towards another webserver or directly to a backend server. Using an RP to mitigate DDoS attacks from hacked computer botnets (coming from a network range) is also a very common anti-DDoS protection measure.
With the bloom of VM and containerization technology such as Docker, and the trend of switching services towards micro-services, a reverse proxy is often used to seamlessly redirect incoming request traffic to multiple separate OS / Docker running containers etc.


Some of the other security pros of using a Reverse proxy that could be pointed are:

  • Restrict access to locations that may be obvious targets for brute-force attacks, reducing the effectiveness of DDOS attacks by limiting the number of connections and the download rate per IP address. 
  • Cache pre-rendered versions of popular pages to speed up page load times.
  • Interfere with other unwanted traffic when needed.

 


what-is-reverse-proxy-explained-proxying-tubes

 

1. Install nginx webserver


Assuming you have Debian at hand as the OS which will be used for the Nginx RP host, let's install nginx.
 

[hipo@proxy ~]$ sudo su - root

[root@proxy ~]#  apt update

[root@proxy ~]# apt install -y nginx


Fedora / CentOS / Redhat Linux could use yum or dnf to install NGINX
 

[root@proxy ~]# dnf install -y nginx
[root@proxy ~]# yum install -y nginx

 

2. Launch nginx for a first time and test


Start nginx once to test that the default configuration is reachable:
 

systemctl enable --now nginx


To test that nginx works out of the box right after install, open a browser and go to http://localhost if you have X, or use a text-based browser such as lynx or a web console fetcher such as curl to verify that the web server is running as expected.

nginx-test-default-page-centos-linux-screenshot
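A quick check from the console with curl (sketch):

[root@proxy ~]# curl -I http://localhost
# an "HTTP/1.1 200 OK" header and the default nginx welcome page mean the web server is up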
 

3. Create Reverse proxy configuration file

Remove default Nginx package provided configuration

As of 2020, by default nginx loads its configuration from /etc/nginx/sites-enabled/default on DEB-based Linuxes and from /etc/nginx/nginx.conf on RPM-based ones; therefore, to set up our desired configuration and stop the default domain config from being loaded, we have to unlink it on Debian:

[root@proxy ~]# unlink /etc/nginx/sites-enabled/default

or move out the original nginx.conf on Fedora / CentOS / RHEL:
 

[root@proxy ~]# mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf-distro

 

Let's take a scenario where you have a local IP address that's NAT-ted or DMZ-ed, and you want to run nginx as a reverse proxy to accelerate traffic and forward all of it to another webserver such as Lighttpd / Apache, or towards a Java application server such as JBoss / Tomcat etc., that listens on the same host on, let's say, port 8000 and serves the application under /application/.

To do so, prepare a new /etc/nginx/nginx.conf to look like so:
 

[root@proxy ~]# mv /etc/nginx/nginx.conf /etc/nginx.conf.bak
[root@proxy ~]# vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 10240;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

 

events {
#       worker_connections 768;
        worker_connections 4096;
        multi_accept on;
        # multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        #keepalive_timeout 65;
        keepalive_requests 1024;
        client_header_timeout 30;
        client_body_timeout 30;
        keepalive_timeout 30;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/domain.com/access.log;
        error_log /var/log/nginx/domain.com/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

 include /etc/nginx/default.d/*.conf;

upstream myapp {
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
# Uncomment and use Load balancing with external FQDNs if needed
#  server srv1.example.com;
#   server srv2.example.com;
#   server srv3.example.com;

}

#mail {
#       # See sample authentication script at:
#       # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#       # auth_http localhost/auth.php;
#       # pop3_capabilities "TOP" "USER";
#       # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#       server {
#               listen     localhost:110;
#               protocol   pop3;
#               proxy      on;
#       }
#
#       server {
#               listen     localhost:143;
#               protocol   imap;
#               proxy      on;
#       }
#}

 

In the example above, there are 4 instances of the same application running on the local IP on different ports (8000-8003). To illustrate that load balancing with Fully Qualified Domain Names (FQDNs) is also supported, you can see the 3 commented application instances srv1-srv3.
When the load balancing method is not specifically configured, it defaults to round-robin. All requests are proxied to the server group myapp, and nginx applies HTTP load balancing to distribute the requests. The reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, memcached, and gRPC.
To configure load balancing for HTTPS instead of HTTP, just use "https" as the protocol.


To download the above nginx.conf, configured for high-traffic servers and supporting Nginx virtualhosts, click here.

Now let's prepare the reverse proxy nginx configuration in a separate file; everything under /etc/nginx/sites-enabled/ and all files with a .conf extension under /etc/nginx/default.d/ are read by nginx as instructed in the /etc/nginx/nginx.conf above.

We'll need to prepare a sample nginx reverse proxy virtual host configuration:

[root@proxy ~]# vim /etc/nginx/sites-available/reverse-proxy.conf

server {

        listen 80;
        listen [::]:80;

        server_name domain.com www.domain.com;

        #index       index.php;
        # fallback for index.php usually this one is never used
        root        /var/www/domain.com/public;

        #location / {
        #    try_files $uri $uri/ /index.php?$query_string;
        #}

        location / {
                proxy_pass http://127.0.0.1:8080;
        }

        location /application {
                proxy_pass http://domain.com/application/;

                proxy_http_version                 1.1;
                proxy_cache_bypass                 $http_upgrade;

                # Proxy headers
                proxy_set_header Upgrade           $http_upgrade;
                proxy_set_header Connection        "upgrade";
                proxy_set_header Host              $host;
                proxy_set_header X-Real-IP         $remote_addr;
                proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Proto $scheme;
                proxy_set_header X-Forwarded-Host  $host;
                proxy_set_header X-Forwarded-Port  $server_port;

                # Proxy timeouts
                proxy_connect_timeout              60s;
                proxy_send_timeout                 60s;
                proxy_read_timeout                 60s;

                access_log /var/log/nginx/reverse-access.log;
                error_log /var/log/nginx/reverse-error.log;
        }

        ##listen 443 ssl;
        ##    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
        ##    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
        ##    include /etc/letsencrypt/options-ssl-nginx.conf;
        ##    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

}

You can get the above reverse-proxy.conf from here.

As you can see, the config makes all incoming traffic towards the root ( / ) location of domain http://domain.com on port 80 on the Nginx webserver be passed on to the backend listening on http://127.0.0.1:8080:

        location / {
                proxy_pass http://127.0.0.1:8080;
        }


The second location block configures requests for domain.com/application to be reverse proxied onward to the backend webserver's /application/ location, adding the usual proxy headers and timeouts.

 

location /application {
proxy_pass http://domain.com/application/ ;

proxy_http_version                 1.1;
proxy_cache_bypass                 $http_upgrade;

# Proxy headers
proxy_set_header Upgrade           $http_upgrade;
proxy_set_header Connection        "upgrade";
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host  $host;
proxy_set_header X-Forwarded-Port  $server_port;

# Proxy timeouts
proxy_connect_timeout              60s;
proxy_send_timeout                 60s;
proxy_read_timeout                 60s;

        access_log /var/log/nginx/reverse-access.log;
        error_log /var/log/nginx/reverse-error.log;

}

– Enable new configuration to be active in NGINX

 

[root@proxy ~]# ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf

 

4. Test reverse proxy nginx config for syntax errors

 

[root@proxy ~]# nginx -t

 

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Test connectivity to the listening external IP address.
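
A quick way to do that is with curl, first locally on the proxy host and then against the external IP with the proper Host header (the 192.168.0.50 address below is just a placeholder for your server's external IP):

[root@proxy ~]# curl -I http://127.0.0.1/
[root@proxy ~]# curl -I -H 'Host: domain.com' http://192.168.0.50/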

 

5. Enable nginx SSL letsencrypt certificates support

 

[root@proxy ~]# apt-get update
[root@proxy ~]# apt-get install software-properties-common

[root@proxy ~]# apt-get update
[root@proxy ~]# apt-get install python-certbot-nginx

 

6. Generate NGINX Letsencrypt certificates

 

[root@proxy ~]# certbot --nginx

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: your.domain.com
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for your.domain.com
Waiting for verification…
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/reverse-proxy.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: No redirect – Make no further changes to the webserver configuration.
2: Redirect – Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/reverse-proxy.conf

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Congratulations! You have successfully enabled https://your.domain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=your.domain.com
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
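
Let's Encrypt certificates are valid for 90 days only, so it is worth verifying that the automatic renewal the certbot package normally sets up (a cron job or systemd timer) will actually work, by doing a manual dry run:

[root@proxy ~]# certbot renew --dry-run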

 

7. Set NGINX Reverse Proxy to auto-start on Linux server boot

On most modern Linux distros this is done with systemctl. For legacy machines it depends on the Linux distribution: on Debian based distros use the adequate runlevel symlinks under /etc/rc3.d/, while on Fedora / CentOS / RHEL and other RPM based ones the RedHat chkconfig command does the job.

 

[root@proxy ~]# systemctl start nginx
[root@proxy ~]# systemctl enable nginx
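
For reference, on older pre-systemd machines the equivalent of the above would be something along these lines (assuming an nginx SysV init script is installed):

# Debian / Ubuntu legacy init
[root@proxy ~]# update-rc.d nginx defaults
# RHEL / CentOS / Fedora legacy init
[root@proxy ~]# chkconfig --add nginx
[root@proxy ~]# chkconfig nginx on
[root@proxy ~]# service nginx start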

 

8. Fixing weird connection permission denied errors


If you get weird permission denied errors right after you have configured the proxy_pass on Nginx and you're wondering what is causing them, you have to know that by default on CentOS 7 and RHEL you'll get these errors due to the automatically enabled SELinux OS security hardening.

If this is the case, after you set up Nginx + HTTPD (or whatever application server you use) you will get errors in /var/log/nginx/error.log like:

2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:02 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
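
A quick way to confirm SELinux really is what blocks the upstream connections is to check its current mode and look for the matching denials in the audit log:

[root@proxy ~]# getenforce
[root@proxy ~]# grep denied /var/log/audit/audit.log | grep nginx | tail -5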


The solution to these weird proxy_pass permission denied errors is not to disable SELinux altogether, but to allow the webserver processes to make network connections by toggling the httpd_can_network_connect SELinux boolean:

[root@proxy ~]# setsebool -P httpd_can_network_connect 1

If denials keep showing up in the audit log, you can additionally generate and load custom SELinux policy modules for nginx and httpd:

[root@proxy ~]# cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M mynginx
[root@proxy ~]# semodule -i mynginx.pp

 

[root@proxy ~]# cat /var/log/audit/audit.log | grep httpd | grep denied | audit2allow -M myhttpd
[root@proxy ~]# semodule -i myhttpd.pp


Then, to let nginx and httpd (or whatever other app you run) become aware of the new settings, restart both:

[root@proxy ~]# systemctl restart nginx
[root@proxy ~]# systemctl restart httpd

Add Zabbix time synchronization ntp userparameter check script to Monitor Linux servers

Tuesday, December 8th, 2020

Zabbix-logo-how-to-make-ntpd-time-server-monitoring-article

 

How to add Zabbix time synchronization ntp userparameter check script to Monitor Linux servers?

We needed to set up on some servers at my work an elementary check with Zabbix monitoring, to check whether the servers' time is correctly synchronized with the ntpd time service, as well as to report whether the ntp daemon is running correctly on the machine. For that a userparameter script called userparameter_ntp.conf was developed. The script is simplistic, just a few lines of bash shell scripting, based on grepping the required information from ntpq and ntpstat, the common ntp client commands used to get information about the status of time synchronization on the servers.
 

[root@linuxserver ]# ntpstat
synchronised to NTP server (10.80.200.30) at stratum 3
   time correct to within 47 ms
   polling server every 1024 s

 

[root@linuxserver ]# ntpq -c peers
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+timeserver1 10.26.239.41     2 u  319 1024  377   15.864    1.270   0.262
+timeserver2 10.82.239.41     2 u  591 1024  377   16.287   -0.334   1.748
*timeserver3 10.82.239.43     2 u   47 1024  377   15.613   -0.553   0.251
 timeserver4 .INIT.          16 u    – 1024    0    0.000    0.000   0.000


Below is the Zabbix UserParameter script that reports the 3 important values we monitor to make sure time server synchronization works as expected. The Zabbix keys we set are ntp.offset, ntp.sync and ntp.exact, named in an attempt to describe what we're fetching from the ntp client:

[root@linuxserver ]# cat /etc/zabbix/zabbix-agent.d/userparameter_ntp.conf

UserParameter=ntp.offset,(/usr/sbin/ntpq -pn | /usr/bin/awk 'BEGIN { offset=1000 } $1 ~ /\*/ { offset=$9 } END { print offset }')
#UserParameter=ntp.offset,(/usr/sbin/ntpq -pn | /usr/bin/awk 'FNR==4{print $9}')
UserParameter=ntp.sync,(/usr/bin/ntpstat | cut -f 1 -d " " | tr -d ' \t\n\r\f')
UserParameter=ntp.exact,(/usr/bin/ntpstat | /usr/bin/awk 'FNR==2{print $5,$6}')
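
After dropping the userparameter file in place, the zabbix agent has to be restarted so it picks up the new keys, and the keys can then be test-queried from the Zabbix server with zabbix_get (the hostnames below are just placeholders):

[root@linuxserver ]# systemctl restart zabbix-agent
[root@zabbixserver ~]# zabbix_get -s linuxserver.example.com -k ntp.sync
[root@zabbixserver ~]# zabbix_get -s linuxserver.example.com -k ntp.offset
[root@zabbixserver ~]# zabbix_get -s linuxserver.example.com -k ntp.exact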

In Zabbix, the monitored ntpd parameters, once set up, look like this:

 

ntp_time_synchronization_check-zabbix-screenshot.

 

Note that in the above userparameter example, the commented-out userparameter line is just another way to return the ntpd offset value; it was developed before the more sophisticated one with the extra regular expression checks against the ntpq output. If you want to extend it, you can also add another userparameter to report more verbose information to Zabbix, if that is required, like the output from the ntpq -c peers command:
 

UserParameter=ntp.verbose,(/usr/sbin/ntpq -c peers)

Of course, to make Zabbix fetch the necessary data from the monitored hosts, we further need to set up a new Zabbix Template with the respective Triggers and Items.

Below are few screenshots including the triggers used.

ntpd_server-time_synchronization_check-zabbix-screenshot-triggers

  • ntpd.trigger

{NTP:net.udp.service[ntp].last(0)}<1

  • NTP Synchronization trigger

{NTP:ntp.sync.iregexp(unsynchronised)}=1
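
If you also want an alarm on the drift value itself, a third trigger against the ntp.offset item could look roughly like the expression below (the 100 ms threshold is an arbitrary assumption, tune it to your environment and Zabbix version):

{NTP:ntp.offset.last(0)}>100 or {NTP:ntp.offset.last(0)}<-100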

 

 

As you can see from the history settings, we have set up our items to store the data reported from the parameter script for 90 days, and to run the check every 30 seconds on the monitored hosts to which the Template is applied.

Well, that's all folks. Time synchronization issues will now promptly trigger a new alarm in Zabbix!

How to fix rkhunter checking /dev for suspicious files, solve rkhunter checking if SSH root access is allowed warning

Friday, November 20th, 2020

rkhunter-logo

On a server where you have rkhunter running, you may suddenly get some weird Warnings for suspicious files under /dev, like shown in the screenshot, and be puzzled how this happened, as nothing of the sort was reported before the regular package patching update was conducted …

[root@haproxy-server ~]# rkhunter --check

rkhunter-warn-screenshot

To investigate further I've checked the log produced by rkhunter, /var/log/rkhunter/rkhunter.log, for a verbose message and found more specifics there on which exact files rkhunter finds suspicious.
To further investigate what exactly these suspicious files are used for on the system, or whether in reality it is a hacker who hacked our supposedly PCI compliant system,
I've used the good old fuser command, which is capable of showing which system process is actively using a file. To get a fuser report for each file listed in the rkhunter log, I used the below shell loop:

[root@haproxy-server ~]#  for i in $(tail -n 50 /var/log/rkhunter/rkhunter.log|grep -i /dev/shm|awk '{ print $2 }'|sed -e 's#:##g'); do fuser -v $i; done
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1851-27-f1sTlC/qb-request-cpg-header:
                     root       1783 ….m corosync
                     hacluster   1851 ….m attrd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-event-quorum-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-event-quorum-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-response-quorum-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-response-quorum-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-request-quorum-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-request-quorum-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-event-cpg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-event-cpg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-response-cpg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-response-cpg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-request-cpg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-request-cpg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-event-cfg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-event-cfg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-response-cfg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-response-cfg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-request-cfg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-request-cfg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd


As you can see from the output, all the /dev/shm/qb-* files in question are currently opened by the corosync / pacemaker processes and are necessary for the proper work of the haproxy cluster running on the machines.
 

How to solve the /dev/ suspicious files rkhunter warning?

To solve it we need to tell rkhunter not to check against these files. This is done via /etc/rkhunter.conf. At first I thought this is done with EXISTWHITELIST=, but it turned out there is a special rkhunter option for whitelisting /dev type files only: ALLOWDEVFILE.

Hence, to resolve the warning before the upcoming planned PCI audit and save us troubles, we had to add the following in /etc/rkhunter.conf on the running OS, which is CentOS Linux release 7.8.2003 (Core):

ALLOWDEVFILE=/dev/shm/qb-*/qb-*
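
Before re-running the scan it doesn't hurt to make sure the changed configuration file still parses cleanly:

[root@haproxy-server ~]# rkhunter --config-check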

Re-run

# rkhunter --check

and Voila, the warning should be no more.

rkhunter-check-output

Another thing: on another machine the warnings produced by rkhunter were a bit different, as rkhunter mistakenly detected that root login is enabled, where in reality PermitRootLogin was set to no in /etc/ssh/sshd_config.

rkhunter-warning

As the problem was experienced on some machines and not on others,
I've done the standard boring config comparison we sysadmins do to tell why the setups differ.
The result: on the first machine, where everything was working as expected and
PermitRootLogin no was correctly recognized, the configuration was:

— SNAP —
#ALLOW_SSH_ROOT_USER=no
ALLOW_SSH_ROOT_USER=unset
— END —

On the second server, where the problem was experienced, the value was:

— SNAP —
#ALLOW_SSH_ROOT_USER=unset
ALLOW_SSH_ROOT_USER=no
— END —
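
So the fix on the affected machine was simply to bring /etc/rkhunter.conf in line with the working one, for instance with a one-liner like this (back the file up first):

[root@haproxy-server ~]# cp -p /etc/rkhunter.conf /etc/rkhunter.conf.bak
[root@haproxy-server ~]# sed -i 's/^ALLOW_SSH_ROOT_USER=no/ALLOW_SSH_ROOT_USER=unset/' /etc/rkhunter.conf
[root@haproxy-server ~]# rkhunter --check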

Note that the warning produced regarding rsyslog remote logging being allowed is perfectly fine, as we had intentionally enabled remote logging to a central log server on these machines. This is done with the following config options under /etc/rsyslog.conf:

# Configure Remote rsyslog logging server
*.* @remote-logging-server.com:514
*.* @remote-logging-server.com:514
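
After adding or changing the forwarding rule, rsyslog needs a restart so the new destination takes effect:

[root@haproxy-server ~]# systemctl restart rsyslog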