Posts Tagged ‘use’

KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

Reading Time: 6 minutes


If you haven't tried it yet: Red Hat, CentOS and other RPM-based Linux operating systems that use the anaconda installer generate a kickstart file right after installation, under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg}. Using this Kickstart file template you can automate a Red Hat installation with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file in the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."

Kickstart files contain answers to all questions normally asked by the text / graphical installation program, such as what time zone the system should use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without the need for any user intervention. This is especially useful when deploying a Red Hat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and in general pretty useful if you're into the field of so-called "DevOps" system administration and you need to provision a certain OS to a multitude of physical servers, or to easily create and recreate virtual machines with a certain configuration.

1. Create /vmprivate storage directory where Virtual machines will reside

The first step on the Hypervisor host that will hold the virtual machines is to create the location where they will reside:

[root@redhat ~]# lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]# mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mkdir /vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view the situation with the Logical Volumes and Volume Group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0


  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB

Note that you'll need to have that size physically available on a SAS / SSD hard drive connected to the Hypervisor host.

To mount the Virtual Machine storage location directory permanently, add it to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab
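To check the fstab entry is sane before the next reboot, remount via fstab and verify (a quick sanity check):

[root@redhat ~]# umount /vmprivate && mount -a
[root@redhat ~]# df -h /vmprivate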


2. Install the required set of RPM packages on the Hypervisor hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd
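To have libvirtd usable right away (and not only after the next reboot), also start it and check its state:

[root@redhat ~]# systemctl start libvirtd
[root@redhat ~]# systemctl status libvirtd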

4. Configure network bridging br0 interface on Hypervisor

In /etc/sysconfig/network-scripts/ifcfg-eth0 you need to include:
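The exact file contents were not preserved in this post, but a minimal sketch of the slave and bridge config files on RHEL typically looks like the below (IP values are placeholders; note the nmcli steps that follow use eno3 as the slave interface):

# /etc/sysconfig/network-scripts/ifcfg-eth0 (the bridge slave)
TYPE=Ethernet
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# /etc/sysconfig/network-scripts/ifcfg-br0 (the bridge itself)
TYPE=Bridge
DEVICE=br0
ONBOOT=yes
BOOTPROTO=none
IPADDR=<your_host_IP>
PREFIX=<netmask_prefix>
GATEWAY=<gateway_IP>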


Next use nmcli, the Red Hat network configurator, to create the bridge (you could use the ip command instead, but since nmcli is the Red Hat way to do it, let's do it their way). Substitute your own IP, gateway and DNS values for the placeholders below:

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses <IP_address/prefix> ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway <gateway_IP>
[root@redhat ~]# nmcli connection modify br0 ipv4.dns <DNS_server_IP>
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
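To confirm the bridge is up and eno3 is enslaved to it, you can check with nmcli and the iproute2 bridge tool:

[root@redhat ~]# nmcli connection show --active
[root@redhat ~]# bridge link show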

5. Prepare a working kickstart.cfg file for VM

Below is a sample kickstart file I've used to build a working, fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa).

# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda

# Use network installation
#url --url=
##url --url=

# Use text mode install

# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8

# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted

# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip= --netmask= --gateway= --nameserver= --device=eth0 --noipv6 --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain

# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint

# skipx

# Firewall configuration
firewall --disabled

# System timezone
timezone Europe/Berlin

# Clear the Master Boot Record

# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=
#repo --name=baseos --baseurl=
#repo --name=inst.stage2 --baseurl=
##repo --name=rhsm-baseos    --baseurl=
##repo --name=rhsm-appstream --baseurl=
##repo --name=os-baseos      --baseurl=
##repo --name=os-appstream   --baseurl=
#repo --name=inst.stage2 --baseurl=

# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"

# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug

# reboot automatically


# Post-install commands have to live in a %post section to be valid kickstart
%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improve overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf

# Disable kdump
systemctl disable kdump.service
%end

An important note to make here is the encrypted root password string set in the rootpw line: this string can be generated with the openssl or mkpasswd commands.

Method 1: use openssl cmd to generate (md5, sha256, sha512) encrypted pass string

[root@redhat ~]# openssl passwd -6 -salt xyz test

Note: passing -1 will generate an MD5 password hash, -5 a SHA256 one and -6 a SHA512 encrypted string (logically recommended for better security)

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.
Theoretically there is also a kickstart file generator web interface on Red Hat's site, however I have never used it myself; instead I use the above kickstart.cfg.

6. Install the new VM with virt-install cmd

To roll out the new preconfigured VM based on the above ks template file, use a one-liner command like the one below:

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation

For some clarity and better automation in case you plan to repeat VM creation you can prepare a tiny bash shell script:

# variable values taken from the one-liner above; adjust them to your environment
VMNAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE='/vmprivate/RHEL8_3-VirtualMachine.img';
IMG_VM_SIZE='70'; # size is in Gigabytes
KS_FILE='kickstart.cfg';

virt-install -n "$VMNAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location="$ISO_LOCATION" --disk path=$VM_IMG_FILE,bus=virtio,size=$IMG_VM_SIZE --graphics none --initrd-inject=/root/$KS_FILE --extra-args "console=ttyS0 ks=file:/$KS_FILE"
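Assuming the variables are adjusted for your environment, save the script under a name of your choice (create_vm.sh here is just an example), make it executable and run it:

[root@redhat ~]# chmod +x create_vm.sh
[root@redhat ~]# ./create_vm.sh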

A copy of the script can be downloaded here.

Wait for the installation to finish; its progress will be visualized on the console. If everything went smooth you should get a login prompt. Use the password generated with the openssl tool to test a login, then disconnect from the machine by pressing CTRL + ] and log in again via the VM console with:

[root@redhat ~]# virsh list --all
 Id   Name                     State
      RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


One last thing: I recommend you check the official Kickstart2 documentation on the official CentOS website.

In case you later need to destroy the VM and remove its created image file, you can do it with the commands below; note that virsh undefine removes only the VM definition, so the disk image has to be deleted separately:

[root@redhat ~]# virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]# virsh undefine RHEL8_3-VirtualMachine
[root@redhat ~]# rm -f /vmprivate/RHEL8_3-VirtualMachine.img

Don't forget to celebrate the success and give this nice article credit by sharing this tutorial with a friend or by placing a link to it from your blog 🙂



Enjoy !

Configure rsyslog buffering on Linux to avoid message lost to Central Logging server

Wednesday, January 13th, 2021

Reading Time: 2 minutes


1. Rsyslog Buffering

One of the best practices in log management is to send syslog to a central server. However, a logging system should be capable of avoiding message loss in situations where the server is not reachable. To do so, unsent data needs to be buffered at the client while the central server is not available. You might have noticed that many servers forwarding log messages to a central server do not have buffering functionality activated, so I strongly advise you to have a look at this documentation to know how to check your configuration:

Rsyslog buffering with TCP/UDP configured

In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that "the action is not ready", which means the remote server is offline. This can be detected with plain TCP syslog and RELP, but not with UDP. So you need to use either of the two. In this howto, we use plain TCP syslog.

– Version requirement

Please note that we are using rsyslog-specific features. They are required on the client, but not on the server. So the client system must run rsyslog (at least version 3.12.0), while the server may run another syslogd, as long as it supports plain TCP syslog.

How To Setup rsyslog buffering on Linux

First, you need to create a working directory for rsyslog. This is where it stores its queue files (should the need arise). You may use any location on your local system. Next, you need to instruct rsyslog to use a disk queue and then configure your action. There is nothing else to do. With the following simple config file, you forward anything you receive to a remote server and have buffering applied automatically when it goes down. This must be done on the client machine.

# Example:
$ModLoad imuxsock             # local message reception
$WorkDirectory /rsyslog/work  # default location for work (spool) files
$ActionQueueType LinkedList   # use asynchronous processing
$ActionQueueFileName srvrfwd  # set file name, also enables disk mode
$ActionResumeRetryCount -1    # infinite retries on insert failure
$ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
*.*       @@server:port
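After placing the configuration on the client (with your real central server name and port in the last line), restart rsyslog and send a test message; while the central server is unreachable you should see queue files appear in the work directory:

# systemctl restart rsyslog
# logger "test message towards the central syslog server"
# ls -l /rsyslog/work/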

How to check Linux server power supply state is Okay / How to find out a Linux Power Supply is broken

Wednesday, January 6th, 2021

Reading Time: 4 minutes


If you're a sysadmin managing Linux servers remotely, every now and then when a machine hangs without an obvious reason it is useful to check the server Power Supply state. I say that because often, if the machine is mysteriously hanging, a standard Root Cause Analysis (RCA) on /var/log/messages, /var/log/dmesg, /var/log/boot etc. does not bring you to any conclusion. In that case, the next step after you send a technician to reboot the machine is to check on the Linux OS level whether the Power Supply Unit (PSU) hardware on the machine has issues.
As blogged earlier on how to use ipmitool to manage remote iLO boards etc., ipmitool can also be used to check the status of the server PSUs.

Below is example output from a 2-PSU server whose Power Supplies are functioning normally.

[root@linux-server ~]# ipmitool sdr type "Power Supply"

PS Heavy Load    | 2Bh | ok  | 19.1 | State Deasserted
Power Supply 1   | 70h | ok  | 10.1 | Presence detected
Power Supply 2   | 71h | ok  | 10.2 | Presence detected
PS Configuration | 72h | ok  | 19.1 |
PS 1 Therm Fault | 75h | ok  | 10.1 | Transition to OK
PS 2 Therm Fault | 76h | ok  | 10.2 | Transition to OK
PS1 12V OV Fault | 77h | ok  | 10.1 | Transition to OK
PS2 12V OV Fault | 78h | ok  | 10.2 | Transition to OK
PS1 12V UV Fault | 79h | ok  | 10.1 | Transition to OK
PS2 12V UV Fault | 7Ah | ok  | 10.2 | Transition to OK
PS1 12V OC Fault | 7Bh | ok  | 10.1 | Transition to OK
PS2 12V OC Fault | 7Ch | ok  | 10.2 | Transition to OK
PS1 12Vaux Fault | 7Dh | ok  | 10.1 | Transition to OK
PS2 12Vaux Fault | 7Eh | ok  | 10.2 | Transition to OK
Power Unit       | 7Fh | ok  | 19.1 | Fully Redundant

Now if you have a server, let's say an old ProLiant DL360e Gen8 whose Power Supply is damaged, you will get output from ipmitool similar to:

[root@linux-server  systemd]# ipmitool sdr type "Power Supply"
Power Supply 1   | 30h | ok  | 10.1 | 100 Watts, Presence detected
Power Supply 2   | 31h | ok  | 10.2 | 0 Watts, Presence detected, Failure detected, Power Supply AC lost
Power Supplies   | 33h | ok  | 10.3 | Redundancy Lost
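If you want to automate this check, here is a minimal sketch of a shell script (psu_check.sh is a hypothetical name) that greps the ipmitool output for failure keywords like the ones shown above:

#!/bin/bash
# psu_check.sh - exit non-zero if ipmitool reports a PSU problem
# assumes ipmitool is installed and the host exposes a local IPMI interface
if ipmitool sdr type "Power Supply" | grep -Eiq 'Failure detected|AC lost|Redundancy Lost'; then
    echo "CRITICAL: Power Supply problem detected on $(hostname)" >&2
    exit 2
fi
echo "OK: Power Supplies look healthy"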

If you don't have ipmitool installed (due to security restrictions or whatever) but you do have the hardware detection software dmidecode, you can use it too to get the Power Supply state:

[root@linux-server  systemd]# dmidecode -t chassis
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.


Handle 0x0300, DMI type 3, 21 bytes
Chassis Information
        Manufacturer: HP
        Type: Rack Mount Chassis
        Lock: Not Present
        Version: Not Specified
        Serial Number: CZJ38201ZH
        Asset Tag:
        Boot-up State: Critical
        Power Supply State: Critical

        Thermal State: Safe
        Security Status: Unknown
        OEM Information: 0x00000000
        Height: 1 U
        Number Of Power Cords: 2
        Contained Elements: 0

To show only the Power Supply status info on a server with dmidecode:

# dmidecode --type 39
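To filter just the interesting fields out of the type 39 output, a simple grep does the job (field names per dmidecode's Power Supply record):

# dmidecode --type 39 | grep -E 'Name:|Status:|Max Power Capacity:'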


(Figure: plug between the power supply and the mainboard, voltages / pins per the ATX specification)

The PSU state can also be checked on a normal Linux desktop PC, which usually has a single power supply. On many Ubuntu and other Linux desktops where lshw (list hardware information) is installed, you can get the machine's power status with lshw:

 root@ubuntu:~# lshw -c power
       product: 45N1111
       vendor: SONY
       physical id: 1
       slot: Front
       capacity: 23200mWh
       configuration: voltage=11.1V

Finally, to get extensive information on the voltages of the Power Supply you can use the good old lm_sensors:

# apt-get install lm-sensors
# sensors-detect 
# service kmod start

# sensors
# watch sensors

As manually monitoring Power Supplies and various other hardware data is dubious, you might finally want to use some centralized monitoring. For one example of that, check my prior article on using Zabbix to Monitor Hardware Hard Drive / Temperature and Disk with lm_sensors / smartd on Linux.

SEO: Best day and time to write new articles and tweet to get more blog reads – Social Network Timing

Tuesday, June 17th, 2014

Reading Time: 5 minutes


I'm trying to blog regularly, as this gives me a roadmap of what I'm into and how I spend my time. When I have free time, I blog almost daily except on weekends (as on weekends I try to stay away from computers). So if you want to attract more readers to your blog, an interesting question arises:

What time is best to hit publish on your posts?

Now there are different angles from which you can draw conclusions on the best timing for a blog post. One major thing to always consider when posting is that the highest percentage of users read blogs in the morning with their morning coffee. Here are some more facts on when web content is read more:

  • 70% of users say they read blogs in the morning
  • More men read blogs at night than women
  • Mondays are the highest traffic days for average blogs
  • 11 a.m. is normally the highest traffic hour for blogs
  • Usually most comments are posted on Saturdays
  • Blogs with more than one post a day have a higher chance of inbound links and usually get more unique visitors

As my blog is more technically oriented, most of my visitors are men, and therefore posting my blogs at night doesn't interfere much with my readers.
However, I've noticed that for me personally, posting in the time interval from 13:00 to 17:00 positively influences the amount of unique visitors the blog gets.

According to research done by Social Fresh, Thursday is the best day to publish an article if you want to get more social shares.

As a rule of thumb, Thursday wins 10% more shares than all other days. In fact, 31% of the top 100 social share days in 2011 fell on a Thursday.
My logical explanation of this phenomenon is that people tend to get more and more bored with their work and try to entertain themselves more as the week progresses.

To get more attention on what I'm writing I use a bit of social networking, but I prefer using only a micro-blogging social network. I use Twitter to share what I'm into. When I write a new article on my blog I tweet its title with a link to the article, because this draws people's attention to what I have to say.

Overall I am skeptical about social sites like Facebook and MySpace, because they have a negative impact on how people use their time, and especially on youngsters. Another reason why I don't like friend networks is that sites like FB, Google+ or "the Russian Facebook" Vkontakte do not respect the privacy of your data.


You write fresh content for their websites for free, and you get nothing!


Moreover, by daily posting the latest buzz you read / watched on Facebook etc., or simply saying what's happening with you, where you're situated now etc., you slowly get addicted to posting; yes, for good or bad, people tend to be maniacal.

By placing all of your personal or impersonal stuff online, you're helping these sites get better indexed in the Google / Yahoo / Yandex search engines, therefore making them profitable, high-ranked websites on the internet, and giving out your personal time for Facebook's profit. Plus, you lose control over your data (your data is not physically on your side, but situated on some remote server, somewhere on the internet).

Best average times to post on Twitter, Facebook, Google+ and LinkedIn


So What is Best Day timing to Post, Pin or Tweet?

Below is an infographic I found on this blog (visual data originally compiled by SurePayroll), showing visualized results from some extensive research on the topic.


Here are the most important facts this infographic reveals:

The average best time to post, tweet and pin your new articles is about 15:00 h

  • The best timing to post on Twitter is on Mondays to Thursdays from 13:00 to 15:00 h
  • The best timing to post on Facebook is between 13:00 and 16:00 h
  • For LinkedIn it is best to publish between Tuesdays and Thursdays

Peak times on Facebook, Twitter and LinkedIn

  • Peak time for use of Facebook is on Wednesdays about 15:00 h
  • Peak times for use of Twitter are from Monday to Thursday, from 9:00 to 15:00 h
  • LinkedIn peak time is from 17:00 to 18:00 h

Worst times (when users will probably not view your content) on FB, Twitter and LinkedIn

  • Weekends before 08:00 and after 20:00 h
  • Every day after 20:00, and Fridays after 15:00 h
  • Mondays and Fridays from 22:00 to 06:00 in the morning

Facts about Google+

  • Google+ is the fastest growing demographic social network for people aged 45 to 54
  • Best time to share your posts on Google+ is from 09:00 to 10:00 in the morning

Images generate more traffic and engagement

  • Including images to your articles increases traffic, tweets with images increase visits, favorites and leads

I'm aware that, like every research, the above info on the best times to tweet and post is just a generalization, and depending on the field of the posted information the suggested time could differ from the optimal time for an individual writer; however, as a general direction the info is very useful and gives you some idea.
Twitter engagement for brands is 17% higher on weekends according to Dan Zarrella's research. Tweets posted on Friday, Saturday and Sunday had a higher CTR (Click Through Rate) than those posted during the rest of the week.


The other best day to tweet, apart from the weekend, is mid-week on Wednesday.
If your site or blog uses retweets to generate more traffic to the website, the best time to retweet is said to be around 5 pm, when the CTR is higher.

How to redirect / forward all postfix emails to one external email address?

Thursday, October 29th, 2020

Reading Time: 3 minutes


Let's say you're a sysadmin doing an email migration of a clustered SMTP setup, and because of that you want to capture all incoming email traffic for a while and redirect (forward) it towards another single mailbox, where you can review the mail traffic that is flowing for a few hours and analyze it more deeply. This approach is useful if you have small or middle-sized mail servers and won't be so useful on a mail server that handles a few hundred mails hourly. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to single external email address?

There are different ways to do this. If you don't want to just intercept the traffic and create a copy of the email flow using the always_bcc integrated postfix option (as pointed out in my previous article on making postfix copy every email to a central mailbox), you can copy the email flow via some custom-written dispatcher script set to be run by the MTA on each mail arrival, or use maildrop's filtering functionality. Below is a very simple example with maildrop, in case you want to filter out and deliver to an external email address only email targeted at a specific domain.

If you use maildrop as the local delivery agent, to copy email targeted at a specific domain to another defined email address, use a rule like:

if ( /^From:.*domain\.com/:h )
{
  # the destination after the "!" was omitted in the original; replace with your address
  cc "!user@external-domain.com"
}

To use maildrop to forward email incoming from a specific sender, addressed to a local existing mailbox on the postfix, towards an external email address, use something like:

# the sender pattern and forward address were stripped from the original; placeholders below
if ( /^From: .*sender@your-domain\.com.*/ )
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail user@external-domain.com"
        }
Then, to make the filter active (assuming the user has a physical unix mailbox), paste the above into the local user's $HOME/.mailfilter.

What to do if the mails delivered via your MTA are sent by monitoring and alarming scripts towards many mailboxes that no longer exist after the migration?

To achieve capturing all traffic normally attempted to be sent via the mail server and forwarding it towards a single external mail address, we can use the nice capability of postfix to understand PCRE (Perl Compatible Regular Expressions). Regular expressions in postfix of course have their specifics, so I recommend you take a look at the postfix regexp table documentation, as well as check the Postfix Regex Tester / Debugger online tool, useful to validate a regexp you want to implement.

How to use a postfix regular expression to redirect all emails sent via your postfix mail relayhost towards an external mail server?


In /etc/postfix/main.cf include this line near the bottom or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

hash:/etc/postfix/virtual defines the virtual file, in which you can define any of the virtual domains you want to simulate as present on the local postfix; regexp:/etc/postfix/virtual-regexp loads the file, read by postfix, where you can type the regular expression applied to every incoming email via SMTP port 25 or the encrypted submission ports 465 / 587 etc.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp
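The file's contents were stripped from this post; based on the /.+@.+/ regexp referenced below, it should contain a single catch-all line like this (the destination mailbox is a placeholder):

/.+@.+/ your-target-mailbox@external-domain.com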


Next build the map file (this will generate /etc/postfix/virtual-regexp.db):

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run the postmap .db generator again:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual

Note that in /etc/postfix/virtual you can add the postfix mail domains for which you want the MTA to accept mail as local mail.

In case you need to view all postfix-defined virtual domains configured to accept mail locally on the mail server:

$ postconf -n | grep virtual
virtual_alias_domains =
virtual_alias_maps = hash:/etc/postfix/virtual

The regexp /.+@.+/ applied will start forwarding mails immediately after you reload the MTA with:

# systemctl restart postfix
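To check how the regexp map will rewrite a given address without sending real mail, you can query the map directly with postmap (the address here is just an arbitrary test string):

# postmap -q "test@any-domain.com" regexp:/etc/postfix/virtual-regexp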

If you want to exclude target mail domains from being captured by the above catch-all regexp, place in /etc/postfix/virtual-regexp (before the catch-all rule) something like the example shown below.
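The exclusion rule itself was not preserved in this post; a sketch would be a more specific rule placed above the catch-all, mapping the excluded domain's addresses back to themselves (postfix regexp tables support $1 backreferences; the domain below is a placeholder):

/^(.+)@excluded-domain\.com$/ $1@excluded-domain.com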


Time for a test: send a test email to verify that mail forwarding works as expected:

# echo -e "Testing body" | mail -s "testing subject" -r ""

How to convert .CRT SSL Certificate to .PFX format (with openssl Linux command) and Import newly generated .PFX to Windows IIS Webserver

Tuesday, September 27th, 2016

Reading Time: 3 minutes


1. Converting .CRT to .PFX file format with the OpenSSL tool on GNU / Linux to import into Windows (for example, IIS)

Assuming you have already generated a certificate using the openssl Linux command, you have the issued .CRT SSL certificate file,
and you need that new .CRT SSL certificate installed on a Windows Server (let's say Windows 2012) with IIS webserver version 8.5, you will need a way to convert the .CRT file to .PFX. There are plenty of ways to do that, including an online website SSL certificate converter, a standalone program on the Windows server, or even a simple perl / python / ruby script. Anyway, the best approach is to convert the new .CRT file to the IIS-supported binary certificate format .PFX on the same Linux certificate issuer host where you first generated the certificate request .KEY (the private key file used with a third-party certificate issuer such as GoDaddy or HostGator to receive the .CRT / PEM file).

Here is how to generate the .PFX file based on the .CRT file for an internal SSL certificate:


openssl pkcs12 -export -in server.crt -inkey server.key -out server.pfx

At the password prompt that appears, make sure to set a password, because otherwise the later IIS webserver certificate import will not work.
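Before copying the .PFX to the Windows host, you can double-check its contents with openssl itself (it will prompt for the export password you just set):

openssl pkcs12 -info -in server.pfx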

To export the certificate together with its chain, for an SSL certificate accessed from the internet:


openssl pkcs12 -export -in server.crt -inkey server.key -out server.pfx -certfile "internet v2.crt"

2. Import the PFX file in Windows

Run mmc, add the Certificates snap-in (Computer account, Local Computer); then in

Certificates (Local Computer) > Personal > Certificates: Select All Tasks > Import File

Enter the previously chosen password.
You should then get the message "The import was successful."

You can also import the PFX file by simply copying it to the server where you want it imported and double-clicking it; this will open the Windows Certificate Import Wizard.

Then select in IIS:

Site, Properties, Directory Security, Server Certificate, Replace the current certificate, select the proper certificate. Done.

Alternatively, to complete the IIS webserver certificate import in one step when a new certificate is to be imported:

In the IIS Manager interface go to:

Site, Properties, Directory Security, Server Certificate, Server Certificate Wizard

Click on

import a certificate from a .pfx file, select the file and enter the password.


3. Import the PFX file into a Java keystore

Another thing you might need if you have the IIS Webserver using a backend Java Virtual Machine on the same or a different Windows server is to import the newly generated .PFX file within the Java VM keystore.

To import with the keytool command for Java 1.6, type:


keytool -importkeystore -deststorepass your_pass_here -destkeypass changeit -destkeystore keystore.jks -srckeystore server.pfx -srcstoretype PKCS12 -srcstorepass 1234 -srcalias 1 -destalias xyz
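To verify the key material actually landed in the Java keystore, you can list its entries (the same keystore file and store password as above are assumed):

keytool -list -keystore keystore.jks -storepass your_pass_here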

The .CRT file could also be directly imported into the Java keystore:


Import a .crt in a Java keystore

/usr/java/jre/bin/keytool -import -keystore /webdienste/java/jdk/jre/lib/security/cacerts -file certificate.crt -alias "Some alias"



4. Get a list of Windows locally installed certificates

To manage installed certificates on Windows 7 / 8 / 2012 Server OS, run via

Start -> Run

certmgr.msc (or certlm.msc, which on newer Windows opens the Local Computer certificate store directly)

One other way to see the installed certificates on your Windows server is checking within

Internet Explorer

Go to Tools (Alt+X) → Internet Options → Content → Certificates.


To get a complete list of the installed certificate chain on Windows you can use PowerShell:


Get-ChildItem -Recurse Cert:


That's all folks ! 🙂


Deny DHCP Address by MAC on Linux

Thursday, October 8th, 2020

Reading Time: 4 minutes


I have not blogged for a long time due to being on a few weeks' vacation and being at home with a small cute baby. However, as a hardcore and a bit of a dumb system administrator, I have spent some of my vacation working on bringing up this and the other websites hosted here as high-availability ones, living on 2 webservers running on a Master-to-Master MySQL replication backend database. This is all hosted on servers set to run as round-robin DNS hosts: one old Lenovo ThinkCentre Edge71, as well as a brand new real Lenovo server, a ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure the internet connectivity has a good degree of redundancy, and to ensure the websites hosted on both machines are not going to die if one of the 2 configured fiber optics internet providers (Bergon.NET) has issues, I've rented another internet line from the VIVACOM Mobile Fiber internet provider, a 1 Gigabit fiber optics line.
Next to that, to guarantee the database, webserver, mail server, memcached and other running services do not hit downtime due to an electricity power outage, two powerful Uninterruptible Power Supply (UPS) FPS Fortron devices are connected, each of which can keep a machine and the connected switches running for up to 1 hour.

The machines are configured to use dhcpd to distribute IP addresses, and the main node is set to distribute the IPs. However, there is a local LAN network with a number of personal work PCs, wireless devices, testing computers and a few virtual machines, and their IPs are distributed in a sequential manner via an ISC DHCP server.

As always, to make everything work properly I had again a somewhat weird, non-standard requirement: to give some of the computers within the network static IP addresses while the others receive their IPs via DHCP (Dynamic Host Configuration Protocol), and to add a filter for the MAC addresses of the machines configured with static IPs, preventing the DHCP server daemon from automatically reassigning IPs to those machines.

After a bit of googling and pondering I've done it for some of the machines. Therefore, to save others the effort of looking around for how to configure certain computers' / servers' network interface MAC addresses on the LAN to use static IPs, and how to instruct the DHCP server to ignore any broadcast IP address leases destined for that set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:


option domain-name "pcfreak.lan";
# DNS servers below are the Google Public DNS and OpenDNS resolvers mentioned later in the text
option domain-name-servers,,,;
max-lease-time 891200;

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;

# the subnet / router IP values were stripped from the original post; the 192.168.0.x values are placeholders
subnet netmask {
        option routers;
        option subnet-mask;

        host think-server {
                hardware ethernet 70:e2:85:13:44:12;
        }
}

default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To spare you the copy-paste efforts, a file with the Deny DHCP Address by MAC Linux configuration is here.
Of course I have mangled the real MAC addresses to avoid leaking data, but I guess the idea behind the MAC address ignore is quite clear.

The main configuration doing the trick, to ignore certain MAC addresses reachable on the connected hardware switch, is this:

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
subclass "black-hole" 18:45:91:c3:d9:00;

The Deny DHCP Address by MAC approach is described on the ISC distribution lists, but the documentation on how to Deny / IGNORE DHCP addresses by MAC address on Linux seems quite obscure and limited online.

As you can see in the above config, the time after which an IP lease is freed up and a new lease is handed out by the server is heavily maximized; DHCP servers often use a max-lease-time of about 1 hour (3600 seconds). The reason for increasing the lease time to around 10 days is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases.

default-lease-time 691200;
max-lease-time 891200;

As you see, to guarantee that resolving always works as expected, I have configured the Google Public DNS and OpenDNS IPs:

option domain-name-servers,,,;

One hint to make: after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process.


root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
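If the syntax test passes, apply the new configuration by restarting the DHCP daemon; on Debian 10 the service is named isc-dhcp-server:

root@pcfreak: ~# systemctl restart isc-dhcp-server
root@pcfreak: ~# systemctl status isc-dhcp-server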

That's all folks. With this sample config the IPs under subclass "black-hole", which are local LAN static IP addresses, will never be offered leases anymore by the ISC DHCP server.
Hope this stuff helps someone. Enjoy, and in case you need a colocation of a server or website hosting for a really cheap price on the newly set up high-availability machines described here, open an inquiry on


Installing usual Software Tools and Development header files and libraries on a newly installed Debian Server

Thursday, February 11th, 2010

Reading Time: 3 minutes

Today I start my work as a system administrator for a new IT company.
My first duties include the configuration and installation of some usual programs
used in the everyday sysadmin job.
In that manner of thoughts, I have long ago realized there is a common group of
tools and software I have to install on almost each and every newly configured
Debian GNU / Linux server.
Here is a list of packages I usually install on new Debian systems.
Even though these exact commands are expected to be executed on Debian (5.0) Lenny,
I believe they are quite accurate for Debian Testing and Debian Testing/Unstable,
the bleeding edge distributions.
Before I show you the apt-get lines with all the packages, I would advise you to install
and use netselect-apt to select the fastest Debian package mirror near you.
To install and use it, run the following commands:

aptitude install netselect-apt
netselect-apt -n lenny

Now that netselect-apt has tested for the fastest mirror and created a sources.list
file in your current directory, open that sources.list file and decide what should enter your
official /etc/apt/sources.list file, or in other words merge the two files as you like.
Good, now that we have a fast mirror to download our packages, let's continue further with the
packages to install.
Execute the following command to install some of the basic tools and packages:

# install some basic required tools, software and header files
debian-server:~# apt-get install tcpdump mc ncurses-dev htop iftop iptraf nmap apache2 apachetop \
mysql-server-5.0 phpmyadmin vnstat rsync traceroute tcptrace e2fsprogs hddtemp finger mtr-tiny \
netcat screen imagemagick flex snort sysstat lm-sensors alien rar unrar util-linux curl \
vim lynx links elinks sudo autoconf gcc build-essential dpkg-dev webalizer awstats

Herein I'll explain just a few of the installed packages and their purpose,
as they could be unknown to some of the people out there.

apachetop - monitors the apache log file in real time, similar to GNU top
iftop - displays bandwidth usage on a selected interface interactively
vnstat - shows inbound & outbound traffic usage on selected network interfaces
e2fsprogs - some general tools for creation of ext2 file systems etc.
hddtemp - utility to monitor hard drive temperature
mtr-tiny - Matt's traceroute, a great traceroute proggie
netcat - TCP/IP swiss army knife, quite helpful for network maintenance tasks
snort - an Intrusion Detection System
build-essential - installs the basic stuff required for most applications compiled from source code
sysstat - generates statistics about server load every ten minutes, check the man page for more
lm-sensors - enables you to track your system hardware sensors information and warn on CPU heat-ups etc.

I believe the rest of them need no explanation; if you're not familiar with them, check the manuals.
So far so good, but this is not all I had to install. As you probably know, most Apache webservers nowadays
are running PHP and are using a dozen PHP libraries / extensions not originally bundled with the PHP install.
Therefore here are some more php-related packages to install that will add some more php goodies.

# install some packages required for many php enabled applications
debian-server:~# apt-get install php-http php-db php-mail php-net-smtp php-net-socket php-pear php-xml-parser \
php5-curl php5-gd php5-imagick php5-mysql php5-odbc php5-recode php5-sybase php5-xmlrpc php5-dev
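Once the install completes, a quick sanity check that the PHP extensions are actually visible to the interpreter (the module names are just examples):

debian-server:~# php -m | grep -i -E 'curl|gd|imagick|mysql|xmlrpc'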

As I said, that is mostly the basic stuff that is a must-have on most of the Debian servers I have
configured these days. Of course this is not applicable to all situations, however I hope
it will be of use to somebody out there.

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

Reading Time: 6 minutes


Logging to a file in GNU / Linux and FreeBSD is as simple as redirecting the output, e.g.:

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt

or by piping to the tee command:


echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt

But what if you need to create a full-featured, robust logging bash shell script function that will run continuously as a background daemon process and will output
all content from itself to an external log file?
In the article below I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used to later store to a file of choice.
Finally, I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/, i.e. /var/log/syslog, /var/log/user, /var/log/daemon, /var/log/mail etc.

1. Bash script function for logging: write_log()

Perhaps the simplest method is just to use a small function routine in your shell script like this:

write_log()
{
  while read text
  do
      LOGTIME=`date "+%Y-%m-%d %H:%M:%S"`
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
          echo $LOGTIME": $text";
      else
          LOG=$LOG_FILE.`date +%Y%m%d`
          touch $LOG
          if [ ! -f $LOG ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
          echo $LOGTIME": $text" | tee -a $LOG;
      fi
  done
}


  • Using the function from within the script itself (or from an external script) to write out to the defined log file:


echo "Skipping to next copy" | write_log


2. Use Unix named pipes to pass data: a small intro on what a Unix Named Pipe is.

Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.


Now that named pipes have been shortly explained for those who hear of them for the first time, it's time to say that a named pipe in unix / linux is created with the mkfifo command; the syntax is straightforward:

mkfifo /tmp/name-of-named-pipe

Some older Linuxes with an older bash and older shell scripts used mknod instead.
So the idea behind the logging script is to use a simple named pipe to read input and use the date command to log the exact time the command was executed. Here is the script:


#!/bin/bash
named_pipe='/tmp/output-named-pipe';
log_file='/tmp/output-named-log.txt';

# recreate the named pipe if a stale one exists
if [ -p $named_pipe ]; then
    rm -f $named_pipe
fi
mkfifo $named_pipe

while true; do
    read LINE <$named_pipe
    echo $(date): "$LINE" >>$log_file
done

To have any other script's output logged with a nice current-date timestamp, write the output content to the logging buffer (the named pipe) like so:


echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger

If you wonder why it is better to use named pipes for logging: they perform better (are generally quicker) than Unix sockets.


3. Logging to the common system log files with logger


If you need a quick one-time way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package). It is a command used to enter messages into the system log; to use it, simply invoke it with a message and it will log your specified output, by default to the /var/log/syslog common logfile:


root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging


If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.

With logger you can print out messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:

 logger -i -p mail.err "Output of mail processing script"

To log a normal, non-error (notice priority) message with logger to the /var/log/mail.log system log:


 logger -i -p mail.notice "Output of mail processing script"

A list of the valid facility and priority level names supported by logger (excerpted from its current Linux manual page):


       Valid facility names are:

              authpriv   for security information of a sensitive nature
              kern       cannot be generated from userspace process, automatically converted to user
              security   deprecated synonym for auth

       Valid level names are:

              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).


If you just want to log to the main Linux log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), just type it, even without any shell quoting:


logger The reason to reboot the server currently was a System security Update


So what else is logger useful for?

 In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs with their respective priorities work as expected. This is especially
useful, as I've seen on Cloud Hosted OpenXEN based servers as a SAP consultant, where sometimes logging to basic log files stops working for months or even years due to
syslog and syslog-ng problems caused by hangs from other third-party scripts and programs.
To test all basic logging and priorities on the system logs, use the following shell script:


#!/bin/bash
# (the facility list is all one line!)
for i in {auth,authpriv,cron,daemon,kern,lpr,mail,mark,news,syslog,user,uucp,local0,local1,local2,local3,local4,local5,local6,local7}
do
  for k in {debug,info,notice,warning,err,crit,alert,emerg}
  do
    logger -p $i.$k "Test daemon message, facility $i priority $k"
  done
done

Note that on different Linux distribution versions the facility and priority names might differ, so if you get
logger: unknown facility name: {auth,auth-priv,cron,daemon,kern,lpr,mail,mark,news, \
syslog,user,uucp,local0,local1,local2,local3,local4, \

check and set the proper naming as described in logger man page.


4. Using a file descriptor that will output to a pre-set log file

Another way is to add the following code to the beginning of the script

exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained:

  •     exec 3>&1 4>&2
        Saves the file descriptors, so they can be restored to whatever they were before the redirection, or used themselves to output to whatever they were before the redirect.

  •     trap 'exec 2>&4 1>&3' 0 1 2 3
        Restores the file descriptors for particular signals. Not generally necessary, since they should be restored when the sub-shell exits.

  •     exec 1>log.out 2>&1
        Redirects stdout to the file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (maybe), you can simply redirect to &3. For example

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3

I initially found out about this very nice bash code from a post titled "how can I fully log all bash script actions". Unfortunately, on the latest Debian 10 Buster Linux, which comes bundled with bash shell 5.0.3(1)-release, the code doesn't behave exactly as expected; on older bash versions though it works fine.

Sum it up

To shortly summarize: there are plenty of ways to do logging from a shell script, with the logger command, a function or a named pipe being the most classic ones. Sometimes, if a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have better ways, please drop a comment and I'll add them to this article.