How to add a colorful random ASCII art picture and a Bible verse on each SSH server login, and joy up the sysadmin's life with cowsay, fortune, caca-utils and others


November 24th, 2020

Jesus-Christ-loves-the-world-ascii-art

There is plenty of console ASCII stuff out there that can make the boring sysadmin console life a little bit more fun and bring back some memories from the old times of 8-bit computers :).

One of these, as I blogged earlier, is cowsay / cowthink, which generate an ASCII picture of a cow carrying your custom message.
I've also written before on a related topic in my previous article Create ASCII Art Text banners in Linux console / terminal with figlet and toilet.

One of the cool things I'm using daily on my servers is the cowsay console goodie together with a bash shell script that displays a random ASCII picture from a preset of pictures on each and every SSH login to my server.
The script I use is cowrand; below is its code:

#!/bin/bash
# cowsay pix randomizer by hip0
# it shows a random ascii cow from the cowsay prog during login. :]
cowrand='/etc/cowrand';
dir='/usr/share/cowsay/cows';
# count the available cow files and pick a random one of them
var=`ls -1 $dir | wc -l | awk '{ print $1}'`
number=$RANDOM
let "number %= $var"
var1=`ls -1 $dir | head -n $number | tail -n 1`
if [ -z "$var1" ]; then
# $number happened to be 0 and nothing got picked - just re-run the script
$cowrand;
else
/usr/bin/cowsay -f "$var1" Welc0m3 t0 pC-fREAK … Enj0y.
fi

 

The script is set as executable under /etc/cowrand

hipo@pcfreak:~$ ls -al /etc/cowrand
-rwxr-xr-x 1 hipo hipo 432 Nov 24 19:21 /etc/cowrand*

I've hooked this script into my /etc/profile so it auto-starts on every login on my Debian Linux systems, placed right after the header comments like so:

hipo@pcfreak:~$ grep -i cowrand -A 2 -B 3 /etc/profile
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), …).
echo '';
/etc/cowrand | lolcat
echo '';
#/usr/bin/verse

As you can see, to make my life even funnier I've installed another fun command, lolcat:

lolcat-screenshot

hipo@pcfreak:~$ apt-cache show lolcat |grep -i desc -A 3
Description-en: colorful `cat`
 lolcat concatenates files like the UNIX `cat` program, but colors it for the
 lulz in a rainbow animation. Terminals with 256 colors and animations are
 supported.

Description-md5: 86f992d66ac74197cda39e0bbfcb549d
Homepage: https://github.com/busyloop/lolcat
Ruby-Versions: all
Section: games


You can think of lolcat as the standard cat command made to print in rainbow colors, which gives funny results.

cowrand-script-lolcat-os-release-how-to-make-your-linux-login-prompt-funnier

To add some spice to everything nice, just like in the recipe for the creation of the Powerpuff Girls, I've come up with a way to use the fortune console tool, which prints quotes out of a database, with a big database containing the Holy Bible books of the Old and New Testament as its source. fortune then prints me a quote extracted from the Bible on each and every remote SSH login to my machine. The Bible database for fortune, bible_quotes_fortune.tar.gz, can be downloaded and used from here.
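Roughly, installing the database boils down to unpacking it and (re)building the .dat index file fortune expects next to each quotes file with strfile; this is just a sketch and the file names inside the archive are an assumption:

mkdir -p /usr/local/fortune
tar -zxvf bible_quotes_fortune.tar.gz -C /usr/local/fortune/
# strfile builds the random-access index fortune needs (the file name 'bible' is assumed)
strfile /usr/local/fortune/bible /usr/local/fortune/bible.dat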

The command used to print out a verse from the holy bible is:
 

 

hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
For if thou refuse to let them go, and wilt hold them still,
        — Exodus 9:2
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
And when the queen of Sheba heard of the fame of Solomon concerning
the name of the LORD, she came to prove him with hard questions.
        — 1 Kings 10:1
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
And Shelemiah, and Nathan, and Adaiah,
        — Ezra 10:39
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
For by thee I have run through a troop: by my God have I leaped
over a wall.
        — 2 Samuel 22:30
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
Unto the place of the altar, which he had make there at the first:
and there Abram called on the name of the LORD.
        — Genesis 13:4
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
And there shall dwell in Judah itself, and in all the cities thereof
together, husbandmen, and they that go forth with flocks.
        — Jeremiah 31:24
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
And he hath put a new song in my mouth, even praise unto our God:
many shall see it, and fear, and shall trust in the LORD.
        — Psalms 40:3
hipo@pcfreak:~$ /usr/games/fortune -s /usr/local/fortune/
And Jehoshaphat made peace with the king of Israel.
        — 1 Kings 22:44
 

 

fortune is really awesome, as it often reminds me of verses from the Holy Bible I tend to forget. The database uses the famous King James Bible, known as the KJB / KJV, from 1611. This Bible version, which is more or less a Protestant standard nowadays, takes its name from James VI and I (James Charles Stuart; 19 June 1566 – 27 March 1625, King of Scotland, England and Ireland), who sponsored the KJV's compilation and printing.

Finally, after adding /usr/games/fortune -s /usr/local/fortune/ to the beginning of /etc/profile together with cowsay and cowrand, I got this beautiful and educational result that combines fun with wisdom. Below is an example of what you will get after a remote SSH login (a sketch of the combined /etc/profile block follows the screenshots):

 

ssh your-machine.com

cowrand-script-lolcat-os-release-how-to-make-your-linux-login-prompt-funnier_1

cowrand-script-lolcat-os-release-how-to-make-your-linux-login-prompt-funnier_2

cowrand-script-lolcat-os-release-how-to-make-your-linux-login-prompt-funnier_3
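For reference, the relevant /etc/profile addition ends up looking roughly like this (piping fortune through lolcat as well is optional and just my guess at a nice combination):

echo '';
/usr/games/fortune -s /usr/local/fortune/ | lolcat
echo '';
/etc/cowrand | lolcat
echo '';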

Those who have a Linux Graphical Environment desktop might also enjoy xcowsay

Another must-have I recommend to the text geeks is the caca-utils package, which contains cool things such as aafire (cacafire):

cacaview-fire-screenshot-ascii-art

Or img2txt (an image to text converter) and cacaview (a text console picture viewer), which can give you a rough idea of how a PNG / JPG picture looks (or at least its shapes) without the need for a GUI picture viewer such as Eye of GNOME.

bear-for-you-picture-rose

Here is the original bear picture:

cacaview-a-bear-for-you-picture-in-plain-text-ascii

And here is what you'll see in cacaview 🙂
To read more about cacaview and its uses, check my previous article Viewing JPEG, GIF and PNG in ASCII with cacaview in Linux.
If you want to show off even more as a '1337 h4x0r', you might also demonstrate your sysadmin 1337 5K!11Z to colleagues by showing them how you check the weather via the console (I have a separate article on how to get a colorful ASCII art weather forecast via console / terminal).

If you're too bored in your daily sysadmin job, you might have some fun and spend some gloriously useless effort installing the ASCII Art Aquarium, ASCIIQUARIUM:

asciiquarium1

asciiquarium2

asciiquarium3

If you're crazy enough and want to torture your fellow sysadmin colleagues with a nice prank, you might install asciiquarium and set it to auto-run for their specific account on each and every login to some server until they press CTRL + C. Or, if you're a bit evil, you can even add a small snippet to their ~/.bashrc that disables the CTRL + C combination 🙂 — a sketch follows below.
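Something along these lines would do it (a sketch only; the asciiquarium path is an assumption, adjust it to wherever the script got installed):

# prank addition for the victim's ~/.bashrc
if [ -n "$PS1" ]; then          # only torture interactive shells, so scp / sftp keep working
    trap '' SIGINT              # the evil part: ignore CTRL + C (children inherit the ignore)
    /usr/local/bin/asciiquarium
fi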
 

Of course there are plenty of other cool ASCII games and toys out there. I've collected some of them by launching the Play Cool ASCII games service on my machine, where ASCII art geeks can test out some ASCII games here.

 

How to fix the rkhunter 'checking /dev for suspicious files' warning, and solve the rkhunter 'checking if SSH root access is allowed' warning


November 20th, 2020

rkhunter-logo

If you have rkhunter running on a server and you suddenly get some weird warnings about suspicious files under /dev, like shown in the screenshot, you might be puzzled how this happened, as it was not reported before the regular package patching update was conducted …

[root@haproxy-server ~]# rkhunter --check

rkhunter-warn-screenshot

To investigate further, I've checked the log produced by rkhunter, /var/log/rkhunter.log, for a more verbose message and found the specifics there on exactly which files rkhunter finds suspicious.
To figure out what exactly these suspicious files are for, whether they're used by something on the system or in reality a hacker has broken into our supposedly PCI compliant system,
I've used the good old fuser command, which is capable of showing which system process is actively using a file. To get a fuser report for each file from /var/log/rkhunter.log, I've used the shell loop below:

[root@haproxy-server ~]#  for i in $(tail -n 50 /var/log/rkhunter/rkhunter.log|grep -i /dev/shm|awk '{ print $2 }'|sed -e 's#:##g'); do fuser -v $i; done
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1851-27-f1sTlC/qb-request-cpg-header:
                     root       1783 ….m corosync
                     hacluster   1851 ….m attrd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-event-quorum-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-event-quorum-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-response-quorum-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-response-quorum-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-request-quorum-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-26-Znk1UM/qb-request-quorum-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-event-cpg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-event-cpg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-response-cpg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-response-cpg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-request-cpg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-25-oCdaKX/qb-request-cpg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-event-cfg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-event-cfg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-response-cfg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-response-cfg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-request-cfg-data:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd
                     BEN.        PID ZUGR.  BEFEHL
/dev/shm/qb-1783-1844-24-GKyj3l/qb-request-cfg-header:
                     root       1783 ….m corosync
                     root       1844 ….m pacemakerd


As you can see from the output, all the /dev/shm/qb-* files in question are currently opened by corosync / pacemaker and are necessary for the proper work of the haproxy cluster processes running on the machines.
 

How to solve the rkhunter suspicious /dev files warning?

To solve it, we need to tell rkhunter not to check against these files, which is done via /etc/rkhunter.conf. At first I thought this is done with EXISTWHITELIST=, but it turns out there is a special rkhunter option for whitelisting /dev type files only: ALLOWDEVFILE.

Hence, to resolve the warning before the upcoming planned PCI audit and save us trouble, we had to add the following to /etc/rkhunter.conf on the running OS, which is CentOS Linux release 7.8.2003 (Core):

ALLOWDEVFILE=/dev/shm/qb-*/qb-*

Re-run

# rkhunter --check

and Voila, the warning should be no more.

rkhunter-check-output

Another thing: on another machine the warnings produced by rkhunter were a bit different, as rkhunter mistakenly detected that root login is enabled, whereas in reality PermitRootLogin was set to no in /etc/ssh/sshd_config:

rkhunter-warning

As the problem was experienced on some machines and not on others, I've done the standard boring config comparison we sysadmins do to tell why the behaviour differs.
The result: on the first machine, where everything worked as expected and PermitRootLogin no was recognized correctly, the configuration was:

— SNAP —
#ALLOW_SSH_ROOT_USER=no
ALLOW_SSH_ROOT_USER=unset
— END —

On the second server, where the problem was experienced, the value was:

— SNAP —
#ALLOW_SSH_ROOT_USER=unset
ALLOW_SSH_ROOT_USER=no
— END —
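So the fix was simply to make the rkhunter option on the affected machine match the one from the working machine. A quick way to compare the relevant values on each host (standard file locations assumed):

[root@haproxy-server ~]# grep -i '^PermitRootLogin' /etc/ssh/sshd_config
[root@haproxy-server ~]# grep -Ei '^#?ALLOW_SSH_ROOT_USER' /etc/rkhunter.conf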

Note that the warning saying rsyslog remote logging is allowed is perfectly fine, as we had deliberately enabled remote logging to a central log server on the machines.

This is done with config options under /etc/rsyslog.conf:

# Configure Remote rsyslog logging server
*.* @remote-logging-server.com:514

How to backup Outlook Mailbox / Export Exchange Mail backup to .pst


November 17th, 2020

pst-outlook-exchange-windows-logo

In the corporate world most of us are forced to use as a desktop environment some version of Windows 7 / 8 / 10 with Outlook configured against a Microsoft Exchange mail server mailbox, or set up with a POP3 or IMAP account.
Sometimes, for knowledge transfer purposes, new employees need a backup copy of the mailbox of an employee who was laid off or, as happens most of the time, has left the company for a better position or simply out of boredom.

Even just for backup purposes, in case you have deleted some mails out of your mailbox by mistake, it is a useful thing to create a backup of the whole mailbox data. With time the amount of email grows to many, many thousands of messages year by year, and under circumstances where you have a mailbox data limit of, let's say, 4 Gigabytes per mailbox, it is useful to periodically clean up old mails but keep a backup of the old email for historical reference.

Sometimes it is even useful to create a whole mailbox backup every year and then delete that year's mail data from Outlook.

Mail data from an Outlook configured account is exported to the .PST file format – [MS-PST]: Outlook Personal Folders.

Each Personal Folders File (.PST) represents a Message store that contains an arbitrary hierarchy of Folder objects, which contains Message objects, which can contain Attachment objects. Information about Folder objects, Message objects, and Attachment objects are stored in properties, which collectively contain all of the information about the particular item.

If you want to back up the message folders locally on your work PC (in addition to keeping them on the Exchange server), you can automatically move or delete older items with AutoArchive (a feature of Outlook), or export the items to a .pst file that you can later restore by importing it as needed.

So how to back up / export your email correspondence to .PST?

1. Select File -> Open & Export -> Import/Export

outlook-backup-emails-to-pst-file-howto-1

2. Select Export to a file, and then select Next.

outlook-backup-emails-to-pst-file-howto-2

3. Select Outlook Data File (.pst), and select Next.

outlook-backup-emails-to-pst-file-howto-3

4. Select the mail folder you want to back up and select Next.

outlook-backup-emails-to-pst-file-howto-4

5. Choose a location and name for your backup file, and then select Finish.

outlook-backup-emails-to-pst-file-howto-7

To ensure no one else has access to your .pst file, after finishing you'll be prompted to enter and confirm a password (or, if you don't want one, leave the password field empty), and then select OK.

The produced .pst file will be stored by default under C:\Users\Username\Documents\Outlook Files.

The messages that you keep in a .pst file are no different from other standard messages in Outlook. You can forward, reply, or search through the stored messages as you do with other messages.
 

How to install and use memcached on Debian GNU / Linux to share PHP sessions between DNS round-robined Apache webservers


November 9th, 2020

apache-load-balancing-keep-persistent-php-sessions-memcached-logo

Recently I had to come up with a solution to make a bunch of websites hosted on a machine highly available. For that task haproxy is one of the logical options to use. However, as I didn't want to allocate new IP addresses and play around with building a cluster, I decided on the much more simplistic approach of using 2 separate machines, each running the same up-to-date version of the Apache webserver as a front end, with shared data in a Master-to-Master MySQL replication database as a backend. For the load balancing itself I've used 2 DNS 'A' records per domain, configured via the BIND DNS name server as Round Robin DNS load balancing, to make each domain point to the 2 Internet IP addresses (XXX.XXX.XXX.4 and YYY.YYY.YYY.5), each configured on eth0 of the 2 Linux servers.

So far so good, this setup worked, but I immediately ran into another issue: I found out the WordPress and Joomla based websites' PHP sessions get lost, as the remote client browser reaches one time the listener configured on XXX…4 and the next time the one on YYY…5, on TCP port 80 and TCP port 443. In other words, if a request comes in to front end Apache worker webserver 1, data is sent back to the client browser over the opened channel, and the next request, due to the other IP being resolved by the DNS server, lands on Apache worker webserver 2; of course webserver 2 has no idea about the previous session data, gets confused and returns something like a 404 or 500 or any other error … not exciting really, huh …

I've thought about a workaround, and as I didn't want to involve third party stuff such as Privoxy / Squid / Varnish / Polipo etc., which would add just as much extra complexity as if I had chosen haproxy from the beginning, after a short investigation I came to the decision to use memcached as a central PHP session storage.

php-memcached-apache-workers-webbrowser-keep-sessions-diagram
 

Why I choose memcached ?


Well, it is relatively easy to configure, it doesn't come with mumbo-jumbo unreadable over-complicated configuration, the time to configure everything is really short and the configuration is quite straightforward. Plus, I don't need to occupy more IP addresses and I don't need to make any changes to the 2 already running webservers on the 2 separate Linux hosts reachable from the Internet.
Of course, using memcached is not rock solid and not the best solution out there: there is a risk that if a memcached instance dies for some reason, all sessions stored in it are lost, as they're kept only in volatile memory. There is also the drawback that if communication went through one of the 2 webservers and that one goes down, the sessions known only to that Apache worker disappear.

So let me proceed and explain the steps to take to configure memcached as a central session storage system.
 

1. Install memcached and php-memcached packages


To enable support for memcached, besides installing the memcached daemon, you need the php-memcached package, which provides the memcached.so module loaded by the Apache PHP script interpreter.

On Debian / Ubuntu and other deb based GNU / Linux distributions it should be:

webserver1:~# apt-get install memcached php-memcached

To use php-memcached I assume Apache and its PHP support are already installed, with let's say:
 

webserver1:~# apt-get install php libapache2-mod-php php-mcrypt


On CentOS / RHEL / Fedora Linux it is a little bit more complicated, as you'll need to install php-pear and compile the module with pecl:

 

[root@centos ~]# yum install php-pear

[root@centos ~]# yum install php-pecl-memcache


Compile memcache

[root@centos ~]# pecl install memcache
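After the pecl build finishes, the extension still has to be enabled for the PHP interpreter; a minimal sketch (the /etc/php.d path is the usual CentOS 7 layout, adjust if your PHP build differs):

[root@centos ~]# echo "extension=memcache.so" > /etc/php.d/memcache.ini
[root@centos ~]# systemctl restart httpd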

 

2. Test if memcached is properly loaded in PHP


Once installed, let's check that the memcached service is running and memcached support is loaded as a module into PHP:

 

webserver1:~# ps -efa  | egrep memcached
nobody   14443     1  0 Oct23 ?        00:04:34 /usr/bin/memcached -v -m 64 -p 11211 -u nobody -l 127.0.0.1 -l 192.168.0.1

root@webserver1:/# php -m | egrep memcache
memcached


To get a bit more verbose information on the memcached module version and a few of its variable settings:

root@webserver1:/# php -i |grep -i memcache
/etc/php/7.4/cli/conf.d/25-memcached.ini
memcached
memcached support => enabled
libmemcached version => 1.0.18
memcached.compression_factor => 1.3 => 1.3
memcached.compression_threshold => 2000 => 2000
memcached.compression_type => fastlz => fastlz
memcached.default_binary_protocol => Off => Off
memcached.default_connect_timeout => 0 => 0
memcached.default_consistent_hash => Off => Off
memcached.serializer => php => php
memcached.sess_binary_protocol => On => On
memcached.sess_connect_timeout => 0 => 0
memcached.sess_consistent_hash => On => On
memcached.sess_consistent_hash_type => ketama => ketama
memcached.sess_lock_expire => 0 => 0
memcached.sess_lock_max_wait => not set => not set
memcached.sess_lock_retries => 5 => 5
memcached.sess_lock_wait => not set => not set
memcached.sess_lock_wait_max => 150 => 150
memcached.sess_lock_wait_min => 150 => 150
memcached.sess_locking => On => On
memcached.sess_number_of_replicas => 0 => 0
memcached.sess_persistent => Off => Off
memcached.sess_prefix => memc.sess.key. => memc.sess.key.
memcached.sess_randomize_replica_read => Off => Off
memcached.sess_remove_failed_servers => Off => Off
memcached.sess_sasl_password => no value => no value
memcached.sess_sasl_username => no value => no value
memcached.sess_server_failure_limit => 0 => 0
memcached.store_retry_count => 2 => 2
Registered save handlers => files user memcached


Make sure memcached is enabled in /etc/default/memcached (on Debian); on CentOS / RHEL this would be /etc/sysconfig/memcached:

webserver1:~# cat default/memcached 
# Set this to no to disable memcached.
ENABLE_MEMCACHED=yes
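To be safe you can also make sure the service itself is enabled to start on boot (a quick sketch, systemd assumed):

webserver1:~# systemctl enable --now memcached
webserver1:~# systemctl status memcached --no-pager | head -n 3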

Now that we've made sure memcached + PHP are ready to be used on server 1, log in to Linux server 2 and repeat the same steps: install memcached and the module and check that it shows up as loaded.

Next, place the PHP code below as check_memcached.php under one of the websites hosted on your webservers:
 

<?php
$isMemcacheAvailable = false;
if (class_exists('Memcache')) {
    $server = 'localhost';
    if (!empty($_REQUEST['server'])) {
        $server = $_REQUEST['server'];
    }
    $memcache = new Memcache;
    $isMemcacheAvailable = @$memcache->connect($server);

    if ($isMemcacheAvailable) {
        $aData = $memcache->get('data');
        echo '<pre>';
        if ($aData) {
            echo '<h2>Data from Cache:</h2>';
            print_r($aData);
        } else {
            $aData = array(
                'me' => 'you',
                'us' => 'them',
            );
            echo '<h2>Fresh Data:</h2>';
            print_r($aData);
            $memcache->set('data', $aData, 0, 300);
        }
        $aData = $memcache->get('data');
        if ($aData) {
            echo '<h3>Memcache seem to be working fine!</h3>';
        } else {
            echo '<h3>Memcache DOES NOT seem to be working!</h3>';
        }
        echo '</pre>';
    }
}

if (!$isMemcacheAvailable) {
    echo 'Memcache not available';
}

?>


Open https://your-dns-round-robined-domain.com/check_memcached.php in a browser; the output should look like the screenshot below:

check_memcached-php-script-website-screenshot

3. Configure memcached daemons on both nodes

All we need to set up are the listen IPv4 addresses.

On Host Webserver1
You should have in /etc/memcached.conf

-l 127.0.0.1
-l 192.168.0.1

webserver1:~# grep -Ei '\-l' /etc/memcached.conf 
-l 127.0.0.1
-l 192.168.0.1


On Host Webserver2

-l 127.0.0.1
-l 192.168.0.200

 

webserver2:~# grep -Ei '\-l' /etc/memcached.conf
-l 127.0.0.1
-l 192.168.0.200

 

4. Configure memcached in php.ini

Edit /etc/php.ini (on CentOS / RHEL), or on Debian / Ubuntu etc. modify /etc/php/*/apache2/php.ini (depending on the PHP version you're using the location could differ, let's say /etc/php/5.6/apache2/php.ini):

If you wonder where the php.ini config is in your case, you can usually get it from the PHP CLI:

webserver1:~# php -i | grep "php.ini"
Configuration File (php.ini) Path => /etc/php/7.4/cli
Loaded Configuration File => /etc/php/7.4/cli/php.ini

 

! Note: On PHP-FPM installations (where the FastCGI Process Manager is handling PHP requests), the path would rather be something like:
 

/etc/php5/fpm/php.ini

In php.ini you need to change as a minimum the 2 variables below:
 

session.save_handler =
session.save_path =


By default session.save_path would be set to something like:

session.save_path = "/var/lib/php7/sessions"


To make PHP use the 2 centrally configured memcached servers on webserver1 and webserver2 (or even more memcached machines), set it to look like so:

session.save_path="192.168.0.200:11211, 192.168.0.1:11211"


Also set:

session.save_handler = memcache


Overall, the changed php.ini configuration on Linux machine 1 (webserver1) and Linux machine 2 (webserver2) should be:

session.save_handler = memcache
session.save_path="192.168.0.200:11211, 192.168.0.1:11211"

 

Below is approximately how it should look on both:

webserver1: ~# grep -Ei 'session.save_handler|session.save_path' /etc/php.ini
;; session.save_handler = files
session.save_handler = memcache
;     session.save_path = "N;/path"
;     session.save_path = "N;MODE;/path"
;session.save_path = "/var/lib/php7/sessions"
session.save_path="192.168.0.200:11211, 192.168.0.1:11211"
;       (see session.save_path above), then garbage collection does *not*
 

 

webserver2: ~# grep -Ei 'session.save_handler|session.save_path' /etc/php.ini
;; session.save_handler = files
session.save_handler = memcache
;     session.save_path = "N;/path"
;     session.save_path = "N;MODE;/path"
;session.save_path = "/var/lib/php7/sessions"
session.save_path="192.168.0.200:11211, 192.168.0.1:11211"
;       (see session.save_path above), then garbage collection does *not*


As you can see, I have configured memcached to listen on the internal LAN IPs: 192.168.0.1 on webserver1 and 192.168.0.200 on webserver2, on TCP port 11211 (the default memcached listen port); for security or obscurity reasons you might choose another free port. Make sure to also set up proper firewalling for that port; the best is to allow connections only between 192.168.0.200 and 192.168.0.1 on each of machine 1 and machine 2, as sketched below.
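A minimal iptables sketch for webserver1 (whose memcached listens on 192.168.0.1); mirror the rules on webserver2 with the source address swapped to 192.168.0.1. This assumes a default-accept INPUT chain where we only want to pin down port 11211:

webserver1:~# iptables -A INPUT -p tcp --dport 11211 -s 192.168.0.200 -j ACCEPT
webserver1:~# iptables -A INPUT -p tcp --dport 11211 -s 127.0.0.1 -j ACCEPT
webserver1:~# iptables -A INPUT -p tcp --dport 11211 -j DROP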

loadbalancing2-php-sessions-scheme-explained
 

5. Enable Memcached for session redundancy


The next step is to configure memcached to allow failover (i.e. use both memcached instances on the 2 Linux hosts) and to configure session redundancy.
Edit /etc/php/7.3/mods-available/memcache.ini or /etc/php5/mods-available/memcache.ini, or respectively the right location depending on the installed PHP version the webservers use.
 

webserver1 :~#  vim /etc/php/7.3/mods-available/memcache.ini

; configuration for php memcached module
; priority=20
; settings to write sessions to both servers and have fail over
memcache.hash_strategy=consistent
memcache.allow_failover=1
memcache.session_redundancy=3
extension=memcached.so

 

webserver2 :~# vim /etc/php/7.3/mods-available/memcache.ini

; configuration for php memcached module
; priority=20
; settings to write sessions to both servers and have fail over
memcache.hash_strategy=consistent
memcache.allow_failover=1
memcache.session_redundancy=3
extension=memcached.so

 

The memcache.session_redundancy directive must be equal to the number of memcached servers + 1 for the session information to be replicated to all the servers. This is due to a bug in PHP.
I have only 2 memcached servers configured; that's why I set it to 3.
 

6. Restart Apache Webservers

Restart Apache on both machines, webserver1 and webserver2, to make PHP load memcached.so (on Debian the service is named apache2 rather than httpd):
 

webserver1:~# systemctl restart httpd

webserver2:~# systemctl restart httpd

 

7. Restart memcached on machine 1 and 2

 

webserver1 :~# systemctl restart memcached

webserver2 :~# systemctl restart memcached
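Optionally, a quick sanity check that the daemons answer on their LAN addresses across the hosts can be done with the memcached plain-text stats command over netcat (assuming nc is installed):

webserver1:~# echo stats | nc -q 1 192.168.0.200 11211 | egrep 'curr_items|total_connections'
webserver2:~# echo stats | nc -q 1 192.168.0.1 11211 | egrep 'curr_items|total_connections'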

 

8. Test php sessions are working as expected with a php script

Copy a file test_sessions.php to an accessible URL on both website locations:
 

<?php
session_start();

if(isset($_SESSION['georgi']))
{
echo "Session is ".$_SESSION['georgi']."!\n";
}
else
{
echo "Session ID: ".session_id()."\n";
echo "Session Name: ".session_name()."\n";
echo "Setting 'georgi' to 'cool'\n";
$_SESSION['georgi']='cool';
}
?>

 

Now run the test to see PHP sessions are kept persistently:
 

hipo@jeremiah:~/Desktop $ curl -vL -s http://pc-freak.net/session.php 2>&1 | grep 'Set-Cookie:'
< Set-Cookie: PHPSESSID=micir464cplbdfpo36n3qi9hd3; expires=Tue, 10-Nov-2020 12:14:32 GMT; Max-Age=86400; path=/

hipo@jeremiah:~/Desktop $ curl -L --cookie "PHPSESSID=micir464cplbdfpo36n3qi9hd3" http://83.228.93.76/session.php http://213.91.190.233/session.php
Session is cool!
Session is cool!

 

Copy a sample PHP script such as sessions_test.php with the content below to the document roots behind both DNS-resolved IPs:

<?php
    header('Content-Type: text/plain');
    session_start();
    if(!isset($_SESSION['visit']))
    {
        echo "This is the first time you're visiting this server\n";
        $_SESSION['visit'] = 0;
    }
    else
            echo "Your number of visits: ".$_SESSION['visit'] . "\n";

    $_SESSION['visit']++;

    echo "Server IP: ".$_SERVER['SERVER_ADDR'] . "\n";
    echo "Client IP: ".$_SERVER['REMOTE_ADDR'] . "\n";
    print_r($_COOKIE);
?>

Test in a web browser such as Opera / Firefox / Chrome.

You should get an output in the browser similar to:
 

Your number of visits: 15
Server IP: 83.228.93.76
Client IP: 91.92.15.51
Array
(
    [_ga] => GA1.2.651288003.1538922937
    [__utma] => 238407297.651288003.1538922937.1601730730.1601759984.45
    [__utmz] => 238407297.1571087583.28.4.utmcsr=google|utmccn=(organic)|utmcmd=organic|utmctr=(not provided)
    [shellInABox] => 467306938:1110101010
    [fpestid] => EzkIzv_9OWmR9PxhUM8HEKoV3fbOri1iAiHesU7T4Pso4Mbi7Gtt9L1vlChtkli5GVDKtg
    [__gads] => ID=8a1e445d88889784-22302f2c01b9005b:T=1603219663:RT=1603219663:S=ALNI_MZ6L4IIaIBcwaeCk_KNwmL3df3Z2g
    [PHPSESSID] => mgpk1ivhvfc2d0daq08e0p0ec5
)

If you want to test that PHP sessions are working from a text browser or from another external script for automation, use something like the PHP code below:
 

<?php
// save as "session_test.php" inside your webspace
ini_set('display_errors', 'On');
error_reporting(6143);

session_start();

$sessionSavePath = ini_get('session.save_path');

echo '<br><div style="background:#def;padding:6px">'
   , 'If a session could be started successfully <b>you should'
   , ' not see any Warning(s)</b>, otherwise check the path/folder'
   , ' mentioned in the warning(s) for proper access rights.<hr>';
echo "WebServer IP:" . $_SERVER['SERVER_ADDR'] . "\n<br />";
if (empty($sessionSavePath)) {
    echo 'A "<b>session.save_path</b>" is currently',
         ' <b>not</b> set.<br>Normally "<b>';
    if (isset($_ENV['TMP'])) {
        echo  $_ENV['TMP'], '" ($_ENV["TMP"]) ';
    } else {
        echo '/tmp</b>" or "<b>C:\tmp</b>" (or whatever',
             ' the OS default "TMP" folder is set to)';
    }
    echo ' is used in this case.';
} else {
    echo 'The current "session.save_path" is "<b>',
         $sessionSavePath, '</b>".';
}

echo '<br>Session file name: "<b>sess_', session_id()
   , '</b>".</div><br>';
?>

You can download the test_php_sessions.php script here.

To test with lynx:

hipo@jeremiah:~/Desktop $ lynx -source 'https://pc-freak.net/test_php_sessions.php'
<br><div style="background:#def;padding:6px">If a session could be started successfully <b>you should not see any Warning(s)</b>, otherwise check the path/folder mentioned in the warning(s) for proper access rights.<hr>WebServer IP:83.228.93.76
<br />The current "session.save_path" is "<b>tcp://192.168.0.200:11211, tcp://192.168.0.1:11211</b>".<br>Session file name: "<b>sess_5h18f809b88isf8vileudgrl40</b>".</div><br>

Find largest files on AIX system root / show biggest files and directories in AIX folder howto


November 6th, 2020

ibm-aix-logo-find-largest-files-and-directories-on-system-to-free-space-if-disk-is-full

If on an AIX server the root directory ( / ) becomes completely full and the running AIX services are unable to write their pid files and logs, for example in /tmp, /admin, /home, /var/tmp, /var/log/ and the rest of the directory structure, or the mounted filesystems show 90% or 95%+ usage on the main partition, the system is either already stuck or on its way to stop functioning normally. Hence the only way to recover the IBM AIX machine to normal behaviour is to clean up some files (if you can't extend the partition) or add another physical hard drive, just as we usually do on Linux.

So how can we find and clean up the largest files on AIX?


Let's say we want to find all files on AIX larger than 1 MB (1 MB equals 2048 blocks of 512 bytes, hence the +2048 below):

aix-system:/ $ find / -xdev -size +2048 -ls | sort -r +6
12579 1400 -rw-r-----  1 root      security   1433534 Jun 26  2019 /etc/security/tsd/tsd.dat
 9325 20361 -rw-r-----  1 root      system    20848752 Nov  6 16:02 /etc/security/failedlogin
21862 7105 -rwxr-xr-x  1 root      system     7274915 Aug 24  2017 /sbin/zabbix_agentd
   72 7005 -rw-rw----  1 root      system     7172962 Nov  6 16:19 /audit/stream.out
24726 2810 -rw-------  1 root      system     2876944 Feb 29  2012 /etc/syslog-ng/core
29314 2391 -r-xr-xr-x  1 root      system     2447454 Jun 25  2019 /lpp/bos/bos.rte.filesystem/7.1.5.32.save/update.16
21844 2391 -r-xr-xr-x  1 root      system     2447414 Jun 25  2019 /sbin/helpers/jfs2/logredo64
21843 2219 -r-xr-xr-x  1 root      system     2271971 Jun 25  2019 /sbin/helpers/jfs2/logredo
29313 2218 -r-xr-xr-x  1 root      system     2270835 Jun 25  2019 /lpp/bos/bos.rte.filesystem/7.1.5.32.save/update.15
22279 1800 -rw-r--r--  1 root      system     1843200 Nov  4 08:03 /root/smit.log
12577 1399 -rw-r--r--  1 root      system     1431685 Jun 26  2019 /etc/security/tsd/.tsd.bk
21837 1325 -r-xr-xr-x  1 root      system     1356340 Jun 25  2019 /sbin/helpers/jfs2/fsck64
29307 1325 -r-xr-xr-x  1 root      system     1356196 Jun 25  2019 /lpp/bos/bos.rte.filesystem/7.1.5.32.save/update.9
   12 1262 -rw-------  1 root      system     1291365 Aug  8  2011 /core

 

The above finds all files greater than 1 MB and sorts them in reverse order, with the largest files first.

To search for all files larger than 64 Megabytes (131072 blocks of 512 bytes) under root ( / ):

aix-system:/ $ find / -xdev -size +131072 -ls | sort -r +6
65139 97019 -rw-r--r--  1 root      system    99347181 Mar 31  2017 /admin/archive.zip


Display the 10 largest directories on the system:

aix-system:/ $ du -a /dir | sort -n -r | head -10


Show biggest files and directories in a directory

 

aix-system:/ $ du -sk * | sort -n
4       Mail
4       liste
4       my_user
4       syslog-ng.conf
140     smit.script
180     smit.transaction
1804    smit.log

The du above displays the size of all files and directories in the current directory, with the biggest at the bottom.

 

List all entries in a directory by size in decreasing order; if an entry is a directory, its cumulative size including sub-directories is shown.

aix-system:/ $ ls -A . | while read name; do du -sk $name; done | sort -nr

The ls + while loop above sorts disk usage for all files in the current directory by size, in decreasing order. If the entry we suspect happens to be a directory, we can then change into that directory and re-run the preceding command to determine what is taking up space within it.

Continue these steps until you find the desired file or files, at which point you can take appropriate actions.

If the bottom-most item is a directory, then cd into that directory and run the du command again. Keep drilling down until you find the biggest files on your system and get rid of them to save some space.
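If you prefer to automate the drilling down, here is a rough helper sketch in plain sh/ksh (nothing AIX specific assumed) that keeps descending into whatever entry is currently the largest and stops once the largest entry is a regular file:

dir=/
while [ -d "$dir" ]; do
    # biggest entry (in KB) directly under $dir
    biggest=`du -sk "$dir"/* 2>/dev/null | sort -n | tail -1`
    echo "$biggest"
    dir=`echo "$biggest" | awk '{ print $2 }'`
done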

Clean up /var/cache/yum constantly filling file system in CentOS and RHEL Linux


November 3rd, 2020

At work we recently hit one of those common sysadmin problems on a server installed by some colleagues, who had weirdly set a very small separate /var/cache/yum partition of only 10 GB on a Red Hat Enterprise Linux 7.8 machine. The OS install guys' original thinking was that this machine would be used just for a very simple cluster running web hosting services and a clustered haproxy service with PCS + Heartbeat and Corosync, but they did not properly estimate that Red Hat comes with its own rhnsd, that updates get pushed frequently to the machine, and that the lack of cleaning up the downloaded / installed rpms would soon leave the system with a full /var/cache/yum partition.

The gradual increase over time, even if only by a few megabytes each month, is normal behavior, as the cache grows based on the frequency of syncing with the yum repository server.

On these specific machines, besides rhnsd there is also Spacewalk (a Free and Open Source systems management tool), which is usually installed to orchestrate multiple systems and centralize installation of updates and monitoring of rpm packages.
Spacewalk also has its own way of producing files in /var/cache/yum by creating repository clones of the defined repository directories.

Linux-Register-clients-with-SpaceWalk-Server-System-Overview

The normal way to clean up space, as we all know, is to do something with yum:

[test@rhel ~]$ yum clean
Loaded plugins: fastestmirror, langpacks
Error: clean requires an option: headers, packages, metadata, dbcache, plugins, expire-cache, rpmdb, all
[test@rhel ~]$


Usually, to clean up a bit of space and get rid of the full-partition problem, you can do:

[test@rhel ~]# yum clean metadata

This cleans up any xml metadata that may have been cached from any enabled repository.

[test@rhel ~]# yum clean all

This should clean up all rpms and metadata.

In cases where this works, you should re-run it frequently on the machine or set up a cron job to periodically reduce the space occupied by the useless downloaded .rpms; a sketch of such a cron job follows.
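A minimal sketch of such a job (the schedule and file name are just an example):

[root@rhel ~]# cat > /etc/cron.d/yum-clean-cache << 'EOF'
# run once a week, Sunday 03:00, and clean the whole yum cache
0 3 * * 0 root /usr/bin/yum clean all >/dev/null 2>&1
EOF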

However, sometimes this simple solution does not work …

How to redirect / forward all postfix emails to one external email address?


October 29th, 2020

Postfix_mailserver-logo-howto-forward-email-with-regular-expression-or-maildrop

Let's say you're a sysadmin doing an email migration of a clustered SMTP setup, and because of that you want to capture all incoming email traffic for a while and redirect (forward) it towards a single mailbox, where you can review the mail traffic flowing for a few hours and analyze it more deeply. This approach is useful if you have small or middle sized mail servers and won't be so useful on a mail server that handles a few hundred mails hourly. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to a single external email address?

There are different ways to do it. If you don't want to just intercept the traffic and create a copy of it using the integrated postfix always_bcc option (as pointed out in my previous article postfix copy every email to a central mailbox), you can copy the email flow via some custom written dispatcher script set to be run by the MTA on each mail arrival, or use maildrop's filtering functionality. Below is a very simple maildrop example in case you want to filter out and deliver to an external email address only the email targeted at a specific domain.

If you use maildrop as the local delivery agent, to copy email targeted at a specific domain to another defined email address use a rule like:

if ( /^From:.*domain\.com/:h ) {
  cc "!someothermail@domain2.com"
}


To use maildrop to forward email coming in from a specific sender to a locally existing mailbox on the postfix onwards to an external email address, use something like:

if ( /^From: .*linus@mail.example.com.*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail linuxbox@collector.example.com"
        }
}

Then, to make the filter active (assuming the user has a physical UNIX mailbox), paste the above into the local user's $HOME/.mailfilter. A sketch of wiring maildrop in as the local delivery agent follows, in case it is not already.
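If maildrop is not already the local delivery agent on that postfix, a minimal sketch of enabling it (the path and the ${USER} expansion are the common ones, double-check against your maildrop build):

# in /etc/postfix/main.cf
mailbox_command = /usr/bin/maildrop -d ${USER}

# then reload postfix
# systemctl reload postfix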

What to do if the mail delivered via your Email-Server.com is sent by monitoring and alarming scripts towards many mailboxes that no longer exist after the migration?

To capture all traffic that is attempted to be sent via the mail server and forward all served mails towards a single external mail address, you can use the nice capability of postfix to understand PCRE (Perl compatible) regular expressions. Regular expressions in postfix of course have their specifics; I recommend you take a look at the postfix regexp table documentation here, as well as check the Postfix Regex / Tester / Debugger online tool – useful to validate a regexp you want to implement.

How to use a postfix regular expression to redirect all emails sent via your postfix mail relayhost towards an external mail address?

 

In /etc/postfix/main.cf include this line near the bottom or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

The first map defines the virtual file, which can be used to declare any virtual domains you want to appear as present on the local postfix; the regexp: part loads the file read by postfix in which you can type the regular expressions applied to every email coming in via SMTP port 25 or the encrypted MTA submission ports 465 / 587 etc.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp

/.+@.+/ external-forward-email@gmail.com

Next build the mapfile (this will generate /etc/postfix/virtual-regexp.db )
 

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run the postmap postfix .db generator again:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual


Note that in /etc/postfix/virtual you can add the mail domains for which you want the MTA to accept mail as local mail; a small example follows.
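For illustration only (the addresses below are placeholders, not taken from the real setup), the entries in /etc/postfix/virtual follow the usual virtual(5) alias map format:

info@mydomain.com        info
sales@mydomain.com       sales@myanotherdomain.com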

In case you need to view all virtual domains defined in postfix and configured to accept mail locally on the mail server:
 

$ postconf -n | grep virtual
virtual_alias_domains = mydomain.com myanotherdomain.com
virtual_alias_maps = hash:/etc/postfix/virtual


The applied regexp /.+@.+/ external-forward-email@gmail.com will start forwarding mails immediately after you restart the MTA with:

# systemctl restart postfix


If you want to exclude certain target mail domains from being captured by the above regexp, place in /etc/postfix/virtual-regexp:

/.+@exclude-domain1.com/ @exclude-domain1.com
/.+@exclude-domain2.com/ @exclude-domain2.com
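Keep in mind that regexp tables are evaluated top to bottom and the first matching line wins, so the exclusions have to sit above the catch-all rule; the combined /etc/postfix/virtual-regexp would then look roughly like this:

/.+@exclude-domain1.com/ @exclude-domain1.com
/.+@exclude-domain2.com/ @exclude-domain2.com
/.+@.+/ external-forward-email@gmail.com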

Time for a test: send a test email and check that the mail forwarding works as expected.
 

# echo -e "Testing body" | mail -s "testing subject" -r "testing@test.com" whatevertest-user@mail-recipient-domain.com

Postfix copy every email to a central mailbox (send a copy of every mail sent via mail server to a given email)


October 28th, 2020

Postfix-logo-always-bcc-email-option-send-all-emails-to-a-single-address-with-postfix.svg

Say you need to do a mail server migration, where you have a locally configured Postfix on a number of Linux hosts named:

Linux-host1
Linux-host2
Linux-host3

etc.


all configured to send email via an old mail relay host (MailServerHostOld.com) in each Linux box's postfix configuration /etc/postfix/main.cf.
Now, due to some infrastructure change in the network topology or anything else, you need to relay the mails sent through another, assumedly properly configured, Linux relay host (MailServerNewHost.com).

Such a migration always carries the risk that some of the emails originating from locally running scripts on Linux-host1, Linux-Host2 … or from some application or anything else set to send via them, might not get properly delivered to some external Internet based mailboxes via the new relayhost MailServerNewHost.com.

E.g. in /etc/postfix/main.cf on the Linux-Host* machines, you have the config below after the migration:

relayhost = [MailServerNewHost.com]

Let's say you want to make sure you don't end up with lost emails, as you can't be sure whether the new email server will deliver correctly to the old recipient addresses. What to do then?

To make sure mails will not end up in an undelivered state and get lost forever after a week or so (depending on the mail queue retention period configured on the sending Linux MTAs and on the mail relay MailServerNewHost.com), it is a very good approach to temporarily BCC (send a Blind Carbon Copy of) each mail passing through the relay configured in the local Postfix setups on Linux-Host*.

Achieving that in postfix is very easy: all you have to do is set on MailServerNewHost.com the postfix config variable always_bcc, smartly included by the postfix Mail Transfer Agent developers for cases exactly like this.

To forward a copy of all emails passing through the mail server, just log in via SSH to MailServerNewHost.com and place at the end of /etc/postfix/main.cf:

always_bcc=All-Emails@your-diresired-redirect-email-address.com


Now all that's left is to reload postfix to force the new configuration to be loaded; on systemd based hosts, as is usual today, do:

# systemctl reload postfix
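To quickly verify the option is really active (the queue can also be watched with postqueue -p):

# postconf always_bcc
always_bcc = All-Emails@your-diresired-redirect-email-address.com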


Finally, to make sure all works as expected and mail gets sent, do a test via the local MTAs.
 

Linux-Host:~# echo -e "Testing body" | mail -s "testing subject" -r "testing@test.com" georgi.stoyanov@remote-user-email-whatever-address.com

Linux-Host:~# echo -e "Testing body" | mail -s "testing subject" -r "testing@test.com" georgi.stoyanov@sample-destination-address.com


As you can see, I'm using -r to simulate a sender address; this is a feature of mailx and is not available on older Linux OS hosts that bundle only the plain mail command.
Now open the All-Emails@your-diresired-redirect-email-address.com mailbox in Outlook (if it is an M$ Office 365 shared mailbox), Thunderbird or whatever email fetching software that supports POP3 or IMAP (in case you configured the catch-all mailbox to be on some other Postfix / Sendmail / Qmail MTA), and check whether you've started receiving a lot of emails 🙂

That's all folks enjoy ! 🙂

Saint Markianos and Saint Martyrios, a church reader and a sub-deacon, holy martyrs for Christ – the feast of sub-deacons


October 25th, 2020

saint-Markian-and-Saint-Martirios-cleargymen-church-martyrs-3rd-century
Saint Markianos (Saint Markian) and Saint Martyrios are little known saints in the Western realm, and there is very little information in English about these two early martyrs, who lived circa the year 340. What is special about them is that, besides being strong confessors of the true Eastern Orthodox faith, they served in the Church as a simple 'reader' and 'sub-deacon'. These two designations were very much respected in the early Church, as sub-deacons were usually the ones who served inseparably as Church service helpers to the patriarchs or to high clergy such as Metropolitans and Bishops. We have many saints in the Church, from simple warriors such as Saint George and Saint Dimitrios the Wonderworker (the Myrrh-Bringer) to monks, bishops, patriarchs, and pretty much every kind of person in society, from the beggar to the richest and most famous kings and queens. However, it is rare to find in the Acts of the Martyrs (Latin: Acta Martyrum) canonized saints who stood on the lowest step of the Church hierarchy, as a simple reader of psalms and holy writings or a sub-deacon. A sub-deacon, for those who don't know, is a person who is like a servant helper to the priest or bishop, responsible for helping with the Church service and with the resolution of material and administrative needs of the Christian community.
In the Eastern Orthodox Church, sub-deacons were and are still called hypodeacons ("ipodiakon" in the Greek / Slavonic church language). In those early ages of Christianity, readers and sub-deacons didn't have the right to publicly teach on faith matters or do apologetics (defences of the faith); however, these two saintly men, Markianos and Martyrios, seem to have been burning with the power of the Spirit of God in their hearts, given the situation they were put in when the Church was under persecution under Patriarch Paul I of Constantinople (patriarch from circa 340 to 350 AD). Saint Paul was removed from his Church headship, sent into exile in Armenia and some time later drowned. He is commemorated in the Church on the 6th of November. Hence, considering the situation, St. Markianos and Martyrios had to either defend the faith and die for it, or be scared and run away far into the caves or distant places of the empire, such as villages on the outskirts, far away from the imperial capital …

The heresy of Arius was at the time the fashionable, newly modified faith claiming to be Christianity and gathering followers in a viral way, and because of that the Arians were in a position where most of the public authorities in the Roman empire were on their side against the Orthodox Christians.

Marcian_and_Martyrius_the_notaries_of_Constantinople-circa-355AD. 

Because of that, in the church communities in the near and distant lands of the empire the Arians were fiercely persecuting the Orthodox, and for a time even Emperor Saint Constantine the Great was deceived by their hypocrisy. These were terrible times for the true confessors of the faith. But not only the Arians were persecuting Christians: paganism was still deeply rooted in many of the lands, the Edict of Milan, which gave equal rights to the religion in AD 313, was not strictly followed, and senators of Roman regions with pagan beliefs were also harshly raising persecutions against their enemies, the Christians, who according to them were destroying the ancient culture and beauty of paganism by not venerating the old pagan gods and by opposing the wicked debauched customs followed by pagans in the 3rd / 4th century.

beheading-of-saint-Martyrios

Practically everyone who publicly confessed Jesus Christ as the Creator of the world and the Son of God, one hypostasis of the Holy Trinity – God the Father, the Son and the Holy Spirit – was captured, put in prison and quickly executed if they did not renounce their Christian beliefs.

The Arians gained even more ground with the accession to the throne of Emperor Constantius II, the son of Constantine I, as he had also fallen into the Arianism* heresy and had taken into his court, as close advisers, Eusebius and Philip, who due to their half-pagan, half-Arian, half-superstitious understanding of the world led a fierce war against Christianity and did a lot of evil to Christ's Church.

* Arianism – the belief that Jesus Christ is the Son of God, who was begotten by God the Father and is distinct from the Father (therefore subordinate to him), and that the Son is God the Son but not co-eternal with God the Father. Arian theology was first attributed to Arius (c. AD 256–336), a Christian presbyter in Alexandria of Egypt.
saint-Markian-and-Saint-Martirios-cleargymen-church-martyrs-3rd-century.jpg
Until the dethronement of Patriarch Paul I, St. Markianos and St. Martyrios were notaries of St. Paul (typists and a kind of personal secretaries to the patriarch), besides serving as Church reader and sub-deacon. They were famous in their time for their warm preaching of the Word of God – the Gospel of Christ – following the example of the apostles. Due to the rising heresies they also took an active part in writing many documents against the heretical "Arians" and the so-called "Macedonians", who taught anti-Christian teachings newly invented and unknown to the ancient Church. They had a special gift from God of being able to speak in defence of the faith so that no one, however knowledgeable or highly educated, could overcome them in disputes on church matters, and many times they disputed with Arian heretics, exposing their fallacies (delusions) and putting them to shame.

After the exile of Patriarch Paul, the Arian heresiarchs turned their poisonous hatred against the patriarch's two pupils, Markianos and Martyrios. Acting slyly and with crafty lies, they promised them a lot of gold, a good place in the emperor's court, to raise them in the church hierarchy (in the part of the church which was already confessing the Arian heresy) and to give them a lot of privileges from the ruler, on the condition that they accept, support and confess Arianism.

But God's servants despised everything of this world, rejected the offered golden gifts, preferred the eternal Heavenly honours over the short and vain worldly ones, and even laughed at them.

As the Arians saw that nothing could win them over to their malicious teaching, the heretics condemned them to death, which was even desired by the confessors (who remembered well the exile and the manly martyrdom of their teacher, St. Patriarch Paul) and who with all their being longed to be with Christ in the eternally prepared palaces, where life will be without end, in never ending bliss, as promised by Christ in the Holy Scriptures. They preferred Christ over the temporary enjoyments of this life.

saint_Markian-and_Martirios-orthodox-icon

When brought to the place of execution under their falsely made accusation and sentence for being blasphemers of Christ, the two saints asked for a little time to pray. They lifted up their eyes to heaven and prayed with the words:

" – Oh Lord, who have unseenly created our hearts, who arrange all our deeds – "He formed the hearts of them all; he understands everything they do." (Psalm 33:15), receive with peace the souls of your servents, because we're mortified for your name – "Yet for Your sake we are killed all day long; We are accounted as sheep for the slaughter." (Psalm 44:22). We're joyful that you give us such a death, we depart from this life because of your name. Let us to participate in the eternal life in You, the source and giver of life."

Praying with these words, they bowed their holy heads under the sword and were killed by beheading by the wretched Arians because of their confession of the divinity of Christ as the true, uncreated Son of God, who existed before all ages, before the creation of the world, as we Christians believe to this day.

Some of the Christians took their holy relics and buried them outside the Melandissia Gate of Constantinople. Later Saint John Chrysostom built a church in their name over the place of their miracle-working relics. There, for many ages, the sick received divine healing of various incurable diseases through the prayers of the holy martyrs of God, praised in the Trinity unto all ages.

By the prayers of your Holy Martyrs St. Markianos and St. Martyrios, Lord Jesus Christ, have mercy on us!

Set multiple domain aliases in IIS on Windows server howto


October 24th, 2020

https://pc-freak.net/images/microsoft-iis-logo

On Linux, as mentioned in my previous article, it is pretty easy to use the Apache VirtualHost directive etc. to create a ServerName and ServerAlias, but adding multiple domain aliases on IIS took me a while to google.

<VirtualHost *>
ServerName whatevever-server-host.com
ServerAlias www.whatever-server-host.com whatever-server-host.com
</VirtualHost>


In click-and-pray environments such as Winblows, sometimes something rather easy to do can be really annoying if you are not sure what to do and where to click, and you have not passed some of the many cryptic certification programs Microsoft offers for professional sysadmins. I'll name a few of them to introduce UNIX guys like me to what you might ask an M$ admin during an interview if you want to check his 31337 Windows Sk!llZ 🙂

 

  • Microsoft Certified Professional (MCP)
  • Microsoft Technology Associate (MTA)
  • Microsoft Certified Solutions Expert (MCSE)
  • Microsoft Specialist (MS) etc.

A full list of the Microsoft Certified Professional programs is available here.

OK, enough babbling.

Here is how to create a domain alias in IIS on a Windows server.

Log in to your server, click on the START button, then 'Run…', and then type 'inetmgr.exe'.

Certainly you can go and click through the Administrative Tools section to start the IIS manager, but for me this is the fastest and easiest way.

create-domain-alias-on-windows-server-1a
 

Now expand the (local computer), then 'Web Sites', and locate the site for which you want to add an alias (here it is called an additional web site identification).

Right click on the domain and choose the 'Properties' option at the bottom.

This will open the properties window, where you have to choose 'Web Site' and then locate the 'Website identification' section. Click on the 'Advanced…' button which stands next to the IP of the domain.

create-domain-alias-on-windows-server-2a
The Advanced Web site identification window (Microsoft likes to see the word 'Advanced' in all of its management menus) will open, and there we are going to add a new domain alias.

create-domain-alias-on-windows-server-3a.png

Click on the 'Add…' button and the 'Add/Edit website (alias) identification' window will appear.

create-domain-alias-on-windows-server-4a.png

Make sure you choose the same IP address from the dropdown menu, then set the port number on which your web server is running (the default is 80), write the domain you want, and click 'OK' to create the new domain alias.

Actually, click 'OK' until both the 'Advanced Web site identification' and the domain properties windows are closed.

Right click on the domain again and 'Stop' and then 'Start' the service.
This is enough for the IIS domain alias to start working.

create-domain-alias-on-windows-server-5a


Another useful thing for novice IIS admins coming from UNIX is a domain1 to domain2 redirect. This is done by writing an IIS rewrite rule, which is an interesting but long topic for a limited post like this one, but just for reference and fun I'll let you know it exists.

Domain 1 to Domain 2 Redirect
This rule comes in handy when you change the name of your site, or maybe when you need to catch an alias and direct it to your main site. If the new and the old URLs share some elements, then you can just use a rule with a matching pattern and carry the shared part over into the redirect target, roughly as in the sketch below.
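Just as a teaser, assuming the IIS URL Rewrite module is installed (it is a separate add-on, not part of a stock IIS install), such a redirect boils down to a rule along these lines in the site's web.config; the domain names are placeholders:

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect domain1 to domain2" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTP_HOST}" pattern="^(www\.)?domain1\.com$" />
        </conditions>
        <!-- {R:1} carries over the matched path, so shared URL elements survive the redirect -->
        <action type="Redirect" url="http://www.domain2.com/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>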

domain1-to-domain2-redirect-iis

That's all folks. If you enjoyed the clicking laziness, you're ready to retrain yourself to become a successful lazy Windows admin who calls Microsoft Support every day, as many of the errors and problems Windows sysadmins experience, so I've heard from a friend, can only be managed by M$ Support (if they can be managed at all).

Yes, that's it, the great and wonderful life of the average sysadmin. Long live computing … it's great! Every day something broken needs to get fixed, every day there's something to rethink / relearn / re-update and restructure or migrate; a never ending story of weirdness.

A remark to make: the idea for this post originated from a task I had to do a long time ago on IIS. The images and the descriptions behind them are taken from a post on Domain Aliasing in IIS originally written by Anthony Gee; unfortunately his blog is not available anymore, so credits go to him.