Posts Tagged ‘copy’

Fix Oracle VirtualBox Virtual Machine inaccessible error in Linux / Windows Host OS

Monday, November 2nd, 2020

Reading Time: 3 minutes

Fix Oracle VirtualBox Virtual Machine inaccessible error in Linux / Windows Host OS

Say you have installed a Virtual Machine on a Host OS, be it Windows 7 / 10 or some GNU / Linux / FreeBSD OS, and suddenly after a PC shutdown / restart the Virtual Machine can no longer be started by Oracle VirtualBox and shows up with the message VM "inaccessible". What is causing this and how do you fix it?

This error is usually caused by a forceful termination of the Guest Operating System or by a crash of the Host OS, such as a sudden hang due to a kernel error or a power loss. Whatever the reason, if the termination signals sent to Oracle VirtualBox are not handled properly, VirtualBox does not get the chance to rewrite its own metadata files and shut the running VM down gracefully.

On a Windows virtualization Host OS the files VirtualBox maintains for each VM are found under VirtualBox VMs\Name-of-VM:


For example if you have Ubuntu installed

C:\Users\\VirtualBox VMs\Ubuntu

If you have CentOS7 installed instead it will be under

C:\Users\\VirtualBox VMs\CentOS7

Usually under this dir you will find files with the .vbox extension.

The virtual box files with extension .vbox contain metadata the virtualbox hypervisor requires to resolve the guest virtual OS' configuration.
If the main .vbox file is corrupted (i.e. reporting that it is empty) then use the backup .vbox-prev file to recover the contents of the original file.

How to solve the weird VirtualBox inaccessible error:

Usually under VirtualBox VMs\VM_Name you'll find a directory / file structure like:




1. Considering the .vbox file is empty, rename it to a temporary name (e.g. rename VM_Name.vbox to VM_Name-empty.vbox). If you're unsure, open the file in a text editor and check whether it really has no content.

2. Then make a copy of the backup file VM_Name.vbox-prev, giving the copy the same name as the original with the word "copy" appended (i.e. copy VM_Name.vbox-prev to VM_Name_copy.vbox-prev), and likewise copy VM_Name.vbox-tmp to VM_Name_copy.vbox-tmp.

! Note that it is important to retain the original backup .vbox-prev file; it should not be altered or renamed itself.

3. Now rename the newly created copy VM_Name_copy.vbox-prev to the name of the empty .vbox file (VM_Name.vbox). A minimal command-line sketch of steps 1 – 3 follows below.
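For a GNU / Linux host the same three steps look roughly like the commands below (the VM name and path are example values, adjust them to your own setup):

linux-host:~$ cd ~/"VirtualBox VMs/VM_Name"
# step 1 – move the corrupted (empty) .vbox file out of the way
linux-host:~$ mv VM_Name.vbox VM_Name-empty.vbox
# step 2 – work only on copies, keep the original backup files untouched
linux-host:~$ cp -p VM_Name.vbox-prev VM_Name_copy.vbox-prev
linux-host:~$ cp -p VM_Name.vbox-tmp VM_Name_copy.vbox-tmp
# step 3 – promote the copied backup to be the active .vbox file
linux-host:~$ mv VM_Name_copy.vbox-prev VM_Name.vbox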

Now that this is done you may add the .vbox file (guest os) back into the VBOX hypervisor.

If for some reason this did not work and you have a properly working copy of the Virtual Machine under VM_Name.vbox-tmp, another approach to try is:

1.  Go to your Virtualbox folder i.e. C:\Users\hipo\VirtualBox VMs\Ubuntu

2. Check that the needed files Ubuntu.vbox-tmp or Ubuntu.vbox-prev are there.

3. Overwrite the Ubuntu.vbox file with the -tmp one, i.e. just rename Ubuntu.vbox-tmp to Ubuntu.vbox.

4.  Exit from Virtual Machine and Power it on again.

Hopefully this should recover the state and snapshots of the "inaccessible" guest VM; the error should now be gone and the VM will work as before.


How to redirect / forward all postfix emails to one external email address?

Thursday, October 29th, 2020

Reading Time: 3 minutes


Let's say you're a sysadmin doing an email migration of a clustered SMTP setup and because of that you want to capture all incoming email traffic for a while and redirect (forward) it towards a single external mailbox, where you can review the mail flowing for a few hours and analyze it in more depth. This approach is useful on small or middle sized mail servers and won't be so useful on a mail server that handles a few hundred mails hourly. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to single external email address?

There are different ways to do it. If you don't want to simply intercept the traffic and create a copy of it using the integrated postfix always_bcc option (as pointed out in my previous article postfix copy every email to a central mailbox), you can copy the email flow via some custom written dispatcher script set to be run by the MTA on each mail arrival, or use maildrop's filtering functionality. Below is a very simple maildrop example in case you want to filter out and deliver to an external email address only email targeted at a specific domain.

If you use maildrop as a local delivery agent, to copy email targeted at a specific domain to another defined email address use a rule like the one below (the destination address is a placeholder – replace it with your own):

if ( /^From:.*domain\.com/:h ) {
  # "!address" forwards a copy to the given (placeholder) external address
  cc "!your-email@external-domain.com"
}

To use maildrop to forward email coming in from a specific sender towards a locally existing postfix mailbox onwards to an external email address, use something like the following (again, the sender pattern and destination address are placeholders):

if ( /^From: .*sender@some-domain\.com/:h )
        dotlock "forward.lock" {
          log "Forward mail"
          # destination is a placeholder external address
          to "|/usr/sbin/sendmail your-email@external-domain.com"
        }

Then, to make the filter active, assuming the user has a physical unix mailbox, paste the above into the local user's $HOME/.mailfilter.

What to do if the mails delivered via your server are sent by monitoring and alarming scripts towards many mailboxes that no longer exist after the migration?

To capture all traffic attempted to be sent via the mail server and forward all served mails towards a single external mail address, you can use postfix's nice capability to understand PCRE (Perl compatible) regular expressions. Regular expressions in postfix of course have their specifics, so I recommend you take a look at the postfix regexp table documentation here, as well as check the Postfix Regex Tester / Debugger online tool – useful to validate a regexp you want to implement.

How to use a postfix regular expression to redirect all emails sent via your postfix mail relayhost towards an external mail server?


In /etc/postfix/main.cf include this line near the bottom or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

The hash: part points to the virtual file, which can be used to define any of the virtual domains you want to appear as locally present on the postfix server; the regexp: part loads the file read by postfix in which you can type the regular expression applied to every email coming in via SMTP port 25 or the encrypted submission ports 465 / 587 etc.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp
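Its content is a single catch-all rule; the sketch below uses the /.+@.+/ regexp mentioned later in this article, with a placeholder destination mailbox you should replace with the real external address:

# /etc/postfix/virtual-regexp
/.+@.+/ capture-mailbox@external-domain.com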


Next build the mapfile (this will generate /etc/postfix/virtual-regexp.db )

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run the postmap .db generator again:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual

Note that in /etc/postfix/virtual you can add the mail domains for which you want the MTA to accept mail as local mail.

In case you need to view all postfix defined virtual domains configured to accept mail locally on the mail server:

$ postconf -n | grep virtual
virtual_alias_domains =
virtual_alias_maps = hash:/etc/postfix/virtual

The regexp /.+@.+/ applied will start forwarding mails immediately after you reload the MTA with:

# systemctl restart postfix

If you want to exclude certain target mail domains from being captured by the above regexp, place an earlier rule for them in /etc/postfix/virtual-regexp:
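Regexp tables are evaluated top to bottom and the first match wins, so a rule that maps the excluded domain's recipients back to themselves keeps that domain's mail untouched. A sketch with a placeholder domain:

# keep mail for this (example) domain as-is – must be placed above the /.+@.+/ catch-all
/^(.+)@exampledomain\.com$/ $1@exampledomain.com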


Time for a test. Send a test email

Next step is to test that mail forwarding works as expected:

# echo -e "Tseting body" | mail -s "testing subject" -r ""

Reinstall all Debian packages with a copy of apt deb package list from another working Debian Linux installation

Wednesday, July 29th, 2020

Reading Time: 11 minutes


A few days ago, in a hurry in the small hours of the night, I did something extremely stupid. Wanting to move a .tar.gz binary copy of a qmail installation to /var/lib/qmail with all the dependent qmail items, instead of extracting it to the admin user's home directory (/root), I extracted it to the main Operating System root / directory.
Not noticing this, I quickly executed rm -rf var with the idea to delete the whole directory tree under /root/var. Just 3 seconds later I realized I was issuing rm -rf var in the wrong location WITH the root user !!!! Scared of what I had done, I quickly pressed CTRL+C to immediately cancel the deletion of my /var.


But as you can guess, since the machine has a Solid State Drive and SSDs are much faster in I/O operations than classical ATA / SATA disks, I was not quick enough to cancel the operation and noticed that part of my /var had already R.I.P-ped into the heaven of directories.

This was of course upsetting, so for a while I rethought the situation to get some ideas on what I could do to recover my system ASAP!!! The obvious idea was to try to reinstall all my installed .deb Debian packages to restore the system as close as possible to its state before my stupid mistake.

Guess my unpleasant surprise when I realized dpkg and respectively the apt-get / apt and aptitude package management tools could no longer handle packages, as Debian Linux's package dependency database had been damaged due to the missing dpkg directory.




Oh man, that was unpleasant, especially since I had installed plenty of custom stuff on my MATE based desktop, and generally reinstalling it, updating the system to the latest Debian security updates etc. would be a time consuming and painful process I wanted to avoid.

So of course the logical thing to do was to try to somehow recover a copy of the /var/lib/dpkg database, if that was possible. That led me to look for a way to recover /var/lib/dpkg from a backup, but since I did not maintain any backup copy of my OS anywhere that was not really possible, so I wondered whether dpkg itself keeps some kind of database backup somewhere in case something goes wrong with its database.
This led me to a nice Ubuntu thread which pointed me to part of my rm -rf dpkg database disaster recovery solution.
Luckily the .deb package management creators have thought about situations similar to mine and give the user a restore point for a damaged /var/lib/dpkg database:

/var/lib/dpkg is periodically backed up in /var/backups

A typical /var/lib/dpkg on Ubuntu and Debian Linux looks like so:

hipo@jeremiah:/var/backups$ ls -l /var/lib/dpkg
total 12572
drwxr-xr-x 2 root root    4096 Jul 26 03:22 alternatives
-rw-r--r-- 1 root root      11 Oct 14  2017 arch
-rw-r--r-- 1 root root 2199402 Jul 25 20:04 available
-rw-r--r-- 1 root root 2199402 Oct 19  2017 available-old
-rw-r--r-- 1 root root       8 Sep  6  2012 cmethopt
-rw-r--r-- 1 root root    1337 Jul 26 01:39 diversions
-rw-r--r-- 1 root root    1223 Jul 26 01:39 diversions-old
drwxr-xr-x 2 root root  679936 Jul 28 14:17 info
-rw-r----- 1 root root       0 Jul 28 14:17 lock
-rw-r----- 1 root root       0 Jul 26 03:00 lock-frontend
drwxr-xr-x 2 root root    4096 Sep 17  2012 parts
-rw-r--r-- 1 root root    1011 Jul 25 23:59 statoverride
-rw-r--r-- 1 root root     965 Jul 25 23:59 statoverride-old
-rw-r--r-- 1 root root 3873710 Jul 28 14:17 status
-rw-r--r-- 1 root root 3873712 Jul 28 14:17 status-old
drwxr-xr-x 2 root root    4096 Jul 26 03:22 triggers
drwxr-xr-x 2 root root    4096 Jul 28 14:17 updates

Before proceeding with the radical move of copying /var/lib/dpkg/info over from another machine to the mistakenly removed /var, I tried to recover with the well known:

  • extundelete
  • foremost
  • recover
  • ext4magic
  • ext3grep
  • gddrescue
  • ddrescue
  • myrescue
  • testdisk
  • photorec

Linux file deletion recovery tools, run from a USB stick loaded with a number of LiveCD distributions, i.e. I tested recovery with:

  • Debian LiveCD
  • Ubuntu LiveCD
  • SystemRescueCD
  • Trinity Rescue Kit
  • Ultimate Boot CD

but unfortunately none of them could recover the deleted files …

Why couldn't the standard file recovery tools recover anything?

My assumption is that after I did the rm -rf var; from the system root, issued the sync command (if you haven't used it, check out man sync – it synchronizes cached writes to persistent storage) and did a restart from the power-off PC button, recovery should have worked – I have recovered like that in the past on a normal SysV system with a plain old block filesystem such as EXT2 or any other filesystem without a journal. However, as this machine runs an EXT4 filesystem with journaling, it did not work, perhaps because something was not flushed properly in the journal, which left all recently deleted files totally unrecoverable.

1. First step was to restore the directory skeleton of /var/lib/dpkg

# mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}


2. Recover missing /var/lib/dpkg/status  file

The main file that gives dpkg information about the existing packages and their statuses on Debian based systems is /var/lib/dpkg/status

# cp /var/backups/dpkg.status.0 /var/lib/dpkg/status


3. Reinstall dpkg package manager to make package management working again

Say a warm prayer to the Merciful God ! and do:

# apt-get download dpkg
# dpkg -i dpkg*.deb


4. Reinstall the base-files .deb package which provides the basis of a Debian system

Hopefully everything will be okay and your dpkg / apt pair will be in normal working state, next step is to:

# apt-get download base-files
# dpkg -i base-files*.deb


5. Do a package sanity and consistency check and try to update OS package list

Check whether packages have been installed only partially on your system or have missing, wrong or obsolete control data or files. dpkg should suggest what to do with them to get them fixed.

# dpkg --audit

Then resynchronize (fetch) the package index files from their sources described in /etc/apt/sources.list

# apt-get update

Do an apt db consistency check:

#  apt-get check

check is a diagnostic tool; it updates the package cache and checks for broken dependencies.

Take a deep breath ! …

Do :

ls -l /var/lib/dpkg
and compare with the above list. If some -old file is not present don't worry it will be there tomorrow.

Next time don't forget to do a regular backup with simple rsync backup script or something like Bacula / Amanda / Time Vault or Clonezilla.
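A minimal rsync backup sketch (the backup host, paths and schedule are example assumptions – adapt them to your environment) that would have saved me all this pain:

#!/bin/bash
# simple-backup.sh – copy the dpkg database and /etc to a remote backup host
BACKUP_HOST="backup-host.example.com"          # assumed reachable over SSH
BACKUP_DIR="/backups/$(hostname)/$(date +%F)"  # one dated directory per day
ssh "$BACKUP_HOST" mkdir -p "$BACKUP_DIR"
# -a keeps permissions / ownership, -z compresses in transit, --delete mirrors exactly
rsync -az --delete /var/lib/dpkg /etc "$BACKUP_HOST:$BACKUP_DIR/"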

6. Copy dpkg database from another Linux system that has a working dpkg / apt Database

Well, this was however not the end of the story … There were still many things missing from my /var/ and luckily I had another Debian 10 Buster install on another properly working machine with a similar set of .deb packages installed. Therefore, to make most of my programs work again, I copied /var over from the other machine with the similar package set to my messed up machine with the deleted /var.

To do so …
On the functioning Debian 10 machine (a working host in the local network), I archived the content of /var:

linux:~# tar -czvf var_backup_debian10.tar.gz /var

Then I sftp-ed it from the working host towards the broken one with the deleted /var – in my case this machine's hostname is jericho and it luckily still had the SSHD / SFTP processes loaded in memory:

jericho:~# sftp root@
sftp> get var_backup_debian10.tar.gz

Now, before extracting the archive, it is a good idea to back up the remains of the old /var somewhere, for example under /root,
just in case we need a copy of the dpkg backup dir /var/backups:

jericho:~# cp -rpfv /var /root/var_backup_damaged

jericho:~# tar -zxvf /root/var_backup_debian10.tar.gz 
jericho:/# mv /root/var/ /

Then, to make /var/lib/dpkg contain the list of packages from my broken Linux install, I overwrote /var/lib/dpkg with the files backed up earlier, before the .tar.gz was extracted:

jericho:~# cp -rpfv /root/var_backup_damaged/lib/dpkg/ /var/lib/


7. Script to completely reinstall all Debian packages


I then tried to reinstall each and every package, first using aptitude; with aptitude this is done with:

# aptitude reinstall '~i'

However, as this failed, I tried using a simple shell loop like below:

for i in $(dpkg -l | awk '{ print $2 }'); do echo apt-get install --reinstall --yes $i; done

Alternatively, reinstalling all .deb packages is also possible with dpkg --get-selections and awk with the below cmds:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log;
awk '$1=$1' ORS=' ' list.log > newlist.log
apt-get install --reinstall $(cat newlist.log)

It can also be run as one liner for simplicity:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log; awk '$1=$1' ORS=' ' list.log > newlist.log; apt-get install --reinstall $(cat newlist.log)

This produced a lot of warning messages reporting "package has no files currently installed" (for virtually all installed packages), indicating a severe packages problem. Below is sample output produced after each and every package reinstall … :

dpkg: warning: files list file for package 'iproute' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'brscan-skey' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libapache2-mod-php7.4' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-readline' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'linux-headers-4.19.0-5-amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbonoboui2-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'gksu' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblogging-stdlog0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'python-gconf' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-cli' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package '' missing; assuming package has no files currently installed

After some attempts I found a way to work around the warning message for each package, by simply reinstalling the package reporting the issue with:

apt install --reinstall $package_name

Though reinstallation started well and many packages got reinstalled, unfortunately some packages such as libapache2-mod-php5.6 and other PHP related ones started failing during reinstall, ending up in unfixable states right after the binaries from the packages had been successfully placed in their expected locations on disk. The failures occurred during the package setup stage (dpkg --configure $packagename) …

The logical thing to do is a recovery attempt with something like the usual command well known to any Debian admin:

apt-get install --fix-missing

Manually requesting a reconfigure (re-setup) of all installed packages also did not produce a positive result:

dpkg --configure -a

But many packages were still failing due to dpkg's inability to execute some post installation scripts from the respective .deb files.
To work around that and continue installing the rest of the packages, I had to manually delete all files related to the failing package located under the directory /var/lib/dpkg/info.


For example, to omit the post installation failure of libapache2-mod-php5.6 and get a successful install of the package the next time I tried to reinstall it, I had to delete the /var/lib/dpkg/info/libapache2-mod-php5.6.postrm and /var/lib/dpkg/info/libapache2-mod-php5.6.postinst scripts, and sometimes even everything matching libapache2-mod-php5.6* present in the /var/lib/dpkg/info dir.

The problem with this solution, however, was that the package then reported as properly installed, but the post install script hooks were still not in place; important things such as setting permissions of binaries or applying configuration changes right after install were missing, leading to programs failing to fully behave properly or even breaking, even though they showed up as finely installed …

The final solution to this problem was radical.
I used the /var/lib/dpkg database (directory) from the other working Linux machine with an intact dpkg DB, found in var_backup_debian10.tar.gz (the linux:~# host with the working dpkg database), and then, based on that correct dpkg package list database now present on jericho:~#, reinstalled each and every package on the system using a Debian System Reinstaller script taken from the internet.
The Debian System Reinstaller works, but while reinstalling many packages I was prompted again and again whether to overwrite a package's configuration or keep the present one.
To omit the annoying [Y / N] text prompts I made a slight modification to the script, so it finally looked like this:

#!/bin/bash
# Debian System Reinstaller
# Copyright (C) 2015 Albert Huang
# Copyright (C) 2018 Andreas Fendt

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <>.

# ---
# This script assumes you are using a Debian based system
# (Debian, Mint, Ubuntu, #!), and have sudo installed. If you don't
# have sudo installed, replace "sudo" with "su -c" instead.

pkgs=`dpkg --get-selections | grep -w 'install$' | cut -f 1 | egrep -v '(dpkg|apt)'`

for pkg in $pkgs; do
    echo -e "\033[1m   * Reinstalling:\033[0m $pkg"

    apt-get --reinstall -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install $pkg || {
        echo "ERROR: Reinstallation failed. See reinstall.log for details."
        exit 1
    }
done

The working modified version of Albert Huang's and Andreas Fendt's script can also be downloaded here.

Note ! Omitting the text confirmation prompts to install newest config or keep maintainer configuration is handled by the argument:


-o Dpkg::Options::="--force-confold"

I however still got a few ncurses console selection prompts during the reinstall of the roughly 3200+ .deb packages, so even with this mod the reinstall was not completely automatic.

Note ! During the reinstall a few of the packages from the list failed because they were old unsupported packages – these were ejabberd, ircd-hybrid and 2 / 3 more.
This failure was easily solved by completely purging those packages with the usual:

# dpkg --purge $packagename

and rerunning the reinstall for each of the failing packages.

Note ! The failing packages were just old ones left over from Debian 8 and Debian 9, before the apt-get dist-upgrade towards 10 Buster.
Eventually, by God's grace, after a few hours of pains and trials I got success, ending up with a working package database and a complete set of freshly reinstalled packages.

The only thing left to do was 2 hours of digging into why GNOME did not automatically start after the system reboot, due to a failing gdm.
Until I fixed that I temporarily used lightdm (as x-display-manager); to reconfigure gdm I ran:

dpkg-reconfigure gdm3


To work around this I also had to reinstall a few libraries, the xorg server, gdm and the meta package for GNOME, using the below set of commands:

apt-get install --reinstall libglw1-mesa libglx-mesa0
apt-get install --reinstall libglu1-mesa-dev
apt install --reinstall gsettings-desktop-schemas
apt-get install --reinstall xserver-xorg-video-intel
apt-get install --reinstall xserver-xorg
apt-get install --reinstall xserver-xorg-core
apt-get install --reinstall task-desktop
apt-get install --reinstall task-gnome-desktop


As some packages did not end up reinstalled on the system, because the original host from which the /var/lib/dpkg DB was copied did not have them, I eventually had to manually trigger a reinstall for those too:


apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes thunderbird
apt-get install --reinstall --yes audacity
apt-get install --reinstall --yes gajim
apt-get install --reinstall --yes slack remmina
apt-get install --yes k3b
apt-get install --yes gbgoffice
apt-get install --reinstall --yes skypeforlinux
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes libcurl3-gnutls libcurl3-nss
apt-get install --yes virtualbox-5.2
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes alsa-tools-gui
apt-get install --reinstall --yes gftp
apt install ./teamviewer_15.3.2682_amd64.deb --yes


Note that some of the above packages require properly configured third party repositories. Other people might have other packages missing from the dpkg list that need to be reinstalled, so decide according to your own case, looking for binaries present on the working system that don't belong to any dpkg installed package.

After a bit of struggle everything is back to normal Thanks God! 🙂 !


Nginx increase security by putting websites into Linux jails howto

Monday, August 27th, 2018

Reading Time: 5 minutes


Suppose you're sysadmining a large number of shared hosted websites which use the Nginx webserver to interpret PHP scripts and serve HTML, Javascript, CSS … whatever data.

You realize the high amount of risk that comes with a possible successful security breach / hack into the server by a malicious cracker. A compromise of the Nginx webserver by an intruder would automatically mean that not only all users' web data gets compromised, but the attacker would get immediate access to other data such as Email or SQL (if the server is running multiple services).

Nowadays it is not so common to have multiple shared websites on the same server together with other services, but historically there are many legacy servers / webservers left which host some 50 or 100+ websites.

Of course the best thing to do is to isolate each and every website into a separate Virtual Container; however, as this is a lot of work and small and mid-sized companies refuse to spend money on almost anything, this might not be an option for you.

Considering that this might be your case and you're running Nginx either as a Load Balancer, a Reverse Proxy server etc., even though Nginx is considered to be among the most secure webservers out there, there is absolutely no guarantee it would not get hacked and the server wouldn't get rooted by a script kiddie freak that just got some 0day exploit from the darknet.

To minimize the impact of a possible Webserver hack it is a good idea to place all websites into Linux Jails.


For those who hear about Linux Jail for a first time,
a chroot() jail is a way to isolate a process / processes and their forked children from the rest of the *nix system. It should / could be used only for UNIX processes that aren't running as root (the administrator user), because the superuser could break out of (escape) the jail pretty easily.

Jailing processes is a pretty old concept that was first introduced in UNIX version 7 back in the distant year 1979, and it was first implemented in the BSD Operating System ver. 4.2 by Bill Joy (a renowned computer scientist and co-founder of Sun Microsystems). One of its original uses was the creation of the so-called HoneyPot – a computer security mechanism set to detect, deflect, or in some manner counteract attempts at unauthorized use of information systems: something that appears to be a completely legitimate service or part of a website, whose only goal is to track, isolate, and monitor intruders, very similar to police sting operations (baiting) of a suspect. It is pretty much like a bait set to catch the fish (which in this case is the possible cracker).


BSD jails have nowadays become very popular; for example in the iPhone environment applications are deployed inside a custom created chroot jail, and the principle is exactly the same as in Linux.

But anyway, enough talk – let's create a new jail and deploy the set of system binaries for our Nginx installation. Here are the things you will need:

1. You need to set up a directory where a copy of /bin/ls, /bin/bash, /bin/cat … the /usr/bin binaries, /lib and the other base Linux system binaries will reside.


server:~# mkdir -p /usr/local/chroot/nginx


2. You need to create the isolated environment's backbone structure: /etc/, /dev/, /var/, /usr/, /lib64/ (in case you're deploying on a 64 bit architecture Operating System).


server:~# export DIR_N=/usr/local/chroot/nginx;
server:~# mkdir -p $DIR_N/etc
server:~# mkdir -p $DIR_N/dev
server:~# mkdir -p $DIR_N/var
server:~# mkdir -p $DIR_N/usr
server:~# mkdir -p $DIR_N/usr/local/nginx
server:~# mkdir -p $DIR_N/tmp
server:~# chmod 1777 $DIR_N/tmp
server:~# mkdir -p $DIR_N/var/tmp
server:~# chmod 1777 $DIR_N/var/tmp
server:~# mkdir -p $DIR_N/lib64
server:~# mkdir -p $DIR_N/usr/local/


3. Create required device files for the new chroot environment


server:~# /bin/mknod -m 0666 $DIR_N/dev/null c 1 3
server:~# /bin/mknod -m 0666 $DIR_N/dev/random c 1 8
server:~# /bin/mknod -m 0444 $DIR_N/dev/urandom c 1 9


The mknod command is used instead of the usual /bin/touch command, because it creates block or character special files.

Once created, the permissions of /usr/local/chroot/nginx/dev/{null,random,urandom} have to look like so:


server:~# ls -l /usr/local/chroot/nginx/dev/{null,random,urandom}
crw-rw-rw- 1 root root 1, 3 Aug 17 09:13 /usr/local/chroot/nginx/dev/null
crw-rw-rw- 1 root root 1, 8 Aug 17 09:13 /usr/local/chroot/nginx/dev/random
crw-rw-rw- 1 root root 1, 9 Aug 17 09:13 /usr/local/chroot/nginx/dev/urandom


4. Install nginx files into the chroot directory (copy all files of current nginx installation into the jail)

If your NGINX webserver was installed from source (to keep it up to date)
and is installed in, let's say, directory location /usr/local/nginx, you have to copy /usr/local/nginx to /usr/local/chroot/nginx/usr/local/nginx, i.e.:


server:~# /bin/cp -varf /usr/local/nginx/* /usr/local/chroot/nginx/usr/local/nginx


5. Copy the necessary Linux system libraries to the newly created jail

The NGINX webserver is compiled to depend on various libraries from the Linux system root, e.g. /lib/* and /lib64/*, therefore in order for the server to work inside the chroot-ed environment you need to transfer these libraries to the jail folder /usr/local/chroot/nginx.

If you are curious to find out exactly which libraries the nginx binary depends on, run:

server:~# ldd /usr/local/nginx/sbin/nginx
 (0x00007ffe3e952000) => /lib/x86_64-linux-gnu/ (0x00007f2b4762c000) => /lib/x86_64-linux-gnu/ (0x00007f2b473f4000) => /lib/x86_64-linux-gnu/ (0x00007f2b47181000) => /usr/local/lib/ (0x00007f2b46ddf000) => /lib/x86_64-linux-gnu/ (0x00007f2b46bc5000) => /lib/x86_64-linux-gnu/ (0x00007f2b46826000)
        /lib64/ (0x00007f2b47849000) => /lib/x86_64-linux-gnu/ (0x00007f2b46622000)

The best way, for maximum security, is to copy only the libraries in the list output by the ldd command, like so:


server:~# cp -rpf /lib/x86_64-linux-gnu/library-name.so /usr/local/chroot/nginx/lib/
server:~# cp -rpf library chroot_location
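To automate copying only what the binary really needs, a small loop (my own sketch, not part of the original article) can parse the ldd output and copy each referenced library into the jail, preserving the directory layout:

server:~# for lib in $(ldd /usr/local/nginx/sbin/nginx | grep -o '/[^ ]*'); do
              # --parents recreates /lib/..., /lib64/... below the jail root
              cp -v --parents "$lib" /usr/local/chroot/nginx/
          done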



However, if you're in a hurry (not a recommended practice) and you don't care about maximum security anyway (you don't worry that the jail could be exploited through some of the many lib files not used by nginx, and you don't care about HDD space), you can also copy the whole /lib into the jail, like so:


server:~# cp -rpf /lib/ /usr/local/chroot/nginx/lib


NOTE! Once again, copying the whole /lib directory is a very bad practice, but when time is pressing you can sometimes do it …

6. Copy some base /etc/ files and the prelink.conf.d directory to the jail environment


server:~# cp -rfv /etc/{group,prelink.cache,services,adjtime,shells,gshadow,shadow,hosts.deny,localtime,nsswitch.conf,nscd.conf,prelink.conf,protocols,hosts,passwd,resolv.conf,host.conf} \
/usr/local/chroot/nginx/etc


server:~# cp -avr /etc/prelink.conf.d /usr/local/chroot/nginx/etc

7. Copy HTML, CSS, Javascript websites data from the root directory to the chrooted nginx environment


server:~# nice -n 10 cp -rpf /usr/local/websites/ /usr/local/chroot/nginx/usr/local/

This could take really long if the websites are multiple gigabytes and millions of files, but the nice command should reduce the load on the server a little bit. It is best practice to set some kind of temporary maintenance page to show on the websites' index, in order to prevent the accessing clients from experiencing interruptions (that's especially the case on older 7200 / 7400 RPM non-SSD HDDs).


8. Stop old Nginx server outside of Chroot environment and start the new one inside the jail

a) Stop old nginx server

Either stop the old nginx using its start / stop / restart script /etc/init.d/nginx (if you have one installed) or directly kill the running webserver with:


server:~# killall -9 nginx


b) Test that the chrooted nginx installation is correct and ready to run inside the chroot environment


server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -t
server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx


c) Restart the chrooted nginx webserver – when necessary later


server:~# /usr/sbin/chroot /usr/local/chroot/nginx /usr/local/nginx/sbin/nginx -s reload


d) Edit the chrooted nginx conf

If you need to edit the nginx configuration, be aware that the chrooted NGINX will read its configuration from /usr/local/chroot/nginx/usr/local/nginx/conf/nginx.conf (I'm saying that in case you by mistake forget and try to edit the old config that is usually under /usr/local/nginx/conf/nginx.conf).



How to copy large data directories between 2 Linux / Unix servers without direct ssh / ftp access between server1 and server2, by using SSH, TAR and Unix pipes

Monday, April 27th, 2015

Reading Time: 3 minutes


In a Web application data migration project, I've come across a situation where I have to copy / transfer 500 Gigabytes of data from Linux server 1 (host A) to Linux server 2 (host B). However, the two machines don't have direct access to each other (via port 22) for security reasons, hence I cannot use sshfs to remotely mount the host dir via ssh and copy the files as if they were local.

As this is a data migration project it is however necessary to find a way to migrate the data … The normal way companies do it is to copy the data to external hard disk storage and send it via some country's postal service, or to send an employee to the data center to attach the SAN to the new server where the data is being migrated. However, in my case this was not possible, so I had to do it differently.

I have access to both servers as they're situated in the same Corporate DMZ network and I can thus access both UNIX machines via SSH.

Thankfully there is a small hack using the SSH protocol + the TAR archiver and the default UNIX pipe capabilities that makes it possible to easily transfer multiple (large) files and directories. The only requirement for this nice trick is to have an SSH client installed on the middle host, from which you can access via SSH both Server1 (from where data is migrated) and Server2 (to where data will be migrated).

If the hopping / jump server from which you're allowed to access the Linux servers Server1 and Server2 is not Linux, you're missing an SSH client and you don't have access on the Windows host to install anything on it, just use portable MobaXterm (as it has a Cygwin SSH client embedded).

Here is how:

jump-host:~$ ssh server1 "tar czf - /somedir/" | pv | ssh server2 "cd /somedir/; tar xzf -"

As you can see from the above command line example, an SSH connection is made to server1, tar is used to archive the directory / directories containing my hundreds of gigabytes, and then this is passed to another opened ssh session to server2 via the UNIX pipe mechanism, where the TAR archiver is used a second time to unarchive the previously passed archived content. The pv command in the middle is not obligatory, though it is a nice way to monitor the status of the data transfer, like below:

500GB 0:00:01 [10,5MB/s] [===================================================>] 27%

P.S. If you don't have PV installed install it either with apt-get on Debian:


debian:~# apt-get install --yes pv


Or on CentOS / Fedora / RHEL etc.


[root@centos ~]# yum -y install pv


Below is a small chunk of PV manual to give you better idea of what it does:

       pv – monitor the progress of data through a pipe

       pv [OPTION] [FILE]…
       pv [-h|-V]

       pv  allows  a  user to see the progress of data through a pipeline, by giving information such as time elapsed, percentage
       completed (with progress bar), current throughput rate, total data transferred, and ETA.

       To use it, insert it in a pipeline between two processes, with the appropriate options.  Its standard input will be passed
       through to its standard output and progress will be shown on standard error.

       pv  will  copy  each  supplied FILE in turn to standard output (- means standard input), or if no FILEs are specified just
       standard input is copied. This is the same behaviour as cat(1).

       A simple example to watch how quickly a file is transferred using nc(1):

              pv file | nc -w 1 3000

       A similar example, transferring a file from another process and passing the expected size to pv:

              cat file | pv -s 12345 | nc -w 1 3000

Note that with very big file transfers using pv will slow the transfer down a bit, because everything has to pass through another two pipes; however, for file transfers up to a few gigabytes it is really nice to include it.

If you only need to transfer a huge .tar.gz archive and don't bother about traffic security (i.e. you don't care whether the transferred traffic goes through an encrypted SSH tunnel, you don't want to put the overhead of encrypting the data on both systems, and you have some unfiltered ports between host 1 and host 2), you can run netcat on host 2 to listen for connections and send the .tar.gz content via netcat's port like so:

linux2:~$ nc -l -p 12345 > /path/destinationfile
linux1:~$ cat /path/sourcefile | nc desti.nation.ip.address 12345

Another way to transfer large data, without a direct connection between server1 and server2 but with a connection from a third host PC to both, is to use rsync and good old SSH tunneling, like so:

jump-host:~$ ssh -R 2200:Linux-server1:22 root@Linux-server2 "rsync -e 'ssh -p 2200' --stats --progress -vaz /directory/to/copy root@localhost:/copy/destination/dir"

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

Reading Time: 4 minutes


If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk space low alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro shipped my.cnf). The excessive growth of ibdata1 usually appears due to an application software bug in how it queries the database. In theory there is no limitation for ibdata1 except the maximum file size limitation of the filesystem (and there is no limitation option set in my.cnf), meaning it is quite possible that under certain conditions ibdata1 growth can over time happily fill up your server's LVM (Storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file, and the only known workaround (that I found) is to set the innodb_file_per_table option in my.cnf to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb based Linux servers datadir is /var/lib/mysql, so the file is /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total

2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
mysql> SHOW DATABASES;

| Database           |
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |

Next step is to get some understanding about how many existing InnoDB tables are present within Database server:


mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
| EngineCount | engine |
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
3 rows in set (0.02 sec)

To get some more statistics related to InnoDb variables set on the SQL server:

mysqladmin -u root -p'Your-Server-Password' var | grep innodb

Here is also how to find which tables use InnoDb Engine

mysql> SELECT table_schema, table_name
    -> FROM information_schema.tables
    -> WHERE engine = 'innodb';

| table_schema | table_name               |
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |

3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above cmd should return empty output (i.e. the Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server).

4. Create Backup dump all MySQL tables with mysqldump

Next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total


If you have free space on an external backup server or a remotely mounted attached storage (NFS or SAN), it is a good idea to make a full binary copy of the MySQL data (just in case something goes wrong with the above dump); copy the respective directory depending on the Linux distro and the install location of the SQL binary files set in my.cnf.
To check where the MySQL binary database data is stored (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora RPM based, substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep cmd line.

if you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the existing MySQL users / passwords, access grants and host permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p



DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute DROP database mysql; or DROP database information_schema; !!! – because this might damage your user permissions to the databases.

6. Stop the MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in the future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
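A minimal sketch of the relevant [mysqld] settings to add (the sizes below are example assumptions only – tune them to your RAM and write load):

[mysqld]
# create one .ibd file per InnoDB table instead of growing ibdata1
innodb_file_per_table = 1
# example sizes only – adjust for your server
innodb_buffer_pool_size = 1G
innodb_log_file_size    = 256M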

Delete the files taking up too much space – ibdata1, ib_logfile0 and ib_logfile1:

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql


After the final start the above ps command should show the mysqld processes running again, with freshly recreated (small) ibdata1 and ib_logfile files, ready for the re-import.

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import goes fine, and if no errors are experienced the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to mitigate the SQL downtime, import the database with:

server:~# mysql -u root -p
mysql> SOURCE /root/dump.sql;


If something goes wrong with the import for some reason, you can always copy over sql binary files from /root/mysql-data-backup/ to /var/lib/mysql/

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, log in with the mysql cli and check whether the databases are there with:

server:~# mysql -u root -p

Next lets see what is currently the size of ibdata1, ib_logfile0 and ib_logfile1

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will grow, but only contain table metadata. Each InnoDB table will exist outside of ibdata1.
To better understand what I mean, lets say you have InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files
representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

From now on the layout will be like that for each of the MySQL stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins could relax as innodb_file_per_table is enabled by default in newer SQL releases.
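To double check that the option is really active after the restart, a quick verification query (an extra step on my part, not strictly required) is:

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | ON    |
+-----------------------+-------+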

Now, to make sure your websites are working, take a few of the hosted website URLs that use any of the imported databases and just browse them.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂

How to make a mirror of website on GNU / Linux with wget / Few tips on wget site mirroring

Wednesday, February 22nd, 2012

Reading Time: 4 minutes


Everyone who has used Linux is probably familiar with wget and has used this handy console download tool at least a thousand times. Not so many desktop GNU / Linux users, like Ubuntu and Fedora Linux users, have tried using wget to do something more than downloading single files.
Actually wget is not as popular as it used to be in earlier Linux days. I've noticed the tendency for newer Linux users to prefer using curl (I don't know why).

With all that said, I'm sure there are plenty of Linux users curious how a website mirror can be made with wget.
This article will briefly suggest a few ways to do website mirroring on Linux / BSD, as wget is available on both of those free operating systems.

1. Most Simple exact mirror copy of website

The most basic use of wget's mirror capabilities is the -m / --mirror argument:

# wget -m

Creating a mirror like this is not a very good practice, as the links of the mirrored pages will still point to external URLs. In other words, the link URLs will not point to your local copy, and therefore if you're not connected to the internet and try to browse random links of the webpage, you will end up with many links which do not open because you have no internet connection.

2. Mirroring with rewritting links to point to localhost and in between download page delay

Making a mirror with wget can put a heavy load on the remote server, as it fetches the files as quickly as the bandwidth allows. On busy servers rapid downloads with wget can significantly increase the server response time, and on some high-loaded servers it can even cause the server to hang completely.
Hence, mirroring pages with wget without explicitly setting a delay between each page download could be considered by the remote server as a kind of DoS (denial of service) attack. Some site administrators have already set firewall rules or configured web server modules like Apache mod_security which filter requests from IPs doing too frequent HTTP GET / POST requests to the web server.
To make wget wait 10 seconds between each mirrored page download, use:

# wget -mk -w 10 -np --random-wait

The -mk stands for -m / --mirror plus -k, the shortcut argument for --convert-links (make links point locally); --random-wait tells wget to vary the wait randomly around the -w value (between 0.5 and 1.5 times it) for each page download request.

3. Mirror / retrieve website sub directory ignoring robots.txt "mirror restrictions"

Some websites have a robots.txt which restricts content download by clients like wget or curl, or even completely prohibits crawlers from downloading their website pages.

/robots.txt restrictions are not a problem, as wget has an option to disable robots.txt checking when downloading.
Getting around the robots.txt restrictions with wget is possible through the -e robots=off option.
For instance, if you want to make a local mirror copy of a whole sub-directory with all links, with a delay of 10 seconds between each consecutive page request and without reading the robots.txt allow / forbid rules at all:

# wget -mk -w 10 -np -e robots=off --random-wait

4. Mirror website which is prohibiting Download managers like flashget, getright, go!zilla etc.

Sometimes when you try to use wget to make a mirror copy of an entire site domain subdirectory or the root site domain, you get an error similar to:

Sorry, but the download manager you are using to view this site is not supported.
We do not support use of such download managers as flashget, go!zilla, or getright

This message is produced by the site's dynamic generation language (PHP / ASP / JSP etc.), as the website code is written to check the browser UserAgent sent.
wget's default UserAgent sent to the remote webserver is a string of the form Wget/version (e.g. Wget/1.20), depending on the installed version.

As this is not a common desktop browser useragent many webmasters configure their websites to only accept well known established desktop browser useragents sent by client browsers.
Here are few typical user agents which identify a desktop browser:

  • Mozilla/5.0 (Windows NT 6.1; rv:6.0) Gecko/20110814 Firefox/6.0
  • Mozilla/5.0 (X11; Linux i686; rv:6.0) Gecko/20100101 Firefox/6.0
  • Mozilla/6.0 (Macintosh; I; Intel Mac OS X 11_7_9; de-LI; rv:1.9b4) Gecko/2012010317 Firefox/10.0a4
  • Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.2a1pre) Gecko/20110324 Firefox/4.2a1pre

etc. etc.

If you're trying to mirror a website which has implemented some kind of restriction based on a "valid" useragent, wget has the -U option enabling you to fake the useragent.

If you get the "Sorry, but the download manager you are using to view this site is not supported" message, fake / change wget's UserAgent with a cmd like:

# wget -mk -w 10 -np -e robots=off \
--referer="" \
--user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv: Gecko/20070725 Firefox/" \
--header="Accept:text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" \
--header="Accept-Language: en-us,en;q=0.5" \
--header="Accept-Encoding: gzip,deflate" \
--header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
--header="Keep-Alive: 300"

For the sake of some wget anonymity – to make wget permanently hide its user agent and pretend to be Mozilla Firefox running on MS Windows XP – use a .wgetrc in your home directory like the example below.
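A minimal ~/.wgetrc sketch (the UserAgent string below is just an example similar to the desktop strings listed above – put whatever browser identity you prefer):

# ~/.wgetrc
user_agent = Mozilla/5.0 (Windows NT 5.1; rv:6.0) Gecko/20100101 Firefox/6.0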

5. Make a complete mirror of a website under a domain name

To retrieve a complete working copy of a site with wget, a good way is like so:

# wget -rkpNl5 -w 10 --random-wait

Where the arguments meaning is:
-r – Retrieve recursively
-k – Convert the links in documents to make them suitable for local viewing
-p – Download everything (inline images, sounds and referenced stylesheets etc.)
-N – Turn on time-stamping
-l5 – Specify recursion maximum depth level of 5

6. Make a dynamic pages static site mirror, by converting CGI, ASP, PHP etc. to HTML for offline browsing

Often website pages end in .php / .asp / .cgi … extensions. An example of what I mean is a URL whose page is tutorial.php: once mirrored with wget the local copy will also end in .php and will therefore not be suitable for local browsing, as the local browser does not know how to interpret the .php extension.
Therefore, to copy a website with non-HTML extensions and make it offline browsable as HTML, there is the --html-extension option, e.g.:

# wget -mk -w 10 -np -e robots=off \
--random-wait \
--html-extension

A good practice in mirror making is to set a download rate limit. Setting such a rate is good for both sides (the local host doing the downloading and the remote server). A download limit is also useful when mirroring websites consisting of many enormous files (documentary movies, some music etc.).
To set a download limit, add the --limit-rate= option. Passing --limit-rate=200K to wget would limit the download speed to 200KB/s.

Another useful thing to ensure wget has made an accurate mirror is wget logging. To use it, pass -o ./my_mirror.log to wget.
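Putting the rate limit and the logging together, a combined invocation looks roughly like this (the URL is a placeholder):

# wget -mk -w 10 -np -e robots=off --random-wait --limit-rate=200K -o ./my_mirror.log https://www.example.com/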

Speeding up Apache through apache2-mpm-worker and php5-cgi on Debian / How to improve Apache performance and decrease server memory consumption

Friday, March 18th, 2011

Reading Time: 5 minutes

Speeding up Apache through apache2-mpm-worker and php5-cgi on Debian Linux / how to improve Apache performance and decrease server response time
By default most Apache-running Linux servers on the Internet are configured to use the MPM prefork Apache module.
Historically the prefork Apache module is the predecessor of the worker module, therefore it's believed to be way more tested and reliable if you need a critically reliable webserver configuration.

However, from my experience so far with the Apache MPM worker I can boldly say that many of the rumors concerning the unreliability of apache2-mpm-worker are just myths.

The old way Apache handles connections, i.e. the prefork module, is the well known model that a high number of daemons on Linux and BSD still rely on.
When prefork is used by Apache, every new TCP/IP connection arriving at your Linux server on the Apache configured port, let's say port 80, is served in such a way that the Apache parent (mother) process forks a new child copy of itself in order to serve the new request.
Thus, using prefork, Apache needs to fork a new process (if it doesn't already have an idle forked one waiting for connections) and serve the HTTP request of the new client; after the request of the client is completed, the newly forked Apache process usually dies (although this again depends on the way the Apache server is configured via the Apache configuration – apache2.conf / httpd.conf etc.).

Now you can imagine how slow and memory consuming it is that the parent Apache process constantly spawns new processes, kills old ones etc. in order to fulfill the client requests.

Now just to compare, the Apache mpm worker does not use the old forking way, but relies on a few threaded Apache processes which handle all the requests without constantly being destroyed and recreated like with the prefork module.
This saves operations and system resources; threaded programming has already proven to be a more efficient way to handle tasks and is heavily adopted in GUI programming, for instance in Microsoft Windows, Mac OS X, Linux Gnome, KDE etc.

There is plenty of information and statistical data online which compares Apache running with the prefork and worker modules respectively.
As the goal of this article is not to go in depth with this kind of information, I will not say more on it but let you explore a bit more online in case you're interested.

The purpose of this article is to explain in short how to substitute Apache2-MPM-Prefork and how your server performance could benefit from the use of Apache2-MPM-Worker.
On Debian the default Apache process serving module in Apache 1.3x, 2.0x and 2.2x is prefork, thus the installation of apache2-mpm-worker is not "a standard way" to install Apache.

Deciding to switch from the default Debian apache-mpm-prefork to apache-mpm-worker is quite a serious and responsible decision and in some cases might cause troubles, so if you have decided to follow this article be sure to consider all the possible negative consequences of switching to the Apache worker!

Now, after having said a bunch of info which might not be necessary for the experienced system admin, I'll continue on with the steps to install apache2-mpm-worker.

1. Install the apache2-mpm-worker

debian:~# apt-get install apache2-mpm-worker php5-cgi
Reading state information... Done
The following packages were automatically installed and are no longer required:
The following packages will be REMOVED:
  apache2-mpm-prefork libapache2-mod-php5
The following NEW packages will be installed:
  apache2-mpm-worker
0 upgraded, 1 newly installed, 2 to remove and 46 not upgraded.
Need to get 0B/259kB of archives.
After this operation, 6193kB disk space will be freed.

As you can notice in the confirmation text above, apt will have to remove the apache2-mpm-prefork and libapache2-mod-php5 packages before it can proceed to install apache2-mpm-worker.

You might ask yourself: if I remove my installed libapache2-mod-php5, how would I be able to use my Apache with my PHP based websites? And why does the apt package manager require libapache2-mod-php5 to be removed?
The explanation is simple: libapache2-mod-php5 (mod_php) and many PHP extensions are not guaranteed to be thread safe, in other words PHP scripts executed inside the threaded worker processes could misbehave and would probably lead to PHP and Apache crashes.
Therefore in order to install the Apache mod worker it's necessary that no libapache2-mod-php5 is present on the system.
In order to have PHP on the server again you will have to use the php5-cgi deb package; this is the reason why in the above apt-get command I'm also requesting apt to install the php5-cgi package next to apache2-mpm-worker.

2. Enable the cgi and cgid apache modules

debian:~# a2enmod cgi
debian:~# a2enmod cgid

3. Activate the mod_actions apache modules

debian:~# cd /etc/apache2/mods-enabled
debian:~# ln -sf ../mods-available/actions.load
debian:~# ln -sf ../mods-available/actions.conf
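The same thing should also be achievable in a single step with Debian's module helper, assuming the actions module files are in place under mods-available:

debian:~# a2enmod actions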

4. Add configuration options in order to enable mod worker to use the newly installed php5-cgi

Edit /etc/apache2/mods-available/actions.conf with vim, mcedit or nano (e.g. your editor of choice) and add inside:

<IfModule mod_actions.c>
Action application/x-httpd-php /cgi-bin/php5
</IfModule>
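Depending on the rest of your configuration you may also need to map the .php extension to that MIME type and make sure /cgi-bin/ points at the directory where the php5 CGI wrapper lives (on Debian the php5-cgi package places it under /usr/lib/cgi-bin/). A slightly fuller sketch, under those assumptions:

<IfModule mod_actions.c>
    # assumption: /cgi-bin/ is already aliased to /usr/lib/cgi-bin/ (the Debian default ScriptAlias)
    # map .php files to the PHP MIME type
    AddType application/x-httpd-php .php
    # hand every request of that type over to the php5 CGI wrapper
    Action application/x-httpd-php /cgi-bin/php5
</IfModule>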

After completing all the above instructions, you might also need to edit your /etc/apache2/apache2.conf to tune up how your Apache mpm worker will serve client requests.
Configuring the <IfModule mpm_worker_module> in apache2.conf is necessary to optimize your newly installed mpm_worker module for performance.

5. Configure the mpm_worker_module in apache2.conf

One example configuration for the mod worker is:

<IfModule mpm_worker_module>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>
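As a rough sanity check on these numbers: with ThreadsPerChild 25 and MaxClients 150, the worker MPM will run at most 150 / 25 = 6 child processes at full load, each serving 25 concurrent requests in threads. If you ever raise MaxClients above ServerLimit (default 16) multiplied by ThreadsPerChild, you also have to raise ServerLimit, otherwise the extra clients simply cannot be served.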

Consider the fact that this configuration is just a sample and is by no means tuned for serving Apache requests on high load Apache servers; you will need to further play with the values to get good results on your server.

6. Check that all is fine with your Apache configurations and no syntax errors are encountered

debian:~# /usr/sbin/apache2ctl -t
Syntax OK

If you get something different from Syntax OK, track down the error and fix it before you restart the Apache server.

7. Now restart the Apache server

debian:~# /etc/init.d/apache2 restart
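To double check that the worker MPM is really the one in use after the restart, you can ask Apache itself and look for a line reading Server MPM: Worker (the exact output format varies between Apache versions):

debian:~# apache2ctl -V | grep -i mpm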

All should run fine and hopefully your PHP scripts will be interpreted just fine through php5-cgi instead of libapache2-mod-php5.
Using /usr/bin/php5-cgi will increase your server CPU load by some percentage, but on the other hand will drastically decrease the webserver memory consumption.
That's quite logical, because libapache2-mod-php5 is loaded once at Apache server start, whereas a new instance of /usr/bin/php5-cgi is invoked for each PHP request Apache serves via the mod worker.

There is one serious security flaw coming with php5-cgi: a DoS against a server processing scripts through php5-cgi is much easier to achieve.
An example of a denial of service attack which could affect a website running with mod worker and php5-cgi could be simulated by a simple user with a web browser who just holds down the F5 or Ctrl + R page refresh keys.
In that case, whenever php5-cgi is used the CPU load would rise drastically; one possible solution to this denial of service issue is to install and use libapache2-mod-evasive like so:

8. Install libapache2-mod-evasive

debian:~# apt-get install libapache2-mod-evasive
The Apache mod evasive module is a nice apache module to minimize HTTP DoS and brute force attacks.
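mod_evasive needs at least a few thresholds configured before it does anything useful. A hedged sketch of such a configuration (on newer Debian releases it usually lives in /etc/apache2/mods-available/evasive.conf, on older ones you may have to place the snippet in apache2.conf or a conf.d file yourself, and the numbers below are only illustrative starting values):

<IfModule mod_evasive20.c>
    # size of the hash table used to track clients
    DOSHashTableSize 3097
    # max requests for the same page per interval before blocking
    DOSPageCount 5
    DOSPageInterval 1
    # max requests for the whole site per interval before blocking
    DOSSiteCount 100
    DOSSiteInterval 1
    # how long (in seconds) an offending IP stays blocked
    DOSBlockingPeriod 60
</IfModule>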
Now with mod worker through the php5-cgi, your apache should start serving requests more efficiently than before.
For performance reasons some might even want to try out FastCGI with the worker to boost Apache performance further, but as I have never tried that I can't say how reliable a mod worker with FastCGI setup would be.

N.B.! If you have some specific PHP configurations within /etc/php5/apache2/php.ini you will have to set them also in /etc/php5/cgi/php.ini before you proceed with the above instructions to install Apache, otherwise your PHP scripts might not work as expected.

Mod worker is also capable of working with the standard mod php5 Apache module, but if you decide to go this route you will have to recompile your PHP manually from source, as on Debian this is not possible with the default php packages.
This installation worked fine on Debian Lenny, but I suppose the same installation should work fine on Debian Squeeze as well as Debian testing/unstable.
Feedback on the afore-described mod worker installation is very welcome!

How to rescue files from an unbootable Windows PC via network copy to a remote server shared folder using Hirens Boot CD

Saturday, November 12th, 2011

Reading Time: 2minutes

I'm rescuing some files from an unbootable Windows XP machine using a live CD with Hirens Boot CD 13.

In order to rescue the three NTFS Windows partitions files, I mounted them after booting a Mini Linux from Hirens Boot CD.

Mounting NTFS using Hirens BootCD went quite smoothly; to mount the 3 partitions I used the commands:

# mount /dev/sda1 /mnt/sda1
# mount /dev/sda2 /mnt/sda2
# mount /dev/sdb1 /mnt/sdb1
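If the mount points do not already exist on the live system they have to be created first, and for reliable read / write access to NTFS you would normally go through the ntfs-3g driver, assuming the Hirens mini Linux ships it – something along the lines of:

# mkdir -p /mnt/sda1 /mnt/sda2 /mnt/sdb1
# mount -t ntfs-3g /dev/sda1 /mnt/sda1
# mount -t ntfs-3g /dev/sda2 /mnt/sda2
# mount -t ntfs-3g /dev/sdb1 /mnt/sdb1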

After the three NTFS file partitions were mounted I used smbclient to list all the available network shares on the remote Samba shares server, which by the way possessed the NETBIOS name of SERVER 😉

# smbclient -L //SERVER/
Enter root's password:
Domain=[SERVER] OS=[Windows 7 Ultimate 7600] Server=[Windows 7 Ultimate 6.1]

Sharename Type Comment
——— —- ——-
!!!MUSIC Disk
ADMIN$ Disk Remote Admin
C$ Disk Default share
Canon Inkjet S9000 (Copy 2) Printer Canon Inkjet S9000 (Copy 2)
D$ Disk Default share
Domain=[SERVER] OS=[Windows 7 Ultimate 7600] Server=[Windows 7 Ultimate 6.1]
Server Comment
——— ——-
Workgroup Master
——— ——-

Further on, to mount the //SERVER/D network samba drive (the location where I wanted to transfer the files from the above 3 mounted partitions):

# mkdir /mnt/D
# mount // /mnt/D

Where the IP is actually the local network IP address of the //SERVER win smb machine.
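On most live Linux systems a bare mount of an SMB share needs the filesystem type (and often credentials) spelled out explicitly, so a fuller form of the above command would be something like this (the IP address, username and password below are placeholders only):

# mount -t cifs //192.168.0.100/D /mnt/D -o username=Administrator,password=yourpass

On very old rescue kernels the cifs type may not be available and smbfs (mount -t smbfs) or the smbmount tool has to be used instead.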

Afterwards I used mc to copy all the files I needed to rescue from all the 3 above mentioned win partitions to the mounted //SERVER/D