Posts Tagged ‘setting’

Remove old unused kernels and clean up orphaned packages on CentOS / RHEL / Fedora and Debian Linux

Friday, October 23rd, 2020

Reading Time: 4 minutes

remove-old-unused-kernel-on-centos-redhat-rhel-fedora-linux-howto-delete-orphaned-packages

If you administer a bunch of CentOS 7 / CentOS 8 servers, it is very likely that after one of the scheduled Patch days every 6 months or so you end up with multiple Linux OS kernels installed on the system.
In a normal situation, on a freshly installed CentOS machine only one kernel rpm package is installed on the system: the kernel release shipped with the CentOS / RHEL / Fedora distro.
The reason to remove the old unused kernels is very simple: you don't want a messy installation, you don't want to boot into an old reverted kernel after one of the updates, and, if you're pedantic, you simply want to save a few megabytes of space.
Some people choose to keep more than one kernel just to make sure that, if the newly installed one doesn't boot, they can select the proper kernel after a restart from the ILO / iDRAC remote console interface. I agree that having the old kernel as backup recovery before a system *kernel* upgrade is a good thing, but only up to the point the system gets booted after reboot (you know us sysadmins, after each major system package upgrade we like to reboot the system, warmly praying and hoping it will boot up next time 🙂 ).
 

1. Remove all but the last XX kernels from CentOS

Of course, removal of old kernels can be managed with a simple:

yum remove kernel


yum-kernel-remove-centos-linux

Once more than one kernel is present, you can leave only, let's say, the last 2 installed kernels on the CentOS host (some people prefer to have only one), but just for the sake of having a backup kernel I prefer to keep the last two installed. To do so run package-cleanup, which is contained in the yum-utils rpm package; this is a CentOS / Red Hat (RHEL) specific command.
 

[root@centos ~]# package-cleanup --oldkernels --count=2

package-cleanup-centos-linux-screenshot-1

The --count=number argument tells how many of the latest kernels to keep installed.

Note: if you don't have the package-cleanup command, install the yum-utils package:

[root@centos ~]# yum install -y yum-utils

cleanup-old-kernels-linux-leave-only-set-of-2-kernels-active-on-centos-rhel-fedora


2. Remove old kernels from Fedora Linux, leaving only the latest 3 installed

This is done with dnf by setting the --latest-limit argument to a negative value equal to how many of the latest kernels you want to keep:

[root@fedora ~]# dnf remove $(dnf repoquery --installonly --latest-limit=-3 -q)

 

3. Set how many kernels you want to keep present on the system after package upgrades

It is possible to tell CentOS / RHEL / Fedora how many kernels should be kept installed on the system; the default configured at operating system install time is to keep the last 5 installed kernels. This is controlled by the installonly_limit=5 value which, as of 2020 on RPM-based distributions, is usually found in /etc/yum.conf (on CentOS / RHEL) and in /etc/dnf/dnf.conf (on Fedora), and sets the desired number of kernels kept on the system after issuing yum upgrade / dnf upgrade --refresh etc.
The minimum value you can give to installonly_limit is 2.
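
A minimal sketch of checking and changing that value from the shell (assuming the stock /etc/yum.conf and /etc/dnf/dnf.conf locations mentioned above):

# check the current value on CentOS / RHEL and Fedora
grep installonly_limit /etc/yum.conf /etc/dnf/dnf.conf 2>/dev/null

# keep only the last 3 kernels after future upgrades (CentOS / RHEL)
sed -i 's/^installonly_limit=.*/installonly_limit=3/' /etc/yum.conf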
 

4. Remove orphan rpm packages from server

The next thing to do is to check the installed orphaned packages to see if we can safely remove them; by orphaned packages we mean all packages which no longer serve any purpose as package dependencies.
Orphaned packages are packages left over from old dependencies that are no longer needed on the system; they just take up space and pose a possible security risk, as some of them might in time end up with a publicly known and exploited CVE vulnerability.

Let me try to explain this concept with a quick example: package A depends on package B, thus, in order to install package A, package B must also be installed. Once package A is removed, package B might still be installed; hence package B is now an orphaned package.
Here's how we can safely see the orphaned packages we have on our system:

[root@centos ~]# package-cleanup --quiet --leaves --exclude-bin

And here’s how we can delete them:

[root@centos ~]# package-cleanup --quiet --leaves --exclude-bin | xargs yum remove -y


The above commands should be launched multiple times, because the packages deleted with the first batch could create additional orphan packages, and so on: be sure to perform these tasks until no orphan packages appear anymore after the first package-cleanup command.
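
A quick sketch of automating that repetition (it reuses the same package-cleanup flags as above; review the output before piping it to yum remove on a production box):

while [ -n "$(package-cleanup --quiet --leaves --exclude-bin)" ]; do
    package-cleanup --quiet --leaves --exclude-bin | xargs yum remove -y
done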

 

5. Delete old kernels and keep only the last three on Debian / Ubuntu Linux

To do the same on a Debian-based distribution there is the purge-old-kernels command, provided by the byobu deb package. To clean up old kernels on Debian:

$ sudo purge-old-kernels --keep 3


That's all folks enjoy ! 🙂

 

Upgrade Debian Linux 9 to 10 Stretch to Buster and Disable graphical service load boot on Debian 10 Linux / Debian Buster is out

Tuesday, July 9th, 2019

Reading Time: 5 minutes

howto-upgrade-debian-linux-debian-stretch-to-buster-debian-10-buster

I've just taken the time to upgrade my Debian 9 Stretch Linux to Debian Buster on my old-school Lenovo ThinkPad R61 laptop (which just turned 11 years old). The upgrade went more or less without severe issues, except for a few things.

The overall procedure followed is described on a few websites out there already and comes down to:

 

0. Set the proper repository location in /etc/apt/sources.list


Before the update, the sources.list entries used are:
 

deb [arch=amd64,i386] http://ftp.bg.debian.org/debian/ buster main contrib non-free
deb-src [arch=amd64,i386] http://ftp.bg.debian.org/debian/ buster main contrib non-free

 

deb [arch=amd64,i386] http://security.debian.org/ buster/updates main contrib non-free
deb-src [arch=amd64,i386] http://security.debian.org/ buster/updates main contrib non-free

deb [arch=amd64,i386] http://ftp.bg.debian.org/debian/ buster-updates main contrib non-free
deb-src [arch=amd64,i386] http://ftp.bg.debian.org/debian/ buster-updates main contrib non-free

deb http://ftp.debian.org/debian buster-backports main


For people who had stretch defined in /etc/apt/sources.list, you should change it to buster or stable; the easiest and quickest way to avoid editing with vim / nano etc. is to run as root or via sudo:
 

sed -i 's/stretch/buster/g' /etc/apt/sources.list
sed -i 's/stretch/buster/g' /etc/apt/sources.list.d/*.list

The minimal config in sources.list after the modification should be:
 

deb http://deb.debian.org/debian buster main
deb http://deb.debian.org/debian buster-updates main
deb http://security.debian.org/debian-security buster/updates main

Or if you want to always be with latest stable packages (which is my practice for notebooks):

deb http://deb.debian.org/debian stable main
deb http://deb.debian.org/debian stable-updates main
deb http://security.debian.org/debian-security stable/updates main

 

1. Get the list of held packages, if such exist, and unhold them, e.g.

 

apt-mark showhold


The same can also be done via dpkg:

dpkg --get-selections | grep hold


To unhold a package if such is found:

echo "package_name install"|sudo dpkg –set-selections

For those who don't know what a held package is: it is usually a package you want to keep at a certain version all the time, even after running apt-get upgrade to get the latest package versions.
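
A shorter equivalent via apt-mark (package_name here is just a hypothetical example package):

# put a package on hold / release it again
sudo apt-mark hold package_name
sudo apt-mark unhold package_name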
 

2. Use df -h and make sure you have at least 5 – 10 GB of free space on the root partition / before proceeding

df -h /

3. Update the package list to make the newly set repos the default

apt update

 

4. apt upgrade
 

apt upgrade

Here, some 10 – 15 times, you will have to confirm what you want to do with configuration files that have changed. If you're unsure about a config (and it is not a critical service you're aware of, such as Apache / MySQL / SMTP etc.), it is best to install the latest maintainer version.

Hopefully here you will not get fatal errors that will interrupt it.

P.S. It is best to run apt upgrade either in a VTTY (virtual console session) with screen or tmux, or via a physical tty (if this is not a remote server), as during the updates your GUI access to gnome-terminal or konsole / xterm or whatever console is used might get cut. Thus it is best to do it with the command:
 

screen apt upgrade

 

5. Run dist-upgrade to finalize the upgrade from Stretch to Buster

 

Once installation of the new packages is completed, you will finally need to run dist-upgrade; once again it is best to run it via screen. If you don't have screen installed, install it:

 

if [ $(which screen) ]; then echo 'Installed'; else apt-get install --yes screen; fi

screen apt dist-upgrade


Here, once again, you should decide whether the old configuration of some services has to stay, or whether the new one shipped by the Debian package maintainer will overwrite the old, locally modified (for some reason) config. Choose wisely here, otherwise some configured services might not start as expected on next boot.
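
If you prefer not to answer the configuration prompts one by one and want to keep your locally modified configs wherever a conflict appears, one possible (use at your own risk) sketch is to pass the dpkg conffile options through apt:

screen apt-get -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" dist-upgrade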

 

6. What to do if packages fail during the update


If a certain package fails to configure after installation for some reason, and it is a systemd service, use:

 

journalctl -xe |head -n 50


or fully observe the output of journalctl -xe and decide for yourself.

In most cases

dpkg-reconfigure failed-package-name


should do the trick or at least give you more hints on how to solve it.

 

Also, if a package seems to be in an inconsistent or broken state after the upgrade and a simple dpkg-reconfigure doesn't help, a good command that can help you is:

 

dpkg-reconfigure -f package_name

 

or you can try to work around a failed package setup with:
 

dpkg --configure -a

 
If dpkg-reconfigure doesn't help either, as I experienced in prior Debian 6 -> 7 and Debian 7 -> 8 upgrades on some computers, then a very useful thing to try is:
 

apt-get update --fix-missing

apt-get install -f


In certain cases the only workaround to be able to complete the package upgrade is to remove the package with apt remove, but due to config errors even that might not be possible; to work around this, as a final resort run:
 

dpkg --remove --force-remove-reinstreq
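
For example (package_name below is a hypothetical broken package; substitute the real one reported by dpkg):

# force-remove a package stuck in a "reinstall required" state
dpkg --remove --force-remove-reinstreq package_name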

 

7. Clean up unneeded packages

 

Some packages are left over due to package dependencies from Stretch and are not needed in Buster anymore. To remove them:
 

apt autoremove

 

8. Reboot the system once the upgrade is over

 

/sbin/reboot

 

9. Verify your freshly upgraded Debian is in a good state

 

root@noah:~# uname -a;
Linux noah 4.19.0-5-rt-amd64 #1 SMP PREEMPT RT Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux

 

root@noah:~# cat /etc/issue.net
Debian GNU/Linux 10
 

 

root@noah:~# lsb_release -a
No LSB modules are available.
Distributor ID:    Debian
Description:    Debian GNU/Linux 10 (buster)
Release:    10
Codename:    buster

 

root@noah:~# hostnamectl
   Static hostname: noah
         Icon name: computer-laptop
           Chassis: laptop
        Machine ID: 4759d9c2f20265938692146351a07929
           Boot ID: 256eb64ffa5e413b8f959f7ef43d919f
  Operating System: Debian GNU/Linux 10 (buster)
            Kernel: Linux 4.19.0-5-rt-amd64
      Architecture: x86-64

 

10. Remove the annoying short animation with the looping Debian logo

 

plymouth-debian-graphical-boot-services

By default Debian 10 boots up with an annoying splash screen hiding all the status of the loaded services, e.g. you cannot see which services show up in [ FAILED ] state and which show as [ OK ]. To revert back to the old behaviour, which I'm used to for historical reasons and which shows a lot of good boot-time debugging info, in previous Debian distributions it was enough to set the right configuration options in /etc/default/grub,

which so far in my config was like so

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y zswap.enabled=1 text"


Note that the zswap.enabled=1 option is passed because my notebook is a pretty old machine from 2008 with 4GB of memory, and zswap does accelerate performance when working with swap (especially helpful on older PCs); you can read more about zswap on the ArchLinux wiki.
After modifying this configuration, the command to load the new config into grub is:
 

/usr/sbin/update-grub

 
As this was not working even after a number of reboots, I finally found that the annoying animated GIF-like picture that shows up is caused by plymouth; below is an excerpt from Plymouth's manual page:


       "The plymouth sends commands to a running plymouthd. This is used during the boot process to control the display of the graphical boot splash."

Plymouth has a set of themes one can set:

 

# plymouth-set-default-theme -l
details
futureprototype
joy
lines
moonlight
softwaves
spacefun
text
tribar

 

I tried to change that theme to make the boot process a text boot, as I'm used to historically, with the command:
 

update-alternatives --config text.plymouth

 
After reboot I hoped the PC would start booting in text mode, but this did not happen, so the final fix to get back to text-mode service boot was to completely remove plymouth:
 

apt-get remove --yes plymouth
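
To make sure the removal actually reaches the boot process, it may also be worth regenerating the initramfs afterwards (a minimal sketch, assuming a standard Debian initramfs-tools setup):

# confirm plymouth is gone and rebuild the initial ramdisk
dpkg -l | grep plymouth
update-initramfs -u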

Thursday, July 14th, 2016

Reading Time: 2 minutes

use-remote-dns-on-mozilla-firefox-howto-windows-linux-logo.svg

If you're using the Mozilla Firefox browser to browse the Web with traffic tunneling via an SSH tunnel to your own Linux server, like I do, in order to prevent your traffic from being sniffed from your work corporate computer (as most corporations such as IBM / Hewlett Packard / Concentrix etc. force all employee PC traffic to be transported via a default-set Windows corporate proxy active for all browsers),

then you will certainly also want to prevent the DNS requests from being logged somewhere by your corporate IT department, thus the question arises:

How to force DNS requests to be made through the Proxy server (SSH host)?

No matter whether you're using Firefox with an advanced proxying plugin such as the FoxyProxy FF add-on or the default FF proxy features, the DNS lookups might end up at the corporate-set DNS servers, which are often forced on the computer / notebook and impossible to change to custom ones, as many of the corporation's internal SharePoints and domains are only visible from their internal networks.

Thankfully, in newer versions there is an easy way to do it directly from the visual menus via:

Tools -> Options -> Advanced -> Network -> Settings

You will get a screen like below:
 

firefox-use-proxy-remote-dns-howto-screenshot

Just tick Remote DNS and that will force Firefox to query DNS through the remote proxy server.

 

If you happen to be running an older Firefox which doesn't have the Remote DNS tick box, you can also try to set the setting manually:

 

  1. In firefox type this in your address bar:

    about:config

  2. Click I'll be careful I  promise.

  3. In the filter textbox, type: proxy

  4. Find the preference name called *network.proxy.socks_remote_dns*. Double click it to set it to true.

    i-will-be-careful-i-promise-firefox-windows-screenshot-warranty


network-proxy-socks-remote_dns-firefox-screenshot
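
The same preference can also be appended to the profile's user.js from the shell (a sketch; xxxxxxxx.default is just a placeholder for your actual Firefox profile folder):

# xxxxxxxx.default is a placeholder, use your real Firefox profile directory
echo 'user_pref("network.proxy.socks_remote_dns", true);' >> ~/.mozilla/firefox/xxxxxxxx.default/user.js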

Enjoy ! 🙂

What causes the “nRRPResponseCode 531” error, A fix to the nasty “nRRPResponseCode 531” error during domain name DNS change

Tuesday, March 16th, 2010

Reading Time: 3 minutes

For two days now, I'm trying to set a custom DNS server for a (.net) domain purchased from gigaspark.com. Every time I try to change the nameservers for the (.net) domain an irritating error pops up; the error reads "nRRPResponseCode 531" and I cannot set my custom configured Bind DNS server for the (.net) domain. I believe the same problem happens also with (.com) domains.

In this relation, I tried googling and searching again and again for what might be the stupid cause of the "nRRPResponseCode 531" error that prevents me from setting my custom configured Bind domain name servers for mydomain.net. I also contacted the support team from gigaspark multiple times until I found out what the cause of the trouble was.
In short, "nRRPResponseCode 531" is an error that indicates your .net or .com domain is not present in VeriSign's GRS domain database.
The Verisign GRS domain database contains a list of DNS servers that are correctly configured and trustworthy enough. I’ve seen many people online suffering from the same terrible error,
who pointed out that the error is caused by misconfigurations in the Bind DNS server or the zone file for the problematic domain name. Though I looked multiple times to try to track the problem in both my main named.conf and the rest of Bind's configuration files, as well as in the zone for the domain I had registered (mydomain.net), there was nothing misconfigured or unusual.
I have to admit this problem is really odd, because I was able to successfully set the same custom configured Bind DNS server for mydomain.info and mydomain.biz, yet whenever trying to set the same Bind DNS for mydomain.net I came across the shitty nRRPResponseCode 531.
Thanks to the kind help of Gigaspark's tech support, together with some Google posts on the matter, I figured out Gigaspark are using ENOM, a major domain name registrar offering easy ways for end domain providers to become their resellers.
It seems ENOM's policy forces you, as a domain name customer, to register your full DNS host name, let's say ns1.mydns.com, in Verisign's GRS domain database; otherwise they refuse you the right to set your own ns1.mydns.com for your domain, because if the DNS host name is not present in that database it's not trustworthy!
I believe many people would agree with me this is a real shit! You pay for your domain and you should have the full rights over it.
I mean you should be allowed to set whatever DNS domain name even, if it’s not an existing one and they shouldn’t bother you with stupid DNS domain name registrations in stupid Verisign GRS databases and so on!
Now you probably wonder what the required steps are to register the nameserver in that Verisign GRS database, in order to be able to set your ns1.mydomain.com as a default DNS server for your mydomainname.com.
Well you have to contact your domain registrar, let’s say gigaspark.com .
You log in to your account on tucowsdomains for your domain mydomain.com … then you find something similar to "register a nameserver" among the menu options.
Then you have to register your nameserver ns1.mydomain.com. Then you wait between 24 and 48 hours, and to test whether your NS has already properly entered the Verisign GRS database you have to check the Verisign GRS Whois.
Hopefully the guys from Verisign GRS will approve your DNS host to enter their database, and then at last you might be able to set your DNS host as the preferred DNS for your (.net) / (.com?) domain name.
So go back to gigaspark's Slovenian interface and try changing the DNSes once again! If you're lucky, with God's help (for sure), you will at last be successful in setting your BIND name server as a primary DNS.

Make QMAIL with vpopmail vchkpw, courier-authlib and courier-imap auth work without MySQL on Debian Linux qmailrocks Thibs install

Friday, September 28th, 2012

Reading Time: 5 minutes

How to make qmail vpopmail vchkpw courier-authlib and courier-imap work storing mails on hard disk with qmailrocks Thibs install

I recently installed a new QMAIL, mostly following Thibs' Qmailrocks install guide. I didn't follow Thibs' good guide literally, because in a few of the sections, like Install Vpopmail, he recommends using MySQL as a backend to store Vpopmail email data and passwords; I prefer storing all vpopmail data on the file system, as I believe it is much better, especially for tiny QMAIL mail servers with fewer than 500 mailbox accounts.

In this little article I will explain how I made Vpopmail, courier-authlib and courier-imap play nicely together without storing data in an SQL backend.

1. Compile vpopmail with file system data storage support

So here is how I managed to make vpopmail + courier-authlib + courier-imap, work well together:

First it is necessary to compile Vpopmail to store all its user data and mail data on the file system. For this, in Thibs' Vpopmail Install step, I compiled Vpopmail without support for MySQL, e.g. instead of using his suggested compile-time ./configure arguments I used:


# cd /downloads/vpopmail-5.4.33
# ./configure \
--enable-qmaildir=/var/qmail/ \
--enable-qmail-newu=/var/qmail/bin/qmail-newu \
--enable-qmail-inject=/var/qmail/bin/qmail-inject \
--enable-qmail-newmrh=/var/qmail/bin/qmail-newmrh \
--enable-tcprules-prog=/usr/bin/tcprules \
--enable-tcpserver-file=/etc/tcp.smtp \
--enable-clear-passwd \
--enable-many-domains \
--enable-qmail-ext \
--enable-logging=y \
--enable-auth-logging \
--enable-libdir=/usr/lib/ \
--disable-roaming-users \
--disable-passwd \
--enable-domainquotas \
--enable-roaming-users
....
....
# make && make install-strip
# cat > ~vpopmail/etc/vusagec.conf << __EOF__
Server:
Disable = True;
__EOF__
echo 'export PATH=$PATH:/var/qmail/bin/:/home/vpopmail/bin/' > /etc/profile.d/extrapath.sh
chmod +x /etc/profile.d/extrapath.sh
source /etc/profile

A tiny shell script with all above options to compile (qmail) vpopmail without MySQL / PostgreSQL support is here

For other steps concerning creation of vpopmail/vchkpw – user/group just follow as Thibs suggests.

2. Compile and install courier-authlib-0.59.1

I’ve made mirror of courier-authlib.0.59.1.tar.gz cause this version includes support for vchkpw without mysql, its a pity newer versions of courier-authlib not any more have support for vpopmail to store its data directly on the hard disk.

Then download, compile && install courier-authlib:

Download courier-authlib.0.59.1.tar.gz (I made a mirror of courier-authlib.0.59.1.tar.gz; you can use my mirror or download it from somewhere else on the net):


# cd /usr/local/src
# wget -q https://pc-freak.net/files/courier-authlib.0.59.1.tar.gz
# tar -zxvvf courier-authlib.0.59.1.tar.gz

Compile courier-authlib

# ./configure --prefix=/usr/local --exec-prefix=/usr/local --with-authvchkpw --without-authldap --without-authmysql --disable-root-check --with-ssl --with-authchangepwdir=/usr/local/libexec/authlib
....
# make && make install && make install-strip && make install-configure
....

On Debian Squeeze this version of courier-authlib compiles fine; I use it on Debian Lenny too, and there it is okay as well.

Unless the above commands return a compile error, authlib will be installed inside /usr/local/libexec. If you get any errors it is most likely due to some missing header files. The error should be self-explanatory enough, but just in case you have trouble finding which deb is necessary to install, please check here the complete list of installed packages I have on the host. In case of problems, the quickest way (if on Debian Squeeze) is to install the same packages; type:


# wget -q https://pc-freak.net/files/list_of_all_deb_necessery_installed_packages_for_authlib.txt
# for i in $(cat list_of_all_deb_necessery_installed_packages_for_authlib.txt |awk '{ print $2 }'); do
apt-get install --yes $i;
done

This is for the lazy ones, though it might install some packages you don't want to have on your host, so only do it if you know what you're doing 🙂

Next step is to set proper configuration for courier-authdaemon.

3. Configure courier-authlib in /usr/local/etc/authlib

Again, for the lazy ones, I have prepared a good config which works 100% with vpopmail configured to store mail on the file system; to install the "good" configs, fetch mine and put them in the proper location, e.g.:


# cd /usr/local/etc
# wget -q https://pc-freak.net/files/authlib-config-for-qmail-with-hdd-directory-stored-userdata.tar.gz
# tar -zxvvf authlib-config-for-qmail-with-hdd-directory-stored-userdata.tar.gz
....

For those who prefer not to use my configuration as pointed above, here is what you will need to change manually in configs:

Edit /usr/local/etc/authlib/authdaemonrc and make sure the variables authmodulelist, authmodulelistorig and daemons are set as follows:


authmodulelist="authvchkpw"


authmodulelistorig="authuserdb authpgsql authldap authmysql authcustom authvchkpw authpipe"


daemons=10

Bear in mind that the daemons setting determines the maximum number of parallel connections to authdaemond for new IMAP mail-fetch user requests. Setting it to 10 will allow your mail server to support up to 10 users checking mail in parallel; for a tiny mail server this setting is okay, but if you expect a higher number of parallel mail users, raise the setting to a value fitting your needs.

P.S. On some qmail installations this value has created weird problems, and it took me hours to debug that the whole mess was caused by this setting; make sure you plan it now unless you want to lose some time in the future.

4. Stop debian courier-authdaemon and start custom compiled one

Now all is ready and authdaemond can be started, but before that, if you have installed courier-authlib as a Debian package you need to stop it via its init script, and only when you are completely sure the old default Debian courier-authdaemon is stopped, launch the newly installed one:


# /etc/init.d/courier-authdaemon stop
# ps ax | grep -i authdaemond | grep -v grep
#
# /usr/local/sbin/authdaemond start
#

To make the newly custom source-installed courier-authdaemon load itself on system boot instead of the Debian-installed package:


# dpkg -l |grep -i courier-authdaemon
ii courier-authdaemon 0.63.0-3 Courier authentication daemon

open /etc/init.d/courier-authdaemon, and after the line:


. /lib/lsb/init-functions

add


/usr/local/sbin/authdaemond start
exit 0

This will make the script exit once it launches the command /usr/local/sbin/authdaemond start.

5. Compile and Install courier-imap

You will also have to install courier-imap from the archive source; I have tested it and know Qmail + Vpopmail + Courier-IMAP works for sure with version courier-imap-4.1.2.tar.bz2.

As of the time of writing this post, courier-imap-4.11.0.tar.bz2 is the latest available for download from the Courier-IMAP download site; unfortunately this version requires a higher courier-authlib version (>= courier-authlib-0.63).

In order to install courier-imap-4.1.2.tar.bz2:


# cd /usr/local/src
# wget -q https://pc-freak.net/files/courier-imap-4.1.2.tar.bz2
# tar -jxvvf courier-imap-4.1.2.tar.bz2
...
# chown -R hipo:hipo courier-imap-4.1.2
# su hipo
$ cd courier-imap-4.1.2/
$ export CFLAGS="-DHAVE_OPEN_SMTP_RELAY -DHAVE_VLOGAUTH"
$ export COURIERAUTHCONFIG=/usr/local/bin/courierauthconfig
$ export CPPFLAGS=-I/usr/local/courier-authlib/include
$ ./configure --prefix=/usr/local/courier-imap --disable-root-check
...
$ exit
# make
...
# make install
...
# make install-configure

It is recommended that courier-imap be compiled as a non-root user. In the above code I use my username hipo; other people should use any non-root user.

6. Set proper configuration and new init script for courier-imap

In /usr/lib/courier-imap, download the following working configs (for convenience I've made a tar with my configs):


# cd /usr/lib/courier-imap
# rm -rf etc
# wget -q https://pc-freak.net/files/courier-imap-config-etc.tar.gz

Then you will have to overwrite the default courier-imap init script in /etc/init.d/courier-imap with another one that starts the custom compiled courier-imap instead of the Debian default installed one:


# mv /etc/init.d/courier-imap /root
# cd /etc/init.d
# wget -q https://pc-freak.net/files/debian-courier-imap
# mv debian-courier-imap courier-imap
# chmod +x courier-imap

This init script is written to use /var/lock/subsys/courier-imap, so you will also have to create /var/lock/subsys/:


# mkdir -p /var/lock/subsys

7. Start custom installed courier-imap

The start/stop init script of the newly installed courier-imap is /usr/lib/courier-imap/libexec/imapd.rc:


/usr/lib/courier-imap/libexec/imapd.rc start

Since a new /etc/init.d/courier-imap is installed too, it can also be used to control courier-imap start/stop.

Well, that should be enough for Courier-IMAP and Courier-authlib to communicate fine with each other and be able to connect and fetch e-mail stored on the file system by vpopmail.

8. Test if Qmail IMAP proto finally works


# telnet localhost 143
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE AUTH=CRAM-MD5 ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
a login username@mail-domain.com my-username-password
a OK LOGIN Ok.
a LIST "" "*"
* LIST (\HasNoChildren) "." "INBOX.Sent"
* LIST (\Marked \HasChildren) "." "INBOX"
* LIST (\HasNoChildren) "." "INBOX.Drafts"
* LIST (\HasNoChildren) "." "INBOX.Trash"
a OK LIST completed
a EXAMINE Inbox
* FLAGS (\Draft \Answered \Flagged \Deleted \Seen \Recent)
* OK [PERMANENTFLAGS ()] No permanent flags permitted
* 6683 EXISTS
* 471 RECENT
* OK [UIDVALIDITY 1272460837] Ok
* OK [MYRIGHTS "acdilrsw"] ACL
a OK [READ-ONLY] Ok
* 1 FETCH (BODY[] {2619}
Return-Path:
Delivered-To: hipo@my-domain-name.com
Received: (qmail 22304 invoked by uid 1048); 24 Apr 2012 14:49:49 -0000
Received: from unknown (HELO localhost) (127.0.0.1)
by mail.my-domain-name.com with SMTP; 24 Apr 2012 14:49:49 -0000
Delivered-To: hipo@my-domain-name.com
Received: from localhost [127.0.0.1]
......
......

That’s all it works. Enjoy 🙂

How to Turn Off, Suppress PHP Notices and Warnings – PHP error handling levels via php.ini and PHP source code

Friday, April 25th, 2014

Reading Time: 2 minutes

php-logo-disable-warnings-and-notices-in-php-through-htaccess-php-ini-and-php-code

PHP Notices commonly occur after PHP version upgrades or when obsolete PHP code is moved from an old PHP version to a new one. This is a common error in web software using frameworks which have been abandoned by their developers.

Having PHP Notices appear on a webpage is pretty ugly and gives away a lot of information which might be used by malicious crackers to try to break your site, thus it is always a good idea to disable PHP Notices. There are plenty of ways to do so.

The easiest way is to disable them globally for the whole webserver PHP library via php.ini (/etc/php.ini); open it and make sure display_errors is disabled:

display_errors = 0

or

display_errors = Off

Note that some claim that in PHP 5.3 setting display_errors to Off will not work as expected. Anyway, to make sure whether display_errors is On or Off in your loaded PHP version, use phpinfo();.
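
A quick way to check from the shell is via the PHP CLI (note this reports the CLI php.ini, which may differ from the Apache one, so treat it as a sanity check only):

php -i | grep -E 'display_errors|error_reporting'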

It is also possible to disable PHP Notices and error reporting straight from PHP code; you need code like:

 

<?php
// Turn off all error reporting
error_reporting(0);
?>

 

or with ini_set():

 

ini_set('display_errors',0);


PHP has different levels of error reporting; here is a complete list of the possible error handling levels:

 

 

 

<?php
// Report simple running errors

error_reporting(E_ERROR | E_WARNING | E_PARSE);

// Reporting E_NOTICE can be good too (to report uninitialized
// variables or catch variable name misspellings …)

error_reporting(E_ERROR | E_WARNING | E_PARSE | E_NOTICE);

// Report all errors except E_NOTICE
// This is the default value set in php.ini

error_reporting(E_ALL ^ E_NOTICE);
// Report all PHP errors (see changelog)

error_reporting(E_ALL);
// Report all PHP errors error_reporting(-1);
// Same as error_reporting(E_ALL);

ini_set('error_reporting', E_ALL); ?>

The level of logging can be tuned on Debian Linux via /etc/php5/apache2/php.ini, or, if necessary, the PHP log level for the PHP CLI can be set through /etc/php5/cli/php.ini, with:

error_reporting = E_ALL & ~E_NOTICE

 

If you need to suppress an exact warning or notice from PHP without changing the way the PHP lib behaves, you can put @ in front of the variable or function that is causing the NOTICE or WARNING.
For example:
 

@yourFunctionHere();
@$var = …;


It's also possible to disable PHP Notices and Warnings using an .htaccess file (useful in shared hosting where you don't have access to the global php.ini); here is how:

# PHP error handling for development servers
php_flag display_startup_errors off
php_flag display_errors off
php_flag html_errors off
php_flag log_errors on
php_flag ignore_repeated_errors off
php_flag ignore_repeated_source off
php_flag report_memleaks on
php_flag track_errors on
php_value docref_root 0
php_value docref_ext 0
php_value error_log /home/path/public_html/domain/php_errors.log
php_value error_reporting -1
php_value log_errors_max_len 0

This way, though PHP Notices and Warnings will be suppressed, errors will still get logged into php_errors.log.

Apache increase loglevel – Increasing Apache logged data for better statistic analysis

Tuesday, July 1st, 2014

Reading Time: 2 minutes

apache-increase-loglevel-howto-increasing-apache-logged-data-for-better-statistic-analysis
In the case of development (QA) systems, where developers deploy new untested code exposing Apache or related Apache modules to unexpected bugs, it is often necessary to increase the Apache LogLevel to log everything; this is done with:

 

LogLevel debug

LogLevel warn is the common logging option for Apache production webservers.
 

LogLevel warn


in httpd.conf is the default Apache log setting. For some servers that produce too many logs this setting could be changed to LogLevel crit, which will make the webserver log only errors of critical importance. The LogLevel debug setting is very useful whenever you have to debug issues with non-working (failing) SSL certificates; it will give you a whole dump of the SSL handshake and the reason for it failing.

You should be careful before deciding to increase the server log level, especially on production servers.
An increased logging level puts a higher load on the Apache webserver, as well as producing gigabytes of mostly useless logs that could quickly fill up all free disk space.

If you would like to increase the data logged in access.log / error.log, because you would like to perform versatile statistical analysis of daily hits, unique visits, top landing pages etc. with Webalizer, Analog or AWStats, change the LogFormat and CustomLog directives from common to combined.

By default Apache logs with the following LogFormat and CustomLog:
 

LogFormat "%h %l %u %t "%r" %>s %b" common
CustomLog logs/access_log common


Which will log into access.log in this format:

 

127.0.0.1 - jericho [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326


Change it to something like:

 

LogFormat "%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"" combined CustomLog log/access_log combined


This would produce logs like:

127.0.0.1 - jericho [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08 [en] (Win98; I ;Nav)"

 

Using the Combined Log Format produces all the information logged by CustomLog … common, and also logs the Referer and User-Agent headers, which indicate where users were before visiting your website page and which browsers they used. You can read more on tailoring custom Apache logging on Apache's website.
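
After changing LogLevel / LogFormat it is a good habit to validate the configuration and reload Apache gracefully so running requests are not dropped (a sketch; binary and service names vary per distro):

apachectl configtest && apachectl graceful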

Fix MySQL connection error – Host ” is blocked because of many connection errors; unblock with ‘mysqladmin flush-hosts’

Wednesday, July 2nd, 2014

Reading Time: 2 minutes

fix-mysql-too-many-connection-errors-explained

If you get a MySQL error like:

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

This most likely means your PHP / Java / whatever-programming-language application connecting to MySQL is failing to authenticate with the application-created (existing) user, or that the application is attempting connections to MySQL at a rate the MySQL server can't serve.

Some common causes of too many connection errors are:
 

  • Networking Problem
  • Server itself could be down
  • Authentication Problems
  • Maximum Connection Errors allowed.

The value of the max_connect_errors system variable determines how many successive interrupted connection requests are permitted to the MySQL server.
 

Well anyways if you get the:

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

You can consider this a sure sign that application connections to MySQL are generating a lot of connection errors for some reason.
This error can also appear on very busy websites where a high number of separate connections is used; I've seen the error occur on PHP websites where mysql_pconnect(); is selected in favour of the proven-working mysql_connect();.

The first thing to do before changing / increasing the default number of max connection errors is to check how many are currently set within MySQL.

For that connect with MySQL CLI and issue:
 

mysql> SHOW VARIABLES LIKE '%error%';


+--------------------+----------------------------+
| Variable_name      | Value                      |
+--------------------+----------------------------+
| error_count        | 0                          |
| log_error          | /var/log/mysql//mysqld.log |
| max_connect_errors | 10000                      |
| max_error_count    | 64                         |
| slave_skip_errors  | OFF                        |
+--------------------+----------------------------+


A very useful MySQL CLI command for debugging the max connection errors problem is:

mysql> SHOW PROCESSLIST;

 

To solve the error, try to tune, in /etc/my.cnf, /etc/mysql/my.cnf or wherever my.cnf is located, the [mysqld] section variables max_connect_errors and wait_timeout. Some reasonable values would be:

[mysqld]
max_connect_errors = 100000
wait_timeout = 60

If such an (anyway) high value is still not high enough, you can raise max_connect_errors in the MySQL config even further, e.g. to:

max_connect_errors = 100000000

Also, if you want to try raising the max_connect_errors variable without making it permanent (i.e. without the setting surviving a MySQL service restart), set it from the MySQL CLI with:

SET GLOBAL max_connect_errors = 100000;


If you want to keep the default max_connect_errors setting and fix the situation temporarily, you can try to follow the error's

Host '' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

suggestion and issue in root console:

mysqladmin flush-hosts

The same can also be done from the MySQL CLI with the command:
 

FLUSH HOSTS;

Disable php notice logging / stop variable warnings in error.log on Apache / Nginx / Lighttpd

Monday, July 28th, 2014

Reading Time: 2 minutes

disable_php_notice_warnings_logging_in-apache-nginx-lighttpd
At one of the companies where I administer a few servers, we are in the process of optimizing server performance to stretch the maximum out of the server hardware and save money on unnecessary hardware costs, and thus we are looking for ways to make server performance better.

On a couple of websites hosted on a few of the production servers I administer, I've noticed dozens of PHP Notice errors, making the error.log quickly grow to gigabytes and adding useless hard drive I/O overhead. Most of the PHP notice warnings are caused by uninitialized PHP variables.

I'm aware that having uninitialized values is a horrible security hole; however, the websites are running fine despite the notice warnings, and currently the company doesn't have the necessary programmer resources to further debug and fix all these undefined PHP vars. Thus what happens is that every month a couple of hundred megabytes of the same useless PHP notice warnings are written to error.log.

Those error.log entries put an extra burden on awstats, which later generates the server access statistics, including the 404 error statistics, and thus the awstats script has to read and analyze huge files with plenty of records which have nothing to do with 404 errors.

We decided this PHP Notice warning logging is one of the things we could optimize and that it had to be disabled.

Here is how this is done on the servers running Debian Wheezy stable to disable PHP notices:

I had to change the error_reporting variable in /etc/php5/apache2/php.ini.

The setting was to log everything (including PHP critical errors, warnings and notices) like so:
 

vi /etc/php5/apache2/php.ini

error_reporting = E_ALL & ~E_DEPRECATED

to

error_reporting = E_COMPILE_ERROR|E_ERROR|E_CORE_ERROR


On CentOS, RHEL, SuSE based servers, edit /etc/php.ini instead.

This setting makes Apache log to error.log only critical errors, PHP core dump (thread) errors and PHP code compilation (interpretation) errors.
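
A one-liner sketch to apply the same change without opening an editor (it assumes the error_reporting line is present and uncommented in the stock php.ini):

sed -i 's/^error_reporting = .*/error_reporting = E_COMPILE_ERROR|E_ERROR|E_CORE_ERROR/' /etc/php5/apache2/php.ini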

To make the settings take effect on a Debian host's Apache webserver:

/etc/init.d/apache2 restart

On CentOS / RHEL Linux, restart Apache with:

/etc/init.d/httpd restart

For other servers running Nginx and Lighttpd webservers, after changing php.ini:

service nginx reload
service lighttpd restart

To disable PHP notice errors only on some websites, where .htaccess is enabled, you can also place in the website DocumentRoot an .htaccess containing:
 

php_value error_reporting 2039


Another way to disable them via .htaccess is by adding to it the code:
 

php_flag display_errors off


Also, for websites hosted on some of the servers where .htaccess is disabled, enabling / disabling PHP notices can easily be toggled by adding the following PHP code to index.php:

define('DEBUG', true);

if(DEBUG == true)
{
    ini_set('display_errors', 'On');
    error_reporting(E_ALL);
}
else
{
    ini_set('display_errors', 'Off');
    error_reporting(0);
}

 

Improve Websites SEO: Optimize images to Increase website loading performance on Linux server – Image Compress tools

Friday, December 5th, 2014

Reading Time: 4 minutes

Optimize-website-images-pictures-to-Increase-website-loading-performance-on-Linux-server_Image_Compress_tools-Improve-Websites_SEO
Part of our daily life as web hosting system administrators is to constantly strive to better utilize the hardware of our Linux / Windows hosting servers.
Therefore it is our constant task to look for new and better ways to optimize our Apache sites and webservers in order to return served application content lightning fast, to keep the boss and customers happy 🙂

There are things to tune for better server performance and better CPU / memory utilization both on the application server side and in the website programming code backend, HTML and pictures / images.

Thus it is critically important not only to keep the webserver / PHP engine optimized, but also to keep the hosted sites' stored images and source code clean and efficient.

We as admins usually can't directly interfere with cleaning up the source code, and often we have to host crappily written sites with picture upload forms and unoptimized image files produced on old photo cameras, "ancient" mobile phones, Win XP MS Paint, various versions of Photoshop, GIMP etc.

It is a well-known fact that a big part of website user experience is how fast a page loads for the user, thus HTML / CSS images that load slowly have a negative impact on the user's look & feel of the website.

Therefore, by optimizing the size of the hosted sites' images, you save network bandwidth and, in some cases such as large gallery sites, HDD disk space.

On Linux there are already many command-line tools to inspect and optimize (compress) the size of PNG, JPEG, GIF, BMP, PNM and TIFF images; the most famous ones are:

  • optipng – PNG optimizer that recompresses image files to a smaller size, without losing any information.
  • jpegoptim –   lossless JPEG optimization (based on optimizing the Huffman tables) and "lossy" optimization based on setting a maximum quality factor.
  • pngcrush – Recommended tool to use by Stoyan Stefanov (Yahoo Yslow Developer)
  • jpegtran – Recommended to use by Google 
  • gifsicle –  command-line tool for creating, editing, and getting information about GIF images and animations. 

It is hence useful to first run the available Linux image optimization tools manually (to get an idea what they do) and later automate them as scripts to optimize the size of server-stored images and make pictures load faster on websites, thus improving end user experience and speeding up image content delivery to the GoogleBot / YahooBot / Bing crawlers, which will make search engines rank the server-hosted sites better (more SEO friendly).

 

  • How much space (in megabytes / gigabytes) can picture compression save you?

If you run it on a 500MB image directory, you can probably save about 20 to 50MB, so don't expect an extraordinary file-size reduction; however, a 5% to 10% reduction in size is not bad either. If you host 100 sites, each with half a gig of data, this would mean saving about 5GB of data and some 5GB from backups 🙂 In extraordinary cases you can expect a 20% to 30% storage reduction. For even better image compression you can try out GIMP's Save for Web option.
 

  • Installing jpegtran, optipng, jpegoptim, pngcrush, gifsicle on Debian / Ubuntu (deb based) Linux
     

apt-get install --yes libjpeg-progs optipng jpegoptim pngcrush gifsicle

 

  • Installing jpegtran, optipng, jpegoptim, pngcrush, gifsicle on Fedora / CentOS / RHEL (RPM based distros)
     

yum -y install pngcrush libjpeg-turbo-utils opt-jpg opt-png opt-gif


gifsicle is not available by default on Redhacks 🙂 but there is an RPM package for Fedora at http://pkgs.repoforge.org/gifsicle/

 

Some examples of running image compression on GNU / Linux

  • optipng and jpegoptim optimization for all files in a directory
     

cd /home/sites/

find . -iname '*.png' -print0 | xargs -0 optipng -o7 -preserve
find . -iname '*.jpg' -print0 | xargs -0 jpegoptim --max=90 --strip-all --preserve --totals


In the jpegoptim command, the --strip-all option will strip any metadata, including Exif data, from the images. For websites JPEG metadata is usually not needed, so it's usually OK to strip it.

The above jpegoptim example will slightly decrease JPEG image quality, to 90%. A quality level of 90 is still high enough, and website visitors are unlikely to spot any visible quality reduction / defects in the images.

 

  • pngcrush all files in a directory example
     

cd /home/sites/

for png in `find . -iname "*.png"`; do
    echo "crushing $png …"
    pngcrush -rem alla -reduce -brute "$png" temp.png

    # preserve the original file on error
    if [ $? = 0 ]; then
        mv -f temp.png "$png"
    else
        rm temp.png
    fi
done

  • Run jpegtran on sites directory
     

find /home/sites -name "*.jpg" -type f -exec jpegtran -copy none -optimize -outfile {} {} \;

 

  • Set a script to compress / reduce size of Sites Images


Here is a basic optimize_images.sh which I used earlier and which reduced the overall image size by just 5 to 10%; then I found a much improved version of the optimize-images shell script (useful to clear up EXIF picture data and comments from JPG / PNG files). The script execution can take a very long time on large image directories and thus can cause high HDD disk I/O; however, if run once a week at night time, it's not such a big deal.

To set it to run on your server as a cronjob:
 

cd /usr/sbin/
wget -q https://pc-freak.net/bshscr/optimize_images2.sh
crontab -u root -e 


Sample cron job to run twice a month, on the 10th and the 27th, at 3 o'clock AM:
 

00 3 10,27 * * /usr/sbin/optimize_images2.sh >/dev/null 2>&1


Also, if you need to further optimize millions of tiny PNG files, the Yahoo Smush.it service could be helpful. For compression maniacs it's worth also checking out the TinyPNG service (however, be aware that this service compresses files with significant quality loss, making picture quality visibly deteriorate).

Besides optimizing server-stored pictures, here is some other stuff that helps increase server utilization / lower webpage loading times.

Starting with the installation (when the site is to use Apache + PHP for its backend), the first thing to do on the freshly installed Linux server is to implement the usual list of common Apache timeout variables that help the webserver scale better for the hosted CMSes, enable webserver compression with mod_deflate, enable eAccelerator, tune common PHP variables etc.
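
On a Debian-style Apache, a minimal sketch of enabling the compression part of that list could look like this (module and service management commands are the Debian ones; adapt for other distros):

a2enmod deflate
apachectl configtest && service apache2 reload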

Another thing I sometimes use to speed up Apache child response time by up to 20-30% is to include any .htaccess mod_rewrite rules directly into the VirtualHost / httpd.conf Apache configuration.

On heavily loaded sites, online stores / large company website portals with more than 60 000 – 100 000 unique IP visitors a day, a useful tip is to completely disable Apache logging to access.log / error.log.
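
One hedged way to do that per VirtualHost (a sketch; keep error logging if you still need to debug the site) is to point the access log at /dev/null, or simply comment out the CustomLog directive:

CustomLog /dev/null common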

Often, when old-architecture websites are moved from an older Linux OS version to a newer one with newer versions of Apache / PHP, the sites keep working without major code rework, but they use many functions which are already obsolete and thus a lot of WARNING message crap is logged into php_error.log / error.log. So, to save disk space and decrease hard disk I/O operations, it is good to disable PHP Notice and Warning messages.