Posts Tagged ‘cgi’

Create and Configure SSL bundle file for GoGetSSL issued certificate in Apache Webserver on Linux

Saturday, November 3rd, 2018


I had a small task to configure a new wildcard SSL certificate for some domains on a Debian GNU / Linux Jessie server running Apache 2.4.25.

The official documentation on how to install the SSL certificate on Linux provided by GoGetSSL (which issues COMODO certificates) was obsolete at the time of writing this article and suggested the following install instructions:
 

SSLEngine on
SSLCertificateKeyFile /etc/ssl/ssl.key/server.key
SSLCertificateFile /etc/ssl/ssl.crt/yourDomainName.crt
SSLCertificateChainFile /etc/ssl/ssl.crt/yourDomainName.ca-bundle


Adding such configuration to the domain's VirtualHost and testing with apache2ctl spits out an error like:

 

root@webserver:~# apache2ctl configtest
AH02559: The SSLCertificateChainFile directive (/etc/apache2/sites-enabled/the-domain-name-ssl.conf:17) is deprecated, SSLCertificateFile should be used instead
Syntax OK

 


To make the issued GoGetSSL certificate work with Debian Linux, here are the few things I did:

The files issued by Gogetssl.COM were the following:

 

AddTrust_External_CA_Root.crt
COMODO_RSA_Certification_Authority.crt
the-domain-name.crt


The webserver already had SSL support via the mod_ssl Apache module, e.g.:

 

root@webserver:~# ls -al /etc/apache2/mods-available/*ssl*
-rw-r--r-- 1 root root 3112 окт 21  2017 /etc/apache2/mods-available/ssl.conf
-rw-r--r-- 1 root root   97 сеп 19  2017 /etc/apache2/mods-available/ssl.load
root@webserver:~# ls -al /etc/apache2/mods-enabled/*ssl*
lrwxrwxrwx 1 root root 26 окт 19  2017 /etc/apache2/mods-enabled/ssl.conf -> ../mods-available/ssl.conf
lrwxrwxrwx 1 root root 26 окт 19  2017 /etc/apache2/mods-enabled/ssl.load -> ../mods-available/ssl.load


For those who don't have mod_ssl enabled, to enable it quickly run:

 

# a2enmod ssl


The VirtualHost used for the domains had Apache config as below:

 

 

 

NameVirtualHost *:443

<VirtualHost *:443>
    ServerAdmin support@the-domain-name.com
    ServerName the-domain-name.com
    ServerAlias *.the-domain-name.com the-domain-name.com

    DocumentRoot /home/the-domain-namecom/www
    SSLEngine On
#    <Directory />
#        Options FollowSymLinks
#        AllowOverride None
#    </Directory>
    <Directory /home/the-domain-namecom/www>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Include /home/the-domain-namecom/www/htaccess_new.txt
        Order allow,deny
        allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined

#    Alias /doc/ "/usr/share/doc/"
#   <Directory "/usr/share/doc/">
#       Options Indexes MultiViews FollowSymLinks
#       AllowOverride None
#       Order deny,allow
#       Deny from all
#       Allow from 127.0.0.0/255.0.0.0 ::1/128
#   </Directory>
SSLCertificateKeyFile /etc/apache2/ssl/the-domain-name.com.key
SSLCertificateFile /etc/apache2/ssl/chain.crt

 

</VirtualHost>

The config directives enabling and making the SSL actually work are:
 

SSLEngine On
SSLCertificateKeyFile /etc/apache2/ssl/the-domain-name.com.key
SSLCertificateFile /etc/apache2/ssl/chain.crt

 

The chain.crt file is actually a bundle containing the GoGetSSL / COMODO CA root and RSA Certification Authority certificates concatenated together. To prepare that file I used a small bundle.sh script found on serverfault.com (I've made a mirror of bundle.sh on www.pc-freak.net).
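The original script is only linked above, but the idea is simply to print each PEM certificate while making sure it ends with a newline, so that concatenated certificates don't run into each other. Below is a minimal sketch of what such a create-ssl-bundle.sh helper can look like (my reconstruction of the idea, not necessarily the exact script from serverfault):

#!/bin/sh
# create-ssl-bundle.sh - print a PEM certificate to stdout, guaranteeing it
# ends with a newline so several certificates can safely be concatenated.
# Usage: sh create-ssl-bundle.sh certificate.crt >> chain.crt
if [ -z "$1" ] || [ ! -f "$1" ]; then
    echo "Usage: $0 certificate.crt" >&2
    exit 1
fi
cat "$1"
# if the file's last character is not a newline, append one
if [ "$(tail -c 1 "$1" | wc -l)" -eq 0 ]; then
    echo
fi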

To prepare the chain.crt  bundle, I ran:

 

sh create-ssl-bundle.sh _iq-test_cc.crt > chain.crt
sh create-ssl-bundle.sh COMODO_RSA_Certification_Authority.crt >> chain.crt
sh create-ssl-bundle.sh AddTrust_External_CA_Root.crt >> chain.crt


Then I copied the file to /etc/apache2/ssl together with the the-domain-name.com.key file generated earlier with the openssl command, as explained in my article on how to install a RapidSSL certificate on Linux.

/etc/apache2/ssl did not previously exist (on Debian Linux), so I created it:

 

root@webserver:~# mkdir /etc/apache2/ssl
root@webserver:~# ls -al /etc/apache2/ssl/chain.crt
-rw-r--r-- 1 root root 20641 Nov  2 12:27 /etc/apache2/ssl/chain.crt
root@webserver:~# ls -al /etc/apache2/ssl/the-domain-name.com.key
-rw-r--r-- 1 root root 6352 Nov  2 20:35 /etc/apache2/ssl/the-domain-name.com.key
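To double-check that the bundle really contains all the certificates in the expected order, the certificates inside chain.crt can be listed using only standard openssl subcommands:

root@webserver:~# openssl crl2pkcs7 -nocrl -certfile /etc/apache2/ssl/chain.crt | openssl pkcs7 -print_certs -noout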

 

As I needed to add the SSL HTTPS configuration for multiple domains, I further wrote and used a tiny shell script, add_new_vhost.sh, which accepts as argument the domain name I want to add. The script works with a sample skele (template) vhost definition, which is included in the script itself and can easily be modified for the desired vhost config.
To add my multiple domains, I used the script as follows:
 

sh add_new_vhost.sh add-new-site-domain.com
sh add_new_vhost.sh add-new-site-domain1.com


etc.

Here is the complete script as well:

 

#!/bin/sh
# Shell script to add easily new domains for virtual hosting on Debian machines
# arg1 should be a domain name
# This script takes the domain name which you type as arg1 and uses it to create
# the DocRoot / cgi-bin directories for the domain and a separate apache log directory for the site,
# then takes a skele.com template and substitutes skele.com with your domain name and directories.
# This script's aim is to make it easy for a sysadmin to add new domains on Debian
sites_base_dir=/var/www/jail/home/www-data/sites/;
# the directory where the skele.com file is
skele_dir=/etc/apache2/sites-available;
# base directory where site log dir to be created
cr_sep_log_file_d=/var/log/apache2/sites;
# owner of the directories
username='www-data';
# read arg0 and arg1
arg0=$0;
arg1=$1;
if [ -z "$arg1" ]; then
echo "Missing domain name";
exit 1;
fi

 

# skele template
echo "#
#  Example.com (/etc/apache2/sites-available/www.skele.com)
#
<VirtualHost *>
        ServerAdmin admin@design.bg
        ServerName  skele.com
        ServerAlias www.skele.com


        # Indexes + Directory Root.
        DirectoryIndex index.php index.htm index.html index.pl index.cgi index.phtml index.jsp index.py index.asp

        DocumentRoot /var/www/jail/home/www-data/sites/skelecom/www/docs
        ScriptAlias /cgi-bin "/var/www/jail/home/www-data/sites/skelecom/cgi-bin"
        
        # Logfiles
        ErrorLog  /var/log/apache2/sites/skelecom/error.log
        CustomLog /var/log/apache2/sites/skelecom/access.log combined
#       CustomLog /dev/null combined
      <Directory /var/www/jail/home/www-data/sites/skelecom/www/docs/>
                Options FollowSymLinks MultiViews -Includes
                AllowOverride None
                Order allow,deny
                allow from all
                # This directive allows us to have apache2's default start page
                # in /apache2-default/, but still have / go to the right place
#               RedirectMatch ^/$ /apache2-default/
        </Directory>

        <Directory /var/www/jail/home/www-data/sites/skelecom/cgi-bin/>
                Options FollowSymLinks ExecCGI -Includes
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>

</VirtualHost>
" > $skele_dir/skele.com;

domain_dir=$(echo $arg1 | sed -e 's/\.//g');
new_site_dir=$sites_base_dir/$domain_dir/www/docs;
echo "Creating $new_site_dir";
mkdir -p $new_site_dir;
mkdir -p $sites_base_dir/$domain_dir/cgi-bin;
echo "Creating site's DocRoot and CGI directory";
chown -R $username:$username $new_site_dir;
chown -R $username:$username $sites_base_dir/$domain_dir/cgi-bin;
echo "Creating site's Log files Directory";
mkdir -p $cr_sep_log_file_d/$domain_dir;
echo "Creating sites's VirtualHost file and adding it for startup";
sed -e "s#skele.com#$arg1#g" -e "s#skelecom#$domain_dir#g" $skele_dir/skele.com >> $skele_dir/$arg1;
ln -sf $skele_dir/$arg1 /etc/apache2/sites-enabled/;
echo "All Completed please restart apache /etc/init.d/apache restart to Load the new virtual domain";

# Date Fri Jan 11 16:27:38 EET 2008


Using the script saves a lot of time compared to manually copying a vhost file and then editing it to change the ServerName directive. For vhosts whose configuration is identical and where only the ServerName has to change, it is perfect for creating all the necessary domains. I created a simple text file with each of the domains and ran the script in a loop:
 

while read -r i; do sh add_new_vhost.sh $i; done < domain_list.txt
 

 

How to get rid of “PHP Warning: PHP Startup: Unable to load dynamic library ‘/usr/lib/php5/20090626/suhosin.so'” on Debian GNU / Linux

Tuesday, October 25th, 2011


After a recent new Debian Squeeze Apache+PHP server install and moving a website from another host running CentOS 5.7 Linux, some of the PHP scripts running via crontab started displaying the following annoying PHP Warnings:

debian:~# php /home/website/www/cron/update.php

PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20090626/suhosin.so' - /usr/lib/php5/20090626/suhosin.so: cannot open shared object file: No such file or directory in Unknown on line 0

Obviously the error revealed that the PHP CLI is not happy that I had previously removed the suhosin php5-suhosin module from the system.
I wouldn't have removed php5-suhosin if it hadn't occasionally produced some odd behaviour with the Apache webserver.
To fix the PHP Warning, I first used grep to see where exactly the suhosin module gets included in Debian's php.ini config files:

debian:~# cd /etc/php5
debian:/etc/php5# grep -rli suhosin *
apache2/conf.d/suhosin.ini
cgi/conf.d/suhosin.ini
cli/conf.d/suhosin.ini
conf.d/suhosin.ini

Yeah, that's right, Debian has three php.ini config files: one for the PHP CLI (/usr/bin/php), another for the PHP library loaded by the Apache webserver (/usr/lib/apache2/modules/libphp5.so) and one for Apache's CGI module (/usr/lib/apache2/modules/mod_fcgid.so).

I was too lazy to edit all the above found declarations that try to include the suhosin module in PHP, and then I remembered that all these obsolete suhosin declarations were probably still present because the php5-suhosin package had not been purged from the system.

A quick check with dpkg further strengthened my assumption, as the php5-suhosin package was still hanging around in 'rc' state (removed, but with its configuration files still present):

debian:~# dpkg -l |grep -i suhosin
rc php5-suhosin 0.9.32.1-1 advanced protection module for php5

Hence, to remove the obsolete package config and directories completely from the system and solve the PHP Warning, I used dpkg --purge, like so:

debian:~# dpkg --purge php5-suhosin
(Reading database ... 76048 files and directories currently installed.)
Removing php5-suhosin ...
Purging configuration files for php5-suhosin ...
Processing triggers for libapache2-mod-php5 ...
Reloading web server config: apache2.

Further on, to make sure the PHP Warning was solved, I gave the cron PHP script another go and it no longer produced errors:

debian:~# php /home/website/www/cron/update.php
debian:~#

How to make a mirror of website on GNU / Linux with wget / Few tips on wget site mirroring

Wednesday, February 22nd, 2012


Everyone who has used Linux is probably familiar with wget, or has used this handy console download tool at least a thousand times. Not so many desktop GNU / Linux users, like Ubuntu and Fedora Linux users, have tried using wget for something more than single file downloads.
Actually wget is not as popular as it used to be in earlier Linux days. I've noticed a tendency for newer Linux users to prefer curl (I don't know why).

With all that said, I'm sure there are plenty of Linux users curious how a website mirror can be made with wget.
This article will briefly suggest a few ways to do website mirroring on Linux / BSD, as wget is available on both free operating systems.

1. Most Simple exact mirror copy of website

The most basic use of wget's mirror capabilities is the --mirror (-m) argument:

# wget -m http://website-to-mirror.com/sub-directory/

Creating a mirror like this is not a very good practice, as the links of the mirrored pages will still point to the external URLs. In other words the link URLs will not point to your local copy, so if you're not connected to the internet and try to browse random links of the page, you will end up with many links which do not open because you don't have an internet connection.

2. Mirroring with links rewritten to point to the local copy and a delay between page downloads

Making a mirror with wget can put a heavy load on the remote server, as it fetches the files as quickly as the bandwidth allows. On busy servers rapid downloads with wget can significantly increase the server response time, and on some highly loaded servers it can even cause the server to hang completely.
Hence, mirroring pages with wget without explicitly setting a delay between each page download could be considered by the remote server as a kind of DoS (denial of service) attack. Some site administrators have already set firewall rules, or have web server modules configured (like Apache mod_security), which filter out IPs doing too frequent HTTP GET / POST requests to the web server.
To make wget wait 10 seconds between mirrored pages, use:

# wget -mk -w 10 -np --random-wait http://website-to-mirror.com/sub-directory/

The -mk stands for -m / --mirror and -k, the shortcut for --convert-links (make links point locally); --random-wait tells wget to randomize the wait between page download requests around the value given with -w (10 seconds here).

3. Mirror / retrieve website sub directory ignoring robots.txt "mirror restrictions"

Some websites have a robots.txt which restricts content download with clients like wget and curl, or even prohibits crawlers from downloading their website pages completely.

The /robots.txt restrictions are not a problem, as wget has an option to disable robots.txt checking when downloading.
Getting around the robots.txt restrictions with wget is possible through the -e robots=off option.
For instance, if you want to make a local mirror copy of a whole sub-directory with all links, with a delay of 10 seconds between each consecutive page request, without reading the robots.txt allow/forbid rules at all:

# wget -mk -w 10 -np -e robots=off --random-wait http://website-to-mirror.com/sub-directory/

4. Mirror website which is prohibiting Download managers like flashget, getright, go!zilla etc.

Sometimes when you try to use wget to make a mirror copy of an entire site domain subdirectory or the root site domain, you get an error similar to:

Sorry, but the download manager you are using to view this site is not supported.
We do not support use of such download managers as flashget, go!zilla, or getright

This message is produced by the site's dynamic generation language (PHP / ASP / JSP etc.), as the website code is written to check the UserAgent sent by the browser.
wget's default UserAgent sent to the remote webserver is:
Wget/1.11.4

As this is not a common desktop browser useragent, many webmasters configure their websites to only accept the well known useragents sent by established desktop browsers.
Here are few typical user agents which identify a desktop browser:
 

  • Mozilla/5.0 (Windows NT 6.1; rv:6.0) Gecko/20110814 Firefox/6.0
  • Mozilla/5.0 (X11; Linux i686; rv:6.0) Gecko/20100101 Firefox/6.0
  • Mozilla/6.0 (Macintosh; I; Intel Mac OS X 11_7_9; de-LI; rv:1.9b4) Gecko/2012010317 Firefox/10.0a4
  • Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.2a1pre) Gecko/20110324 Firefox/4.2a1pre

etc. etc.

If you're trying to mirror a website which has implemented some kind of useragent restriction based on a "valid" useragent, wget has the -U option, enabling you to fake the useragent.

If you get the "Sorry, but the download manager you are using to view this site is not supported" message, fake / change wget's UserAgent with the command:

# wget -mk -w 10 -np -e robots=off \
--random-wait \
--referer="http://www.google.com" \
--user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6" \
--header="Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" \
--header="Accept-Language: en-us,en;q=0.5" \
--header="Accept-Encoding: gzip,deflate" \
--header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
--header="Keep-Alive: 300" http://website-to-mirror.com/sub-directory/

For the sake of some wget anonymity, to make wget permanently hide its user agent and pretend to be a Mozilla Firefox running on MS Windows XP, set the user agent in a .wgetrc file in your home directory.
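A minimal ~/.wgetrc doing just that could look like the snippet below (user_agent is a standard wgetrc setting; the exact browser string is up to you):

# ~/.wgetrc - make wget always identify itself as Firefox on Windows XP
user_agent = Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6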

5. Make a complete mirror of a website under a domain name

To retrieve a complete working copy of a site with wget, a good way is:

# wget -rkpNl5 -w 10 --random-wait www.website-to-mirror.com

Where the arguments mean:
-r – Retrieve recursively
-k – Convert the links in documents to make them suitable for local viewing
-p – Download everything (inline images, sounds and referenced stylesheets etc.)
-N – Turn on time-stamping
-l5 – Specify recursion maximum depth level of 5

6. Make a dynamic pages static site mirror, by converting CGI, ASP, PHP etc. to HTML for offline browsing

Often website pages end in a .php / .asp / .cgi extension. An example of what I mean is, for instance, the URL http://php.net/manual/en/tutorial.php. You see the URL page is tutorial.php; once mirrored with wget, the local copy will also end up as .php and therefore will not be suitable for local browsing, since the local browser doesn't know how to interpret the .php extension.
Therefore, to copy a website with non-html extensions and make it browsable offline as HTML, there is the --html-extension option, e.g.:

# wget -mk -w 10 -np -e robots=off \
--random-wait \
--html-extension \
--convert-links http://www.website-to-mirror.com

A good practice when making mirrors is to set a download rate limit. Setting such a rate is good for both sides (the local host doing the downloading and the remote server). A download limit is also useful when mirroring websites consisting of many enormous files (documentary movies, music etc.).
To set a download limit, add the --limit-rate= option. Passing --limit-rate=200k to wget would limit the download speed to 200 KB/s.

Another useful thing to make sure wget has made an accurate mirror is wget's logging. To use it, pass -o ./my_mirror.log to wget.
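Putting the last two tips together, a mirror run with both a rate limit and a log file might look like this (the URL is just the placeholder used throughout the article):

# wget -mk -w 10 --random-wait --limit-rate=200k -o ./my_mirror.log http://www.website-to-mirror.com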
 

Speeding up Apache through apache2-mpm-worker and php5-cgi on Debian / How to improve Apache performance and decrease server memory consumption

Friday, March 18th, 2011

By default most Apache-running Linux servers on the Internet are configured to use the MPM prefork Apache module.
Historically the prefork Apache module is the predecessor of the worker module, and is therefore believed to be way more tested and reliable if you need a critical, reliable webserver configuration.

However, from my experience so far with the Apache MPM worker, I can boldly say that many of the rumors concerning the unreliability of apache2-mpm-worker are just myths.

The old way Apache handles connections, e.g. mod prefork, is the well known model that a large number of daemons on Linux and BSD still rely on.
When prefork is used by Apache, every new TCP/IP connection arriving at your Linux server on the Apache configured port, let's say port 80, is served in such a way that the parent (mother) Apache process forks a new copy of itself in order to serve the new request.
Thus when using prefork, Apache needs to fork a new process (if it doesn't already have an idle forked one waiting for connections) and serve the HTTP request of the new client; after the client's request is completed, the newly forked Apache process usually dies (although this again depends on how the Apache server is configured via the Apache configuration, apache2.conf / httpd.conf etc.).

Now you can imagine how slow and memory consuming it is that the parent Apache process constantly spawns new processes, kills old ones etc. in order to fulfill the client requests.

Now, just to compare, the Apache MPM worker does not use the old forking model, but relies on a few Apache processes each running multiple threads, which handle all requests without constantly being destroyed and recreated as with the prefork module.
This saves operations and system resources; threaded programming has long been proven a more efficient way to handle tasks and is heavily adopted in GUI programming, for instance in Microsoft Windows, Mac OS X, Linux GNOME, KDE etc.

There is plenty of information and statistical data online comparing Apache running with the prefork and worker modules.
As the goal of this article is not to go into depth with that kind of information, I won't say more about it, but will let you explore a bit more online in case you're interested.

The purpose of this article is to explain in short how to substitute Apache2-MPM-Prefork and how your server performance could benefit from the use of Apache2-MPM-Worker.
On Debian the default Apache process serving module in Apache 1.3.x, 2.0.x and 2.2.x is prefork, thus installing apache2-mpm-worker is not "the standard way" to install Apache.

Deciding to switch from the default Debian apache2-mpm-prefork to apache2-mpm-worker is quite a serious and responsible decision, and in some cases it might cause trouble; if you have decided to follow my article, be sure to consider all the possible negative consequences of switching to the Apache worker!

Now, after having said a bunch of things which might not be necessary for the experienced system admin, I'll continue with the steps to install apache2-mpm-worker.

1. Install the apache2-mpm-worker

debian:~# apt-get install apache2-mpm-worker php5-cgi
Reading state information... Done
The following packages were automatically installed and are no longer required:
The following packages will be REMOVED apache2-mpm-prefork libapache2-mod-php5
The following NEW packages will be installed apache2-mpm-worker
0 upgraded, 1 newly installed, 2 to remove and 46 not upgraded.
Need to get 0B/259kB of archives. After this operation, 6193kB disk space will be freed.

As you can notice in the confirmation text which appears, apt has to remove the apache2-mpm-prefork and libapache2-mod-php5 packages before it can proceed to install apache2-mpm-worker.

You might ask yourself: if I remove my installed libphp, how would I be able to use Apache with my PHP based websites? And why does the apt package manager require libapache2-mod-php5 to be removed?
The explanation is simple: libapache2-mod-php5 is not thread safe, in other words PHP (and many of its extensions) is not guaranteed to work correctly inside the threaded Apache worker module, and would probably lead to PHP and Apache crashes.
Therefore, in order to install the Apache worker MPM it's necessary that no libapache2-mod-php5 is present on the system.
In order to have PHP on the server again you will have to use the php5-cgi deb package; this is the reason why in the above apt-get command I'm also requesting apt to install the php5-cgi package next to apache2-mpm-worker.

2. Enable the cgi and cgid apache modules

debian:~# a2enmod cgi
debian:~# a2enmod cgid

3. Activate the mod_actions Apache module

debian:~# cd /etc/apache2/mods-enabled
debian:~# ln -sf ../mods-available/actions.load
debian:~# ln -sf ../mods-available/actions.conf

4. Add configuration options in order to enable mod worker to use the newly installed php5-cgi

Edit /etc/apache2/mods-available/actions.conf with vim, mcedit or nano (e.g. your editor of choice) and add inside:

<IfModule mod_actions.c>
Action application/x-httpd-php /cgi-bin/php5
</IfModule>
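For the Action directive to actually be triggered, the .php extension also has to be mapped to the application/x-httpd-php type, and /cgi-bin/ has to resolve to the directory holding the php5 CGI wrapper (normally /usr/lib/cgi-bin on Debian). A slightly more complete sketch of the same block, under those assumptions, would be:

<IfModule mod_actions.c>
    # map .php files to the MIME type the Action below reacts on
    AddType application/x-httpd-php .php
    # /cgi-bin/ must point to the directory containing the php5 CGI wrapper
    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    Action application/x-httpd-php /cgi-bin/php5
</IfModule>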

After completing all the above instructions, you might also need to edit your /etc/apache2/apache2.conf to tune how your Apache MPM worker will serve client requests.
Configuring the <IfModule mpm_worker_module> in apache2.conf is necessary to optimize your newly installed mpm_worker module for performance.

5. Configure the mpm_worker_module in apache2.conf

One example configuration for the worker MPM is:

<IfModule mpm_worker_module>
StartServers 2
MaxClients 150
MinSpareThreads 25
MaxSpareThreads 75
ThreadsPerChild 25
MaxRequestsPerChild 0
</IfModule>

Consider that this configuration is just a sample and is by no means tuned for high-load Apache servers; you will need to play further with the values to get good results on your server.

6. Check that all is fine with your Apache configurations and no syntax errors are encountered

debian:~# /usr/sbin/apache2ctl -t
Syntax OK

If you get something different from Syntax OK, track the error down and fix it before restarting the Apache server.

7. Now restart the Apache server

debian:~# /etc/init.d/apache2 restart

Everything should run fine, and hopefully your PHP scripts will be interpreted just fine through php5-cgi instead of libapache2-mod-php5.
Using /usr/bin/php5-cgi will increase your server CPU load by some percentage, but on the other hand will drastically decrease the webserver's memory consumption.
That's quite logical, because libapache2-mod-php5 is loaded once when the Apache server starts, whereas a new instance of /usr/bin/php5-cgi is invoked for each Apache request served via the worker MPM.

There is one serious security flaw that comes with php5-cgi: a DoS against a server processing scripts through php5-cgi is much easier to achieve.
An example of a denial of service attack which could affect a website running with mod worker and php5-cgi can be simulated by a simple user with a web browser holding down the F5 or Ctrl + R page refresh keys.
In that case, whenever php5-cgi is used, the CPU load rises drastically; one possible solution to this denial of service issue is installing and using libapache2-mod-evasive, like so:

8. Install libapache2-mod-evasive

debian:~# apt-get install libapache2-mod-evasive
The Apache mod_evasive module is a nice Apache module for minimizing HTTP DoS and brute force attacks.
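A basic mod_evasive configuration sketch is shown below; the directive names are mod_evasive's standard ones, but the thresholds are only example values and the config file location can differ between Debian releases (e.g. /etc/apache2/mods-available/mod-evasive.conf), so adjust to your setup:

<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    DOSPageCount        5
    DOSSiteCount        50
    DOSPageInterval     1
    DOSSiteInterval     1
    DOSBlockingPeriod   60
</IfModule>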
Now with mod worker through the php5-cgi, your apache should start serving requests more efficiently than before.
For performance reasons some might even want to try out FastCGI with the worker MPM to boost Apache performance, but as I have never tried that I can't say how reliable mod worker with FastCGI would be.

N.B.! If you have some specific PHP configuration within /etc/php5/apache2/php.ini you will have to set it also in /etc/php5/cgi/php.ini before you proceed with the above instructions, otherwise your PHP scripts might not work as expected.

The worker MPM is also capable of working with the standard mod_php5 Apache module, but if you decide to go this route you will have to recompile your PHP manually from source, as on Debian this option is not possible with the default PHP packages.
This installation worked fine on Debian Lenny, but I suppose the same installation should work fine on Debian Squeeze as well as Debian testing/unstable.
Feedback on the afore-described mod worker installation is very welcome!

Monitoring Windows hosts with Nagios on Debian GNU/Linux

Tuesday, August 30th, 2011


In this article in short, I’ll explain how I configured Nagios on a Debian GNU/Linux release (Squeeze 6) to monitor a couple of Windows hosts running inside a local network. Now let’s start.

1. Install the necessary Nagios Debian packages

apt-get install nagios-images nagios-nrpe-plugin nagios-nrpe-server nagios-plugins nagios-plugins-basic nagios-plugins-standard \
nagios3 nagios3-cgi nagios3-common nagios3-core

2. Edit /etc/nagios-plugins/config/nt.cfg

In the File substitute:

define command { command_name check_nt command_line /usr/lib/nagios/plugins/check_nt -H '$HOSTADDRESS$' -v '$ARG1$' }

With:

define command {
command_name check_nt
command_line /usr/lib/nagios/plugins/check_nt -H '$HOSTADDRESS$' -p 12489 -v $ARG1$ $ARG2$
}

3. Modify nrpe.cfg to add the hosts allowed to connect to the Nagios NRPE server

vim /etc/nagios/nrpe.cfg

Lookup inside for nagios’s configuration directive:

allowed_hosts=127.0.0.1

In order to allow more hosts to report to the nagios nrpe daemon, change the value to let’s say:

allowed_hosts=127.0.0.1,192.168.1.4,192.168.1.5,192.168.1.6

This config allows the three IPs 192.168.1.4-6 to be able to report for nrpe.

For the changes to the NRPE server to take effect, it has to be restarted.

debian:~# /etc/init.d/nagios-nrpe-server restart

Further on, some configuration needs to be done on the NRPE agent Windows hosts, in this case 192.168.1.4, 192.168.1.5 and 192.168.1.6.

4. Install NSClient++ on all Windows hosts whose CPU, disk, temperature and services have to be monitored

Download the agent from http://sourceforge.net/projects/nscplus and launch the installer: double-click it and follow the installation screens. It is necessary that the NRPE protocol is enabled during installation. After the installation is complete, one needs to modify NSC.ini.
By default many of NSClient++'s tracking modules are not enabled in NSC.ini, so it is necessary that the following DLLs get activated in the config:

FileLogger.dll
CheckSystem.dll
CheckDisk.dll
NSClientListener.dll
SysTray.dll
CheckEventLog.dll
CheckHelpers.dll

Another requirement is to instruct the NSClient++ agent to allow access from the Linux Nagios server, again by adding it to the allowed_hosts config variable:

allowed_hosts=192.168.1.1
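Putting the two changes together, the relevant parts of NSC.ini would look roughly like the snippet below (section names as found in the NSClient++ 0.3.x NSC.ini I worked with; double-check against your own file, as the layout differs between NSClient++ versions):

[modules]
FileLogger.dll
CheckSystem.dll
CheckDisk.dll
NSClientListener.dll
SysTray.dll
CheckEventLog.dll
CheckHelpers.dll

[Settings]
; the Linux Nagios server which is allowed to query this agent
allowed_hosts=192.168.1.1

[NSClient]
; port the check_nt plugin connects to
port=12489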

In my case Nagios runs on Debian Squeeze 6 and has the IP address 192.168.1.1.
To test that the installed Windows NSClient++ agents are properly set up, a simple telnet connection from the Linux host is enough (shown in step 6 below).

5. Create the necessary configuration for the Nagios Linux server to include all the Windows hosts which will be monitored

There is a windows.cfg template file located in /usr/share/doc/nagios3-common/examples/template-object/windows.cfg on Debian.

The file is a good starting point for creating a conf file understood by Nagios and used to periodically refresh information about the status of the Windows hosts.

Thus it’s a good idea to copy the file to nagios3 config directory:

debian:~# mkdir /etc/nagios3/objects
debian:~# cp -rpf /usr/share/doc/nagios3-common/examples/template-object/windows.cfg /etc/nagios3/objects/windows.cfg

A sample windows.cfg (which works fine for me) that monitors a couple of Windows nodes running the MS-SQL and IIS services and makes sure those services are up and running is:

define host{
use windows-server ; Inherit default values from a template
host_name Windows1 ; The name we're giving to this host
alias Iready Server ; A longer name associated with the host
address 192.168.1.4 ; IP address of the host
}
define host{
use windows-server ; Inherit default values from a template
host_name Windows2 ; The name we're giving to this host
alias Iready Server ; A longer name associated with the host
address 192.168.1.5 ; IP address of the host
}
define hostgroup{
hostgroup_name windows-servers ; The name of the hostgroup
alias Windows Servers ; Long name of the group
}
define hostgroup{
hostgroup_name IIS
alias IIS Servers
members Windows1,Windows2
}
define hostgroup{
hostgroup_name MSSQL
alias MSSQL Servers
members Windows1,Windows2
}
define service{
use generic-service
host_name Windows1
service_description NSClient++ Version
check_command check_nt!CLIENTVERSION
}
define service{
use generic-service
host_name Windows1
service_description Uptime
check_command check_nt!UPTIME
}
define service{
use generic-service
host_name Windows1
service_description CPU Load
check_command check_nt!CPULOAD!-l 5,80,90
}
define service{
use generic-service
host_name Windows1
service_description Memory Usage
check_command check_nt!MEMUSE!-w 80 -c 90
}
define service{
use generic-service
host_name Windows1
service_description C: Drive Space
check_command check_nt!USEDDISKSPACE!-l c -w 80 -c 90
}
define service{
use generic-service
host_name Windows1
service_description W3SVC
check_command check_nt!SERVICESTATE!-d SHOWALL -l W3SVC
}
define service{
use generic-service
host_name Windows1
service_description Explorer
check_command check_nt!PROCSTATE!-d SHOWALL -l Explorer.exe
}
define service{
use generic-service
host_name Windows2
service_description NSClient++ Version
check_command check_nt!CLIENTVERSION
}
define service{
use generic-service
host_name Windows2
service_description Uptime
check_command check_nt!UPTIME
}
define service{
use generic-service
host_name Windows2
service_description CPU Load
check_command check_nt!CPULOAD!-l 5,80,90
}
define service{
use generic-service
host_name Windows2
service_description Memory Usage
check_command check_nt!MEMUSE!-w 80 -c 90
}
define service{
use generic-service
host_name Windows2
service_description C: Drive Space
check_command check_nt!USEDDISKSPACE!-l c -w 80 -c 90
}
define service{
use generic-service
host_name Windows2
service_description W3SVC
check_command check_nt!SERVICESTATE!-d SHOWALL -l W3SVC
}
define service{
use generic-service
host_name Windows2
service_description Explorer
check_command check_nt!PROCSTATE!-d SHOWALL -l Explorer.exe
}
define service{
use generic-service
host_name Windows1
service_description SQL port Check
check_command check_tcp!1433
}
define service{
use generic-service
host_name Windows2
service_description SQL port Check
check_command check_tcp!1433
}

The above config can easily be extended for more hosts, or if necessary easily set up to track more services in the Nagios web frontend.

6. Test if connectivity to the nsclient++ agent port is available from the Linux server

debian:~# telnet 192.168.58.6 12489
Trying 192.168.58.6...
Connected to 192.168.58.6.
Escape character is '^]'.
asd
ERROR: Invalid password.

Another good idea is to launch the NSClient++ system tray application on the Windows host, e.g.:

Start, All Programs, NSClient++, Start NSClient++ (system tray).

Test the Nagios configuration from the Linux host running the nagios and nrpe daemons to check whether check_nt can successfully authenticate and retrieve data generated by NSClient++ on the Windows host:

debian:~# /usr/lib/nagios/plugins/check_nt -H 192.168.1.5 -p 12489 -v CPULOAD -w 80 -c 90 -l 5,80,90,10,80,90

If everything is okay and the remote Windows system 192.168.1.5 has a properly configured and running NSClient++, the above command should return output like:

CPU Load 1% (5 min average) 1% (10 min average) | '5 min avg Load'=1%;80;90;0;100 '10 min avg Load'=1%;80;90;0;100

In case the command returns:

could not fetch information from server

instead, this means there is probably some kind of problem with the authentication or handshake between the Linux host's check_nt and the NSClient++ running on the Windows server on port 12489.

This is sometimes caused by a misconfigured NSC.ini file; on other occasions this error is caused by a misconfigured Windows Firewall, or because NSClient++ is not running as the Administrator user.

By the way, an important note about Windows 2008 R2: if NSClient++ runs there, it's absolutely required to log in as the Windows Administrator and run NSClient++ /start; if it's started through "Run As Administrator" with another admin-privileged user, the aforementioned error might appear, so be careful.
I've experienced this error myself, and it took me about 40 minutes to find out that I have to run it directly with the Administrator user after logging in as Administrator.

7. Create the Nagios web interface Apache configuration

The Nagios Debian package ships with a config which is suitable to be used directly:

debian:~# cp -rpf /usr/share/doc/nagios3-common/examples/apache2.conf /etc/apache2/sites-available/nagios
debian:~# ln -sf /etc/apache2/sites-available/nagios /etc/apache2/sites-enabled/nagios

/etc/apache2/sites-available/nagios can easily be configured to work in a VirtualHost; to do so, the copied file needs to be wrapped inside a VirtualHost directive. For that, put at the beginning of the file:

<VirtualHost *:80>

and in the end of the file:

</VirtualHost>
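So the wrapped /etc/apache2/sites-available/nagios ends up looking roughly like this (the ServerName value is just an example; the middle part is the unchanged content shipped by the nagios3 package):

<VirtualHost *:80>
    ServerName nagios.your-domain.com
    # ... original contents of /usr/share/doc/nagios3-common/examples/apache2.conf ...
</VirtualHost>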

8. Restart nagios server and Apache for the new settings to take effect

debian:~# /etc/init.d/apache2 restart
...
debian:~# /etc/init.d/nagios3 restart

If some custom configuration for tracking services running on the Debian Linux Nagios host itself needs to be made, it's also helpful to check /etc/nagios3/conf.d.

Well, that's mostly what I had to do to make the Nagios3 server keep track of a small Windows network on Debian GNU/Linux Squeeze 6; I hope this small article helps. Cheers 😉