Posts Tagged ‘client’

Configure rsyslog buffering on Linux to avoid message loss to a Central Logging server

Wednesday, January 13th, 2021

Reading Time: 2 minutes


1. Rsyslog Buffering

One of the best practices in log management is to send syslog to a central server. However, a logging system should be capable of avoiding message loss in situations where the server is not reachable. To do so, unsent data needs to be buffered at the client while the central server is unavailable. You might have recently noticed that many servers forwarding log messages to a central server do not have buffering functionality activated. Thus I strongly advise you to have a look at this documentation to learn how to check your configuration:

Rsyslog buffering with TCP/UDP configured

In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that "the action is not ready", which means the remote server is offline. This can be detected with plain TCP syslog and RELP, but not with UDP. So you need to use either of the two. In this howto, we use plain TCP syslog.

– Version requirement

Please note that we are using rsyslog-specific features. They are required on the client, but not on the server. So the client system must run rsyslog (at least version 3.12.0), while on the server another syslogd may be running, as long as it supports plain TCP syslog.

How To Setup rsyslog buffering on Linux

First, you need to create a working directory for rsyslog. This is where it stores its queue files (should the need arise). You may use any location on your local system. Next, instruct rsyslog to use a disk queue and then configure your action. There is nothing else to do. With the following simple config file, you forward anything you receive to a remote server and have buffering applied automatically when it goes down. This must be done on the client machine.

# Example:
$ModLoad imuxsock             # local message reception
$WorkDirectory /rsyslog/work  # default location for work (spool) files
$ActionQueueType LinkedList   # use asynchronous processing
$ActionQueueFileName srvrfwd  # set file name, also enables disk mode
$ActionResumeRetryCount -1    # infinite retries on insert failure
$ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
*.*       @@server:port

Fix FTP active connection issues “Cannot create a data connection: No route to host” on ProFTPD Linux dedicated server

Tuesday, October 1st, 2019

Reading Time: 5 minutes


Earlier I've blogged about a problem that prevented Active mode FTP connections on CentOS.
As I'm working for a client, building a brand new dedicated server purchased from the Contabo Dedi Host provider on a freshly installed Debian 10 GNU / Linux, I had to configure a new FTP server. For some time now I prefer to use ProFTPD instead of VSFTPD because in my opinion it is more lightweight and hence a better choice for small UNIX server setups. During this, once again I've encountered the same problem of ACTIVE FTP not working from the FTP server to the FTP client host machine. But before shortly explaining the fix, I find it worthy to briefly explain what an ACTIVE / PASSIVE FTP connection is.


1. What is ACTIVE / PASSIVE FTP connection?

In active mode, the client specifies which client-side port the data channel should be opened on, and the server starts the connection. In other words, the default FTP client communication, for historical reasons, is ACTIVE MODE. E.g.
the client, once connected to the server, tells the server to open an extra port or ports locally, via which the overall FTP data transfer will occur. In the early days of networking, when the FTP protocol was developed, security was not such a big concern and networks usually did not have firewalls at all; the FTP DATA transfer host machine was running just a single FTP server and nothing more. In those early days, when FTP was not even used over the Internet and FTP DATA transfers happened on local networks, this was not a problem at all.

In passive mode, the server decides which server-side port the client should connect to. Then the client starts the connection to the specified port.

But with the ever increasing complexity of the Internet / networks and the ever tightening firewalls, due to viruses and worms that try to own and exploit networks creating unnecessary bulk loads, this has changed …


2. Installing and configuring the ProFTPD server public ServerName

I've installed the server with the common cmd:


apt --yes install proftpd


And the only configuration value changed in the default configuration file /etc/proftpd/proftpd.conf was:
ServerName          "Debian"

I do this in new FTP setups for the logical reason of preventing the multiple FTP Vulnerability Scan script kiddie crawlers from learning the exact OS version of the server, so this was changed to:


ServerName "MyServerHostname"


Though this is the much-criticized security through obscurity approach, in practice doing so here is a good habit that costs nothing.

3. Create iptables firewall rules to allow ACTIVE FTP mode

But anyways, the next step was to configure the firewall to allow communication on TCP ports 21 and 20 from the incoming source port range 1024:65535 (to enable ACTIVE FTP), with iptables INPUT and OUTPUT chain rules like this:


iptables -A INPUT -p tcp --sport 1024:65535 -d 0/0 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 0/0 --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 20 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT
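Note that plain iptables rules live only in memory; assuming the Debian iptables-persistent package is available, a quick sketch to keep them across reboots:

apt --yes install iptables-persistent
iptables-save > /etc/iptables/rules.v4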

Once the firewall permits FTP Active / Passive connections and the FTP server is listening, to test that everything is properly configured, check the iptables rules and the FTP listener:

/sbin/iptables -L INPUT |grep ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp-data state NEW,ESTABLISHED
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ftp-data

netstat -l | grep "ftp"
tcp6       0      0 [::]:ftp                [::]:*                  LISTEN    


4. Loading the nf_nat_ftp module and net.netfilter.nf_conntrack_helper (for backward compatibility)

The next step of course was to add the necessary modules – nf_nat_ftp and nf_conntrack_ftp – that make FTP properly forward ports with the respective firewall states on any of the above source ports usually allowed by firewalls. Note that the given port range 1024:65535 might be too liberal for paranoid sysadmins; if ports are not otherwise filtered and you are a security freak, you can use a smaller range such as 60000:65535.


Here it is worth saying that sysadmins who haven't recently had the task of configuring a new (unencrypted) File Transfer Server – as today Secure FTP is almost always used for file transfers, for the sake of security – might be puzzled to find out that the old Linux kernel module ip_conntrack_ftp, which was the standard module used to make FTP Active connections work, is nowadays substituted by nf_nat_ftp and nf_conntrack_ftp.

To make the 2 modules permanently loaded on next boot on Debian Linux, they have to be added to /etc/modules.

Here is how a sample /etc/modules that loads the modules on next system boot looks like:

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
nf_nat_ftp
nf_conntrack_ftp

Next to say is that in newer Linux kernels 3.x / 4.x / 5.x the nf_nat_ftp and nf_conntrack_ftp behaviour changed, so simply loading the modules will not work. If you test it with some FTP client (I used gFTP / ncftp from my Linux desktop), you are about to get FTP No route to host errors like:


Cannot create a data connection: No route to host



Sometimes, instead of the No route to host error, the error the FTP client returns is:


227 entering passive mode FTP connect connection timed out error

Hence, to make the nf_nat_ftp module work on newer Linux kernels, you have to enable a backwards compatibility kernel variable:





echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper


To make it permanent, if you have enabled the legacy single-file boot place /etc/rc.local as I do on servers (for how to enable rc.local on newer Linuxes check here), add the echo command there,

or alternatively set it via sysctl:

sysctl -w net.netfilter.nf_conntrack_helper=1

And to make the change permanent (e.g. loaded on next boot):

echo 'net.netfilter.nf_conntrack_helper=1' >> /etc/sysctl.conf
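To verify the helper is actually enabled, read the value back; it should print 1:

sysctl net.netfilter.nf_conntrack_helper
net.netfilter.nf_conntrack_helper = 1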


5. Enable PassivePorts in ProFTPD or PassivePortRange in PureFTPD

Last but not least, open /etc/proftpd/proftpd.conf, find the PassivePorts config value (commented out by default) and beside it add the following line:


PassivePorts 60000 65534


Just for information, if instead of ProFTPD you experience the error on PureFTPD, the configuration value to set in /etc/pure-ftpd.conf is:

PassivePortRange 30000 35000
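Whichever daemon you run, the chosen passive range must also be open in the firewall; a hedged iptables example matching the ProFTPD range configured above:

iptables -A INPUT -p tcp --dport 60000:65534 -m state --state NEW,ESTABLISHED -j ACCEPT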

That's all folks. Fire up ncftp / lftp / FileZilla or whatever FTP client you prefer and test it: the FTP client should be able to talk as expected to the remote server in ACTIVE FTP mode, the automatic fallback to passive mode will not be triggered anymore, and you will no longer get strange errors and connection failures in FTP clients such as gFTP.

Cheers 🙂

Export / Import PuTTY Tunnels SSH Sessions from one to another Windows machine howto

Thursday, January 31st, 2019

Reading Time: 3 minutes


As I started on a new job position – Linux Architect – in November 2018 at Itelligence AG as a contractor (External Service) – a great German company which hires the best IT specialists out there and offers flexible time schedules for employees doing various very cool advanced IT operations and strategic advancement of SAP's cloud-used technology and services for SAP SE – SAP S4HANA and HEC (HANA Enterprise Cloud) – I was given as work hardware a shiny Lenovo Thinkpad 500 laptop with Windows 10 OS (SAP pre-installed), and I needed to make some SSH tunnels to machines (Hop Station / Jump hosts). After some experimenting with MobaXterm Free (Personal Edition 11.0), the presumable tunnel limitations of the free client, as well as my laziness to re-add the multiple SSH tunnels to different ssh / rdp / vnc etc. servers, I finally decided to just copy all the tunnels from a colleague who runs PuTTY, and to again use the good old PuTTY – old school Winblows SSH terminal client – but just for creating the SSH tunnels, using MobaXterm for the rest, just like in the old times while still an employee at Hewlett Packard. For that reason I had to copy the tunnels from my dear German colleague Henry Beck (a good-hearted colleague who works in the field of Storage, dealing with NetApp filer clusters, QNAP etc.).

Till that moment I had no idea how copying saved SSH tunnel definitions is possible. A quick research showed this is done not with the PuTTY interface itself, but instead by dumping the Windows PuTTY stored registry records into a file, transferring that file to the PC where the tunnels need to be imported, and then loading it into the registry (either by double-clicking the registry file or by using the Windows registry editor command line interface reg). Here is how:

1. Export


Run cmd.exe (note: the below command requires an elevated Run as Administrator prompt):

Only sessions:

regedit /e "%USERPROFILE%\Desktop\putty-sessions.reg" HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions

All settings:

regedit /e "%USERPROFILE%\Desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham


If you have PowerShell installed on the machine, to dump:

Only sessions:


reg export HKCU\Software\SimonTatham\PuTTY\Sessions ([Environment]::GetFolderPath("Desktop") + "\putty-sessions.reg")

All settings:

reg export HKCU\Software\SimonTatham ([Environment]::GetFolderPath("Desktop") + "\putty.reg")

2. Import

Double-click on the exported putty-sessions.reg (or putty.reg) file and accept the import.


Alternative ways:



These require an elevated command prompt:

regedit /i putty-sessions.reg
regedit /i putty.reg


reg import putty-sessions.reg
reg import putty.reg
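To confirm the sessions actually landed in the registry of the target machine, you can query the key with the standard Windows reg tool:

reg query HKCU\Software\SimonTatham\PuTTY\Sessions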

Below are some things to consider:

Note! Do not replace SimonTatham with your username.


Note! It will create a putty-sessions.reg (or putty.reg) file on the Desktop of the current user (for a different location, modify the path).


Note! It will not export your related (old system stored) SSH keys.

What to expect next?


The result is that in PuTTY you will have the tunnel sessions loadable when you launch the (portable or installed) PuTTY version.
Press the Load button over the required saved tunnel in the list, and there you go: under


Connection SSH -> Tunnels 


you will see all the copied tunnels.


Installing the phpbb forum on Debian (Squeeze/Sid) Linux

Saturday, September 11th, 2010

Reading Time: 4 minutes


I've just installed the phpbb forum on a Debian Linux server because we needed a good, quick-to-install communication medium to improve our internal communication in a student project on Strategic HR we're developing right now at Arnhem Business School.

Here are the exact steps I followed to have it properly installed:

1. Install the phpbb3 debian package
This was pretty straightforward:

debian:~# apt-get install phpbb3

At this point of the installation I faced a dpkg-reconfigure phpbb deb package configuration issue:
I was prompted to pass in the credentials for my MySQL password right after I selected MySQL as my preferred database backend.
I fed in my MySQL root password as well as my preferred forum database name; however the database installation failed because, somehow, the configuration procedure tried to connect to my MySQL database with the htcheck user.
I guess this has to be a bug in the package itself, or something from a previous installation misconfigured the way the Debian database backend configuration operates.
My assumption is that the culprit was my previously installed htcheck package, or something I did right after installing the htcheck and htcheck-php packages.

After the package configuration failed, the package still had a status of properly installed when I reviewed it with dpkg.
I thought about trying to manually reconfigure it using the dpkg-reconfigure Debian command and gave it a try like that:

debian:~# dpkg-reconfigure phpbb3

This time, along with the other fields I had to fill in the ncurses interface, I was prompted for a username before the password prompt appeared.
Logically I tried to fill in root, as it's my MySQL user with global privileges.
However that didn't help at all, and again the configuration tried to send the credentials with user htcheck to my MySQL database server.
To deal with the situation I had to approach it the good old manual way.

2. Manually prepare / create the required phpbb forum database

To complete that, I connected to the MySQL server with the mysql client and created the proper database like so:

debian:~# mysql -u root -p
CREATE database phpbb3forum;

3. Use phpmyadmin or the mysql client command line to create a new user for the phpbb forum

Here, since adding the user via phpMyAdmin was way easier, I decided to go that route; using the mysql cli is also an option (a sketch of the cli way follows right below).
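For completeness, here is a hedged sketch of the same thing done from the mysql cli (the user name phpbb3 and the password are example values, adjust to yours):

debian:~# mysql -u root -p
mysql> CREATE USER 'phpbb3'@'localhost' IDENTIFIED BY 'your-strong-password';
mysql> GRANT ALL PRIVILEGES ON phpbb3forum.* TO 'phpbb3'@'localhost';
mysql> FLUSH PRIVILEGES;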

From phpMyAdmin it's pretty easy to add a new user and grant privileges on a certain database; to do so navigate to:

Privileges -> Add a new user

Now type your User name, Host, Password and Re-type password; for Host you have to choose Local from the drop down menu.

Leave the Database for user field empty, as we have already created our desired database in step 2 of this article.

Now press the "Go" button and the user will get created.

Next, choose the Privileges menu right at the bottom of the page once again, and select, through the checkbox, the username you have just created; let's say the previously created user is phpbb3.

Go to Action (there is a picture with a man and a pencil on the right side of this button).

Scroll down to the page part saying Database-specific privileges, and in the field Add privileges on the following database: fill in your previously created database name; in our case it's phpbb3forum,

and then press the "Go" button once again.
A page will appear where you will have to select the exact privileges you would like to grant on the specific selected database.
For simplicity, just check all the checkboxes to grant as many privileges to your database user as you can.
Then again you will have to press the "Go" button, and there you go: you should have a username and database ready to go with your new phpbb forum.

4. Create a virtualhost if you would like to have the forum as a subdomain or into a separate domain

If you decide to have the forum on a separate sub-domain or domain as I did, you will have to add some kind of VirtualHost, either into your Apache configuration /etc/apache2/apache2.conf or into where the virtual hosts officially live on Debian Linux, /etc/apache2/sites-available.
I've personally created a new file, for instance /etc/apache2/sites-available/

Here is an example content of the new Virtualhost:

<VirtualHost *>

ServerAdmin admin@yourdomain.com
ServerName yourdomain.com

# Indexes + Directory Root.
DirectoryIndex index.php index.php5 index.htm index.html index.cgi index.phtml index.jsp index.asp

DocumentRoot /usr/share/phpbb3/www/

# Logfiles
ErrorLog /var/log/apache2/yourdomain/error.log
CustomLog /var/log/apache2/yourdomain/access.log combined
# CustomLog /dev/null combined

<Directory /usr/share/phpbb3/www/>
Options FollowSymLinks MultiViews -Includes ExecCGI
AllowOverride All
Order allow,deny
allow from all
</Directory>
</VirtualHost>

In the above VirtualHost just change the values for ServerAdmin, ServerName, DocumentRoot, ErrorLog, CustomLog and the Directory declaration to adjust it to your situation.
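On Debian, assuming you saved the VirtualHost under /etc/apache2/sites-available/ (the file name below is a hypothetical example), you would normally also enable the new site with a2ensite before restarting Apache in the next step:

debian:~# a2ensite yourdomain.com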

5. Restart the Apache webserver for the new Virtualhost to take effect

debian:~# /etc/init.d/apache2 restart

Now accessing your domain should display the installed phpbb3 forum.
The default username and password you can use straight away for your forum are:

username: admin
password: admin

So far so good: by now you have the PHPBB3 forum properly installed and running. However, if you try to Register a new user in the forum, you will notice that it's impossible because of a terribly ugly message reading:

Sorry but this board is currently unavailable.

I spent a few minutes online scraping through the forums before I understood what I had to do to stop that annoying message from appearing and allow new users to register in the phpbb forum.

The solution came naturally and was a setting that had to be changed with the forum admin account. Thus, log in as admin and look at the bottom of the page: below the text reading Powered by phpBB © 2000, 2002, 2005, 2007 phpBB Group you will notice a link to the Administration Control Panel.
Just press there; a whole bunch of menus will appear on the screen allowing you to do numerous things. However, what you will have to do is go to
Board Settings -> Disable Board

and change the radio button there to say No

That's all; now your forum is ready to go, your users can freely register, and if the server where the forum is installed already runs a mail server, they will receive emails with registration data concerning their new registrations in your new phpbb forum.
Cheers and Enjoy your new shiny phpbb Forum 🙂

Block Web server overloading Bad Crawler Bots and Search Engine Spiders with .htaccess rules

Monday, September 18th, 2017

Reading Time: 6 minutes


In the last post, I've talked about the problem of Search Index Crawler Robots aggressively crawling websites and how to stop them (the article is here), explaining how to raise delays between bot URL requests to a website and how to completely prohibit some bots from crawling with robots.txt.

As explained in that article, the consequence of too many badly written or aggressively behaving spiders is "server stoning" and therefore degraded web server performance, or even a short-term Denial of Service attack, depending on how well the initial server scaling was done.

The bots we want to filter are not to be confused with the legitimate bots that drive real traffic to your website. Just for information,

the 10 most popular WebCrawler bots as of the time of writing are:

1. GoogleBot (the Google crawler bots; funnily, the bots become less active on Saturdays and Sundays :))

2. BingBot (Microsoft's Bing search engine crawler bots)

3. SlurpBot (also famous as Yahoo! Slurp)

4. DuckDuckBot (the DuckDuckGo search engine crawler bot)

5. Baiduspider (the most famous Chinese search engine, used as a substitute for Google in China)

6. YandexBot (Russian Yandex search engine crawler bots, used in Russia as a substitute for Google)

7. Sogou Spider (a leading Chinese search engine launched in 2004)

8. Exabot (a French search engine launched in 2000; the crawler for the ExaLead search engine)

9. FaceBot (Facebook External Hit; this crawler crawls a certain webpage only once a user shares or pastes a link with video, music, blog or whatever in a chat to another user)

10. Alexa Crawler (ia_archiver is the web crawler for Amazon's Alexa Internet Rankings; Alexa is a great site to evaluate the approximate page popularity on the internet, and the Alexa SiteInfo page has historically been the Swiss Army knife for anyone wanting to quickly evaluate a webpage's approximate ranking compared to other pages)

The above legitimate bots are known to follow most if not all of the W3C – World Wide Web Consortium (W3.Org) – standards, and therefore they respect the allowance or restriction commands for a single site as given in robots.txt. Unfortunately, many of the so-called Bad Bots or mirroring scripts that are burning your web server CPU and memory, mentioned in the previous article, follow the /robots.txt prescriptions only partially or not at all.

Hence, for the bots that don't respect robots.txt, the only way to get rid of most of the web spiders that are just loading your bandwidth and server hardware is to filter / block them using Apache's mod_rewrite through a .htaccess file.




1. Create a .htaccess file with Bad Bot filter rules

Create, if not already existing, in the DocumentRoot of your website a .htaccess file with whatever text editor, or create it on your Windows / Mac OS desktop and transfer it via FTP / SecureFTP to the server.

I prefer to do it directly on server with vim (text editor)



vim /var/www/sites/


RewriteEngine On

IndexIgnore .htaccess */.??* *~ *# */HEADER* */README* */_vti*

SetEnvIfNoCase User-Agent "^Black Hole” bad_bot
SetEnvIfNoCase User-Agent "^Titan bad_bot
SetEnvIfNoCase User-Agent "^WebStripper" bad_bot
SetEnvIfNoCase User-Agent "^NetMechanic" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker" bad_bot
SetEnvIfNoCase User-Agent "^EmailCollector" bad_bot
SetEnvIfNoCase User-Agent "^EmailSiphon" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit" bad_bot
SetEnvIfNoCase User-Agent "^EmailWolf" bad_bot
SetEnvIfNoCase User-Agent "^ExtractorPro" bad_bot
SetEnvIfNoCase User-Agent "^CopyRightCheck" bad_bot
SetEnvIfNoCase User-Agent "^Crescent" bad_bot
SetEnvIfNoCase User-Agent "^Wget" bad_bot
SetEnvIfNoCase User-Agent "^SiteSnagger" bad_bot
SetEnvIfNoCase User-Agent "^ProWebWalker" bad_bot
SetEnvIfNoCase User-Agent "^CheeseBot" bad_bot
SetEnvIfNoCase User-Agent "^Teleport" bad_bot
SetEnvIfNoCase User-Agent "^TeleportPro" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc" bad_bot
SetEnvIfNoCase User-Agent "^Telesoft" bad_bot
SetEnvIfNoCase User-Agent "^Website Quester" bad_bot
SetEnvIfNoCase User-Agent "^WebZip" bad_bot
SetEnvIfNoCase User-Agent "^moget/2.1" bad_bot
SetEnvIfNoCase User-Agent "^WebZip/4.0" bad_bot
SetEnvIfNoCase User-Agent "^WebSauger" bad_bot
SetEnvIfNoCase User-Agent "^WebCopier" bad_bot
SetEnvIfNoCase User-Agent "^NetAnts" bad_bot
SetEnvIfNoCase User-Agent "^Mister PiX" bad_bot
SetEnvIfNoCase User-Agent "^WebAuto" bad_bot
SetEnvIfNoCase User-Agent "^TheNomad" bad_bot
SetEnvIfNoCase User-Agent "^WWW-Collector-E" bad_bot
SetEnvIfNoCase User-Agent "^RMA" bad_bot
SetEnvIfNoCase User-Agent "^libWeb/clsHTTP" bad_bot
SetEnvIfNoCase User-Agent "^asterias" bad_bot
SetEnvIfNoCase User-Agent "^httplib" bad_bot
SetEnvIfNoCase User-Agent "^turingos" bad_bot
SetEnvIfNoCase User-Agent "^spanner" bad_bot
SetEnvIfNoCase User-Agent "^InfoNaviRobot" bad_bot
SetEnvIfNoCase User-Agent "^Harvest/1.5" bad_bot
SetEnvIfNoCase User-Agent "Bullseye/1.0" bad_bot
SetEnvIfNoCase User-Agent "^Mozilla/4.0 (compatible; BullsEye; Windows 95)" bad_bot
SetEnvIfNoCase User-Agent "^Crescent Internet ToolPak HTTP OLE Control v.1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPickerSE/1.0" bad_bot
SetEnvIfNoCase User-Agent "^CherryPicker /1.0" bad_bot
SetEnvIfNoCase User-Agent "^WebBandit/3.50" bad_bot
SetEnvIfNoCase User-Agent "^NICErsPRO" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 5.01.4511" bad_bot
SetEnvIfNoCase User-Agent "^DittoSpyder" bad_bot
SetEnvIfNoCase User-Agent "^Foobot" bad_bot
SetEnvIfNoCase User-Agent "^WebmasterWorldForumBot" bad_bot
SetEnvIfNoCase User-Agent "^SpankBot" bad_bot
SetEnvIfNoCase User-Agent "^BotALot" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial/1.34" bad_bot
SetEnvIfNoCase User-Agent "^lwp-trivial" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.6" bad_bot
SetEnvIfNoCase User-Agent "^BunnySlippers" bad_bot
SetEnvIfNoCase User-Agent "^Microsoft URL Control – 6.00.8169" bad_bot
SetEnvIfNoCase User-Agent "^URLy Warning" bad_bot
SetEnvIfNoCase User-Agent "^Wget/1.5.3" bad_bot
SetEnvIfNoCase User-Agent "^LinkWalker" bad_bot
SetEnvIfNoCase User-Agent "^cosmos" bad_bot
SetEnvIfNoCase User-Agent "^moget" bad_bot
SetEnvIfNoCase User-Agent "^hloader" bad_bot
SetEnvIfNoCase User-Agent "^humanlinks" bad_bot
SetEnvIfNoCase User-Agent "^LinkextractorPro" bad_bot
SetEnvIfNoCase User-Agent "^Offline Explorer" bad_bot
SetEnvIfNoCase User-Agent "^Mata Hari" bad_bot
SetEnvIfNoCase User-Agent "^LexiBot" bad_bot
SetEnvIfNoCase User-Agent "^Web Image Collector" bad_bot
SetEnvIfNoCase User-Agent "^The Intraformant" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^True_Robot" bad_bot
SetEnvIfNoCase User-Agent "^BlowFish/1.0" bad_bot
SetEnvIfNoCase User-Agent "^JennyBot" bad_bot
SetEnvIfNoCase User-Agent "^MIIxpc/4.2" bad_bot
SetEnvIfNoCase User-Agent "^BuiltBotTough" bad_bot
SetEnvIfNoCase User-Agent "^ProPowerBot/2.14" bad_bot
SetEnvIfNoCase User-Agent "^BackDoorBot/1.0" bad_bot
SetEnvIfNoCase User-Agent "^toCrawl/UrlDispatcher" bad_bot
SetEnvIfNoCase User-Agent "^WebEnhancer" bad_bot
SetEnvIfNoCase User-Agent "^TightTwatBot" bad_bot
SetEnvIfNoCase User-Agent "^suzuran" bad_bot
SetEnvIfNoCase User-Agent "^VCI WebViewer VCI WebViewer Win32" bad_bot
SetEnvIfNoCase User-Agent "^VCI" bad_bot
SetEnvIfNoCase User-Agent "^Szukacz/1.4" bad_bot
SetEnvIfNoCase User-Agent "^QueryN Metasearch" bad_bot
SetEnvIfNoCase User-Agent "^Openfind data gathere" bad_bot
SetEnvIfNoCase User-Agent "^Openfind" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s Link Sleuth 1.1c" bad_bot
SetEnvIfNoCase User-Agent "^Xenu’s" bad_bot
SetEnvIfNoCase User-Agent "^Zeus" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey Bait & Tackle/v1.01" bad_bot
SetEnvIfNoCase User-Agent "^RepoMonkey" bad_bot
SetEnvIfNoCase User-Agent "^Zeus 32297 Webster Pro V2.9 Win32" bad_bot
SetEnvIfNoCase User-Agent "^Webster Pro" bad_bot
SetEnvIfNoCase User-Agent "^EroCrawler" bad_bot
SetEnvIfNoCase User-Agent "^LinkScan/8.1a Unix" bad_bot
SetEnvIfNoCase User-Agent "^Keyword Density/0.9" bad_bot
SetEnvIfNoCase User-Agent "^Kenjin Spider" bad_bot
SetEnvIfNoCase User-Agent "^Cegbfeieh" bad_bot


<Limit GET POST>
order allow,deny
allow from all
Deny from env=bad_bot
</Limit>


The above bad bot prohibition rules include the RewriteEngine On directive; however, for many websites this directive is enabled directly in the VirtualHost section for the domain(s). If that is your case, you can also remove RewriteEngine On from .htaccess and the bad bot prohibition rules will still continue to work.
The above rules are also perfectly suitable for WordPress-based websites / blogs in case you need to filter out obstructive spiders, even though the rules will work on any website domain with mod_rewrite enabled.

Once you have implemented the above rules, you will not need to restart Apache, as .htaccess is read dynamically on each client request to the web server.

2. Testing .htaccess Bad Bots Filtering Works as Expected

In order to test that the new bad bot filtering configuration is working properly, there is a manual, somewhat more complicated way with lynx (a text browser), assuming you have shell access to a Linux / BSD / *nix computer, or run your own *NIX server / desktop.

Here is how:


lynx -useragent="Mozilla/5.0 (compatible;; +" -head -dump



Note that lynx will provide a warning such as:

Warning: User-Agent string does not contain "Lynx" or "L_y_n_x"!

Just ignore it and press enter to continue.

Two other use cases with lynx that I have historically used heavily are to pretend with Lynx that you're GoogleBot, in order to see how Google actually sees your website:

  • Pretend with Lynx You're GoogleBot


lynx -useragent="Mozilla/5.0 (compatible; Googlebot/2.1; +" -head -dump



  • How to Pretend with Lynx Browser You are GoogleBot-Mobile


lynx -useragent="Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_1 like Mac OS X; en-us) AppleWebKit/532.9 (KHTML, like Gecko) Version/4.0.5 Mobile/8B117 Safari/6531.22.7 (compatible; Googlebot-Mobile/2.1; +" -head -dump


Or, for the lazy ones that don't have Linux / *nix at their disposal, you can use the WannaBrowser website.

WannaBrowser is a web-based browser emulator which gives you the ability to change the User-Agent on each website request, so just set your User-Agent to any bot we just filtered, for example set User-Agent to CheeseBot.

Once the earlier added .htaccess rules detect your browser client coming in with a prohibited browser agent, they will immediately filter you out and you'll be unable to access the website, with a message like:

HTTP/1.1 403 Forbidden


Just as I've talked a lot about index bots, I think it is worthy to also mention three great websites that can give you a lot of up-to-date information on exact spiders' returned user-agents, commonly known bot traits, as well as a current updated list of the bad bots etc.

Bot and browser resources with information on user-agents, bad bots and odd crawler and bot specifics:



An updated list of robot user-agents (crawler-user-agents) is also available on GitHub here, regularly updated by Caia Almeido.

There are also third-party plugins (modules) available for website platforms like WordPress / Joomla / Typo3 etc.

Besides those listed on these websites, as well as the known bad and good bots, there are perhaps hundreds of others that might end up crawling your website and that might or might not need to be filtered. Therefore, before proceeding with any filtering steps, it is generally a good idea to monitor your HTTPD access.log / error.log, as if you happen to mistakenly filter the wrong bot this might cause website indexing problems.

Hope this article gave you some valuable information. Enjoy! 🙂


How to use wget and curl via HTTP Proxy server / How to set an HTTPS proxy server in a bash shell on Linux

Wednesday, January 27th, 2016

Reading Time: 2 minutes


I've been working a bit on a client's automation; the task is to automate the process of installing Apache / Tomcat / JBoss and Java servers, so my colleagues and I don't waste too much time on trivial things. To complete that, I've created a small repository on an Apache server with WebDAV, holding major versions of each general branch of application servers and Javas.
In order to access the remote URL where the .tar.gz binary archives reside, I had to use a proxy server, as the client runs his whole network in a DMZ and all web port 80 and HTTPS port 443 traffic inside the client network has to pass through the network proxy.

Thus, to make the downloads possible via the shell script I was writing, I needed to set the script to use the HTTPS proxy server. I've used proxies before and was pretty aware of the http_proxy bash shell variable, thus I tried to use this one for the secured HTTPS proxy; however the connection kept failing, and thanks to colleague Anatoliy I realized the whole problem was that I was trying to use the http_proxy shell variable, which is only to be used for unencrypted proxy servers. In this case the proxy server speaks the SSL-encrypted HTTPS protocol, so the right variable to use instead is:


The https_proxy var syntax goes like this:

export https_proxy="$proxy_url"
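Here proxy_url is just an ordinary shell variable holding the proxy address; a minimal sketch, with the proxy host and port as placeholder values:

export proxy_url="https://your-proxy-host:8080"
export https_proxy="$proxy_url"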


Once the https_proxy variable is set, UNIX's wget non-interactive download tool starts using the proxy set in the proxy_url variable, and the downloads in my script work.

Hence, to make the different application version archive downloads work out, I've used wget like so:

 wget --no-check-certificate --timeout=5

For other BSD / HP-UX / SunOS UNIX servers, where the shell is different from the Bourne Again Shell (bash), the http_proxy and https_proxy variables might not work.
In such cases, if you have curl (command line tool) available instead of wget, you can script downloads with something like:

 curl -O -1 -k --proxy

The http_proxy and https_proxy variables also work perfectly on Mac OS X's default bash shell, so Mac users enjoy.
For some bash users in firewall-hardened environments like mine, it's handy to permanently set a proxy for all shell activities via the auto-login Linux / *unix scripts .bashrc or .bash_profile; that saves the inconvenience of always setting the proxy, so the lynx, links and elinks text console browsers also work anytime you login to a shell (example lines below).
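For example, a couple of lines you could append to ~/.bashrc (the proxy host and port are placeholders to adjust):

echo 'export http_proxy="http://your-proxy-host:8080"' >> ~/.bashrc
echo 'export https_proxy="https://your-proxy-host:8080"' >> ~/.bashrc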

Well that's it, my script enjoys proxying traffic 🙂

Fix MySQL ibdata file size – ibdata1 file growing too large, preventing ibdata1 from eating all your server disk space

Thursday, April 2nd, 2015

Reading Time: 4 minutes


If you're a webhosting company hosting dozens of various websites that use MySQL with the InnoDB engine as a backend, you've probably already experienced the annoying problem of MySQL's ibdata1 growing too large / eating all the server's disk space and triggering disk space low alerts. The ibdata1 file, taking up hundreds of gigabytes, is likely to be encountered on virtually all Linux distributions which run a default MySQL server <= MySQL 5.6 (with the default distro-shipped my.cnf). The excessive ibdata1 growth usually appears due to an application software bug in how it queries the database. In theory there is no limitation for ibdata1 except the maximum file size limit of the filesystem (and there is no limitation option set in my.cnf), meaning that under certain conditions ibdata1 growth over time can happily fill up your server LVM (storage) drive partitions.

Unfortunately there is no way to shrink the ibdata1 file, and the only known workaround (I found) is to set the innodb_file_per_table option in my.cnf, to force the MySQL server to create separate *.ibd files under datadir (a my.cnf variable) for each freshly created InnoDB table.

1. Checking size of ibdata1 file

On Debian / Ubuntu and other deb-based Linux servers, the datadir ibdata1 path is /var/lib/mysql/ibdata1:

server:~# du -hsc /var/lib/mysql/ibdata1
45G     /var/lib/mysql/ibdata1
45G     total

2. Checking info about Databases and Innodb storage Engine

server:~# mysql -u root -p
mysql> SHOW DATABASES;

| Database           |
| information_schema |
| bible              |
| blog               |
| blog-sezoni        |
| blogmonastery      |
| daniel             |
| ezmlm              |
| flash-games        |

The next step is to get an understanding of how many existing InnoDB tables are present within the database server:


mysql> SELECT COUNT(1) EngineCount,engine FROM information_schema.tables WHERE table_schema NOT IN ('information_schema','performance_schema','mysql') GROUP BY engine;
| EngineCount | engine |
|         131 | InnoDB |
|           5 | MEMORY |
|         584 | MyISAM |
3 rows in set (0.02 sec)

To get some more statistics related to InnoDB variables set on the SQL server:

mysqladmin -u root -p'Your-Server-Password' var | grep innodb

Here is also how to find which tables use the InnoDB engine:

mysql> SELECT table_schema, table_name
    -> FROM information_schema.tables
    -> WHERE engine = 'innodb';

| table_schema | table_name               |
| blog         | wp_blc_filters           |
| blog         | wp_blc_instances         |
| blog         | wp_blc_links             |
| blog         | wp_blc_synch             |
| blog         | wp_likes                 |
| blog         | wp_wpx_logs              |
| blog-sezoni  | wp_likes                 |
| icanga_web   | cronk                    |
| icanga_web   | cronk_category           |
| icanga_web   | cronk_category_cronk     |
| icanga_web   | cronk_principal_category |
| icanga_web   | cronk_principal_cronk    |

3. Check and Stop any Web / Mail / DNS service using MySQL

server:~# ps -efl |grep -E 'apache|nginx|dovecot|bind|radius|postfix'

The above cmd should return empty output (e.g. Apache / Nginx / Postfix / Radius / Dovecot / DNS etc. services are properly stopped on the server).

4. Create a backup dump of all MySQL tables with mysqldump

The next step is to create a full backup dump of all current MySQL databases (with mysqldump):

server:~# mysqldump --opt --allow-keywords --add-drop-table --all-databases --events -u root -p > dump.sql
server:~# du -hsc /root/dump.sql
940M    dump.sql
940M    total


If you have free space on an external backup server or remotely mounted attached storage (NFS or SAN), it is a good idea to make a full binary copy of the MySQL data (just in case something goes wrong with the above dump); copy the respective directory, depending on the Linux distro and the install location of the SQL binary files set (in my.cnf).
To check where the MySQL binary database data is stored (check in my.cnf):

server:~# grep -i datadir /etc/mysql/my.cnf
datadir         = /var/lib/mysql

If the server is CentOS / RHEL / Fedora RPM-based, substitute /etc/mysql/my.cnf with /etc/my.cnf in the above grep cmd line;

if you're on Debian / Ubuntu:

server:~# /etc/init.d/mysql stop
server:~# cp -rpfv /var/lib/mysql /root/mysql-data-backup

Once the above copy completes, DROP all databases except mysql and information_schema (which store the MySQL existing users / passwords, access grants and host permissions).

5. Drop All databases except mysql and information_schema

server:~# mysql -u root -p



DROP DATABASE wordpress;
DROP DATABASE micropcfreak;
DROP DATABASE statusnet;

          etc. etc.

ACHTUNG !!! DON'T execute! DROP database mysql; DROP database information_schema; !!! – because this might damage your user permissions to databases.

6. Stop the MySQL server and add innodb_file_per_table and a few more settings to prevent ibdata1 from growing infinitely in future

server:~# /etc/init.d/mysql stop

server:~# vim /etc/mysql/my.cnf
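The essential line to add under the [mysqld] section is innodb_file_per_table; a minimal sketch (any further InnoDB tuning values are workload-specific and deliberately left out here):

[mysqld]
innodb_file_per_table = 1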

Delete the files taking up too much space – ibdata1, ib_logfile0 and ib_logfile1:

server:~# cd /var/lib/mysql/
server:~#  rm -f ibdata1 ib_logfile0 ib_logfile1
server:~# /etc/init.d/mysql start
server:~# /etc/init.d/mysql stop
server:~# /etc/init.d/mysql start
server:~# ps ax |grep -i mysql


MySQL should now be running again with a freshly created small ibdata1, so the above ps command should show the running mysqld processes.

7. Re-Import previously dumped SQL databases with mysql cli client

server:~# cd /root/
server:~# mysql -u root -p < dump.sql

Hopefully the import should go fine, and if no errors are experienced the new data should be in.

Alternatively, if your database is too big and you want to import it in less time to mitigate SQL downtime, import the database instead with:

server:~# mysql -u root -p
mysql> SOURCE /root/dump.sql;


If something goes wrong with the import for some reason, you can always copy over the SQL binary files from /root/mysql-data-backup/ to /var/lib/mysql/.

8. Connect to mysql and check whether databases are listable and re-check ibdata file size

Once imported, login with the mysql cli and check whether the databases are there with:

server:~# mysql -u root -p

Next, let's see what the current sizes of ibdata1, ib_logfile0 and ib_logfile1 are:

server:~# du -hsc /var/lib/mysql/{ibdata1,ib_logfile0,ib_logfile1}
19M     /var/lib/mysql/ibdata1
1,1G    /var/lib/mysql/ib_logfile0
1,1G    /var/lib/mysql/ib_logfile1
2,1G    total

Now ibdata1 will still grow, but it will only contain table metadata. Each InnoDB table will exist outside of ibdata1.
To better understand what I mean, let's say you have an InnoDB table named blogdb.mytable.
If you go into /var/lib/mysql/blogdb, you will see two files representing the table:

  •     mytable.frm (Storage Engine Header)
  •     mytable.ibd (Home of Table Data and Table Indexes for blogdb.mytable)

Now the layout will be like that for each of the MySQL stored databases, instead of everything going into ibdata1.
MySQL 5.6+ admins can relax, as innodb_file_per_table is enabled by default in newer SQL releases.

Now, to make sure your websites are working, take a few of the hosted websites' URLs that use any of the imported databases and just browse.
In my case ibdata1 was 45GB; after clearing it up I managed to save 43 GB of disk space!!!

Enjoy the disk saving! 🙂

Apache Webserver disable file extension – Forbid / Deny access in Apache config to certain file extensions for a Virtualhost

Monday, March 30th, 2015

Reading Time: 2 minutes

If you're a webhosting company sysadmin like me, you may have configured directory listing for certain websites / vhosts whose files are mirrored from another development webserver location. Some of the uploaded developer file extensions which are allowed to be interpreted, such as PHP include files .inc, .htaccess mod_rewrite rules, .phps, .html or .txt, need to keep working on the dev / test server but need to be disabled (excluded) from delivery or interpretation for some directory on the prod server.

Open the separate vhost's VirtualHost file, or the main Apache config (httpd.conf / apache2.conf) if it holds all the vhosts for which you want to disable certain file extensions, and add inside:

<Directory "/var/www/sploits">
        AllowOverride All



Then add extension deny rules such as:

For disabling .inc files from inclusion from other PHP sources:

<Files  ~ "\.inc$">
  Order allow,deny
  Deny from all

To Disable access to .htaccess single file only


<Files ~ "^\.htaccess">
  Order allow,deny
  Deny from all

To Disable .txt from being served by Apache and delivered to requestor browser:


<Files  ~ "\.txt$">
  Order allow,deny
  Deny from all


To Disable any left intact .html from being delivered to client:


<Files  ~ "\.html$">
  Order allow,deny
  Deny from all


Do it for as many extensions as you need.
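If you prefer one block instead of a separate <Files> section per extension, Apache's FilesMatch directive accepts an alternation regex; a sketch covering the same extensions as above:

<FilesMatch "\.(inc|txt|html)$">
  Order allow,deny
  Deny from all
</FilesMatch>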
Finally, to make the changes take effect, restart Apache as usual.

If on Deb based Linux issue:

/etc/init.d/apache2 restart

On CentOS / RHEL and other Redhats / RedHacks 🙂

/etc/init.d/httpd restart

How to renew self signed QMAIL toaster and QMAIL rocks expired SSL pem certificate

Friday, September 2nd, 2011

Reading Time: 3 minutes


One of the QMAIL servers is an install I made a very long time ago. I've been notified by clients that the certificate of the mail server has expired, and therefore I had to quickly renew the certificate.

In this qmail installation, the SSL certificates were located in /var/qmail/control under the names servercert.key and servercert.pem.

Renewing the certificates with new self-signed ones is pretty straightforward; to renew them I had to issue the following commands:

1. Generate an encrypted servercert key, 1024 bits long

debian:~# cd /var/qmail/control
debian:/var/qmail/control# openssl genrsa -des3 -out servercert.key.enc 1024
Generating RSA private key, 1024 bit long modulus
e is 65537 (0x10001)
Enter pass phrase for servercert.key.enc:
Verifying - Enter pass phrase for servercert.key.enc:

At the Enter pass phrase for servercert.key.enc prompt I typed my key encryption password twice; any password will do, though using a stronger one is better.

2. Generate the servercert.key file

debian:/var/qmail/control# openssl rsa -in servercert.key.enc -out servercert.key
Enter pass phrase for servercert.key.enc:
writing RSA key

3. Generate the certificate request

debian:/var/qmail/control# openssl req -new -key servercert.key -out servercert.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
Country Name (2 letter code) [AU]:UK
State or Province Name (full name) [Some-State]:London
Locality Name (eg, city) []:London
Organization Name (eg, company) [Internet Widgits Pty Ltd]:My Company
Organizational Unit Name (eg, section) []:My Org
Common Name (eg, YOUR name) []:
Email Address []

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:

In the above prompts it's necessary to fill in the company name and location, as each of the prompts clearly states.

4. Sign the just generated certificate request

debian:/var/qmail/control# openssl x509 -req -days 9999 -in servercert.csr -signkey servercert.key -out servercert.crt

Notice the option -days 9999; this option instructs the newly generated self-signed certificate to be valid for 9999 days, which is quite a long time. The reason the previously generated self-signed certificate expired was that it was issued for only 365 days.
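Note: in this install qmail reads the certificate from servercert.pem, so the freshly signed certificate and the key most likely need to be combined into that pem file before fixing its permissions in the next step; a hedged example:

debian:/var/qmail/control# cat servercert.crt servercert.key > servercert.pem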

5. Fix the newly generated servercert.pem permissions

debian:~# cd /var/qmail/control
debian:/var/qmail/control# chmod 640 servercert.pem
debian:/var/qmail/control# chown vpopmail:vchkpw servercert.pem
debian:/var/qmail/control# cp -f servercert.pem clientcert.pem
debian:/var/qmail/control# chown root:qmail clientcert.pem
debian:/var/qmail/control# chmod 640 clientcert.pem

Finally, to load the new certificate, a restart of qmail is required:

6. Restart qmail server

debian:/var/qmail/control# qmailctl restart
Restarting qmail:
* Stopping qmail-smtpd.
* Sending qmail-send SIGTERM and restarting.
* Restarting qmail-smtpd.

Test the newly installed certificate

To test the newly installed SSL certificate use the following commands:

debian:~# openssl s_client -crlf -connect localhost:465 -quiet
depth=0 /C=UK/ST=London/L=London/O=My Org/OU=My Company/
verify error:num=18:self signed certificate
verify return:1
debian:~# openssl s_client -starttls smtp -crlf -connect localhost:25 -quiet
depth=0 /C=UK/ST=London/L=London/O=My Org/OU=My Company/
verify error:num=18:self signed certificate
verify return:1

If an error like 32943:error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol:s23_clnt.c:607: is returned, this means that the SSL variable in the qmail-smtpdssl/run script is set to 0.

To solve this error, change SSL=0 to SSL=1 in /var/qmail/supervise/qmail-smtpdssl/run and do qmailctl restart

The verify return:1 displayed is perfectly fine; it's more of a warning than an error, as it just reports the certificate is self-signed.

How to install nginx webserver from source on Debian Linux / Install Latest Nginx on Debian

Wednesday, March 23rd, 2011

Reading Time: 6 minutes

If you're running a large website consisting of a mixture of PHP scripts, images and HTML, you have probably noticed that using just one Apache server to serve all the content is not that efficient.

Each Apache child (I assume you're using Apache mpm prefork) consumes approximately 20MB; this means that each client connection would consume 20MB of your server memory.
This, as you can imagine, is true suicide in terms of memory. Each request for a picture, CSS or simple HTML file would make Apache fork another process and consume another 20MB of your server's memory capacity!

Taking into consideration all these notes and the need for some efficiency here, the administrator should normally think about separating the processing of the so-called static content from the dynamic content served on the server.

Apache is really nice webserver software, but with all the modules loaded to serve dynamic content, for instance php, cgi, python etc., it's not the best solution for handling static files (css, javascript, html, flv, avi, mov etc.).

Even a plain Apache server installation without (libphp, mod_rewrite, mod_deflate etc.) still does not deal efficiently enough with the aforementioned static file content.

Here comes the question: if Apache is not that quick and efficient at serving static files, what then? The answer is a caching webserver! By caching the regular static content files, your website visitors will benefit from shorter webserver response times when downloading static content, which will generally hasten your website and improve the end users' experience.

There are plenty of caching servers out there, some are a proprietary software and some are free software.

However the three most popular servers out there for static file content serving are:

  • Squid,
  • Varnish
  • Nginx

In this article as you should have already found out by the article title I'll discuss Nginx

You might ask why exactly Nginx and not one of the other two. Well, simply because Squid is too complicated to configure and provides lower performance than Nginx, while Varnish, though also a good solution for a static file webserver, is in my view not tested enough. I should mention, though, that my experience with testing varnish on my own home router has been quite good so far.

If you're further interested in varnish cache, I would suggest you check it out.

Now, as I have said a few words about squid and varnish, let's proceed to the essence of the article and say a few words about nginx.

Here is a quote describing nginx in a short and good manner, directly extracted from the nginx website:

nginx [engine x] is a HTTP and reverse proxy server, as well as a mail proxy server written by Igor Sysoev. It has been running for more than five years on many heavily loaded Russian sites including Rambler ( According to Netcraft nginx served or proxied 4.70% busiest sites in April 2010. Here are some of success stories: FastMail.FM,

By default nginx is available ready to be installed on Debian via apt-get; however, sadly enough, the version available for install is pretty much outdated: as of the time of writing, the nginx Debian version in lenny's deb package repositories is 0.6.32-3+lenny3.

This version was released about 2 years ago and is currently completely outdated; therefore I found it not a good idea to use this old and probably slower release of nginx, and I jumped ahead to install my nginx from source.
Nginx source installation is actually very simple on Linux platforms.

1. As a first step, in order to succeed with the install from source, make sure your system has the following packages installed:

debian:~# apt-get install libpcre3 libpcre3-dev libpcrecpp0 libssl-dev zlib1g-dev build-essential

2. Secondly download latest nginx source code tarball

Check for the latest stable release of nginx on the nginx download page, then issue the commands below:

debian:~# cd /usr/local/src
debian:/usr/local/src# wget

3. Unarchive the nginx source code

debian:/usr/local/src#tar -zxvvf nginx-0.9.6.tar.gz

My nginx server requirements weren't anything special, so I proceeded and used the nginx ./configure script found in the nginx-0.9.6 directory.

4. Compile the nginx server

debian:/usr/local/src# cd nginx-0.9.6
debian:/usr/local/src/nginx-0.9.6# ./configure && make && make install
+ Linux 2.6.26-2-amd64 x86_64
checking for C compiler ... found
+ using GNU C compiler
+ gcc version: 4.3.2 (Debian 4.3.2-1.1)
checking for gcc -pipe switch ... found

The last lines printed by the nginx configure script are actually the most interesting ones for administration purposes; the default compilation options in my case were:

Configuration summary
+ using system PCRE library
+ OpenSSL library is not used
+ md5: using system crypto library
+ sha1 library is not used
+ using system zlib library

nginx path prefix: "/usr/local/nginx"
nginx binary file: "/usr/local/nginx/sbin/nginx"
nginx configuration prefix: "/usr/local/nginx/conf"
nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
nginx pid file: "/usr/local/nginx/logs/"
nginx error log file: "/usr/local/nginx/logs/error.log"
nginx http access log file: "/usr/local/nginx/logs/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"

If you want to setup nginx server to support ssl (https) and for instance install nginx to a different server path you can use some ./configure configuration options, for instance:

./configure --sbin-path=/usr/local/sbin --with-http_ssl_module

Now, before you can start the nginx server, you should also set up the nginx init script.

5. Download and set up a ready-to-use init script with the cmds:

debian:~# cd /etc/init.d
debian:/etc/init.d# wget
debian:/etc/init.d# mv nginx-init-script nginx
debian:/etc/init.d# chmod +x nginx

6. Configure Nginx

Nginx is a really easy and simple server; just like the Russians: simple but good!
By the way, it's interesting to mention nginx has been coded by a Russian, so it's robust and hard as a rock like all the other Russian creations 🙂
Nginx configuration files in a default install as the one in my case are to be found in /usr/local/nginx/conf

In the nginx/conf directory you're about to find the following list of files which concern nginx server configuration:

debian:/usr/local/nginx/conf# ls -1
fastcgi.conf
fastcgi.conf.default
fastcgi_params
fastcgi_params.default
koi-utf
koi-win
mime.types
mime.types.default
nginx.conf
nginx.conf.default
scgi_params
scgi_params.default
uwsgi_params
uwsgi_params.default

The .default files are just a copy of the ones without the .default extension and contain the default respective file directives.

In my case I'm not using fastcgi to serve perl or php scripts via nginx, so I don't need to configure the fastcgi.conf and fastcgi_params files; the scgi_params and uwsgi_params conf files contain nginx configuration directives concerning the SCGI and uWSGI protocols, which I also don't use, so I skip configuring those as well.
koi-utf and koi-win are two files which you usually don't need to configure; they are character map files that help the nginx server convert Cyrillic (KOI8-R) content to UTF-8 and Windows encodings, and the mime.types conf is a file which lists the mime types the nginx server will know how to handle.

Therefore, after all being said, the only file which needs to be configured is nginx.conf.

7. Edit /usr/local/nginx/conf/nginx.conf

debian:/usr/local/nginx:# vim /usr/local/nginx/conf/nginx.conf

Therein you will find the following default configuration:

#gzip on;

server {
    listen 80;
    server_name localhost;

    #charset koi8-r;

    #access_log logs/host.access.log main;

    location / {
        root html;
        index index.html index.htm;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
In the default configuration above you need to modify only the above block of code as follows:

server {
    listen 80;
    server_name your-domain.com;

    #charset koi8-r;

    #access_log logs/access.log main;

    location / {
        root /var/www/;
        index index.html index.htm;
    }
}
Change the server_name value (your-domain.com above is a placeholder) and the /var/www/ document root to your own domain and website destination.

8. Start nginx server with nginx init script

debian:/usr/local/nginx:# /etc/init.d/nginx start
Starting nginx:

This should bring up the nginx server. If something is misconfigured, you will notice some error messages; as you can see in the above init script output, thankfully there are none in my case.
Note that you can also start nginx directly by invoking the /usr/local/nginx/sbin/nginx binary.

To check if the nginx server has properly started from the command line type:

debian:/usr/local/nginx:~# ps ax|grep -i nginx|grep -v grep
9424 ? Ss 0:00 nginx: master process /usr/local/nginx/sbin/nginx
9425 ? S 0:00 nginx: worker process

Another way to check if the web server is ready to serve your website content is to directly access your website by pointing your browser to the server's IP address or domain; you should get either your custom index.html file or the default nginx greeting Welcome to nginx.

9. Add nginx server to start up during system boot up

debian:/usr/local/nginx:# /usr/sbin/update-rc.d -f nginx defaults
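Two more commands worth knowing: the nginx binary can test the configuration syntax and reload itself gracefully without dropping connections:

debian:~# /usr/local/nginx/sbin/nginx -t
debian:~# /usr/local/nginx/sbin/nginx -s reload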

That's all; now you have nginx up and running, and your static file serving will require far fewer system resources than with Apache.
Hope this article was helpful to somebody, feedback on it is very welcome!