Posts Tagged ‘DDoS’

How to Configure Nginx as a Reverse Proxy Load Balancer on Debian, CentOS, RHEL Linux

Monday, December 14th, 2020

[Image: set-up-nginx-reverse-proxy-howto-linux-logo]

What is a reverse proxy?

A Reverse Proxy (RP) is a proxy server which routes all incoming traffic to a secondary webserver situated behind the reverse proxy host.

All replies from the secondary webserver (which is not visible from the internet) are then routed back through the reverse proxy service. The result is that all incoming and outgoing HTTP requests appear to be served from the reverse proxy host, while in reality the reverse proxy just redirects traffic. The downside of a reverse proxy is that it adds one more point of failure; the good side is that it can filter and route only certain traffic to your webserver, shielding the server located behind it from crackers' malicious HTTP requests.

In short, a reverse proxy accepts all traffic and forwards it to a specific resource, like a server or container. Earlier I've blogged on how to create an Apache reverse proxy to Tomcat.
Having a reverse proxy with Apache is a classical scenario, however NGINX is slowly taking the lead and overtaking Apache due to its ease of configuration, its high reliability and lower resource consumption.


One of the most common uses of a Reverse Proxy is as a software Load Balancer for traffic towards another webserver or directly to a backend server. Using an RP to mitigate DDoS attacks from hacked computers in botnets (often coming from a whole network range) is a very common anti-DDoS protection measure.
With the bloom of VM and containerization technology such as Docker, and the trend of switching services towards micro-services, a reverse proxy is often used to seamlessly redirect incoming request traffic to multiple separate Docker containers, each running its own OS environment.


Some of the other security pros of using a reverse proxy that could be pointed out are:

  • Restrict access to locations that may be obvious targets for brute-force attacks, reducing the effectiveness of DDoS attacks by limiting the number of connections and the download rate per IP address (see the sketch after this list). 
  • Cache pre-rendered versions of popular pages to speed up page load times.
  • Filter out other unwanted traffic when needed.
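As an illustration of the connection and download rate limiting point above, here is a minimal nginx sketch; the zone names (perip_req, perip_conn) and all numeric limits are illustrative assumptions to tune for your own traffic, not values taken from the configuration used later in this article:

# /etc/nginx/conf.d/limits.conf -- example only, tune the numbers for your traffic
# Track request rate and simultaneous connections per client IP (zone names are arbitrary)
limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=10r/s;
limit_conn_zone $binary_remote_addr zone=perip_conn:10m;

server {
    listen 80;
    server_name example.com;

    location / {
        limit_req  zone=perip_req burst=20 nodelay;  # allow short bursts, then throttle
        limit_conn perip_conn 10;                    # max 10 simultaneous connections per IP
        limit_rate 512k;                             # cap download speed per connection
        proxy_pass http://127.0.0.1:8080;
    }
}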

 


[Image: what-is-reverse-proxy-explained-proxying-tubes]

 

1. Install nginx webserver


Assuming you have a Debian host at hand as the OS which will be used for the Nginx RP host, let's install nginx.
 

[hipo@proxy ~]$ sudo su - root

[root@proxy ~]#  apt update

[root@proxy ~]# apt install -y nginx


On Fedora / CentOS / Red Hat Linux use dnf or yum to install NGINX:
 

[root@proxy ~]# dnf install -y nginx
[root@proxy ~]# yum install -y nginx

 

2. Launch nginx for the first time and test


Start nginx once to test that the default configuration is reachable:
 

systemctl enable --now nginx


To test that nginx works out of the box right after install, open a browser and go to http://localhost if you have X, or use a text based browser such as lynx or a web console fetcher such as curl to verify that the web server is running as expected.
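For a quick check from the console you can fetch just the response headers with curl; an HTTP 200 status together with a Server: nginx header indicates the default page is being served:

[root@proxy ~]# curl -I http://localhost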

[Image: nginx-test-default-page-centos-linux-screenshot]
 

3. Create Reverse proxy configuration file

Remove default Nginx package provided configuration

As of 2020, by default nginx loads its configuration from the file /etc/nginx/sites-enabled/default on DEB based Linuxes and from /etc/nginx/nginx.conf on RPM based ones. Therefore, to set up our desired configuration and prevent the default domain config from being loaded, we have to unlink it on Debian:

[root@proxy ~]# unlink /etc/nginx/sites-enabled/default

or move out the original nginx.conf on Fedora / CentOS / RHEL:
 

[root@proxy ~]# mv /etc/nginx/nginx.conf /etc/nginx/nginx.conf-distro

 

Let's take a scenario where you have a local IP address that's NAT-ted or DMZ-ed and you want to run nginx as a reverse proxy to accelerate traffic and forward it all to another webserver such as LigHTTPD / Apache, or towards a Java serving application server such as JBoss / Tomcat, that listens on the same host on, let's say, port 8000 and serves the application from /application/.

To do so, prepare a new /etc/nginx/nginx.conf to look like so:
 

[root@proxy ~]# mv /etc/nginx/nginx.conf /etc/nginx.conf.bak
[root@proxy ~]# vim /etc/nginx/nginx.conf

user nginx;
worker_processes auto;
worker_rlimit_nofile 10240;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

 

events {
#       worker_connections 768;
        worker_connections 4096;
        multi_accept on;
}

http {

        ##
        # Basic Settings
        ##

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        #keepalive_timeout 65;
        keepalive_requests 1024;
        client_header_timeout 30;
        client_body_timeout 30;
        keepalive_timeout 30;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ##
        # SSL Settings
        ##

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        ssl_prefer_server_ciphers on;

        ##
        # Logging Settings
        ##

        access_log /var/log/nginx/domain.com/access.log;
        error_log /var/log/nginx/domain.com/error.log;

        ##
        # Gzip Settings
        ##

        gzip on;

        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        ##
        # Virtual Host Configs
        ##

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;

        include /etc/nginx/default.d/*.conf;

upstream myapp {
    server 127.0.0.1:8000 weight=3;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
# Uncomment and use Load balancing with external FQDNs if needed
#  server srv1.example.com;
#   server srv2.example.com;
#   server srv3.example.com;

}

}

#mail {
#       # See sample authentication script at:
#       # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
#
#       # auth_http localhost/auth.php;
#       # pop3_capabilities "TOP" "USER";
#       # imap_capabilities "IMAP4rev1" "UIDPLUS";
#
#       server {
#               listen     localhost:110;
#               protocol   pop3;
#               proxy      on;
#       }
#
#       server {
#               listen     localhost:143;
#               protocol   imap;
#               proxy      on;
#       }
#}

 

In the example above, there are 4 instances of the same application running on 127.0.0.1 on different ports (8000-8003), just for the sake of illustration. Load balancing towards Fully Qualified Domain Names (FQDNs) is also supported, as you can see from the 3 commented application instances srv1 - srv3.
When the load balancing method is not specifically configured, it defaults to round-robin. All requests are proxied to the server group myapp, and nginx applies HTTP load balancing to distribute the requests. The reverse proxy implementation in nginx includes load balancing for HTTP, HTTPS, FastCGI, uwsgi, SCGI, memcached, and gRPC.
To configure load balancing for HTTPS instead of HTTP, just use "https" as the protocol.
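Besides the default round-robin, nginx also supports the least_conn and ip_hash balancing methods, per-server weights and backup servers. A sketch of how the upstream block above could be adjusted (same illustrative backend addresses as before):

upstream myapp {
    least_conn;                        # pick the backend with the fewest active connections
    # ip_hash;                         # alternatively, pin each client IP to the same backend
    server 127.0.0.1:8000 weight=3;    # weight=3 makes this backend receive roughly 3x more requests
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003 backup;      # only used when the other backends are unavailable
}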


To download above nginx.conf configured for High traffic servers and supports Nginx virtualhosts click here.

Now let's prepare a separate file with the reverse proxy nginx configuration. All files with a .conf extension stored under /etc/nginx/default.d/, as well as anything symlinked into /etc/nginx/sites-enabled/, are read by nginx as instructed by the include lines in /etc/nginx/nginx.conf.

We'll prepare a sample reverse proxy vhost file under sites-available and enable it with a symlink later:

[root@proxy ~]# vim /etc/nginx/sites-available/reverse-proxy.conf

server {

        listen 80;
        listen [::]:80;


 server_name domain.com www.domain.com;
#index       index.php;
# fallback for index.php usually this one is never used
root        /var/www/domain.com/public;
#location / {
#try_files $uri $uri/ /index.php?$query_string;
#}

        location / {
                    proxy_pass http://127.0.0.1:8080;
  }

 

location /application {
proxy_pass http://domain.com/application/ ;

proxy_http_version                 1.1;
proxy_cache_bypass                 $http_upgrade;

# Proxy headers
proxy_set_header Upgrade           $http_upgrade;
proxy_set_header Connection        "upgrade";
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host  $host;
proxy_set_header X-Forwarded-Port  $server_port;

# Proxy timeouts
proxy_connect_timeout              60s;
proxy_send_timeout                 60s;
proxy_read_timeout                 60s;

        access_log /var/log/nginx/reverse-access.log;
        error_log /var/log/nginx/reverse-error.log;

}

##listen 443 ssl;
##    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
##    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;
##    include /etc/letsencrypt/options-ssl-nginx.conf;
##    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

}

Get above reverse-proxy.conf from here

As you see, the config makes all incoming traffic towards the root ( / ) NGINX location for domain http://domain.com on port 80 on the Nginx webserver be passed on to the backend at http://127.0.0.1:8080.

      location / {
                    proxy_pass http://127.0.0.1:8080;
  }


Another location block configures requests to domain.com/application to be reverse proxied to the backend webserver via http://domain.com/application/.

 

location /application {
proxy_pass http://domain.com/application/ ;

proxy_http_version                 1.1;
proxy_cache_bypass                 $http_upgrade;

# Proxy headers
proxy_set_header Upgrade           $http_upgrade;
proxy_set_header Connection        "upgrade";
proxy_set_header Host              $host;
proxy_set_header X-Real-IP         $remote_addr;
proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host  $host;
proxy_set_header X-Forwarded-Port  $server_port;

# Proxy timeouts
proxy_connect_timeout              60s;
proxy_send_timeout                 60s;
proxy_read_timeout                 60s;

        access_log /var/log/nginx/reverse-access.log;
        error_log /var/log/nginx/reverse-error.log;

}
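Since our nginx.conf already defines the upstream myapp group, the proxy_pass in either location block could just as well point at the upstream name instead of a single backend; that is what actually spreads the load across the 4 application instances. A minimal variation (everything else stays as above):

location / {
    proxy_pass http://myapp;   # balance across all servers defined in the upstream myapp {} block
}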

– Enable the new configuration to become active in NGINX:

 

[root@proxy ~]# ln -s /etc/nginx/sites-available/reverse-proxy.conf /etc/nginx/sites-enabled/reverse-proxy.conf

 

4. Test reverse proxy nginx config for syntax errors

 

[root@proxy ~]# nginx -t

 

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Test connectivity to the listening external IP address.
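A simple way to test from a remote machine is again curl; substitute your server's real external IP for the YOUR_EXTERNAL_IP placeholder below and pass the Host header, so the right server_name block answers:

[root@proxy ~]# curl -I -H "Host: domain.com" http://YOUR_EXTERNAL_IP/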

 

5. Enable nginx SSL letsencrypt certificates support

 

[root@proxy ~]# apt-get update
[root@proxy ~]# apt-get install software-properties-common

[root@proxy ~]# apt-get update
[root@proxy ~]# apt-get install python-certbot-nginx

 

6. Generate NGINX Letsencrypt certificates

 

[root@proxy ~]# certbot --nginx

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: your.domain.com
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for your.domain.com
Waiting for verification…
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/reverse-proxy.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: No redirect – Make no further changes to the webserver configuration.
2: Redirect – Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/reverse-proxy.conf

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Congratulations! You have successfully enabled https://your.domain.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=your.domain.com
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
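Let's Encrypt certificates are valid for only 90 days. The Debian certbot package normally sets up automatic renewal via cron or a systemd timer, but it doesn't hurt to verify that renewal will actually succeed with a dry run:

[root@proxy ~]# certbot renew --dry-run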

 

7. Set NGINX Reverse Proxy to auto-start on Linux server boot

On most modern Linux distros use systemctl. For legacy machines, depending on the Linux distribution, either create the adequate runlevel symlink under /etc/rc3.d/ on Debian based distros, or use the RedHat chkconfig command on Fedora / CentOS / RHEL and other RPM based ones.

 

[root@proxy ~]# systemctl start nginx
[root@proxy ~]# systemctl enable nginx
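For reference, on legacy SysV init machines (no systemd) the equivalents of the above would look roughly like this:

# Debian based distros with SysV init
[root@proxy ~]# update-rc.d nginx defaults

# Fedora / CentOS / RHEL with SysV init
[root@proxy ~]# chkconfig nginx on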

 

8. Fixing weird connection permission denied errors


If you get weird permission denied errors right after you have configured the proxy_pass on Nginx and you're wondering what is causing them, you have to know that by default on CentOS 7 and RHEL you'll get these errors due to the automatically enabled SELinux OS security hardening.

If this is the case, after you setup Nginx + HTTPD or whatever application server, you will get errors in /var/log/nginx/error.log like:

2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:01 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"
2020/12/14 07:46:02 [crit] 7626#0: *1 connect() to 127.0.0.1:8080 failed (13: Permission denied) while connecting to upstream, client: 127.0.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8080/", host: "localhost"


The simplest solution to these proxy_pass permission denied errors is to allow httpd (and thus nginx proxying) to make outbound network connections via the SELinux boolean:

[root@proxy ~]# setsebool -P httpd_can_network_connect 1
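A quick sanity check that the boolean is now enabled can be done with getsebool; the output should end in --> on:

[root@proxy ~]# getsebool httpd_can_network_connect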

Alternatively, to permanently allow nginx and httpd by generating and loading custom SELinux policy modules from the audit log:

[root@proxy ~]# cat /var/log/audit/audit.log | grep nginx | grep denied | audit2allow -M mynginx
[root@proxy ~]# semodule -i mynginx.pp

 

[root@proxy ~]# cat /var/log/audit/audit.log | grep httpd | grep denied | audit2allow -M myhttpd
[root@proxy ~]# semodule -i myhttpd.pp


Then, to make nginx and httpd (or whatever else app you run) aware of the new settings, restart both:

[root@proxy ~]# systemctl restart nginx
[root@proxy ~]# systemctl restart httpd

Resolving “nf_conntrack: table full, dropping packet.” flood message in dmesg Linux kernel log

Wednesday, March 28th, 2012

[Image: nf_conntrack_table_full_dropping_packet]
On many busy servers, you might encounter in /var/log/syslog or the dmesg kernel log messages like

nf_conntrack: table full, dropping packet

appearing repeatedly:

[1737157.057528] nf_conntrack: table full, dropping packet.
[1737157.160357] nf_conntrack: table full, dropping packet.
[1737157.260534] nf_conntrack: table full, dropping packet.
[1737157.361837] nf_conntrack: table full, dropping packet.
[1737157.462305] nf_conntrack: table full, dropping packet.
[1737157.564270] nf_conntrack: table full, dropping packet.
[1737157.666836] nf_conntrack: table full, dropping packet.
[1737157.767348] nf_conntrack: table full, dropping packet.
[1737157.868338] nf_conntrack: table full, dropping packet.
[1737157.969828] nf_conntrack: table full, dropping packet.
[1737157.969928] nf_conntrack: table full, dropping packet
[1737157.989828] nf_conntrack: table full, dropping packet
[1737162.214084] __ratelimit: 83 callbacks suppressed

There are two types of servers I've encountered this message on:

1. Xen / OpenVZ VPS (Virtual Private Servers)
2. ISPs – Internet Providers with heavy traffic NAT network routers
 

I. What is the meaning of nf_conntrack: table full dropping packet error message

In short, this message is received because the nf_conntrack connection tracking table has filled up to the maximum value (nf_conntrack_max) assigned in the kernel.
The common reason for that is heavy traffic passing through the server or, very often, a DoS or DDoS (Distributed Denial of Service) attack. Sometimes encountering the error is a result of bad server capacity planning (incorrect data about the expected traffic load by a company/companies), or simply a sysadmin error…

– Checking the current maximum nf_conntrack value assigned on host:

linux:~# cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
65536

– Alternative way to check the current kernel values for nf_conntrack is through:

linux:~# /sbin/sysctl -a|grep -i nf_conntrack_max
error: permission denied on key 'net.ipv4.route.flush'
net.netfilter.nf_conntrack_max = 65536
error: permission denied on key 'net.ipv6.route.flush'
net.nf_conntrack_max = 65536

– Check the current sysctl nf_conntrack active connections

To check the connection tracking entries currently open on a system:

linux:~# /sbin/sysctl net.netfilter.nf_conntrack_count
net.netfilter.nf_conntrack_count = 12742

The shown connections are assigned dynamically on each new successful TCP / IP NAT-ted connection. Btw, on systems that work normally, without the dmesg log being flooded with the message, the output of lsmod is:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
ip_tables 9899 1 iptable_filter
x_tables 14175 1 ip_tables

On servers which are encountering the nf_conntrack: table full, dropping packet error, you can see when issuing lsmod that extra modules related to nf_conntrack are loaded:

linux:~# /sbin/lsmod | egrep 'ip_tables|conntrack'
nf_conntrack_ipv4 10346 3 iptable_nat,nf_nat
nf_conntrack 60975 4 ipt_MASQUERADE,iptable_nat,nf_nat,nf_conntrack_ipv4
nf_defrag_ipv4 1073 1 nf_conntrack_ipv4
ip_tables 9899 2 iptable_nat,iptable_filter
x_tables 14175 3 ipt_MASQUERADE,iptable_nat,ip_tables

 

II. Completely remove nf_conntrack support if it is not really necessary

It is a good practice to limit, or completely omit, the use of any iptables NAT rules, to prevent yourself from ending up with your kernel log flooded with these messages and, respectively, to stop your system from dropping connections.

Another option is to completely remove any modules related to nf_conntrack, iptables_nat and nf_nat.
To remove nf_conntrack support from the Linux kernel, if for instance the system is not used for Network Address Translation, use:

/sbin/rmmod iptable_nat
/sbin/rmmod ipt_MASQUERADE
/sbin/rmmod nf_nat
/sbin/rmmod nf_conntrack_ipv4
/sbin/rmmod nf_conntrack
/sbin/rmmod nf_defrag_ipv4

Once the modules are removed, be sure not to use any iptables -t nat .. rules. Even an attempt to list whether there are any NAT related rules with iptables -t nat -L -n will force the kernel to load the nf_conntrack modules again.

Btw, the nf_conntrack: table full, dropping packet. message is observable across all GNU / Linux distributions, so this is not some kind of local distribution bug or Linux kernel (distro) customization.
 

III. Fixing the nf_conntrack … dropping packets error

– One temporary fix, if you need to keep your iptables NAT rules, is:

linux:~# sysctl -w net.netfilter.nf_conntrack_max=131072

I say temporary, because raising nf_conntrack_max doesn't guarantee things will go smoothly from now on.
However, on many not-so-heavily loaded servers just raising net.netfilter.nf_conntrack_max=131072 to a high enough value will be enough to resolve the hassle.
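To keep an eye on whether the connection count is again approaching the configured maximum, a simple way is to watch both sysctl values side by side (refreshing every 5 seconds):

linux:~# watch -n 5 'sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max'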

– Increasing the size of nf_conntrack hash-table

The hash table hashsize value, which stores the lists of conntrack entries, should be increased proportionally whenever net.netfilter.nf_conntrack_max is raised.

linux:~# echo 32768 > /sys/module/nf_conntrack/parameters/hashsize
The rule to calculate the right value to set is:
hashsize = nf_conntrack_max / 4
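Putting the two together, a small shell sketch to apply a new maximum and the matching hash table size in one go (131072 is just the example figure used above):

#!/bin/bash
# bump the conntrack table maximum and recompute hashsize = nf_conntrack_max / 4
NEW_MAX=131072
/sbin/sysctl -w net.netfilter.nf_conntrack_max=$NEW_MAX
echo $(( NEW_MAX / 4 )) > /sys/module/nf_conntrack/parameters/hashsize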

– To permanently store the changes made:

a) put into /etc/sysctl.conf:

linux:~# echo 'net.netfilter.nf_conntrack_max = 131072' >> /etc/sysctl.conf
linux:~# /sbin/sysctl -p

b) put in /etc/rc.local (before the exit 0 line):

echo 32768 > /sys/module/nf_conntrack/parameters/hashsize
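Alternatively (worth verifying against your own kernel version), hashsize is also a module parameter of nf_conntrack, so it can be made persistent with a modprobe snippet instead of rc.local; the file name below is an arbitrary choice:

linux:~# echo 'options nf_conntrack hashsize=32768' > /etc/modprobe.d/nf_conntrack.conf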

Note: Be careful with this variable; in my experience, raising it to a too high value (especially on XEN patched kernels) could freeze the system.
Also, raising the value to a too high number can freeze a regular Linux server running on old hardware.

– For diagnosing nf_conntrack there is the

/proc/sys/net/netfilter kernel directory. There you can find some dynamically stored values which give info concerning nf_conntrack operations in "real time":

linux:~# cd /proc/sys/net/netfilter
linux:/proc/sys/net/netfilter# ls -al nf_log/

total 0
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ./
dr-xr-xr-x 0 root root 0 Mar 23 23:02 ../
-rw-r--r-- 1 root root 0 Mar 23 23:02 0
-rw-r--r-- 1 root root 0 Mar 23 23:02 1
-rw-r--r-- 1 root root 0 Mar 23 23:02 10
-rw-r--r-- 1 root root 0 Mar 23 23:02 11
-rw-r--r-- 1 root root 0 Mar 23 23:02 12
-rw-r--r-- 1 root root 0 Mar 23 23:02 2
-rw-r--r-- 1 root root 0 Mar 23 23:02 3
-rw-r--r-- 1 root root 0 Mar 23 23:02 4
-rw-r--r-- 1 root root 0 Mar 23 23:02 5
-rw-r--r-- 1 root root 0 Mar 23 23:02 6
-rw-r--r-- 1 root root 0 Mar 23 23:02 7
-rw-r--r-- 1 root root 0 Mar 23 23:02 8
-rw-r--r-- 1 root root 0 Mar 23 23:02 9

 

IV. Decreasing other nf_conntrack NAT time-out values to prevent server against DoS attacks

Generally, the default values for the nf_conntrack_* time-outs are (unnecessarily) large.
Therefore, for large flows of traffic, even if you increase nf_conntrack_max, you can still shortly end up with an overflowing nf_conntrack table, resulting in dropped server connections. To prevent this from happening, check and decrease the other nf_conntrack connection tracking timeout values:

linux:~# sysctl -a | grep conntrack | grep timeout
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_established = 432000
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_events_retry_timeout = 15
net.ipv4.netfilter.ip_conntrack_generic_timeout = 600
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_sent2 = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_syn_recv = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 432000
net.ipv4.netfilter.ip_conntrack_tcp_timeout_fin_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close_wait = 60
net.ipv4.netfilter.ip_conntrack_tcp_timeout_last_ack = 30
net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
net.ipv4.netfilter.ip_conntrack_tcp_timeout_close = 10
net.ipv4.netfilter.ip_conntrack_tcp_timeout_max_retrans = 300
net.ipv4.netfilter.ip_conntrack_udp_timeout = 30
net.ipv4.netfilter.ip_conntrack_udp_timeout_stream = 180
net.ipv4.netfilter.ip_conntrack_icmp_timeout = 30

All the timeouts are in seconds. net.netfilter.nf_conntrack_generic_timeout, as you see, is quite high – 600 secs (10 minutes).
This kind of value means any non-responding NAT-ted connection can stay hanging for 10 minutes!

The value net.netfilter.nf_conntrack_tcp_timeout_established = 432000 is quite high too (5 days!)
If these values are not lowered, the server will be an easy target for anyone who would like to flood it with excessive connections; once this happens the server will quickly reach even the raised value of net.nf_conntrack_max and the initial connection dropping will re-occur again…

With all that said, to protect the server from malicious users situated behind the NAT plaguing you with Denial of Service attacks:

Lower net.ipv4.netfilter.ip_conntrack_generic_timeout to 60 – 120 seconds and net.ipv4.netfilter.ip_conntrack_tcp_timeout_established to something like 54000:

linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_generic_timeout=120
linux:~# sysctl -w net.ipv4.netfilter.ip_conntrack_tcp_timeout_established=54000

These timeouts should work fine on the router without creating interruptions for regular NAT users. After changing the values and monitoring for at least a few days, make the changes permanent by adding them to /etc/sysctl.conf:

linux:~# echo 'net.ipv4.netfilter.ip_conntrack_generic_timeout = 120' >> /etc/sysctl.conf
linux:~# echo 'net.ipv4.netfilter.ip_conntrack_tcp_timeout_established = 54000' >> /etc/sysctl.conf

Secure Apache webserver against basic Denial of Service attacks with mod_evasive on Debian Linux

Wednesday, September 7th, 2011

[Image: Secure Apache against basic Denial of Service attacks with mod evasive, how webserver DDoS works]

One good module that helps in mitigating very basic Denial of Service attacks against the Apache 1.3.x, 2.0.x and 2.2.x webserver is mod_evasive.

I've noticed however that many Apache administrators out there forget to install it on new Apache installations, or some of them haven't even heard of it.
Therefore I wrote this small article to create some more awareness of the existence of this anti DoS module and hopefully, through it, help some of my readers to strengthen their server security.

Here is a description on what exactly mod-evasive module does:

debian:~# apt-cache show libapache2-mod-evasive | grep -i description -A 7

Description: evasive module to minimize HTTP DoS or brute force attacks
mod_evasive is an evasive maneuvers module for Apache to provide some
protection in the event of an HTTP DoS or DDoS attack or brute force attack.
.
It is also designed to be a detection tool, and can be easily configured to
talk to ipchains, firewalls, routers, and etcetera.
.
This module only works on Apache 2.x servers

How does the mod-evasive anti DoS module work?

Detection is performed by creating an internal dynamic hash table of IP addresses and URIs, and denying any single IP address which matches these criteria:

  • Requesting the same page more than a set number of times per second
  • Making more than N (number) concurrent requests on the same child per second
  • Making requests to Apache while the IP is temporarily blacklisted (the IP is removed from the blocking list after a time period)

This anti DDoS and DoS attack protection decreases the possibility that Apache gets DoSed by an amateur DoS attack, however it still leaves the door open for attackers who have a large bot-net of zombie hosts (let's say 10000) which simultaneously request a page from the Apache server. The result, in a scenario with an infected botnet running a DoS tool, will in most cases be a quick exhaustion of the available system resources (bandwidth, server memory and processor consumption).
Thus mod-evasive only grants DoS and DDoS security on a basic level, where someone tries to DoS a webserver while possessing access to only a few hosts.
mod-evasive is however in many cases a sufficient measure to protect against DoS, and does a great job if combined with the Apache mod-security module discussed in one of my previous blog posts – Tightening PHP Security on Debian with Apache 2.2 with ModSecurity2
1. Install mod-evasive

Installing mod-evasive on Debian Lenny, Squeeze and even Wheezy is done in identical way straight using apt-get:

debian:~# apt-get install libapache2-mod-evasive
...

2. Enable mod-evasive in Apache

debian:~# ln -sf /etc/apache2/mods-available/mod-evasive.load /etc/apache2/mods-enabled/mod-evasive.load

3. Configure the way mod-evasive deals with potential DoS attacks

Open /etc/apache2/apache2.conf, go down to the end of the file and paste inside the mod-evasive configuration directives below:

<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 30
DOSSiteCount 40
DOSPageInterval 2
DOSSiteInterval 1
DOSBlockingPeriod 120
#DOSEmailNotify hipo@mymailserver.com
</IfModule>

In case the above configuration criteria are matched, mod-evasive instructs Apache to return a 403 (Forbidden by default) error page, which conserves bandwidth and system resources in case of a DoS attack attempt, especially if the DoS attack targets multiple requests to, let's say, a large downloadable file or a PHP/Perl/Python script which does a lot of computation and thus consumes a large portion of the server CPU time.

The meaning of the above mod-evasive config vars is as follows:

DOSHashTableSize 3097 – Increasing the DOSHashTableSize will increase the performance of mod-evasive but will consume more server memory; on a busy webserver this value should be increased.
DOSPageCount 30 – Add an IP to the evasive temporary blacklist if it requests the same page 30 consecutive times within the page interval.
DOSSiteCount 40 – Add an IP to the blacklist if 40 requests are made to one and the same URL location within the site interval.
DOSBlockingPeriod 120 – The time in seconds for which an IP will stay blacklisted (e.g. will get returned the 403 Forbidden page); this setting instructs mod-evasive to block every intruder which matches DOSPageCount 30 or DOSSiteCount 40 for 2 minutes.
DOSPageInterval 2 – Interval of 2 seconds in which DOSPageCount can be reached.
DOSSiteInterval 1 – Interval of 1 second in which, if DOSSiteCount of 40 is matched, the matching IP will be blacklisted for the configured period of time.

mod-evasive also supports IP whitelisting with its DOSWhitelist option, handy if, for example, you need to allow access to a single webpage from an office environment consisting of a hundred computers behind NAT.
Another handy configuration option is the module's capability to notify if a DoS is originating from a number of IP addresses, using the DOSEmailNotify option.
The DOSSystemCommand option, in combination with iptables, can be configured to filter out any IP addresses which are found to be matching the configured mod-evasive rules.
The module also supports custom logging; if you want to keep track of IPs which are found to be attempting a DoS attack against the server, place in the above shown configuration DOSLogDir "/var/log/apache2/evasive" and create the /var/log/apache2/evasive directory with:
debian:~# mkdir /var/log/apache2/evasive
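For reference, a sketch of how those optional directives could be combined inside the same IfModule block; the email address, whitelisted network and iptables command are placeholders, not values from a real deployment:

<IfModule mod_evasive20.c>
    DOSWhitelist     192.168.1.*
    DOSEmailNotify   admin@example.com
    DOSLogDir        "/var/log/apache2/evasive"
    # %s is substituted by mod-evasive with the blacklisted IP address
    DOSSystemCommand "/sbin/iptables -I INPUT -s %s -j DROP"
</IfModule>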

I decided not to log mod-evasive DoS IP matches as this will just add some extra load on the server; however, when debugging mistakenly blacklisted IPs, logging is surely a must.

4. Restart Apache to load up mod-evasive

debian:~# /etc/init.d/apache2 restart
...

Finally, a very good read which sheds more light on how exactly mod-evasive works, along with some extra module configuration options, is the documentation bundled with the deb package; to read it, issue:

debian:~# zless /usr/share/doc/libapache2-mod-evasive/README.gz

How to find and kill Abusers on OpenVZ Linux hosted Virtual Machines (Few bash scripts to protect OpenVZ CentOS server from script kiddies and easify the daily admin job)

Friday, July 22nd, 2011

[Image: OpenVZ Logo - Anti Denial Of Service shell scripts]

These days I'm managing a number of OpenVZ Virtual Machine host servers. Therefore I'm constantly facing a lot of problems with users who run shit scripts inside their Linux Virtual Machines.

Commonly the user Virtual Servers are used as a launchpad to attack hosts, do illegal hacking activities or simply DDoS a host..
The virtual machine users (which by the way run on top of CentOS OpenVZ Linux) launch Denial of Service scripts like kaiten.pl, trinoo, shaft, tfn etc.

As a consequence of their malicious activities, often the Data Centers which colocate the servers either null route our server IPs until we suspend the abusive users, or the servers simply go down because of a server overload or a kernel bug hit as a result of the heavy TCP/IP network traffic or CPU/mem overhead.

Therefore, to mitigate these abusive attacks, I've written a few bash shell scripts which save us a lot of manual check-ups and in most cases prevent abusers from running the common DoS and "hacking" script shits which are now in the wild.

The first script I've written is kill_abusers.sh. What the script does is automatically look for a number of listed processes and kill them, while logging to /var/log/abusers.log the names of the abusive VM user procs killed.

I've set this script to run 4 times an hour and it currently saves us a lot of nerves and useless ticket communication with Data Centers (DCs), not to mention that reboot requests (about hung servers) have been reduced significantly.
Therefore, despite the script's simplicity, it in general makes the servers run way more stable than before.

Here is OpenVZ kill/suspend Abusers procs script kill_abusers.sh ready for download
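The linked script is the real thing; purely as an illustration of the idea, a minimal sketch (the process names and the log path mirror the ones mentioned above, the rest is assumption) could look like:

#!/bin/bash
# kill well known DoS / flood tools running inside the VMs and log what was killed
ABUSE_PROCS="kaiten.pl trinoo shaft tfn"
LOG=/var/log/abusers.log
for proc in $ABUSE_PROCS; do
    # record matching PIDs (if any) before killing them, so we keep a trace
    pids=$(pgrep -f "$proc")
    if [ -n "$pids" ]; then
        echo "$(date '+%F %T') killed $proc (pids: $pids)" >> "$LOG"
        pkill -9 -f "$proc"
    fi
done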

Another script which I've written later on does something similar but still different: it scans the server hard disk using the locate and find commands and tries to identify users who have script kiddie programs in their Virtual Machines and therefore are most probably crackers.
The script looks for abusive network scanners, DoS scripts, the metasploit framework, ircds etc.

After scanning the server hdd it lists only the files which are preliminarily set in the script as dangerous and therefore should not be executed inside the user VMs.

search_for_abusers.sh then logs its activity to a file, along with the IDs of the OpenVZ virtual machine users who own hack related files. Right after that it uses the nail mailing command to send an email to a specified admin address and reports the possible abusers whose VM accounts might need to either be deleted or suspended.

search_for_abusers can be download here

Honestly, I truly liked my search_for_abusers.sh script as it became quite nice and I coded it quite quickly.
I'm now intending to put the search_for_abusers script on a cronjob on the servers, to check periodically and report the IDs of OpenVZ VM users who are attempting illegal activities on the servers.
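A possible cronjob entry for that (the script path and the once-a-day schedule are assumptions, adjust to taste):

# /etc/crontab style entry: run search_for_abusers.sh once a day at 04:20 as root
20 4 * * * root /usr/local/bin/search_for_abusers.sh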

I guess now our beloved Virtual Machine user script kiddies are in a real trouble ;P