Posts Tagged ‘configure’

Install certbot on Debian 10, Ubuntu, CentOS, Fedora Linux / Generate and use Apache / Nginx Letsencrypt SSL certificates

Monday, December 21st, 2020

Let's Encrypt is a free, automated, and open certificate authority brought to you by the nonprofit
Internet Security Research Group (ISRG). The ISRG launched the initiative with the goal to "encrypt the internet", i.e. to offer a free alternative to the overpriced certificates sold by domain registrars, so that more people can offer free SSL / TLS secured connections on their websites.
The ISRG-backed Letsencrypt non-profit certificate authority is actively supported by Internet industry giants such as Mozilla, Cisco, EFF (Electronic Frontier Foundation), Facebook, Google Chrome, Amazon AWS, OVH Cloud, Red Hat, VMware, GitHub and many other leading IT companies.

Letsencrypt is aimed at automating the creation, validation, signing, installation, and renewal of certificates for secure websites. I.e. you don't have to manually type complicated openssl command lines on the console, passing CSR / KEY / PEM files, to generate Self-Signed Untrusted Authority Certificates (described in my previous article How to generate Self-Signed SSL Certificates with openssl), nor pay money to generate a secret key and submit it to a third-party authority through their web admin interface in order to get an SSL certificate from GoDaddy or another Certificate Authority.

But of course, as you can guess, there are downsides: the letsencrypt automation scripts interact with a third-party Certificate Authority on your behalf, and a security intrusion into that authority's infrastructure could be a catastrophe for your data, as a malicious party might with some additional effort be able to decrypt traffic and see in plain text what is being exchanged with your Apache / Nginx or Mail Server. Hence, for environments with high standards such as PCI, as well as for paranoid security-minded admins who don't trust the mainstream, Letsencrypt is definitely not the choice. Anyway, for most small and mid-sized businesses that don't hold top-secret data and want a moderate level of security, Letsencrypt is a great opportunity to try. But enough talk, let's get down to business.

How to install and use certbot on Debian GNU / Linux 10 Buster?
Certbot is not available from the Debian software repositories by default, but it’s possible to configure the buster-backports repository in your /etc/apt/sources.list file to allow you to install a backport of the Certbot software with APT tool.

1. Install certbot on Debian / Ubuntu Linux


root@webserver:/etc/apt# tail -n 1 /etc/apt/sources.list
deb http://deb.debian.org/debian buster-backports main

If it is not there, append the repository to the file:


  • Add the buster-backports repository to /etc/apt/sources.list

root@webserver:/ # echo 'deb http://deb.debian.org/debian buster-backports main' >> /etc/apt/sources.list


  • Install certbot-nginx certbot-apache deb packages

root@webserver:/ # apt update
root@webserver:/ # apt install -t buster-backports certbot python3-certbot-nginx python3-certbot-apache python-certbot-nginx-doc

This will install the /usr/bin/certbot python executable script which is used to register / renew / revoke / delete your domains certificates.
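If the packages installed fine, a quick sanity check is to print the certbot version and list the available plugins (the exact version will of course differ on your system):

root@webserver:/ # certbot --version
root@webserver:/ # certbot plugins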

2. Install letsencrypt certbot client on CentOS / RHEL / Fedora and other Linux Distributions


For RPM-based distributions and other Linux distributions you will have to install the snapd package (if not already installed) and use the snap command:



[root@centos ~ :] # yum install snapd
[root@centos ~ :] # systemctl enable --now snapd.socket

To enable classic snap support, enter the following to create a symbolic link between /var/lib/snapd/snap and /snap:

[root@centos ~ :] # ln -s /var/lib/snapd/snap /snap

snap command lets you install, configure, refresh and remove snaps.  Snaps are packages that work across many different Linux distributions, enabling secure delivery and operation of the latest apps and utilities.

[root@centos ~ :] # snap install core; sudo snap refresh core

Logout from console or Xsession to make the snap update its $PATH definitions.

Then install the universal certbot classic snap package:

 [root@centos ~ :] # snap install --classic certbot
[root@centos ~ :] # ln -s /snap/bin/certbot /usr/bin/certbot


If you have X.Org server access to the RHEL / CentOS host via Xming or another type of X emulator, you might also check out the snap-store, as it contains a multitude of installable packages that are not usually available in RPM distros.

 [root@centos ~ :] # snap install snap-store


snap-store is powerful, and via it you can install many applications that are not easily installable on Linux otherwise, such as the famous Eclipse development IDE, Notepad++, Discord, the QA engineers' favourite protocol / API testing tool Postman, etc.

  • Installing a letsencrypt client on any distribution via the acme.sh script

Another often preferred solution to universally deploy and upgrade a LetsEncrypt client on any Linux distribution (e.g. RHEL / CentOS / Fedora etc.) is the acme.sh script. To install acme.sh you have to clone its git repository and run the script with --install.

P.S. If you don't have git installed yet do

root@webserver:/ # apt-get install --yes git

and then the usual git clone to fetch it at your side

# cd /root
# git clone https://github.com/acmesh-official/acme.sh.git
Cloning into 'acme.sh'…
remote: Enumerating objects: 71, done.
remote: Counting objects: 100% (71/71), done.
remote: Compressing objects: 100% (53/53), done.
remote: Total 12475 (delta 39), reused 38 (delta 18), pack-reused 12404
Receiving objects: 100% (12475/12475), 4.79 MiB | 6.66 MiB/s, done.
Resolving deltas: 100% (7444/7444), done.

# cd acme.sh && sh acme.sh --install

To later upgrade to latest you can do

# acme.sh --upgrade

In order to renew a concrete existing letsencrypt certificate:

# acme.sh --renew -d example.com

To renew all certificates using the acme.sh script:

# acme.sh --renew-all
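For completeness, issuing a brand new certificate with acme.sh in webroot mode looks roughly like the sketch below (example.com and the webroot path are placeholders to adjust for your own site):

# acme.sh --issue -d example.com -d www.example.com -w /var/www/example.com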


3. Generate Apache or NGINX Free SSL / TLS Certificate with certbot tool

Now let's generate a certificate for a domain running on an Apache webserver with a website WebRoot directory of /home/phpdev/public/www (in the examples below, example.com and www.example.com are placeholders for your real domains):


root@webserver:/ # certbot --apache -d example.com -d www.example.com

root@webserver:/ # certbot certonly --webroot -w /home/phpdev/public/www/ -d example.com -d www.example.com

As you see, all the domains for which a certificate will be generated are passed with the -d option.

Once the certificates are properly generated you can test them in a browser, and once you're sure they work as expected you can usually sleep safe for the next 3 months (90 days), which is the default validity of TLS / SSL Letsencrypt certificates; the reason behind the short lifetime is, of course, security.
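A quick way to check from the console when a deployed certificate actually expires is to query it with openssl (example.com is a placeholder for your domain):

root@webserver:/ # echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -dates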


4. Enable freshly generated letsencrypt SSL certificate in Nginx VirtualHost config

Go to your nginx VirtualHost configuration (i.e. /etc/nginx/sites-enabled/ ) and, inside the server { … } chunk of config, add after the location { … } block the 443 TCP port SSL listener lines (as shown in the configuration below):

server {

   location ~ \.php$ {
      include /etc/nginx/fastcgi_params;
##      fastcgi_pass;
      fastcgi_pass unix:/run/php/php7.3-fpm.sock;
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
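After adding the SSL listener lines (the example.com paths above are placeholders for your real certificate directory under /etc/letsencrypt/live/), check the nginx configuration syntax and reload the webserver:

root@webserver:/ # nginx -t
root@webserver:/ # systemctl reload nginx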


5. Enable new generated letsencrypt SSL certificate in Apache VirtualHost

In /etc/apache2/{sites-available,sites-enabled}/ you should have, as a minimum, a configuration setup like the one below (the example.com paths are again placeholders):


NameVirtualHost *:443
<VirtualHost *:443>
    HostnameLookups off
    DocumentRoot /var/www
    DirectoryIndex index.html index.htm index.php index.html.var

    CheckSpelling on
    SSLEngine on

    <Directory />
        Options FollowSymLinks
        AllowOverride All
        ##Order allow,deny
        ##allow from all
        Require all granted
    </Directory>
    <Directory /var/www>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        ##Order allow,deny
        ##allow from all
        Require all granted
    </Directory>

    Include /etc/letsencrypt/options-ssl-apache.conf
    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>
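As with nginx, after editing the VirtualHost make sure mod_ssl is enabled, verify the syntax and reload Apache:

root@webserver:/ # a2enmod ssl
root@webserver:/ # apachectl configtest
root@webserver:/ # systemctl reload apache2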


6. Simulate a certificate regenerate with –dry-run

Before the 90-day expiry period approaches, it is a good idea to test how all installed webserver certificates will be renewed and whether any issues are to be expected; this can be done with the --dry-run option.

root@webserver:/ # certbot renew --dry-run


– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
  /etc/letsencrypt/live/ (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –


7. Renew a certificate from a multiple installed certificate list

At some point, when you need to renew the letsencrypt domain certificates, you can list them and manually choose which one you want to renew.

root@webserver:/ # certbot --force-renewal
Saving debug log to /var/log/letsencrypt/letsencrypt.log

How would you like to authenticate and install certificates?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: Apache Web Server plugin (apache)
2: Nginx Web Server plugin (nginx)
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 3
Renewing an existing certificate
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
1: No redirect – Make no further changes to the webserver configuration.
2: Redirect – Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Redirecting all traffic on port 80 to ssl in /etc/nginx/sites-enabled/

– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Your existing certificate has been successfully renewed, and the new certificate
has been installed.

The new certificate covers the following domains:

You should test your configuration at:
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

 – Congratulations! Your certificate and chain have been saved at:

   Your key file has been saved at:
   Your cert will expire on 2021-03-21. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 – If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:
   Donating to EFF:          


8. Renew all present SSL certificates

root@webserver:/ # certbot renew

Processing /etc/letsencrypt/renewal/
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –
Cert not yet due for renewal


– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/ expires on 2021-03-01 (skipped)
  /etc/letsencrypt/live/ expires on 2021-02-28 (skipped)
  /etc/letsencrypt/live/ expires on 2021-02-28 (skipped)
  /etc/letsencrypt/live/ expires on 2021-03-01 (skipped)
  /etc/letsencrypt/live/ expires on 2021-02-25 (skipped)
  /etc/letsencrypt/live/ expires on 2021-03-21 (skipped)
  /etc/letsencrypt/live/ expires on 2021-02-28 (skipped)
  /etc/letsencrypt/live/ expires on 2021-03-01 (skipped)
No renewals were attempted.
– – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – – –



9. Renew all existing server certificates from a cron job

The certbot package installs a cron definition under /etc/cron.d/certbot that will attempt a renewal every 12 hours; however, from my experience this often does not work out of the box. The cron entry looks similar to the below:

# Upgrade all existing SSL certbot machine certificates


0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew
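If the packaged cron job does work on your setup, it is also handy to reload the webserver automatically after each successful renewal; certbot supports that via a deploy hook (a sketch assuming a systemd-managed nginx, swap in apache2 if that is what you run):

root@webserver:/ # certbot renew --deploy-hook "systemctl reload nginx"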

Another approach to renew all installed certificates, if you want to pass specific options and keep a log of what happened, is using a tiny shell script like the one below:


10. Auto renew installed SSL / TSL Certbot certificates with a bash loop over all present certificates

# update SSL certificates
# loops from 1 to 104 (one number per certbot-generated certificate), triggers a renew for each and logs what happened to a log file
# an ugly hack for certbot certificate renewal
for i in $(seq 1 104); do echo "Updating $i SSL Cert" | tee -a /root/certificate-update.log; yes "$i" | certbot --force-renewal | tee -a /root/certificate-update.log 2>&1; sleep 5; done

Note: the seq 1 104 range depends on the count of SSL certificates installed on the machine; you can see the proper value for your case by running certbot --force-renewal once interactively and checking the numbered list it prints.
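An easy way to see how many certificates are present on the machine (and hence what upper bound to give to seq) is to list them with the certificates subcommand, e.g.:

root@webserver:/ # certbot certificates | grep -c 'Certificate Name'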

Report haproxy node switch script useful for Zabbix or other monitoring

Tuesday, June 9th, 2020

For those who administer a corosync-clustered haproxy and need monitoring for the case when the main configured Haproxy node in the cluster changes, I've developed a small script to be integrated with the locally installed zabbix-agent that reports to a central zabbix server via a zabbix proxy.
The script is very simple: it assumes the DC1 variable is the default haproxy node and DC2 and DC3 are 2 backup nodes. The script uses crm_mon, which is not installed by default on every server, so if you'll be using it you'll have to install it first; anyway, the script can easily be adapted to use the pcs command instead.

Below is the bash shell script:

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }'); do ((f++)); DC[$f]="$i"; done; \
DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); \
if [ "$DC" == "${DC[1]}" ]; then echo "1 Default DC Switched to ${DC[1]}"; elif [ "$DC" == "${DC[2]}" ]; then \
echo "2 Default DC Switched to ${DC[2]}"; elif [ "$DC" == "${DC[3]}" ]; then echo "3 Default DC: ${DC[3]}"; fi

To integrate it with zabbix monitoring, it is configured as a UserParameter script.

The way I configured  it in Zabbix is as so:

1. Create the userparameter_active_node.conf

The script below is for a 3-node Haproxy cluster:

# cat > /etc/zabbix/zabbix_agentd.d/userparameter_active_node.conf

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }'); do ((f++)); DC[$f]="$i"; done; \
DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); \
if [ "$DC" == "${DC[1]}" ]; then echo "1 Default DC Switched to ${DC[1]}"; elif [ "$DC" == "${DC[2]}" ]; then \
echo "2 Default DC Switched to ${DC[2]}"; elif [ "$DC" == "${DC[3]}" ]; then echo "3 Default DC: ${DC[3]}"; fi

Once pasted, press CTRL + D to save the file.

The version of the script with 2 nodes slightly improved is like so:

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }' | sed -e 's#:##g'); do DC_ARRAY[$f]="$i"; ((f++)); done; GET_CURR_DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); if [ "$GET_CURR_DC" == "${DC_ARRAY[0]}" ]; then echo "1 Default DC ${DC_ARRAY[0]}"; fi; if [ "$GET_CURR_DC" == "${DC_ARRAY[1]}" ]; then echo "2 Default Current DC Switched to ${DC_ARRAY[1]} Please check"; fi; if [ -z "$GET_CURR_DC" ] || [ -z "${DC_ARRAY[1]}" ]; then printf "Error something might be wrong with HAProxy Cluster on $HOSTNAME "; fi;

The script with a bit more comments and explanations is available here.
2. Configure access for /usr/sbin/crm_mon for zabbix user in sudoers


# vim /etc/sudoers

zabbix          ALL=NOPASSWD: /usr/sbin/crm_mon
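Before restarting the zabbix agent it is worth checking that the zabbix user can really run crm_mon via sudo and that the new key returns data; a quick hedged check (assuming zabbix-agent is already installed on the node):

# sudo -u zabbix sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC'
# zabbix_agentd -t active.dc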

3. Configure an Item and a Trigger for the active.dc key in Zabbix


Fix FTP active connection issues “Cannot create a data connection: No route to host” on ProFTPD Linux dedicated server

Tuesday, October 1st, 2019


Earlier I've blogged about an encountered problem that prevented Active mode FTP connections on CentOS.
As I'm working for a client setting up a brand new dedicated server purchased from the Contabo dedicated hosting provider, with a freshly installed Debian 10 GNU / Linux, I had to configure a new FTP server. For some time now I prefer ProFTPD over VSFTPD because in my opinion it is more lightweight and hence a better choice for small UNIX server setups. During this I once again encountered the same problem of ACTIVE FTP not working from the FTP server to the FTP client host machine. But before shortly explaining the fix, I find it worthy to briefly explain what an ACTIVE / PASSIVE FTP connection is.


1. What is ACTIVE / PASSIVE FTP connection?

In active mode, the client specifies on which client-side port the data channel has been opened and the server initiates the data connection. In other words, the default FTP client communication, for historical reasons, is ACTIVE MODE: the client, once connected to the server, tells the server which extra port (or ports) it has opened locally, via which the overall FTP data transfer will occur. In the early days of networking when the FTP protocol was developed, security was not such a big concern and networks usually did not have firewalls at all; the FTP data transfer machine was running just a single FTP server and nothing more. In those early days, when FTP was not even used over the Internet and FTP data transfers happened on local networks, this was not a problem at all.

In passive mode, the server decides which server-side port the client should connect to. Then the client starts the connection to the specified port.

But with the ever-increasing complexity of the Internet / networks and the ever-tightening firewalls (due to viruses and worms that try to own and exploit networks, creating unnecessary bulk loads), this has changed …


2. Installing and configure ProFTPD server Public ServerName

I've installed the server with the common cmd:


apt --yes install proftpd


And the only configuration change in the default configuration file /etc/proftpd/proftpd.conf was:

ServerName          "Debian"

I do this in new FTP setups for the logical reason of preventing the numerous FTP vulnerability-scanning script kiddie crawlers from learning the exact OS of the server, so this was changed to:


ServerName "MyServerHostname"


Though this is the often-criticized security through obscurity, doing so is still a good practice.

3. Create iptable firewall rules to allow ACTIVE FTP mode

But anyway, the next step was to configure the firewall to allow communication on TCP ports 21 and 20 to/from the incoming source port range 1024:65535 (to enable ACTIVE FTP), with iptables INPUT and OUTPUT chain rules like this:


iptables -A INPUT -p tcp --sport 1024:65535 -d 0/0 --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A INPUT -p tcp -s 0/0 --sport 1024:65535 -d 0/0 --dport 20 -m state --state NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 21 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -s 0/0 --sport 20 -d 0/0 --dport 1024:65535 -m state --state ESTABLISHED,RELATED -j ACCEPT
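To keep these rules across reboots you can persist them; a minimal sketch assuming the iptables-persistent package on Debian:

apt --yes install iptables-persistent
iptables-save > /etc/iptables/rules.v4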

A few words on Active and Passive FTP connections were already said above for novice Linux users, so let's move on to verifying the setup.

Once the firewall rules for FTP Active / Passive connections are in place and the FTP server is listening, to test that all is properly configured check the iptables rules and the FTP listener:

/sbin/iptables -L INPUT |grep ftp
ACCEPT     tcp  —  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp state NEW,ESTABLISHED
ACCEPT     tcp  —  anywhere             anywhere             tcp spts:1024:65535 dpt:ftp-data state NEW,ESTABLISHED
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:ftp
ACCEPT     tcp  —  anywhere             anywhere             tcp dpt:ftp-data

netstat -l | grep "ftp"
tcp6       0      0 [::]:ftp                [::]:*                  LISTEN    


4. Loading the nf_nat_ftp module and net.netfilter.nf_conntrack_helper (for backward compatibility)

The next step of course was to add the necessary modules nf_nat_ftp and nf_conntrack_sane, which make FTP properly forward ports with the respective firewall connection states on any of the above source ports that are usually allowed by firewalls. Note that the given port range 1024:65535 might be too liberal for paranoid sysadmins; if ports are not otherwise filtered and you are a security freak, you can use a smaller range such as 60000-65535.


Here is the place to say that sysadmins who haven't recently had the task of configuring a new (unencrypted) File Transfer Server (as today Secure FTP is almost always used for file transfers for the sake of security) might be puzzled to find out that the old Linux kernel module ip_conntrack_ftp, which was the standard module used to make FTP Active connections work, is nowadays substituted by nf_nat_ftp and nf_conntrack_sane.

To make the 2 modules permanently loaded on next boot on Debian Linux they have to be added to /etc/modules

Here is how sample /etc/modules that loads the modules on next system boot looks like

cat /etc/modules
# /etc/modules: kernel modules to load at boot time.
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
nf_nat_ftp
nf_conntrack_sane
Next to say is that in newer Linux kernels 3.x / 4.x / 5.x the nf_nat_ftp and nf_conntrack_sane behaviour changed, so simply loading the modules will not be enough, and if you test it with some FTP client (I used gFTP / ncftp from my Linux desktop) you are about to get FTP "No route to host" errors like:


Cannot create a data connection: No route to host



Sometimes, instead of the No route to host error, the error the FTP client returns is:


227 entering passive mode FTP connect connection timed out error

Hence, to make the nf_nat_ftp module work on newer Linux kernels you have to enable the backwards-compatibility kernel variable:





echo 1 > /proc/sys/net/netfilter/nf_conntrack_helper


To make it permanent, you can add the above line to /etc/rc.local if you have enabled that legacy single-file boot place as I do on servers (for how to enable rc.local on newer Linux releases check here),

or alternatively add it to load via sysctl

sysctl -w net.netfilter.nf_conntrack_helper=1

And to make change permanent (e.g. be loaded on next boot)

echo 'net.netfilter.nf_conntrack_helper=1' >> /etc/sysctl.conf
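To verify that the modules are loaded and that the compatibility toggle took effect, something along these lines can be used:

lsmod | grep -E 'nf_nat_ftp|nf_conntrack_sane'
sysctl net.netfilter.nf_conntrack_helper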


5. Enable PassivePorts in ProFTPD or PassivePortRange in PureFTPD

Last but not least, open /etc/proftpd/proftpd.conf, find the PassivePorts config value (commented out by default) and next to it add the following line:


PassivePorts 60000 65534


Just for information, if instead of ProFTPD you experience the error on PureFTPD, the configuration value to set in /etc/pure-ftpd.conf is:

PassivePortRange 30000 35000

That's all folks. Fire up ncftp / lftp / FileZilla or whatever FTP client you prefer and test it; the FTP client should be able to talk to the remote server in ACTIVE FTP mode as expected, the automatic fallback to passive mode will not be triggered anymore, and you will no longer get strange errors and connection failures in FTP clients such as gFTP.
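Don't forget to restart ProFTPD after the configuration changes; a quick hedged way to force an active-mode test from another Linux box with lftp (someuser and example.com are placeholders):

systemctl restart proftpd
lftp -u someuser -e 'set ftp:passive-mode off; ls; bye' example.com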

Cheers 🙂

Mount remote Linux SSHFS Filesystem harddisk on Windows Explorer SWISH SSHFS file mounter and a short evaluation on what is available to copy files to SSHFS from Windows PC

Monday, February 22nd, 2016


I'm forced to use Windows on my work notebook and I found it really irritating that I can't easily share files other than through DropBox, Google Drive, MS OneDrive, Amazon Storage or some other free cloud-storage remote service, etc.
I don't want to use DropBox-like non-self-hosted data storage because I want to keep my data private, and therefore the only and best option for me was to make it possible to share my Linux hard disk storage
directory remotely to the Windows notebook.

I didn't want to set up some complex environment such as a Samba Share Server (which used to be a common option to share files from a Linux server to Windows), neither did I want to bother with installing an FTP service and FTP clients, or configuring some other complex stuff such as WebDav (which BTW is an accepted and heavily used solution among corporate clients to read / write files on remote Linux servers).
Hence, I did some quick research on what else could be used to easily share files and data from a Windows PC / notebook to a home-brew or professionally hosted Linux server.

It turned out there are a few pieces of software that give a similar possibility to home LAN small-network Linux / Windows hybrid network users; here are a few of the many:

  • SyncThing – Syncthing is an open-source file synchronization client/server application, written in Go, implementing its own, equally free Block Exchange Protocol. The source code's content-management repository is hosted on GitHub







  • OwnCloud – ownCloud provides universal access to your files via the web, your computer or your mobile devices







  • Seafile – Seafile is a file hosting software system. Files are stored on a central server and can be synchronized with personal computers and mobile devices via the Seafile client. Files can also be accessed via the server's web interface


I've checked all of them and gave a quick try to Syncthing, which is really easy to start: just download the binary, launch it and configure it via the https://localhost:8385 URL from a browser on the Linux server.
Syncthing seemed to be a nice and easy to configure solution for syncing files between Server A (Windows) and Server B (Linux), and I guess many would enjoy it; if you want to give it a try you can follow this short install syncthing article.
However, what I disliked in SyncThing, OwnCloud, Seafile and all of the other file-sync solutions was that they only supported synchronization via the web, didn't seem to have Windows Explorer integration, and required
the server to run more services, posing another security hole in the system, as such third-party software is not easy to update and maintain.

Because of that, finally, after thinking about some other ways to copy files to a locally mounted sync directory from the Linux server, I decided to give SSHFS a try. Mounting SSHFS between two Linux / UNIX hosts is
quite an easy task with the sshfs tool.
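For reference, on a Linux client the whole mount boils down to something like the following sketch (user, host and paths are placeholders):

apt-get install --yes sshfs
mkdir -p /mnt/remote-home
sshfs user@server.example.com:/home/user /mnt/remote-home
# and to unmount it later
fusermount -u /mnt/remote-home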

On Windows, however, the only ways I knew to transfer files to Linux via SSHFS were the WinSCP client and other SCP clients, as well as the experimental:

As well as few others such as ExpandDrive, Netdrive, Dokan SSHFS (mirrored for download here)
I should say that I first decided to try copying a few dozen gigabytes of movies, texts, books etc. using a WinSCP direct connection, but after getting a couple of timeouts I got tired of WinSCP and decided to look for a better way to copy to the remote Linux SSHFS.
The best solution I found after a bit of extensive searching turned out to be:

SWISH – Easy SFTP for Windows

Swish is very straightforward to configure compared to all of them: you download the .exe (which as of time of writing is at version 0.8.0), install it on the PC, and right in My Computer you will get a new device called Swish next to your local and remote drives C:/, D:/, USBs etc.


As you see in below screenshot two new non-standard buttons will Appear in Windows Explorer that lets you configure SWISH


The next and final step, before you have the remote Linux SSHFS filesystem visible on Windows XP / 7 / 8 / 10, is to fill in the remote Linux hostname (or, even better, fill in the IP to get rid of possible DNS issues), the UserName (UserID) and the Directory to mount.


Then you will see the SSHFS mounted:


You will be asked to accept the SSH host-key as it used to be unknown so far


That's it now you will see straight into Windows Explorer the remote Linux SSHFS mounted:


Once you have set up a Swish connection, to copy files directly to it you can use the embedded Windows "Send to" dialog, as in the below screenshot


The only 3 problems with SWISH are:

1. It doesn't support saving the password, so on every Windows PC reboot when you want to connect to the remote Linux SSHFS you will have to retype the remote login user password.
From a security standpoint this is not such a bad thing, but it is a bit irritating to type the password every time without an option to save it permanently.
The good thing here is you can use Launch Key Agent
(as visible in the above screenshot) and load your remote host SSH key into the PuTTY key agent, so passwordless login will work without any interactive authentication at
all. However, this might open a security hole if your Windows PC gets infected by a virus, which might delete something on the remotely mounted SSHFS filesystem, so I personally prefer to retype the password on every boot.

2. It is a bit slow, so if you're planning to transfer large amounts of data, such as hundreds of megabytes, expect a very slow transfer rate; even in a local 10Mbit network, transferring 20 – 30 GB of data took me about 2-3 hours or so.
SWISH is not actively supported and hasn't had a new release since the 20th of June 2013, but for the general work I need it for it is just perfect, as I don't tend to be synchronizing dozens of gigabytes all the time between my notebook PC and the Linux server.

3. If you don't use the established mounted connection for a while, or your computer goes into sleep mode, then after waking up the connection to the remote Linux HDD opened in Windows File Explorer will probably be dead and you will have to re-enable it.

Mac OS X users who want to mount / attach a remote directory from a Linux partition should look into Fugu – A Mac OS X SFTP, SCP and SSH Frontend.

I'll be glad to hear from people about other good ways to achieve the same results as with SWISH but with a better copy speed while using SSHFS.

Windows “God Mode” – one shortcut to see and configure all settings in Microsoft Windows 7 / 8 / 10 – Windows Master Control Panel hidden feature

Monday, January 25th, 2016

One very handy "secret" feature of Windows Operating System which is very useful to people who administrate a dozen of Windows servers daily is called "God Mode".
The idea behind "God Mode" is pretty simple it aims to give you maximum control and viability concentrated in one single Window interface.

God Mode has been ranted about quite a lot over the past years, so it is likely that many of my blog readers are already aware of that Windows secret, but for those who aren't it will be
nice to check it out. To see the God Mode Windows functionality just create a New Folder on the Windows Desktop or anywhere on the Windows PC and rename the New Folder to:

GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

By creating a folder with this text string you will be able to do almost everything you ever tend to do on Windows: change the theme and mouse cursor, change
Windows Explorer's folder options, modify fonts, change the cursor blink rate, get Windows performance tools and information, add / remove programs, modify the language, modify
firewall settings and, in short, do everything that is provided by the Control Panel plus some other goodies like Administrative Tools, Restore Options, Event logs etc., grouped in a fantastically readable manner.
The GodMode naming says it all: more or less, it aims to give you "Godlike" access to Windows. Of course, to be able to properly use the feature you will have to create
the folder named GodMode.{ED7BA470-8E54-465E-825C-99712043E01C} with an Administrator user.
GodMode has been available in Windows OSes for quite a long time (since 2007) and is documented officially by Microsoft.

Another alternative shortcut that brings up the GodMode Master Control Panel is to paste the following into the Windows Explorer address bar:

shell:::{ED7BA470-8E54-465E-825C-99712043E01C}

Enjoy 🙂

Clean slow Windows PC / Laptop from Spyware, Malware, Viruses, Worms and Trojans – Anti-Malware Program Arsenal

Monday, January 26th, 2015


Malware Bytes is a great tool to clean a PC in a quick and efficient way from malware / spyware that wormed its way in while browsing infectious sites on the internet.
But sometimes PCs that have to be fixed are so badly infected with spyware, malware and viruses that even after running Malware Bytes at boot time, leftover worms or viruses automatically download themselves from the Internet again, or have been polymorphically renamed to newer variants that escape the Malware Bytes badware database and heuristics.
Such problematic PCs are usually unmaintained user PCs whose anti-virus protection with Nod32 or Kaspersky has long-expired licensing, leaving the PC without any means of protection / PCs with a removed firewall / AV program (due to virus or malware infection), or computers which were actively used to download cracked programs and games (by small kids) or PCs used for heavily watching porn (by teenagers).

Here is a list of top useful freeware anti-malware software you can use in combination with MalwareBytes to clean / fix a Windows PC that is in an almost unsolvable state (and obviously needs a re-install) but contains too much software that is either obsolete or hard (time-wasting) to configure:

The below anti-malware goodies help in "resurrecting" even the worst-infected PC, so I believe every Windows admin should know them well, and in computer clubs and university Windows computer networks with Internet access it is recommended to check computers at least once a year …

1. Remove Bootkits and Trojans with Kaspersky TDSSKiller

A bootkit is a rootkit which loads when the Windows system boots. To search for and destroy bootkits, download the latest official version of Kaspersky TDSSKiller.


KASPERSKY TDSSKILLER DOWNLOAD LINK – run Kaspersky TDSSKiller (after changing parameters – enable Detect TDLFS file system) and remove any found infections.

2. Download and use latest official version of RKill to terminate any malicious processes running in background


Please note that you will have to rename your copy of RKill so that malicious software won't block the utility from running (the link will automatically download RKill renamed as iExplore.exe).
Double-click on iExplore.exe to start RKill and stop any processes associated with Luhe.Sirefef.A.


RKill will now start working in the background, please be patient while the program looks for any malicious process and tries to end them.
When the Rkill utility has completed its task, it will generate a log.

Do not reboot your computer after running RKill as the malware programs will start again.


3. Clean (any remaining) malware from your computer with HitmanPro



My Mirror of HitmanPro 3.7 (32 bit) Windows version is here
My Mirror of HitmanPro 3.7 (64 bit) Windows version is here

Because HitmanPro is unfortunately proprietary software, when you run a scan on the computer click the "Activate free license" button to begin the free 30-day trial, and remove all the malicious files found on your computer.

4. Remove Windows adware with AdwCleaner

The AdwCleaner utility will scan your computer and web browser for the malicious files, browser extensions and registry keys, that may have been installed on your computer without your knowledge.


Here is the AdwCleaner utility ADWCLEANER DOWNLOAD LINK
My download mirror of AdwCleaner 4.109 is here

Note that before starting AdwCleaner you should close all open programs and internet browsers. After finishing the scan, AdwCleaner requires a reboot (always back up first, because you never know what can happen).

5. Remove any malicious registry keys added by malware with RogueKiller


RogueKiller is a utility that will scan for unwanted registry keys and any other malicious files on your computer. It is pretty much like the free software Little Registry Cleaner, but it is specialised in removing junk keys left behind by common malware.

download the latest official version of RogueKiller from the below links.

ROGUEKILLER x86 DOWNLOAD LINK (For 32-bit machines)
ROGUEKILLER x64 DOWNLOAD LINK (For 64-bit machines)

Download Mirror link of Roguekiller X86 is here
Download Mirror link of Roguekiller X64 is here

Wait for the Prescan to complete. This should take only a few seconds, then click on the "Scan" button to perform a system scan. After the scan completes, delete any found malicious hax0r registry keys.

6. Purge any leftover infections on your computer with Emsisoft Anti-Malware


Emsisoft scans the (potentially) infected PC for viruses, trojans, spyware, adware, worms, dialers, keyloggers and other badware.

DOWNLOAD EMSISOFT EMERGENCY KIT HERE  – The link will open in new window tab. Note that EmsiSoftEmergencyKit is huge 168 Mbs!

My mirror of EmsiSoft Emergency kit is here

It is recommended to do the SMART Scan as it is more complete, though if you're in a hurry Quick Scan might also find something ugly. Once Scan completes Quarantine any found infected items.

It is best if all of the 7 Windows cleaners are run, i.e.:

(TDSSKiller, RKill, HitmanPro, AdwCleaner, RogueKiller, Little Registry Cleaner and EmsiSoft) in the consecutive order in which they're shown in the article. Finally, a run of Malware Bytes just to make sure nothing has remained is a good idea too.

Hopefully now you should be malware free. If you know other useful anti-spyware tools that have helped you in cases of PC malware slowness problems (constant hard disk reads/writes), please drop a comment and I will include them in this list.
Once the badware is removed from your PC or laptop, the CPU should no longer show as constantly busy with some strange process in taskmgr and the notebook should be much more responsive; and (if you have power management enabled) it will consume less energy, reducing your electricity bills 🙂

Any feedback on experience with running above bunch of anti spy programs is also mostly welcome. 

How to check who is flooding your Apache, NGinx Webserver – Real time Monitor statistics about IPs doing most URL requests and Stopping DoS attacks with Fail2Ban

Wednesday, August 20th, 2014


If you're a Linux system administrator in a webhosting company providing WordPress / Joomla / Drupal website hosting, and your UNIX servers suffer from periodic denial of service attacks because some of your site customers' businesses are a target of a competitor company trying to ruin your clients' business sites through DoS or DDoS attacks, then the best thing you can do is identify who is hammering the Linux server and how. If you find out the DoS is not on a network level, but Apache keeps crashing because of memory leaks and the connections to Apache are so many that the CPU is being stoned, the best thing to do is to check which IP addresses are causing the excessive GET / POST / HEAD requests in the logs.

There is the Apachetop tool that can give you the most-accessed webserver URLs in a refreshing screen like the UNIX top command; however, Apachetop does not show which IP does the most URL hits on the Apache / Nginx webserver.


1. Get basic information on which IPs accesses Apache / Nginx the most using shell cmds

Before examining the Webserver logs it is useful to get a general picture on who is flooding you on a TCP / IP network level, with netstat like so:

# here is howto check clients count connected to your server
netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

If you get an extensive number of various connected IPs / hosts (like 10000 or some similarly huge number), then depending on the type of hardware the server is running on and the scaling previously planned for the system, you can determine whether a count as huge as this can be handled normally by the server. If, as in most cases, the server is planned to serve a couple of hundred or thousand clients and you get over 10000 connections hanging, then your server is under attack; or, if it is an Internet-facing server, your website might have suddenly become famous, e.g. someone posted an article on some major website and you suddenly received tons of hits.

There is a way using standard shell tools, to get some basic information on which IP accesses the webserver the most with:

tail -n 500 /var/log/apache2/access.log | cut -d' ' -f1 | sort | uniq -c | sort -gr

Or if you want to keep it refreshing periodically every few seconds run it through watch command:

watch "tail -n 500 /var/log/apache2/access.log | cut -d' ' -f1 | sort | uniq -c | sort -gr"


Another useful combination of shell commands is to Monitor POST / GET / HEAD requests number in access.log :

 awk '{print $6}' access.log | sort | uniq -c | sort -n

     1 "alihack<%eval
      1 "CONNECT
      1 "fhxeaxb0xeex97x0fxe2-x19Fx87xd1xa0x9axf5x^xd0x125x0fx88x19"x84xc1xb3^v2xe9xpx98`X'dxcd.7ix8fx8fxd6_xcdx834x0c"
      1 "x16x03x01"
      1 "xe2
      2 "mgmanager&file=imgmanager&version=1576&cid=20
      6 "4–"
      7 "PUT
     22 "–"
     22 "OPTIONS
     38 "PROPFIND
   1476 "HEAD
   1539 "-"
  65113 "POST
 537122 "GET

However, using combinations of shell commands takes plenty of typing and is hard to remember; plus, the above tools do not show you approximately how frequently each IP hits the webserver.


2. Real-time monitoring IP addresses with highest URL reqests with logtop


Real-time monitoring of the IP addresses with the highest number of URL requests is possible, with no need for "console ninja skills", through logtop.


2.1 Install logtop on Debian / Ubuntu and deb derivatives Linux


a) Installing Logtop the debian way

LogTop is easily installable on Debian and Ubuntu; in the newer releases (Debian 7.0 and Ubuntu 13/14 Linux) it is part of the default package repositories and can be straightly apt-get-ed with:

apt-get install --yes logtop

b) Installing Logtop from source code (install on older deb based Linuxes)

On older Debian (Debian 6) and Ubuntu 7-12 servers, to install logtop compile it from source code – read the README installation instructions or, if lazy, copy / paste below:

cd /usr/local/src
mv master JulienPalard-logtop.tar.gz
tar -zxf JulienPalard-logtop.tar.gz

cd JulienPalard-logtop-*/
aptitude install libncurses5-dev uthash-dev

aptitude install python-dev swig

make python-module

python setup.py install


make install


mkdir -p /usr/bin/
cp logtop /usr/bin/

2.2 Install Logtop on CentOS 6.5 / 7.0 / Fedora / RHEL and rest of RPM based Linux-es

b) Install logtop on CentOS 6.5 and CentOS 7 Linux

– For CentOS 6.5 you need to rpm install epel-release-6-8.noarch.rpm

rpm -ivh epel-release-6-8.noarch.rpm
rpmbuild --rebuild
cd /root/rpmbuild/RPMS/noarch
rpm -ivh uthash-devel-1.9.9-6.el6.noarch.rpm

– For CentOS 7 you need to rpm install epel-release-7-0.2.noarch.rpm



Click on and download epel-release-7-0.2.noarch.rpm

rpm -ivh epel-release-7-0.2.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
yum -y install git ncurses-devel uthash-devel
git clone https://github.com/JulienPalard/logtop.git
cd logtop
make install


2.3 Some Logtop use examples and short explanation


logtop shows 4 columns as follows – Line number, Count, Frequency, and Actual line


The quickest way to visualize which IP is stoning your Apache / Nginx webserver on Debian?


tail -f access.log | awk {'print $1; fflush();'} | logtop




On CentOS / RHEL

tail -f /var/log/httpd/access_log | awk {'print $1; fflush();'} | logtop


Using LogTop even Squid Proxy caching server access.log can be monitored.
To get squid Top users by IP listed:


tail -f /var/log/squid/access.log | awk {'print $1; fflush();'} | logtop


Or you might visualize in real-time squid cache top requested URLs

tail -f /var/log/squid/access.log | awk {'print $7; fflush();'} | logtop



3. Automatically Filter IP addresses causing Apache / Nginx Webservices Denial of Service with fail2ban

Once you identify the problem, if the sites hosted on the server are a target of a Distributed DoS, probably the best thing to do is to use fail2ban to automatically filter (ban) IP addresses doing excessive queries to system services. Install fail2ban (as explained in the above link) on Debian / Ubuntu Linux with:

apt-get install --yes fail2ban

To make fail2ban start filtering DoS attack IP addresses, you will have to set the following configurations:

vim /etc/fail2ban/jail.conf

Paste in the file:

[http-get-dos]
enabled = true
port = http,https
filter = http-get-dos
logpath = /var/log/apache2/WEB_SERVER-access.log
# maxretry is how many GETs we can have in the findtime period before getting narky
maxretry = 300
# findtime is the time period in seconds in which we're counting "retries" (300 seconds = 5 mins)
findtime = 300
# bantime is how long we should drop incoming GET requests for a given IP for, in this case it's 5 minutes
bantime = 300
action = iptables[name=HTTP, port=http, protocol=tcp]

Before you paste, make sure you put the proper logpath = location of the webserver log (the default one is /var/log/apache2/access.log); if you're using separate logs for each of the hosted websites, you will probably want to write a script to automatically loop through the logs directory, get the log file names, and add an auto-modified version of the above [http-get-dos] configuration for each. Also configure maxretry per IP, findtime and bantime; in the above example the values are a bit low, and for heavily loaded websites which have to serve thousands of simultaneous connections originating from office networks using Network Address Translation (NAT) they might be too low and should be tuned, to prevent situations where even your own customers can't access their websites 🙂

To finalize fail2ban configuration, you have to create fail2ban filter file:

vim /etc/fail2ban/filters.d/http-get-dos.conf


# Fail2Ban configuration file
# Author:
# Option: failregex
# Note: This regex will match any GET entry in your logs, so basically all valid and not valid entries are a match.
# You should set up in the jail.conf file, the maxretry and findtime carefully in order to avoid false positives.
failregex = ^<HOST> -.*"(GET|POST).*
# Option: ignoreregex
# Notes.: regex to ignore. If this regex matches, the line is ignored.
# Values: TEXT
ignoreregex =

To make fail2ban load the newly created configs, restart it:

/etc/init.d/fail2ban restart
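Once fail2ban is up again, you can check that the new jail is active and, if needed, manually unban an address; a hedged example (the jail name matches the [http-get-dos] section above, the IP is a placeholder):

fail2ban-client status http-get-dos
fail2ban-client set http-get-dos unbanip 1.2.3.4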

If you want to test whether it is working you can use Apache webserver benchmark tools such as ab or siege.
The quickest way to test whether excessive IP requests get filtered (and get your own IP banned temporarily) is:

ab -n 1000 -c 20 http://your-web-site-dot-com/

This will make 1000 page loads over 20 concurrent connections and will get your IP temporarily banned for 300 seconds = 5 minutes. The ban will be logged in /var/log/fail2ban.log, where you will get something like:

2014-08-20 10:40:11,943 fail2ban.actions: WARNING [http-get-dos] Ban
2013-08-20 10:44:12,341 fail2ban.actions: WARNING [http-get-dos] Unban

StatusNet – Start your own hosted microblogging twitter like social network on Debian GNU / Linux

Monday, July 14th, 2014
I like experimenting with free and open source projects providing social networking capabilities like twitter and facebook. Historically I have run my own social network with Elgg – Open Source Social Network Engine. I had a very positive impression of Elgg as a social engine, as there are plenty of plugins and one can use Elgg to run a free, very basic equivalent of facebook. The problem I had with Elgg, however, is that if it is not monitored all the time it quickly fills up with spam, and besides that I found it to be still buggy and not easy to update.
The other free social network software I have heard of is BuddyPress, which I recently installed with Multisite (MU site) enabled.

Since BuddyPress is WordPress based and it supports all the nice wordpress plugins, my impression is that social networking based on wordpress behaves much more stably, and since there is Akismet for WordPress, the amount of spammer registrations is much lower than with Elgg.

Recently I started blogging much more actively, and I realized that every day I learn from and read many interesting articles which I don't log anywhere, so I thought I need a way besides twitter to keep quick notes on what I'm doing, reading and learning. I deliberately don't want to use Twitter, because I don't want to improve twitter's site SEO by adding my own stuff to their website; instead I want to keep my notes on my own locally hosted server.

As I didn't want to lose time with the complexity of Elgg anymore and wanted to try something new, I went for the open source microblogging social network (Twitter equivalent) I knew of – StatusNet – Free and Open Source Social software. StatusNet is well known under the motto of a "Decentralized Twitter".


I took the time to grab it and install it on my home-brew machine. If you haven't seen StatusNet so far, you can check out a demo of my installation here – registration is not freely open because I don't want spammers to break in; however, if you want to give it a try drop me a mail or a comment below and I will open access for you ..

There is no native statusnet package for Debian Linux (as I'm running Debian), so to install it I've grabbed the statusnet source archive.

To install StatusNet, everything was pretty straightforward, literally following the instructions from the INSTALL file, i.e.:

# maps to /var/www/status/
cd /var/www/status/
tar -xzf statusnet-0.9.6.tar.gz --strip-components=1
rm statusnet-0.9.6.tar.gz
cd ..
chgrp www-data status/
chmod g+w status/
cd status/
chmod a+w avatar/ background/ file/

mysqladmin -u "root" -p "sql-root-password" create statusnet
mysql -u root -p
GRANT ALL on statusnet.* TO 'statusnetuser'@'localhost' IDENTIFIED BY 'statusnet-secret-password';

To change the default behaviour of URLs to be more SEO friendly and not show .php in the URL (e.g. to add so-called fancy URLs, described in INSTALL):

cp htaccess.sample .htaccess

Then I had to configure a VirtualHost under a subdomain; alternatively you can install it and access it in the browser via a sub-path of an existing site.

An important note here is that you have to set the URLs via which the site will be accessed before proceeding with the install; if you will be using HTTPS, now is the time to configure and test it before proceeding with the install … Just be warned that if you don't set the URLs properly now and try to modify them later, you will get a lot of hard-to-solve issues which will cost you a lot of time and nerves ..
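For reference, a minimal Apache VirtualHost for a StatusNet subdomain might look roughly like the sketch below (status.example.com and the paths are placeholders; it assumes Apache 2.4 syntax and that mod_rewrite plus the fancy-URL .htaccess from above are in use):

<VirtualHost *:80>
    ServerName status.example.com
    DocumentRoot /var/www/status
    <Directory /var/www/status>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>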

If you want to enable twitter bridging in Statusnet you will need to get Twitter consumer and secret keys; to get those you have to create a new application on the Twitter developers site, after which you will be taken to a page containing the Consumer Key & Consumer Secret strings.
StatusNet installation will auto generate config.php, you can further edit it manually with text editor. Content of my current statusnet config.php is here.

Most important options to change are:

$config['daemon']['user'] = 'www-data';
$config['daemon']['group'] = 'www-data';

www-data is user with which Apache is running by default on Debian Linux.

$config['site']['profile'] = 'private';

By default Status.Net will be set to run as private – i.e. it will be fitted for private use and messages posted will not be publicly visible. The possible options to choose between here are:

$config['site']['profile'] = 'private';
$config['site']['profile'] = 'community';
$config['site']['profile'] = 'singleuser';
$config['site']['profile'] = 'public';

singleuser is pretty self-explanatory; setting the public option will open registration to any user on the internet (your network will probably quickly fill up with spam, so beware with this option). community will make statusnet publicly visible, but registration will only be possible via user creation / invitation to join the network from the admin.

vi /var/www/status/config.php
$config['site']['fancy'] = true;

Then to enable twitter to statusnet bridge add to config.php

vi /var/www/status/config.php

$config['twitter']['enabled'] = true;
$config['twitterimport']['enabled'] = true;
$config['avatar']['path'] = '/avatar';
$config['twitter']['consumer_key'] = 'XXXXXXXX';
$config['twitter']['consumer_secret'] = 'XXXXXXXX';
# disable sharing location by default
$config['location']['sharedefault'] = 'false';

Notice, I decided to disable the statusnet sharing default because I don't have a lot of free space to provide to users. If you want to let users share files (and you have the space for that), you might want to set a maximum quota for uploaded files (to prevent your webserver from being DoSed, for example by too many huge uploads); here are some reasonable settings:

$config['attachments']['file_quota'] = 7000000;
$config['attachments']['thumb_width'] = 400;
$config['attachments']['thumb_height'] = 300;


If you want to get the best out of performance of your new statusnet microblogging service, after each modification of config.php be sure to run:


php scripts/checkschema.php

Running checkschema.php is also useful after adding new plugins, to check whether the plugin will throw an error or not.

Here is some extra useful config.php plugins to enable:

addPlugin('Gravatar', array());

If you expect to have a quickly growing base of users, it is also recommended to check whether your MySQL is tuned with mysqltuner and optimize it for performance.

Another useful thing you might like to do is to increase the number of Statusnet avatars shown in the 'following' – 'followers' – 'groups' sections on the profile page by editing




line 36 in both files.
To get the full list of possible variables that can be set in config.php run in terminal:

 php scripts/setconfig.php -a

If you happen to encounter some oddities and issues with StatusNet after installation, this is most likely due to some PHP hardening at compile time, some php.ini functions being disabled for security, or some missing component, described as a prerequisite in the statusnet INSTALL file.

To debug the issues, enable statusnet logging by adding to config.php:

$config['site']['logdebug'] = true;
$config['site']['logfile'] = '/var/log/statusnet.log';

By default the logs produced will be quite verbose, with plenty of information you will probably not need, leading to a lot of "noise"; to get around this there is the LogFilter plugin. For some reasonable logging, use in config.php:

addPlugin('LogFilter', array( 'priority' => array(LOG_ERR => true,
LOG_INFO => true,
LOG_DEBUG => false),
'regex' => array('/About to push/i' => false,
'/twitter/i' => false,
'/Successfully handled item/i' => false)
));

If you want to keep logs of statusnet, it is a good idea to rotate the logs periodically to keep them from growing too big; to do this create a new file /etc/logrotate.d/statusnet with the following content:

/var/log/statusnet/*.log {
rotate 7
create 770 www-data www-data
postrotate
/path/to/statusnet/scripts/ > /dev/null
/path/to/statusnet/scripts/ > /dev/null
endscript
}
You will probably want to add new links next to the StatusNet main navigation links for logged-in users; this is possible by adding a new line to




You will have to add a PHP context like:

                              _m('MENU','Pc-Freak.Net Blog'),
                              _('A pC Freak Blog'),

Once you're done with the installation, make sure you change the permissions of, or move, install.php from /var/www/status, otherwise someone might overwrite your config.php and mess up your installation …

chmod 000 /var/www/status/install.php

There are plenty of other things to do with StatusNet (support for communication over the Jabber XMPP protocol, notification via SMS, etc.). There are also some plugins to add more statusnet functionality.

Enjoy micro blogging ! 🙂

How to configure equivalent of Linux /etc/resolv.conf search in MS Windows – DNS Suffix

Thursday, June 26th, 2014


Linux's default file that defines which DNS servers will be used, /etc/resolv.conf, typically contains directives with the default search domain or domains, used for FQDN (Fully Qualified Domain Name) completion when no domain suffix is supplied as part of the DNS query. Let's say sub-domains under a given domain have to be accessed; in /etc/resolv.conf there is something like:

search example.com
That is very handy when you have to ssh to, or open in a web browser, multiple servers each residing under a single main domain name (for example server1.example.com, server2.example.com, etc.): you can do so by typing only the sub-domain name in the browser, or passing only the sub-domain name to SSH, i.e.:



ssh user@server1
ssh user@server2

Here is /etc/resolv.conf from

# cat /etc/resolv.conf



Here is an example of what I mean: ascii-games is a sub-domain of the search domain and is resolved with no need to type the full FQDN:


# host ascii-games has address

The resolver knows that all queries that fail to resolve as typed should be searched for (resolved) under the defined search domain, i.e. each DNS query for server2, serverX, etc. would be retried as a subdomain of the configured search domain.

Therefore, a very good question is: what is the Microsoft Windows (2000, 2003, 8) OS equivalent way to define the search directive from /etc/resolv.conf?

In Windows the same /etc/resolv.conf hosts search is done using the so called "DNS Suffixes".

DNS Suffixes are used for the resolution of short host names (domain name strings with no dots).

Adding a new DNS Suffix in Windows is done from



Control Panel -> Network and Sharing Center -> Change Adapter Settings


Here select the LAN card adapter used to bring Internet to the Windows host, be it Local Area Connection or

Wireless Network Connection

 and choose:

Properties




Network Connection Properties

dialog select

Internet Protocol Version 4 (TCP/IPv4)

and again click on

Properties


On next dialog click on


Advanced (button) -> DNS (tab)


In field

DNS Suffix for this connection

fill in the domain suffix under which you would like hosts to resolve with no need for the FQDN, and press the

OK (button)
(exactly like adding search in /etc/resolv.conf on a Linux host). Add multiple DNS Suffixes if you want to access sub-domain names from multiple base domains.