Posts Tagged ‘fine’

Build and install Docker Squid Open Proxy image on Kubernetes cluster

Tuesday, October 30th, 2018


Docker containerization is becoming a standard for provisioning brand new servers, especially servers that live in self-made clusters based on orchestration technologies like Kubernetes (k8s).

Recently, I had the task of setting up a Squid Cache (Open Proxy) server on a custom port number (to make it harder for internet open proxy scanners to identify) and shipping it to a customer.

What is Squid Open Proxy?

An open proxy is a proxy server that is accessible by any Internet user; in other words, anyone can use the proxy without any authentication.

Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP and other protocols.
It reduces bandwidth and improves response times by caching and reusing frequently-requested web pages.
Squid has extensive access controls and makes a great server accelerator.
It runs on most available operating systems, including Windows and is licensed under the GNU GPL.

What is Docker?

For those hearing about Docker for the first time: Docker is an open-source software platform used to create, deploy and manage virtualized application containers on a common OS such as GNU / Linux or Windows, and it has a surrounding ecosystem of tools. Besides the open source version, there is also a commercial version of the product by Docker Inc., the original company that developed Docker and that still actively maintains the project today.

Docker components

What is Kubernetes?

Kubernetes, in short, is an open source system for managing clusters of containers. To do this, it provides tools for deploying applications, scaling those applications as needed, managing changes to existing containerized applications, and helping you optimize the use of the underlying hardware beneath your containers.
Kubernetes is designed to be extensible and fault-tolerant by allowing application components to restart and move across systems as needed.

Kubernetes is itself not a Platform as a Service (PaaS) tool, but it serves as more of a basic framework, allowing users to choose the types of application frameworks, languages, monitoring and logging tools, and other tools of their choice. In this way, Kubernetes can be used as the basis for a complete PaaS to run on top of; this is the architecture chosen by the OpenShift Origin open source project in its latest release.

Kubernetes architecture (shortly explained) – picture source Wikipedia

The Kubernetes project is written in the Google developed Go programming language, and you can browse its source code on GitHub.

Hence, in this article I'll give a brief introduction to Docker and Kubernetes and show you how to easily:

a. Build a Docker image with Ubuntu, update the system and install Squid inside the container, using a sample Dockerfile build file

b. Run the Docker image to test that the deployed Ubuntu Linux, and the Squid on top of it, work fine

c. Push the Docker image to DockerHub (Docker's official central image repository)

d. Deploy (pull and run) the newly built Docker Ubuntu / Squid Open Proxy image to the Kubernetes cluster slave nodes. The K8s cluster was created using the Rancher Enterprise Kubernetes Platform, a bleeding-edge tool for quick GUI-based k8s cluster creation / integration.

1. Install Docker Containerization Software Community Edition

Docker containers are similar to virtual machines, except that they run as normal processes (containers) that do not need a Type 1 or Type 2 hypervisor; they consume fewer resources than VMs and are easier to manage, no matter what the OS environment is.

Docker uses cgroups and namespaces to allow independent containers to run within a single Linux instance.
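These two kernel primitives can be inspected on any modern Linux box, no Docker required; the snippet below is a minimal illustration (not Docker-specific code) of what a container is built from:

```shell
# Every process has a set of namespace handles under /proc; a container is,
# at its core, a process tree holding its own private copies of these.
ls /proc/self/ns

# cgroups meter and limit resources (CPU, memory, I/O) per group of processes.
head -n 3 /proc/self/cgroup
```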

Docker Architecture

The docker install instructions below are for Debian / Ubuntu Linux; the instructions for RPM-based distros (Fedora / CentOS / RHEL) are very similar, except the yum or dnf tool is to be used.

a) Uninstall older versions of docker / docker-engine, if present


apt-get -y remove docker docker-engine


! Previously running docker stuff such as Volumes, Images and networks will be preserved in /var/lib/docker/

b) Install prerequisite packages and add the apt repository for docker

apt-get update
apt-get install -y apt-transport-https ca-certificates wget software-properties-common


wget https://download.docker.com/linux/debian/gpg
apt-key add gpg
rm -f gpg


Create docker.list apt sources file


echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee -a /etc/apt/sources.list.d/docker.list


apt-get update


c) Check out the docker policy (it will list multiple installable versions of docker):


apt-cache policy docker-ce

  Installed: (none)
  Candidate: 17.06.0~ce-0~debian
  Version table:
     17.06.0~ce-0~debian 500
        500 stretch/stable amd64 Packages
     17.03.2~ce-0~debian-stretch 500
        500 stretch/stable amd64 Packages
     17.03.1~ce-0~debian-stretch 500
        500 stretch/stable amd64 Packages
     17.03.0~ce-0~debian-stretch 500


d) Install and run docker


apt-get -y install docker-ce
systemctl start docker



systemctl status docker



2. Build Docker image with Ubuntu Linux OS and Squid inside

To build a docker image, all you need is the Dockerfile (the docker definitions build file), an official Ubuntu Linux OS image (provided / downloaded from the dockerhub repo) and a handful of docker commands that use apt / apt-get to install the Squid proxy inside the Docker container.

In a Dockerfile it is common to define an entrypoint script (a file with shell script command definitions) that gets executed on top of the newly run OS, immediately after Docker fetches the OS from its remote repository. It is pretty much as if you had configured your own Linux distribution (like with Linux From Scratch!) to run on a bare-metal (hardware) server and, as part of the OS installation process, made Linux run a number of scripts or commands that are not part of its regular installation process.

a) Go to hub.docker.com and create an account for free

The docker account is necessary in order to push the built docker image later on.
Creating the account takes just a few minutes.


b) Create a Dockerfile with definitions for Squid Open Proxy setup

I'll not get into details on the syntax that a Dockerfile accepts, as this is well documented on the official Docker website, but in general getting the basics and starting out takes from 30 minutes up to an hour at most.


After playing around a bit to get my Linux distribution (Ubuntu Xenial) with Squid installed on top of it, with the right SQUID cache configuration to serve as an Open Proxy, I ended up with the following Dockerfile.


FROM ubuntu:xenial
LABEL maintainer=""

ENV SQUID_VERSION=3.5.12-1ubuntu7 \
    SQUID_CACHE_DIR=/var/spool/squid \
    SQUID_LOG_DIR=/var/log/squid

RUN apt-get update \
 && apt-get -y upgrade && apt-get -y dist-upgrade \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y squid=${SQUID_VERSION}* \
 && rm -rf /var/lib/apt/lists/*

COPY /sbin/
COPY squid.conf /etc/squid/squid.conf
RUN chmod 755 /sbin/

EXPOSE 3128/tcp
ENTRYPOINT ["/sbin/"]

You can download the Dockerfile here.

c) Create the entrypoint script and squid.conf files

Apart from the Dockerfile, I've used an entrypoint script (which creates the necessary caching and logging directories, sets permissions for the SQUID proxy and launches it on container start-up); that file is loaded by the Dockerfile at docker image build time.
To have the right SQUID configuration shipped into the newly built docker container, it is necessary to prepare a template configuration file, which is pretty much a standard squid.conf file with the following SQUID Open Proxy configuration:
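The entrypoint script's exact contents are not listed in this article, so here is a minimal hypothetical sketch of what such a script typically does for Squid; the /tmp/ filename, the proxy user and the flags are my assumptions, not the author's file:

```shell
# Hypothetical reconstruction of the container entrypoint script; it is only
# written to a temp file and syntax-checked here, nothing is executed.
cat > /tmp/ <<'EOF'
#!/bin/sh
# Create Squid's cache and log directories and fix their ownership
# ('proxy' is the user the Ubuntu squid package runs as).
mkdir -p "$SQUID_CACHE_DIR" "$SQUID_LOG_DIR"
chown -R proxy:proxy "$SQUID_CACHE_DIR" "$SQUID_LOG_DIR"
# -N keeps squid in the foreground so it remains PID 1 of the container
exec squid -N -f /etc/squid/squid.conf
EOF
sh -n /tmp/   # parse check only
```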


acl SSL_ports port 443
# standard Safe_ports definitions (squid defaults), referenced by the rules below
acl Safe_ports port 80
acl Safe_ports port 21
acl Safe_ports port 443
acl Safe_ports port 1025-65535


http_access deny !Safe_ports

http_access deny CONNECT !SSL_ports

http_access allow localhost manager
http_access deny manager


http_access allow localhost

http_access allow all

http_port 3128

coredump_dir /var/spool/squid

refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern (Release|Packages(.gz)*)$      0       20%     2880
refresh_pattern .               0       20%     4320


Once I had created the proper Dockerfile configuration, I made a tiny shell script that can create / re-create my docker image multiple times.
Here it is:

docker login --username="$DOCKER_ACC" --password=""

docker build -t $DOCKER_ACC/$DOCKER_REPO:$IMG_TAG .

docker push $DOCKER_ACC/$DOCKER_REPO:$IMG_TAG

You can download script here

The script uses the docker login command to authenticate non-interactively, followed by a docker build command with a properly set DOCKER_ACC (docker account, i.e. the username of your account, as I pointed out earlier in the article) and DOCKER_REPO (docker repository name). You can get the repository name from a browser after you've logged in to dockerhub; for example, mine is hipod with repository name squid-ubuntu, and my squid-ubuntu docker image build lives there. You'll also need to provide the password inside the script or, if you consider that a security concern, instead run docker login manually from the command line and authenticate in advance before running the script. Finally, the last line, docker push, pushes the new build of Ubuntu + SQUID Proxy to the remote docker hub with a predefined tag, which in my case is latest (as this is my latest build of Squid); if you need multiple Squid versions in the repository, just change the tag to a version tag.
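Pieced together from the description above, the whole helper script plausibly looks like the sketch below; the variable values are examples and /tmp/ is just a stand-in filename:

```shell
# Hypothetical reconstruction of the build-and-push helper script; written to
# a temp file and syntax-checked only, so docker itself is never invoked here.
cat > /tmp/ <<'EOF'
#!/bin/sh
DOCKER_ACC="hipod"           # your Docker Hub username
DOCKER_REPO="squid-ubuntu"   # repository name under that account
IMG_TAG="latest"             # change to a version tag for versioned builds

# Authenticate (omit --password here and log in interactively if you prefer)
docker login --username="$DOCKER_ACC"
docker build -t "$DOCKER_ACC/$DOCKER_REPO:$IMG_TAG" .
docker push "$DOCKER_ACC/$DOCKER_REPO:$IMG_TAG"
EOF
sh -n /tmp/   # parse check only
```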


d) Use the script to build Squid docker image


Next, run the script to build and push your new image to docker hub:



Please consider that in order to work with docker hub push / pull, your firewall needs to allow connections to the dockerhub repo site; if for some reason a push / pull fails, check your firewall closely, as it is the most likely cause of failure.


3. Run the new docker image to test Squid runs as expected

To make sure the docker image runs properly, you can test it on any machine that has Docker installed; this is done with a simple command:

docker run -d --restart=always -p 3128:3128 hipod/squid-ubuntu:latest


The -d option tells docker to run the container in the background (detached mode).
The -p option tells docker to expose the port (i.e. to NAT with iptables from the docker container, with Linux + SQUID listening inside on port 3128, to TCP port 3128 on the server).
You can use iptables to check the created Network Address Translation rules.

The --restart=always option sets the docker restart policy (i.e. when the container terminates, it tells docker to restart the container after exit); as a restart policy you can also use no, on-failure[:max_retries] or unless-stopped.

For clarity, I've created a one-liner script, which you can download here.
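Once the container is up, a quick reachability probe of the published port can be sketched with bash's /dev/tcp pseudo-device (my own helper, not from the original article):

```shell
# Prints "open" if something accepts a TCP connection on host:port,
# "closed" otherwise (relies on bash's /dev/tcp pseudo-device).
port_open() {
    (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null && echo open || echo closed
}
port_open 127.0.0.1 3128   # "open" once the Squid container is running here
```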


You can check the docker daemon processes with:

ps aux|grep -i docker

root     13169  0.2  0.4 591300 77984 ?        Ssl  13:50   0:09 /usr/bin/dockerd -H fd://
root     13176  0.0  0.2 408244 34972 ?        Ssl  13:50   0:03 docker-containerd --config /var/run/docker/containerd/containerd.toml
root     20875  0.0  0.0 132348   948 pts/1    S+   14:52   0:00 grep -i docker

to see it running, as well as list the containers with:


docker ps -a

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                      PORTS               NAMES
d2eb7ab635cf        c5b0f61227cd        "/bin/sh -c 'apt-get…"   12 minutes ago      Exited (1) 3 minutes ago                        trusting_elion
18476f546562        c5b0f61227cd        "/bin/sh -c 'apt-get…"   37 minutes ago      Exited (1) 37 minutes ago                       admiring_wilson

To connect to the running container later, you can use docker attach ID_of_container:


docker attach d2eb7ab635cf



This command lets you verify that the new container runs, and attaches you to the newly spawned container.


4. Deploying Dockerized SQUID Open Proxy Cache server to Kubernetes cluster


My task was to deploy the newly built squid docker image to a remote K8s cluster, which was set as the default cluster via a context in ~/.kube/config or set manually via:


kubectl config use-context

I've used the following YAML file with kubectl to deploy:


apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: squid-app-33128
  name: squid-app-33128
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: squid-app-33128
    spec:
      containers:
        - name: squid
          image: hipod/squid-ubuntu:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3128
              protocol: TCP
          volumeMounts:
            - mountPath: /var/spool/squid
              name: squid-cache
            - mountPath: /var/log/squid
              name: squid-log
          livenessProbe:
            tcpSocket:    # probe type is a reconstruction; port and timings are the original values
              port: 3128
            initialDelaySeconds: 40
            timeoutSeconds: 4
      volumes:
        - name: squid-cache
          emptyDir: {}
        - name: squid-log
          emptyDir: {}

The task required deploying two different Open Proxy squid servers on separate ports, in order to add external cluster Ingress load balancing to them via Amazon AWS; thus I actually used the following 2 yaml files.

1. squid-app-33128.yaml
2. squid-app-33129.yaml

The actual deployment command was like this:


# deploys pods and adds services to kubernetes cluster

kubectl create -f squid-app-33129.yaml -f squid-app-33128.yaml -f squid-service-33129.yaml -f squid-service-33128.yaml


You can download the 2 squid-service .yaml files below:

1. squid-service-33129.yaml
2. squid-service-33128.yaml
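The service files themselves are linked rather than listed, but given the ports mentioned in this article (33128 / 33129 externally, 3128 inside the container), each of them plausibly looks like the sketch below; the NodePort type and the field values other than the ports are my assumptions:

```yaml
# squid-service-33128.yaml -- illustrative reconstruction, not the original file
apiVersion: v1
kind: Service
metadata:
  name: squid-service-33128
spec:
  type: NodePort          # exposed to the Internet via the AWS LB configured later
  selector:
    app: squid-app-33128  # matches the deployment's pod label
  ports:
    - port: 33128         # external service port
      targetPort: 3128    # Squid's listening port inside the container
      protocol: TCP
```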

The service is externally exposed via a LoadBalancer configured later, to make the 2 squid servers deployed into the k8s cluster accessible from the Internet by anyone, without authorization (as normal open proxies), via TCP ports 33128 and 33129.


Above I explained a few easy steps to follow to:

build docker image Ubuntu + Squid
test the image
deploy the image into a previously prepared k8s cluster

Though it all looks quite simple, I should say creating the .yaml files took me long. Creating system configuration in YAML is not as simple as using the good old .conf files, and getting used to the indentation takes time.

Now, once the LBs are configured to play with k8s, you can enjoy the 2 proxy servers. If you need a similar task done and don't have the time to do it yourself, contact me and I can do it for a small fee.

How to mount NFS network filesystem to remote server via /etc/fstab on Linux

Friday, January 29th, 2016

If you have a server topology as part of a project where 3 servers (A, B, C) are needed to deliver a service (one running an application server such as JBoss / Tomcat / Apache, a second serving just as a storage server holding dozens of LVM-ed SSD hard drives, and an Oracle database backend providing data for the project), and you need server A (the application server) to access server B (the storage "monster"), one common solution is to use an NFS (Network FileSystem) mount.
An NFS mount is considered a somewhat obsolete technology these days, as it is generally considered insecure; however, if an SSHFS mount is not an option, whether due to the initial design decision or because both servers A and B sit in a seriously firewalled (DMZ) dedicated network, then NFS is a good choice.
Of course, using an NFS mount should always be a carefully considered Environment Architect decision; a remote NFS mount implies that both servers are connected via a high-speed gigabit network, i.e. network performance is calculated to be enough for the two-way communication between application A and network storage B not to cause delays for the systems' end users.

To test whether the NFS server B mount is possible on the application server A, type something like:


mount -t nfs -o soft,timeo=900,retrans=3,vers=3,proto=tcp remotenfsserver-host:/home/nfs-mount-data /mnt/nfs-mount-point

If the mount works fine, to make it permanent on application server host A (so it survives a server reboot), add to the end of /etc/fstab the following: /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr 1 2

If the NFS server has a hostname, you can also type the hostname instead of the sample IP from the example above; this is however not recommended, as it might cause mount failures in case of DNS or domain problems.
If you still want to mount by hostname (e.g. in case the storage server IP changes frequently due to auto-assignment from a DHCP server):

server-hostA:/application/local-application-dir-to-mount /application/remote-application-dir-to-mount nfs   rw,bg,nolock,vers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr 1 2

In the above example you need to have the /application/local-application-dir-to-mount (the directory where the remote NFS folder will be mounted on server A) as well as the /application/remote-application-dir-to-mount created.
Also, on the Storage server B, you have to have a running NFS server with firewall accessibility from server A working.

The timeo=600 defines the timeout for remote NFS accessibility; note that the value is in tenths of a second, so timeo=600 means a 60-second timeout, which helps escape mount failures if there is a short network failure between server A and server B. The rsize and wsize should be fine-tuned according to the files that are being read from the remote NFS server and the network speed between the two; in the example they reflect the environment architecture (i.e. the type of files being transferred between the 2 servers), the remote NFS server version and the Linux kernel versions. These settings are for the Linux kernel 2.6.18.x branch, which as of the time of writing this article is obsolete, so if you want to use the settings, check them against your kernel and NFS versions, google around and experiment.
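Since the timeo unit trips people up, here is a quick shell one-liner to convert the tenths-of-a-second value:

```shell
# NFS's timeo option is expressed in tenths of a second.
timeo=600
echo "timeo=$timeo means $(( timeo / 10 )) seconds"   # prints: timeo=600 means 60 seconds
```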

Anyways, if you're not sure about wsize and rsize, it's perfectly safe to omit these 2 values if you're not familiar with them.

To finally check that the NFS mount is fine, grep for it:


# mount|grep -i nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
server-hostA:/application/remote-application-dir-to-mount on /application/remote-application-dir-to-mount type nfs (rw,bg,nolock,nfsvers=3,tcp,timeo=600,rsize=32768,wsize=32768,hard,intr,addr=

That's all enjoy 🙂



How to disable WordPress Visual Editor to solve Editor / Post problems after upgrade to WordPress 4.0

Monday, October 27th, 2014

Recently, I've upgraded to the latest (as of the time of writing) WordPress 4.0. The upgrade went fine; however, even though I also upgraded the CKEditor for WordPress plugin, the Visual Editor stopped working. To solve the issue, my logical guess was to try to disable CKEditor:

(Plugins -> CKEditor for WordPress -> Deactivate)

However, even after disabling it, the default WP Visual Editor continued not to display properly: the Publish / Save Draft / Preview button pane, as well as the usual text-format menu buttons (set text to Italic, Bold, Underline, create a new paragraph etc.), was completely missing, and it was impossible to write anything in the text edit box, as you can see in the screenshot below:


I've read a lot on the internet about the issue, and it seems a lot of people end up with the broken WordPress Visual Editor after upgrading to WP 3.9 or WordPress 4.0. A lot of people got to a fix by simply disabling all WP plugins and re-enabling them one by one; however, as I have about 50 WordPress plugins enabled on my WP blog, disabling and re-enabling every plugin was too time consuming, as I would have had to first write down all the enabled plugins and then re-enable them one by one by hand (after re-installing the wordpress version), testing after each whether the editor works or not.
Therefore, I skipped that fix and looked for another one. Other suggestions were to:

Edit wp-includes/css/editor.min.css and include at the end of file:

.mce-stack-layout{margin-top:20px}.wp-editor-container textarea.wp-editor-area{margin-top:67px;}

I've tried that one, but for me it didn't work out.

Some people reported certain plugins causing the Visual Editor issues; among the reported ones were:

  • NextScripts: Social Networks Auto-Poster
  • Google Sitemaps – Append UTW Tags
  • Google XML Sitemaps
  • TinyMCE Advanced (some suggested replacing TinyMCE and related scripts)
  • JS & CSS Script Optimizer … etc.

There were also suggestions that the Editor issues could be caused by the blog theme in use. It is true I'm using a very old WordPress theme; however, as I like it so much, I didn't want to change it.

Others suggested, as a fix, adding to the site's wp-config.php:

define('CONCATENATE_SCRIPTS', false);

Unfortunately, this didn't work either.

Finally, I found the fix myself; the solution is as simple as disabling the WordPress Visual Editor.

To disable WP Visual Editor:

1. After logging in to wp-admin, go to the upper right corner of the screen; a drop-down menu with Edit My Profile will appear:

2. On the Profile screen that appears, select Disable the visual editor when writing, scroll down to the bottom of the page and click on the Update Profile button to save the new settings:


That's all; now the Post / Edit of an article will work again, with text buttons only.

Windows Explorer (Open directory in command prompt preserving dir PATH) – Add Dos Prompt Here feature via tiny registry tweak

Friday, January 10th, 2014

Windows explorer dos prompt here open directory in windows command line

If you have to use Windows at a system administration level, you have to use the command prompt daily; thus it's useful to be able to open the command line starting from a desired directory, with no need to copy the directory path by hand and CD to it manually.

By default the Command Prompt, cmd.exe, always opens with its path set to the user home directory, reading what is defined by the system variable %USERPROFILE% or %HOMEPATH% (the MS Windows equivalent of UNIX's $HOME shell variable).

To add an open in DOS Prompt Here option to the Windows Explorer menus, it is necessary to apply a few rules to the Windows registry DB.
Apply the registry tweak below (or use the download link and launch it), and from there on, clicking with the right mouse button on any directory will enable you to open that directory in CMD.EXE.

Here is the content of the little registry tweak adding the new Dos Prompt Here menu button:

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Directory\shell\DosPromptHere]
@="Dos &Prompt Here"

[HKEY_CLASSES_ROOT\Directory\shell\DosPromptHere\command]
@="cmd.exe /k cd %1"

[HKEY_CLASSES_ROOT\Drive\shell\DosPromptHere]
@="Dos &Prompt Here"

[HKEY_CLASSES_ROOT\Drive\shell\DosPromptHere\command]
@="cmd.exe /k cd %1"

Windows explorer open program files or any specific directory in windows command line

windows opening directory in command line program files screenshot win 7

This little registry code is originally for Windows 2000; anyway, it is compatible with all NT-based Windows versions, and the Add DOS Prompt Here tweak works fine on Windows XP, Windows 7 and Windows 8 (Home, Pro and Business editions).

Speaking of $HOME, it is interesting to mention the Windows equivalent of Linux's $HOME, as it might be useful to know:

linux:~# echo $HOME


C:\> echo %USERPROFILE%

To list all Windows Command Prompt environment variables (the equivalent of Linux bash's env / setenv is the SET command), here is example output from my Winblows:

C:\Users\georgi> SET
CLASSPATH=.;C:\Program Files (x86)\Java\jre6\lib\ext\
CommonProgramFiles=C:\Program Files\Common Files
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
CommonProgramW6432=C:\Program Files\Common Files
Path=C:\Program Files\RA2HP\;C:\Windows\system32;C:\Windows;C:\Windows\S
Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\WIDCOM
oth Software\;C:\Program Files\WIDCOMM\Bluetooth Software\syswow64;C:\Pr
les (x86)\Hewlett-Packard\HP ProtectTools Security Manager\Bin\;C:\Progr
\ActivIdentity\ActivClient\;C:\Program Files (x86)\ActivIdentity\ActivCl
\Program Files (x86)\QuickTime\QTSystem\
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 58 Stepping 9, GenuineIntel
ProgramFiles=C:\Program Files
ProgramFiles(x86)=C:\Program Files (x86)
ProgramW6432=C:\Program Files
PTSMInstallPath_X86=C:\Program Files (x86)\Hewlett-Packard\HP ProtectToo
ity Manager\
QTJAVA=C:\Program Files (x86)\Java\jre6\lib\ext\
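For comparison, the Linux side of the same equivalence in a plain POSIX shell:

```shell
# %USERPROFILE% maps to $HOME, and a bare SET maps to env (or set) on Linux.
echo "$HOME"
env | sort | head -n 5
```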

Hope this little trick helps someone out there.
I will be glad to hear of other cool and useful Windows registry tweaks.


Check how webpage looks with Internet Explorer on Linux and FreeBSD with Mozilla Firefox (Netrenderer Firefox plugin)

Thursday, November 1st, 2012

Simulate Internet Explorer in screenshots on GNU / Linux and FreeBSD using Netrenderer in Firefox - Internet Explorer testing tool for web developers on Linux and FreeBSD

I'm not a full-time web developer, but sometimes I develop websites too, or just have to do some website testing.
I've been using GNU / Linux and BSD as my main server and desktop platforms for many years already, and hence I don't have regular access to a Windows OS and, respectively, Internet Explorer. With that in mind, it is very useful to have a way to check whether a certain website I create displays fine in Internet Explorer 6, 7 and 8 too.

Usually, when I need to test whether a website displays its elements properly in Internet Explorer, I use the infamous IE NetRenderer website (I guess it is almost impossible that anyone develops websites on Linux and doesn't know it :). Fortunately, while googling to remind myself of the exact link to netrenderer, I stumbled upon a Mozilla Firefox add-on extension which does precisely what the website does, i.e. renders a website with an HTML web engine compatible with most Internet Explorer versions, creating screenshots of how the website would look under Internet Explorer. Of course, the plugin is no panacea: since it only makes screenshots, if a website has interactivity (Javascript / AJAX) problems in IE, the plugin will be of zero use. However, in general it is good to know whether at least the website elements are ordered fine.
After the plugin is added in the usual way, like any other FF plugin, you can start using it with keyboard shortcuts:

Ctrl+Shift+F5/F6/F7/F8 respectively renders the page in IE 5.5, IE 6, IE 7 or IE 8 Beta 2.

Pressing Ctrl + Shift + Fx makes the IE screenshot of the site using the NetRenderer service.

I'm currently running the latest Firefox version, 16.0.2, and the plugin works fine here; I guess it should also work fine on most FF releases not older than a few years.

Below is description of the plugin, as taken from plugin website:

IE NetRenderer Add-on Description

Adds buttons, tools menu and contextual menu entries to get a screenshot of the current page with IE NetRenderer.

Keyboard shortcuts are also available: Ctrl+Shift+F5/F6/F7/F8 to render the page in IE5.5/6/7/8 Beta 2 (Cmd+Shift+F* on Mac).

Really useful for webmasters which are not using Windows!

You can also access the IE NetRenderer service here:

Please note that the extension developper is not affiliated with GEOTEK, providing the IE NetRenderer service. You can visit his website here:










PHP system(); hide command output – How to hide displayed output with exec();

Saturday, April 7th, 2012

I recently wanted to use PHP's built-in system(""); function for executing external commands, in order to use ls + wc to calculate the number of files stored in a directory. I know many would argue this is not good practice, and from a performance viewpoint it is an absolutely bad idea; however, as I was too lazy to code it in pure PHP, I used the line of code below to do the task:

echo "Hello, ";
$line_count = system("ls -1 /dir/|wc -l");
echo "File count in /dir is $line_count \n";

This example worked fine for me to calculate the number of files in my /dir, but unfortunately the execution output was also displayed in the browser. It seems this is the default behaviour in both libphp and the php cli. I didn't like that behaviour, so I checked online for a solution to prevent system(); from printing its output.

What I found recommended on many pages is to use exec(); instead of system(); to suppress the command execution output.
Therefore, instead of my code above, I used:

echo "Hello, ";
$line_count = exec("ls -1 /dir/|wc -l");
echo "File count in /dir is $line_count \n";

By the way, instead of using exec();, it is also possible to just use ` (backticks), the same way as in bash scripting.

Hence the above code can also be written in short like this:

echo "Hello, ";
$line_count = `ls -1 /dir/|wc -l`;
echo "File count in /dir is $line_count \n";
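Whichever PHP wrapper you pick, the underlying pipeline is the same shell command; below is a standalone sketch you can try in a terminal (the temp directory is created just for the demo, and find is shown as a more robust alternative, since ls -1 | wc -l miscounts filenames that contain newlines):

```shell
# Create a throwaway directory with exactly two files and count them both ways.
demo=$(mktemp -d)
touch "$demo/a" "$demo/b"

ls -1 "$demo" | wc -l                          # the pipeline used above; prints 2
find "$demo" -mindepth 1 -maxdepth 1 | wc -l   # newline-safe alternative; prints 2
```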


How to protect Elgg 1.8.x Social Network platform against Registration Spam with Captcha

Friday, March 2nd, 2012

There are four major plugins, as of the time of writing, that can be used to significantly reduce the amount of registration spam in Elgg 1.8.x <= (1.8.3).
There are probably other plugins to protect against spam in elgg; however, I have personally tried just these ones with elgg 1.8.3.

1. Elgg anti-bot spam registrations with Text Captcha

Elgg Text Captcha 1.8 screenshot

As you can see in the picture, this plugin requires skills in maths 😉 For serious websites it also looks a bit ridiculous; besides, it is actually an easy one for spam bots to handle, and probably plenty of nowadays' spam bots crawling the net could get past it.

2. Protecting the elgg registration form with Image Captcha


Elgg Community Elgg Captcha 1.8.3 Screenshot

3. Elgg anti registration spam with Google Captcha

Elgg Google Captcha QLI Screenshot 1.8.3

4. Just a Captcha for Elgg 1.8


  •  Check and Download Elgg Just a Captcha for Elgg here

    Just a Captcha for Elgg screenshot version 1.8.3

    One note to make here: the 4 Captchas do not work together if enabled at the same time from the elgg administration panel; you will have to use them one at a time.
    I haven't tested them all, so I don't know which one is the most efficient. Anyhow, I really think the Image Captcha looks best of all of them and is the most intuitive to the user.
    I'm quite happy the Image Captcha is available and works fine in 1.8.3; in the prior 1.6.x generation I couldn't find any decent plugin to filter registration spam, and my experimental social network based on elgg quickly got filled with spam. Now I will wait and see if the Image Captcha stops the drones.

How to hide and unhide (show) Administrator User on Windows Vista and Windows 7

Wednesday, November 23rd, 2011

I needed to show the Administrator user on one Windows 7 install.
This is achieved through the command prompt, cmd.exe, launched with the exclusive option Run as Administrator.

cmd run as administrator

The exact command that unhides the Administrator user, so that on the next Windows login screen one sees the Administrator user ready for use, is:

C:> net user administrator /active:yes

Net user show administrator Windows 7 command

Unhiding the Administrator user is always handy whenever one needs to do a bunch of operations as the Super User. After finishing all my required tasks as administrator, I reverted back and hid the Administrator user once again like so:

C:> net user administrator /active:no

This command also works fine on Vista and presumably on Windows XP.

Cause and solution for Qmail sent error “Requested action aborted: error in processing Server replied: 451 qq temporary problem (#4.3.0)”

Friday, October 28th, 2011

One of the qmail servers I manage today has started returning strange errors in Squirrel webmail and via POP3/IMAP connections with Thunderbird.

What was rather strange is that if the email doesn't contain a link to a webpage or an attachment, e.g. the mail consists of just plain text, the mail was sent properly; if not, however, it failed to send with an error message of:

Requested action aborted: error in processing Server replied: 451 qq temporary problem (#4.3.0)

After looking in the logs and some quick searching in Google, I came across some online threads reporting that the whole issue is caused by a malfunction of the script checking mail for viruses.

After a close examination of what was happening, I found out /usr/sbin/clamd was not running at all?!
Then I remembered that a bit earlier I had applied some updates on the server with apt-get update && apt-get upgrade; some of the packages which were updated were exactly clamav-daemon and clamav-freshclam.
Hence, the reason for the error:

451 qq temporary problem (#4.3.0)

was pretty obvious: the clamd daemon used to check incoming and outgoing mail for viruses failed to respond, so any mail containing content that needed to go through clamd for a check did not make it back, and therefore qmail returned the weird error message.
Apparently, the earlier update of clamav-daemon failed to properly restart it via the init script /etc/init.d/clamav-daemon.
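A tiny helper like the one below (my own illustration, not part of the original fix) makes this kind of "is the daemon actually alive" check repeatable before blaming qmail itself:

```shell
# Report whether a daemon with the given process name is alive;
# on the mail server you would call it as: check_daemon clamd
check_daemon() {
    if pgrep -x "$1" > /dev/null 2>&1; then
        echo "$1 is running"
    else
        echo "$1 is NOT running"
    fi
}
check_daemon clamd   # on a healthy mail server this prints: clamd is running
```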

The fix was very simple; all I had to do was launch clamav-daemon again:

linux:~# /etc/init.d/clamav-daemon restart

Afterwards the error was gone and all mail worked just fine 😉

Fix “checking build system type… Invalid configuration `x86_64-unknown-linux’: machine `x86_64-unknown’ not recognized” on ./configure

Wednesday, August 3rd, 2011

I'm trying to compile vqadmin on x86_64 (64-bit Debian), and I got an error during ./configure. The error I got is as follows:

debian:~/vqadmin-2.3.7# ./configure --enable-cgibindir=/var/www/mail/cgi-bin -enable-htmldir=/var/www/mail/ --enable-isoqlog=y
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
/downloads/vqadmin-2.3.7/missing: Unknown `--run' option
Try `/downloads/vqadmin-2.3.7/missing --help' for more information
configure: WARNING: `missing' script is too old or missing
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking build system type... Invalid configuration `x86_64-unknown-linux': machine `x86_64-unknown' not recognized

So my configure run failed with:
checking build system type… Invalid configuration `x86_64-unknown-linux’: machine `x86_64-unknown’ not recognized

Thankfully, there is a tiny script which is originally part of the CVS project. I've modified the script a bit to remove a few lines of code which are not necessary. The `x86_64-unknown-linux': machine `x86_64-unknown' not recognized fix script is here

To fix up the broken configure, all that is required is to run the script:

debian:~/vqadmin-2.3.7# sh

After that, I could again compile vqadmin normally, just fine.