Posts Tagged ‘tendency’

rc.local missing in Debian 8 Jessie and Debian 9 Stretch and newer Ubuntu 16, Fedora, CentOS Linux – Why is /etc/rc.local not working and how to make it work again

Monday, September 11th, 2017

[Image: rc.local not working – how to make rc.local work again on newer Linux distributions]

If you have installed a newer version of Debian GNU / Linux such as Debian 8 Jessie or Debian 9 Stretch or Ubuntu 16 Xenial Xerus, either on a server or on a personal desktop laptop, and you want to execute a number of extra commands at the finalization of system boot, just like we GNU / Linux users have been doing for the last 25+ years, you will be surprised that /etc/rc.local is no longer available (the file is completely missing!!!).

This kind of behaviour (avoiding /etc/rc.local and leaving the file absent by default right after a Linux OS install) has been evident across many RedHack (RedHat) distributions such as Fedora and CentOS Linux for the last number of releases, and, as often happens, the tendency spread to Debian-based distros too. However, on these RPM-based distros, as on the rest of the Linux distros, it was possible to work around the missing file by creating /etc/rc.local manually.

But NOoooo, the smart new generation GNU / Linux architects with large brains decided to completely wipe out the execution of /etc/rc.local from the boot finalization stage, SMART isn't it??

For instance, if you had been eating a certain food for the last 25+ years and they suddenly prohibited you from eating it because they say it is no longer necessary, how would you feel?? Crazy isn't it??

Yes, I understand the idea to wipe out /etc/rc.local did have a reason, as the developers are striving to constantly improve the boot speed (and the introduction of systemd (system and service manager) in Debian 8 Jessie did significantly change how Linux boots compared to the earlier SysV and LSB – Linux Standard Base – init scripts), but come on guys, /etc/rc.local doesn't slow the boot process down by minutes; including it adds just 2, 3 seconds extra to boot runtime, so why on earth did you decide to remove it??

What I really loved about Linux through the years was the high level of consistency and interoperability: most things worked just the same way across distributions, and upgrades followed some logic. Lately this kind of behaviour is changing, and many of the new ways to interact with a GNU / Linux PC, in both GUI and text mode (console), are sadly becoming a mess …

So the smart guys who develop GNU / Linux distros said it's time to deprecate /etc/rc.local and prevent the user from executing his set of finalization commands at the end of each booted multi-user runlevel.

The good news is you can bring back (resurrect) /etc/rc.local really easy:

To do so, just execute the following, either in a physical /dev/tty console, in gnome-terminal (for GNOME users), or, for KDE GUI environment users, in KDE's terminal emulator konsole:

 

cat <<EOF >/etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exit 0
EOF
chmod +x /etc/rc.local
systemctl start rc-local
systemctl status rc-local


I think the above is self-explanatory: the /etc/rc.local file is created, then we run systemctl start rc-local to enable it, and then we check the state of the just-started rc-local service with systemctl status rc-local.

You will get an output similar to below:
 

 

root@jericho:/home/hipo# systemctl start rc-local
root@jericho:/home/hipo# systemctl status rc-local
● rc-local.service – /etc/rc.local Compatibility
   Loaded: loaded (/lib/systemd/system/rc-local.service; static; vendor preset:
  Drop-In: /lib/systemd/system/rc-local.service.d
           └─debian.conf
   Active: active (exited) since Mon 2017-09-11 13:15:35 EEST; 6s ago
  Process: 5008 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/rc-local.service
Sep 11 13:15:35 jericho systemd[1]: Starting /etc/rc.local Compatibility…
Sep 11 13:15:35 jericho systemd[1]: Started /etc/rc.local Compatibility.

To test that /etc/rc.local is working as expected, you can make it print a string on boot by adding a line right before the exit 0 command in /etc/rc.local.

For example you can add:
 

echo "YES, /etc/rc.local IS NOW AGAIN WORKING JUST LIKE IN EARLIER LINUX DISTRIBUTIONS!!! HOORAY !!!!";


On CentOS 7 and Fedora 18 (codename Spherical Cow) or other RPM based Linux distros, if /etc/rc.local is missing you can follow a very similar procedure to have it enabled; make sure

/etc/rc.d/rc.local

exists

and /etc/rc.local is properly symlinked to /etc/rc.d/rc.local.
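
Here is a minimal check-and-recreate sketch (assuming the standard CentOS / Fedora layout, where the real file lives under /etc/rc.d):

ls -al /etc/rc.local
ln -sf /etc/rc.d/rc.local /etc/rc.local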

Also don't forget to check whether /etc/rc.d/rc.local is set to be an executable file, with ls -al /etc/rc.d/rc.local.

If it is not executable, make it so by running:
 

chmod a+x /etc/rc.d/rc.local


If file /etc/rc.d/rc.local happens to be missing just create it with following content:

 

#!/bin/sh

# Your boot time commands go below this line, before the final exit 0

exit 0


That's all folks, rc.local not working is solved;
enjoy /etc/rc.local working again 🙂

 

How to make a mirror of website on GNU / Linux with wget / Few tips on wget site mirroring

Wednesday, February 22nd, 2012

[Image: how to make a mirror of a website on Linux with wget]

Everyone who has used Linux is probably familiar with wget, or has used this handy console download tool at least a thousand times. But not so many desktop GNU / Linux users, like Ubuntu and Fedora Linux users, have tried using wget to do something more than downloading single files.
Actually, wget is not as popular as it used to be in the earlier Linux days. I've noticed a tendency for newer Linux users to prefer curl (I don't know why).

With all that said, I'm sure there are plenty of Linux users curious how a website mirror can be made with wget.
This article will briefly suggest a few ways to do website mirroring on Linux / BSD, as wget is available on both of those free operating systems.

1. Most Simple exact mirror copy of website

The most basic use of wget's mirror capabilities is through the -m / --mirror argument:

# wget -m http://website-to-mirror.com/sub-directory/

Creating a mirror like this is not a very good practice, as the links of the mirrored pages will still point to the external URLs. In other words, the link URLs will not point to your local copy, so if you are not connected to the internet and try to browse random links of the mirrored webpage, you will end up with many links which do not open because you have no internet connection.

2. Mirroring with links rewritten to point to the local copy and a delay between page downloads

Making a mirror with wget can put a heavy load on the remote server, as wget fetches the files as quickly as the bandwidth allows. On busy servers, rapid downloads with wget can significantly increase the server response time, and on some highly loaded servers it can even cause the server to hang completely.
Hence, mirroring pages with wget without explicitly setting a delay between each page download could be considered by the remote server a kind of DoS (denial of service) attack. Some site administrators have even set firewall rules, or configured web server modules like Apache mod_security, to filter out IPs doing too-frequent HTTP GET / POST requests to the web server.
To make wget wait 10 seconds between each mirrored page download, use:

# wget -mk -w 10 -np --random-wait http://website-to-mirror.com/sub-directory/

Here -mk is a shortcut for -m / --mirror plus -k / --convert-links (make links point locally), -np / --no-parent stops wget from ascending above the given directory, and --random-wait tells wget to vary the pause between requests randomly (by default between 0.5 and 1.5 times the value given with -w).

3. Mirror / retrieve website sub directory ignoring robots.txt "mirror restrictions"

Some websites have a robots.txt which restricts content download by clients like wget or curl, or even prohibits crawlers from downloading their website pages completely.

The /robots.txt restrictions are not a problem, as wget has an option to disable robots.txt checking when downloading.
Getting around the robots.txt restrictions with wget is possible through the -e robots=off option.
For instance, to make a local mirror copy of a whole sub-directory with all links, with a delay of 10 seconds between each consecutive page request, without reading the robots.txt allow/forbid rules at all:

# wget -mk -w 10 -np -e robots=off --random-wait http://website-to-mirror.com/sub-directory/

4. Mirror a website which prohibits download managers like flashget, getright, go!zilla etc.

Sometimes, when you try to use wget to make a mirror copy of an entire site domain subdirectory or the root site domain, you get an error similar to:

Sorry, but the download manager you are using to view this site is not supported.
We do not support use of such download managers as flashget, go!zilla, or getright

This message is produced by the site's dynamic page generation language (PHP / ASP / JSP etc.), as the website code is written to check the UserAgent sent by the browser.
The default UserAgent wget sends to the remote webserver looks like:
Wget/1.11.4

As this is not a common desktop browser useragent, many webmasters configure their websites to only accept the well-known useragents sent by established desktop browsers.
Here are a few typical user agents which identify a desktop browser:
 

  • Mozilla/5.0 (Windows NT 6.1; rv:6.0) Gecko/20110814 Firefox/6.0
  • Mozilla/5.0 (X11; Linux i686; rv:6.0) Gecko/20100101 Firefox/6.0
  • Mozilla/6.0 (Macintosh; I; Intel Mac OS X 11_7_9; de-LI; rv:1.9b4) Gecko/2012010317 Firefox/10.0a4
  • Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.2a1pre) Gecko/20110324 Firefox/4.2a1pre

etc. etc.

If you're trying to mirror a website which has implemented some kind of restriction admitting only a "valid" useragent, wget has the -U option, enabling you to fake the useragent.

If you get the "Sorry, but the download manager you are using to view this site is not supported" message, fake / change wget's UserAgent with a command like:

# wget -mk -w 10 -np -e robots=off \
--random-wait \
--referer="http://www.google.com" \
--user-agent="Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6" \
--header="Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5" \
--header="Accept-Language: en-us,en;q=0.5" \
--header="Accept-Encoding: gzip,deflate" \
--header="Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7" \
--header="Keep-Alive: 300" \
http://website-to-mirror.com/sub-directory/

For the sake of some wget anonymity – to make wget permanently hide its user agent and pretend to be a Mozilla Firefox running on MS Windows XP – set the user agent in a .wgetrc file in your home directory.
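
A minimal ~/.wgetrc along these lines would do it (the user agent string below is just an example, any valid desktop browser string will work):

user_agent = Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.6) Gecko/20070725 Firefox/2.0.0.6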

5. Make a complete mirror of a website under a domain name

To retrieve a complete working copy of a site with wget, a good way is:

# wget -rkpNl5 -w 10 --random-wait www.website-to-mirror.com

Where the arguments mean:
-r – Retrieve recursively
-k – Convert the links in documents to make them suitable for local viewing
-p – Download everything (inline images, sounds and referenced stylesheets etc.)
-N – Turn on time-stamping
-l5 – Specify recursion maximum depth level of 5

6. Make a static mirror of a dynamic site, converting CGI, ASP, PHP etc. pages to HTML for offline browsing

Website pages often end in a .php / .asp / .cgi … extension. An example of what I mean is, for instance, the URL http://php.net/manual/en/tutorial.php. You see the URL page is tutorial.php; once mirrored with wget, the local copy will also end up with a .php extension, and therefore will not be suitable for local browsing, as the local browser has no server to interpret the .php file for it.
Therefore, to copy a website with non-HTML extensions and make it offline-browsable as HTML, there is the --html-extension option, e.g.:

# wget -mk -w 10 -np -e robots=off \
--random-wait \
--html-extension http://www.website-to-mirror.com

A good practice in mirror making is to set a download rate limit. Setting such a rate is good for both sides (the local host that is downloading and the remote server). A download limit is also useful when mirroring websites consisting of many enormous files (documentary movies, some music etc.).
To set a download limit, add the --limit-rate= option. Passing --limit-rate=200k to wget would limit the download speed to 200KB/s.

Another useful thing to ensure wget has made an accurate mirror is wget's logging. To use it, pass -o ./my_mirror.log to wget.
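
Putting the last two tips together, a rate-limited, logged mirror run (the URL is just a placeholder) could look like:

# wget -mk -w 10 -np --random-wait --limit-rate=200k -o ./my_mirror.log http://www.website-to-mirror.com/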
 

WordPress blog MySQL data maintenance – valuable plugin WP-OPTIMIZE

Tuesday, April 12th, 2011

The more my blog grows, the slower it becomes; this is a tendency I've noticed over the last couple of weeks.

Very fortunately, while reading some random articles online, I came across a super valuable WordPress plugin called WP-OPTIMIZE.

I think it's best to present the plugin's own description instead of taking the time to explain what WP-Optimize does for a WordPress powered blog:

WP-Optimize is a WordPress 2.9++ database cleanup and optimization tool. It doesn’t require PhpMyAdmin to optimize your database tables. It allows you to remove post revisions, comments in the spam queue, un-approved comments within few clicks.

Additionally you can rename any username to another username too.

For example: If you want to rename default ‘admin’ username to ‘someothername’; just put ‘admin’ (without quotes) to the Old username field and the new username to the New username field, then click “PROCESS”)
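
For the curious, behind the scenes such a rename boils down to a one-line SQL UPDATE; here is a sketch of the manual equivalent (the default wp_ table prefix and the database name blog are just assumptions, adjust them to your install):

host:~# mysql -u root -p -e "UPDATE wp_users SET user_login='someothername' WHERE user_login='admin';" blog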

Now, in short, to rephrase the above text: with constant use, a MySQL database's tables start needing the so-called MySQL optimization. This operation becomes necessary especially for databases which are very actively used, and the need for it is related to the basic design of the MySQL server.

Very often, many tables start accumulating a lot of garbage (leftover) data, and therefore reads and writes from the SQL server get slower and slower day by day.

Thus the MySQL server has its famous OPTIMIZE TABLE command, which wipes out all the unnecessary garbage data stored in a table (or tables) and hence seriously improves later read/write table operations.
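
If you want to run it by hand on a few tables, here is a minimal sketch (the wp_ table names and the blog database name are just the common WordPress defaults, adjust to your install):

host:~# mysql -u root -p -e "OPTIMIZE TABLE wp_posts, wp_comments, wp_options;" blog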

Now, back to WordPress: the very same optimization issues are often a cause of a performance bottleneck, and some smart guy came up with the great idea to create a specific WordPress plugin to fix such optimize-table issues.

The WP-Optimize plugin has 5 major options which could have a positive impact on a blog's performance; these are:
 

  • Remove all Post revisions
  • Remove all auto draft posts
  • Clean marked Spam comments
  • Clean Unapproved comments
  • Optimize database tables

Apart from the nice performance boost, the wp-optimize plugin has one super valuable function: it can change the default WordPress blog administrator user admin to some other desired username, for example rename the default blog administrator username "admin" to "john".

From a security perspective this is another must-have feature for every WordPress blog, as it can kill many of the possible brute-force attacks made with web brute-force software like Hydra.

Installing and using wp-optimize is a real piece of cake; here are a few quick command lines to install it:

host:~# cd /var/www/blog/wp-content/plugins/
host:/var/www/blog/wp-content/plugins:# wget https://www.pc-freak.net/files/wp-optimize.0.9.2.zip
host:/var/www/blog/wp-content/plugins:# unzip wp-optimize.0.9.2.zip
...

To launch WP-OPTIMIZE and fix the many speed bottlenecks caused by unoptimized tables, or by old spam comments just hanging around in the database, go and log in to WordPress with your administrator account.

In the left pane menu, starting with Dashboard, somewhere near the end of the vertical menu you will notice a button reading WP-Optimize.
Click on the WP-Optimize button and a screen like the one in the screenshot below will appear in your browser:

[Image: WP-Optimize plugin database optimization options screen]

I personally decided to use just 2 of the 5 available primary plugin options; I selected only:
 

  • Clean marked Spam comments
  • Optimize database tables

Scrolling below the PROCESS button, I could also see a number of tables which were missing optimization and hence required OPTIMIZE TABLE to be executed on each of them.
Then I pressed the PROCESS button, and after a couple of minutes (2, 3 minutes) of waiting, the wp-optimize plugin was done with its selected tasks:

In the screenshot below, you see all my blog tables which were optimized by the plugin:
[Image: WP-Optimize optimized blog tables screenshot]

It's interesting to say a few words about the Remove All Posts Revisions plugin option, as many might wonder what this option really means.

When writing posts, WordPress keeps a sort of history of different versions in time of each written post, so that a post can be restored to a certain point of the write-up.

This facilitates later restoration if something gets deleted by mistake; these stored versions are exactly what Remove All Posts Revisions deletes.

With time and the growing number of wp posts, the amount of post revisions grows on and on, just taking up space and possibly making access to the database entries slower; thus it might be a good idea to remove them too, as it is absolutely rare to use the wp post restoration function.
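
If you're curious how much revision data has piled up before deciding, here is a sketch of a quick count (again assuming the default wp_ table prefix and a database named blog):

host:~# mysql -u root -p -e "SELECT COUNT(*) FROM wp_posts WHERE post_type='revision';" blog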
However, I personally decided not to wipe out the old post revisions, as someday I might need them, and I'm a bit scared that it might cause me some database issues.

I'll be glad if somebody has tried the Posts Revisions wp-Optimize function and is happy with the results.

Facebook and Youtube culture has a negative impact on the development of young people and teenagers

Tuesday, December 13th, 2011

[Image: Facebook and Youtube helps young people get stupid]

Facebook and Youtube have, in just a few years, become a de facto standard service for 80% of computer users in our age.
This is true, and there seems to be a growing tendency for people to adopt new easy-to-use services, and a boom in the social networks.
We've seen that with the fast adoption of the anti-human-freedom program Skype, the privacy-breaching Facebook, as well as the people-interest-tracking service YouTube.

We’ve seen similar high adoption rates in earlier times as well with the already dying (if not dead MySpace), with the early yahoo mail boom etc and in even earlier times with the AltaVista search engine use.
However this time it appears Youtube and FaceBook are here to stay with us and become standard online services for longer times …
Many people who work in office all day staring in a computer screen as well as growing teenagers and practically anyone in the developed and the development world is using those services heavily for (in between 5 to 10 hours a day or more). The Software as a Service users spends approximately half of their time spend on the internet in Youtube or and Facebook.

It's true Youtube can be massively educative with its global database of videos on all kinds of topics, and in some cases Facebook can be considered helpful for keeping in touch with people or keeping a catalogue of pictures easily accessible from everywhere. However, when a few services become more used and influential than all the other services provided on the internet, those services become harmful to communities and destroy cultures. The concentration of most of the technology-using human population around a few online services creates a big electronic monopoly. In other words, the tendency we see toward amalgamation of businesses in the real world (the building of big malls and the destruction of small and middle-sized shops) is being observed in the Internet space too.

Besides that, Facebook and Youtube and Twitter are highly contrary to the true hacker spirit, and they do great harm to intellectuals and to other kinds of tight-knit technical community culture by creating one imaginary casual disco culture without any depth of thought or spirit.

It is observable that most of the people heavily using those services are turning into (not to exaggerate) brainless consumer zombies, a crowd of pathless people who watch videos and pictures and write meaningless commentaries all day long.

You have as a result a “unified dumbness” (dumbness which unifies people).

Even if we accept that grown people with fully formed characters are aware of the threats of using Youtube or Facebook, this is definitely not the case with growing people who are still in the process of building a personality and personal likings.
The harmful results that the so-called social networks create can be seen almost everywhere: in most of the cafeterias I visit, the bartender uses Facebook or Youtube all day long; most random people I see outside, in a coffee shop, at a university, or in any public institution where internet access is available, are again on Youtube and Facebook. The result is that people hardly use the rest of the Net, but just hang around in those few services, wasting network bandwidth, loading networks and computer equipment, and spending energy for nothing. This wasteful computer and Internet use deepens the ecological problems, as energy is spent not on goal-oriented tasks but on nonsense and "empty" false entertainment.
Hence, the whole original idea of the internet has, for many, been reduced to a few words (Skype, Youtube, Facebook etc.).

Besides that, youngsters, instead of reading some classical, valuable books, spend most of their cognitive time staring at a computer screen showing only these few "services", and are taught to become consumers rather than thinkers and inventors with opinions of their own.
I have not lately met any growing person who really thinks. I've seen from my own experience that the IQ level of generations younger than mine (I'm 28) is getting lower and lower, and I see as the main cause the constant interaction with technology built in a way that restricts, a consumer technology so to say.
Facebook and Youtube put into a young and growing person's mind the wrong idea that they should be limited-choice people, always praising whatever is newest and brightest (without taking any side effects into consideration). These services lead people to the idea that one should always go with the crowd and never have a solid opinion of one's own, or a solid stance on life's matters. As said, one's own opinion is highly suppressed, especially on Facebook, where all young people try to look not like what they really are, but copy / paste some trendy buzzwords and modern style, or just copy the heroes of the day. This, as one can imagine, prevents a person from developing a strong, unique self-identity and preferences of their own.

Many of the older people, and the computer illiterate, can hardly recognize the severe problems, as they are not aware of the technical side of things and do not realize how security-compromising, and how binding, the constant exposure to those online hives is.
The purpose of this small post is hence just a small attempt to raise some awareness of the potential problems we as a society might face very soon if we continue to follow the latest buzz trends instead of stopping for a moment to think profoundly about the moral consequences of giving so much power to Internet media like Facebook and Youtube …