Posts Tagged ‘crontab’

DNS Monitoring: Shell script to check and alert if the DNS nameserver resolvers of a Linux machine are not resolving properly. Monitor whether the /etc/resolv.conf DNS servers run okay

Thursday, March 14th, 2024

linux-monitor-check-dns-is-resolving-fine

If you occasionally have issues with DNS resolvers and want to keep an eye on them and get an alert if DNS is not properly resolving domains (because of network disconnects, disturbances, configuration modifications or whatever), and you want another means to see whether a DNS server was reachable or unreachable over a period of time, here is a little bash shell script that does the "trick".

The script's working mechanism is pretty straightforward: it checks whether the configured nameservers resolve a host properly, and if they do it writes to the log that everything is okay; otherwise it writes to the log that DNS is not resolving properly and sends an ALERT email to a preconfigured email address.

Below is the check_dns_resolver.sh script:

 

#!/bin/bash
# Simple script to monitor the DNS resolvers set on a host for availability and trigger an alarm via a preset email if any of the nameservers on the host cannot resolve
# Uses a configured RESOLVE_HOST and tries to resolve it via every nameserver configured in /etc/resolv.conf
# If a nameserver cannot resolve, a notification email is sent to a preconfigured email address
# The script logs OK 1 if a nameserver works correctly or NOT_OK 0 if there is an issue resolving $RESOLVE_HOST on $SELF_HOSTNAME, and mails $ALERT_EMAIL
# Output of the script is kept inside /var/log/dns_status.log

ALERT_EMAIL='your.email.address@email-fqdn.com';
log=/var/log/dns_status.log;
TIMEOUT=3;
DNS=($(grep nameserver /etc/resolv.conf | cut -d ' ' -f2));

SELF_HOSTNAME=$(hostname --fqdn);
RESOLVE_HOST=$(hostname --fqdn);

for i in ${DNS[@]}; do
  dns_status=$(timeout $TIMEOUT nslookup $RESOLVE_HOST $i);

  if [[ "$?" == '0' ]]; then
    echo "$(date "+%y.%m.%d %T") $RESOLVE_HOST $i on host $SELF_HOSTNAME OK 1" | tee -a $log;
  else
    echo "$(date "+%y.%m.%d %T") $RESOLVE_HOST $i on host $SELF_HOSTNAME NOT_OK 0" | tee -a $log;
    # mail needs the recipient address as its last argument
    echo "$(date "+%y.%m.%d %T") $RESOLVE_HOST $i DNS on host $SELF_HOSTNAME resolve ERROR" | mail -s "$RESOLVE_HOST /etc/resolv.conf $i DNS on host $SELF_HOSTNAME resolve ERROR" "$ALERT_EMAIL";
  fi

done

Download check_dns_resolver.sh here and set the script to run via a cron job every, let's say, 5 minutes; for example you can set a cronjob like this:
 

# crontab -u root -e
*/5 * * * *  check_dns_resolver.sh >/dev/null 2>&1
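A note on the script location: cron's default PATH usually includes only /usr/bin and /bin, so it is safest to give the full path to the script in the crontab record. Assuming it was saved, for example, as /usr/local/bin/check_dns_resolver.sh (just an illustrative location), installing it with the execute bit set and verifying the job got registered could look like this:

# copy the script in place with mode 755 (example destination)
install -m 755 check_dns_resolver.sh /usr/local/bin/check_dns_resolver.sh

# list root's crontab and make sure the new record is there
crontab -u root -l | grep check_dns_resolver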

 

Then voila, check the log /var/log/dns_status.log if you happen to run into a service downtime, and compare its output with the rest of the infrastructure components (network switch equipment, other connected services etc.). That should keep you in line to prove during an eventual RCA (Root Cause Analysis), if the complete high availability system goes down, that your managed Linux servers were not the reason for the occurring service unavailability.

A simplified variant of check_dns_resolver.sh can easily be integrated into monitoring with a Zabbix userparameter script and a DNS Check Template containing a few Triggers, Items and an Action. If I have some time in the future, I'll perhaps blog a short article on how to configure such DNS Zabbix monitoring; the Zabbix variant of the DNS monitor script looks like this:

[root@linux-server bin]# cat check_dns_resolver.sh 
#!/bin/bash
TIMEOUT=3; DNS=($(grep nameserver /etc/resolv.conf | cut -d ' ' -f2));  for i in ${DNS[@]}; do dns_status=$(timeout $TIMEOUT nslookup $(hostname --fqdn) $i); if [[ "$?" == '0' ]]; then echo "$i OK 1"; else echo "$i NOT OK 0"; fi; done

[root@linux-server bin]#
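As a rough sketch of the Zabbix integration idea, the one-liner variant above could be exposed to the agent as a UserParameter (the config file name and item key below are made-up examples):

# /etc/zabbix/zabbix_agentd.d/dns_resolver.conf   (hypothetical file name / item key)
UserParameter=dns.resolver.check,/usr/local/bin/check_dns_resolver.sh

After a zabbix-agent restart, an Item with key dns.resolver.check plus a Trigger matching the "NOT OK 0" output would complete the basic check.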


Hope this article will help someone to improve their Unix server infrastructure monitoring.

Enjoy and Cheers !

Find when cron.daily cron.weekly and cron.monthly run on Redhat / CentOS / Debian Linux and systemd-timers

Wednesday, March 25th, 2020

Find-when-cron.daily-cron.monthly-cron.weekly-run-on-Redhat-CentOS-Debian-SuSE-SLES-Linux-cron-logo

 

The problem – Apache restart at random times


Today I noticed something that has been occurring for quite some time but was out of my scope for quite a while, as I'm not directly involved in our alert monitoring at my daily job as a sysadmin: interestingly, an Apache HTTPD webserver was triggering an alarm twice a day for a short downtime lasting about 9 seconds.

I decided to investigate what was triggering the webserver restart at such seemingly random times, checked the system for any background running scripts and reviewed the system logs. As I couldn't find anything there, the only logical place to check was cron jobs.
The usual
 

crontab -u root -l


had no configured cron job scripts, so I dug further to check whether there weren't cron job records for a script triggering the reload of Apache in /etc/crontab, /var/spool/cron/root and /var/spool/cron/httpd.
Nothing was found there, and since there was no anacron service running, only /usr/sbin/crond, the other expected place to look for a trigger event was /etc/cron.*

 

1. Configured default cron execution times, every day, every hour every month

 

# ls -ld /etc/cron.*
drwxr-xr-x 2 root root 4096 feb 27 10:54 /etc/cron.d/
drwxr-xr-x 2 root root 4096 dec 27 10:55 /etc/cron.daily/
drwxr-xr-x 2 root root 4096 dec  7 23:04 /etc/cron.hourly/
drwxr-xr-x 2 root root 4096 dec  7 23:04 /etc/cron.monthly/
drwxr-xr-x 2 root root 4096 dec  7 23:04 /etc/cron.weekly/

 

After looking into each of the above directories, I finally found the expected logrotate shell script set to execute from /etc/cron.daily/logrotate, and inside it I found that after the log files are gzipped and moved, it executes a webserver restart with:

systemctl reload httpd 

 

My first reaction was to wonder seriously why the script invokes systemctl reload httpd instead of the good old-school

apachectl -k graceful

 

But it seems that on Red Hat and CentOS releases where systemd is present (RHEL / CentOS 7.X onwards), systemctl reload httpd is supposed to be identical to, and a substitute for, apachectl -k graceful.
Okay, the craziness of innovation continued, as obviously the reload was causing a downtime visible in the Zabbix HTTPD port monitoring graph …
Now that the problem was identified, the next logical question popped up: how do I find out the exact schedule that runs the script at those unusual, seemingly random times?
 

2. Find out cron scripts timing on Redhat / CentOS / Fedora / SLES

 

The execution method of scripts placed in /etc/cron.{daily,monthly,weekly} has changed over the years, causing chaos just like many standard Linux things we know, due to the inclusion of systemd and some other weird OS design changes. The result is what is explained above: the scripts run at strange, unexpected times … One thing that was introduced was anacron, which also executes commands periodically with a preset frequency, but is considered more trustworthy than the crond daemon, because anacron does not assume the machine is continuously running. With plain crond, if the machine is down due to a shutdown or a failure (say, of a Virtual Machine), or crond simply dies, some cron job necessary for the overall environment or application might not run; what anacron guarantees is that, even if crond is in a non-working, defunct state, the preset scheduled scripts will still be served.
anacron's default configuration file location is /etc/anacrontab.

A standard /etc/anacrontab looks like so:
 

[root@centos ~]:# cat /etc/anacrontab
# /etc/anacrontab: configuration file for anacron
 
# See anacron(8) and anacrontab(5) for details.
 
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
 
#period in days   delay in minutes   job-identifier   command
1    5    cron.daily        nice run-parts /etc/cron.daily
7    25    cron.weekly        nice run-parts /etc/cron.weekly
@monthly 45    cron.monthly        nice run-parts /etc/cron.monthly

 

START_HOURS_RANGE: this variable sets the time frame in which jobs may be started.
With the above value the jobs will start only during the 3-22 (3AM-10PM) hours.

  • cron.daily will run at 3:05 (After Midnight) A.M. i.e. run once a day at 3:05AM.
  • cron.weekly will run at 3:25 AM i.e. run once a week at 3:25AM.
  • cron.monthly will run at 3:45 AM i.e. run once a month at 3:45AM.

If the RANDOM_DELAY env. variable is set, a random value between 0 and RANDOM_DELAY minutes will be added to the start-up delay of anacron-served jobs.
For instance, RANDOM_DELAY equal to 45 would therefore add, randomly, between 0 and 45 minutes to the user-defined delay.

For the above cron.daily, cron.weekly, cron.monthly config records, the delay for cron.daily will be 5 minutes + RANDOM_DELAY, i.e. the job starts at 03:05 + 0-45 minutes.

A full, detailed explanation on automating system tasks on Red Hat Enterprise Linux is worth reading here.

!!! Note !!! The listed jobs run queued one after another: after one finishes, the next one starts.
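To see when anacron last completed each of the job sets, you can peek at its timestamp files; on Red Hat / CentOS they normally live under /var/spool/anacron, one file per job-identifier, holding the date (YYYYMMDD) of the last run, for example:

# ls /var/spool/anacron/
cron.daily  cron.monthly  cron.weekly

# cat /var/spool/anacron/cron.daily
20200325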
 

3. SuSE Enterprise Linux cron jobs not running at desired times – why?


In SuSE it is much more complicated to get the right timing for the standard default cron jobs that come preinstalled with a service.

In older SLES releases /etc/crontab looked like this:

 

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly


As of the time of writing this article it looks like this:

 

SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/sbin:/bin:/usr/lib/news/bin
MAILTO=root
#
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
#
-*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1

 

 


This runs any scripts placed in /etc/cron.{hourly, daily, weekly, monthly} but it may not run them when you expect them to run. 
/usr/lib/cron/run-crons compares the current time to the /var/spool/cron/lastrun/cron.{time} file to determine if those jobs need to be run.

For hourly, it checks if the current time is greater than (or exactly) 60 minutes past the timestamp of the /var/spool/cron/lastrun/cron.hourly file.

For weekly, it checks if the current time is greater than (or exactly) 10080 minutes past the timestamp of the /var/spool/cron/lastrun/cron.weekly file.

Monthly uses a calculation to check the time difference, but it is the same type of check to see if it has been one month since the last run.

Daily has a couple of variations available. By default it checks if it has been more than or exactly 1440 minutes since lastrun.
If DAILY_TIME is set in the /etc/sysconfig/cron file (again a SuSE-specific innovation), then that is the time (within 15 minutes) when the daily jobs will run.

For systems that are powered off at DAILY_TIME, daily tasks will run at DAILY_TIME, unless it has been more than x days since the last run; if it has, they run at the next invocation of run-crons (the default is 7 days; a shorter time can be set in /etc/sysconfig/cron).
Because of these changes, the first time you place a job in one of the /etc/cron.{time} directories, it will run the next time run-crons runs, which is every 15 minutes (xx:00, xx:15, xx:30, xx:45); that time becomes the lastrun and hence the normal schedule for future runs. Note that there is the potential for your schedules to drift in 15-minute increments.
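If you need to check on a SLES box when run-crons last fired each group, and whether a fixed daily time is configured, the stamp files and the sysconfig knob mentioned above can be inspected directly:

# list the lastrun stamp files run-crons compares against
ls -l /var/spool/cron/lastrun/

# see whether a fixed daily time is configured (SuSE specific)
grep DAILY_TIME /etc/sysconfig/cron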

As you see, this is very complicated stuff, and since God is in simplicity, it is much better to just not use /etc/cron.* for your scripts at all and instead manually schedule each of the system cron jobs and custom scripts with cron at specific times.


4. Debian Linux time start schedule for cron.daily / cron.monthly / cron.weekly timing

As for many years most of the servers I've managed have been running Debian GNU / Linux, my first place to check was /etc/crontab, which is the standard cronjobs file setting up the { daily, monthly, weekly } crons.

 

 debian:~# ls -ld /etc/cron.*
drwxr-xr-x 2 root root 4096 Feb 27 10:54 /etc/cron.d/
drwxr-xr-x 2 root root 4096 Feb 27 10:55 /etc/cron.daily/
drwxr-xr-x 2 root root 4096 Dec  7 23:04 /etc/cron.hourly/
drwxr-xr-x 2 root root 4096 Dec  7 23:04 /etc/cron.monthly/
drwxr-xr-x 2 root root 4096 Dec  7 23:04 /etc/cron.weekly/

 

debian:~# cat /etc/crontab 
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed
17 *    * * *    root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *    root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7    root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6    1 * *    root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

What the above does is:

– Run cron.hourly once every hour, at minute 17 (xx:17).
– Run cron.daily once every day at 6:25 am.
– Run cron.weekly once every week, on Sunday, at 6:47 am.
– Run cron.monthly once every month, on the 1st, at 6:52 am.

As you can see, if anacron is present on the system the daily / weekly / monthly jobs are run via it; otherwise they are run via the run-parts binary command, which reads and executes one by one all scripts inside /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly.
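A quick way to see exactly which scripts run-parts would pick up from a directory, without executing anything, is its --test option, e.g.:

debian:~# run-parts --test /etc/cron.daily
/etc/cron.daily/logrotate
...

(the output simply lists the scripts that would be run, logrotate among them on this system).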

anacron – few more words

Anacron is the canonical way to run at least the jobs from /etc/cron.{daily,weekly,monthly} after startup, even when their execution was missed because the system was not running at the given time. Anacron does not handle any cron jobs from /etc/cron.d, so any package that wants its /etc/cron.d cronjob to be executed by anacron needs to take special measures.

If anacron is installed, regular processing of /etc/cron.{daily,weekly,monthly} is omitted by code in /etc/crontab and handled by anacron via /etc/anacrontab instead. Anacron's execution of these job lists has changed multiple times in the past:

debian:~# cat /etc/anacrontab 
# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root

# These replace cron's entries
1    5    cron.daily    run-parts --report /etc/cron.daily
7    10    cron.weekly    run-parts --report /etc/cron.weekly
@monthly    15    cron.monthly    run-parts --report /etc/cron.monthly

In wheezy and earlier, anacron is executed via init script on startup and via /etc/cron.d at 07:30. This causes the jobs to be run in order, if scheduled, beginning at 07:35. If the system is rebooted between midnight and 07:35, the jobs run after five minutes of uptime.
In stretch, anacron is executed via a systemd timer every hour, including the night hours. This causes the jobs to be run in order, if scheduled, between midnight and 01:00, which is a significant change to the previous behaviour.
In buster, anacron is executed via a systemd timer every hour with the exception of midnight to 07:00, where anacron is not invoked. This brings back a bit of the old timing, with the jobs run in order, if scheduled, between 07:00 and 08:00. Since anacron is also invoked once at system startup, a reboot between midnight and 08:00 also causes the jobs to be scheduled after five minutes of uptime.
anacron also hasn't had an upstream release in nearly two decades and is currently orphaned in Debian.

As of 2019-07 (right after buster's release) it is planned to have cron and anacron replaced by cronie.

cronie – Cronie was forked by Red Hat from ISC Cron 4.1 in 2007 and is the default cron implementation in Fedora and Red Hat Enterprise Linux at least since version 6. cronie seems to have an active upstream, but is currently missing some of the things that Debian has added to Vixie cron over the years. With the finishing of cron's conversion to the quilt (3.0) source format, effort can begin to add the Debian extensions of Vixie cron to cronie.

Because cronie doesn't have all the Debian extensions yet, it is not yet suitable as a cron replacement, so it is not in Debian.
 

5. systemd-timers – The new crazy systemd stuff for script system job scheduling


Timers are systemd unit files with a suffix of .timer. systemd timers were introduced with systemd, so older Linux OS-es do not have them.
 Timers are like other unit configuration files and are loaded from the same paths but include a [Timer] section which defines when and how the timer activates. Timers are defined as one of two types:

 

  • Realtime timers (a.k.a. wallclock timers) activate on a calendar event, the same way that cronjobs do. The option OnCalendar= is used to define them.
  • Monotonic timers activate after a time span relative to a varying starting point. They stop if the computer is temporarily suspended or shut down. There are number of different monotonic timers but all have the form: OnTypeSec=. Common monotonic timers include OnBootSec and OnActiveSec.

     

     

    For each .timer file, a matching .service file exists (e.g. foo.timer and foo.service). The .timer file activates and controls the .service file. The .service does not require an [Install] section as it is the timer units that are enabled. If necessary, it is possible to control a differently-named unit using the Unit= option in the timer’s [Timer] section.

    systemd timers are complex stuff and I'll not get into much detail here; the idea was just to make you aware of their existence. For more info check the manual: man systemd.timer. A minimal sketch of a timer / service pair follows below.
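To give a rough idea of what such a pair of unit files looks like, here is a minimal sketch of a realtime timer firing every 5 minutes (unit names and the script path are made up for the example):

# /etc/systemd/system/dnscheck.service   (hypothetical)
[Unit]
Description=DNS resolver check

[Service]
Type=oneshot
ExecStart=/usr/local/bin/check_dns_resolver.sh

# /etc/systemd/system/dnscheck.timer
[Unit]
Description=Run DNS resolver check every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target

# enable and start the timer unit (not the service)
systemctl enable --now dnscheck.timer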

The most basic command-line use is to list all configured systemd timers; below is the output from my home Debian laptop:
 

debian:~# systemctl list-timers --all
NEXT                         LEFT         LAST                         PASSED       UNIT                         ACTIVATES
Tue 2020-03-24 23:33:58 EET  18s left     Tue 2020-03-24 23:31:28 EET  2min 11s ago laptop-mode.timer            lmt-poll.service
Tue 2020-03-24 23:39:00 EET  5min left    Tue 2020-03-24 23:09:01 EET  24min ago    phpsessionclean.timer        phpsessionclean.service
Wed 2020-03-25 00:00:00 EET  26min left   Tue 2020-03-24 00:00:01 EET  23h ago      logrotate.timer              logrotate.service
Wed 2020-03-25 00:00:00 EET  26min left   Tue 2020-03-24 00:00:01 EET  23h ago      man-db.timer                 man-db.service
Wed 2020-03-25 02:38:42 EET  3h 5min left Tue 2020-03-24 13:02:01 EET  10h ago      apt-daily.timer              apt-daily.service
Wed 2020-03-25 06:13:02 EET  6h left      Tue 2020-03-24 08:48:20 EET  14h ago      apt-daily-upgrade.timer      apt-daily-upgrade.service
Wed 2020-03-25 07:31:57 EET  7h left      Tue 2020-03-24 23:30:28 EET  3min 11s ago anacron.timer                anacron.service
Wed 2020-03-25 17:56:01 EET  18h left     Tue 2020-03-24 17:56:01 EET  5h 37min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service

 

8 timers listed.


N.B.! If a timer gets out of sync, it may help to delete its stamp-* file in /var/lib/systemd/timers (or ~/.local/share/systemd/ in case of user timers). These are zero-length files which mark the last time each timer was run. If deleted, they will be reconstructed on the next start of their timer.
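For example, to reset a single timer's stamp (logrotate.timer is just picked from the listing above):

ls -l /var/lib/systemd/timers/
rm /var/lib/systemd/timers/stamp-logrotate.timer
systemctl restart logrotate.timer    # the stamp file is recreated on the next start of the timer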

Summary

In this article I've shortly explained the logic behind debugging weird restart events of configured Linux services such as Apache, caused by scripts set to run on a predefined schedule. I explained how to figure out why preinstalled default cron jobs such as logrotate (the service doing system log archiving and truncation) run at a certain time, and the mechanism behind cron.{daily,weekly,monthly} and its execution via anacron – a runner similar to crond that never misses a scheduled job even if a system downtime occurs, due to a crashed Docker container etc. The run-parts command's use was shortly explained as well. Finally, a short look was taken at systemd timers, which are now an essential part of almost every new Linux release and are often used by system scripts for scheduling time-based maintenance tasks.

How to force logrotate to process logs / Make logrotate changes take effect immediately

Sunday, April 10th, 2016

how-to-force-logrorate-to-process-logs-make-logrorate-changes-take-effect-immediately-log-rotate-300x299

Dealing with logrotate as admins, we sometimes need to change or add new logrotate configurations (on most Linux distributions the configs live under
/etc/logrotate.d/).
 

logrotate works via cron. It is scheduled work, not a daemon, so usually there is no need to reload its configuration.
When the cron job executes logrotate, it will use your new config file automatically.

Most of the logrotate setups I've seen on various distros run out of /etc/cron.daily:

$ ls -l /etc/cron.daily/logrotate 
-rwxr-xr-x 1 root root 180 May 18  2014 /etc/cron.daily/logrotate

Here is the content of the cron-scheduled script:

$ cat /etc/cron.daily/logrotate

#!/bin/sh

/usr/sbin/logrotate /etc/logrotate.conf
EXITVALUE=$?
if [ $EXITVALUE != 0 ]; then
    /usr/bin/logger -t logrotate "ALERT exited abnormally with [$EXITVALUE]"
fi
exit 0

Configuration changes to logrotate configs take effect on the next cron run,
but if you need to test your config you can also execute logrotate
on your own with the command below:

 

logrotate -vf /etc/logrotate.conf 
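If you only want to preview what logrotate would do with a particular config, without actually rotating anything, the -d (debug / dry-run) switch is handy, e.g. against a single service config:

logrotate -d /etc/logrotate.d/nginx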

If you encounter issues with a just-modified or newly added logrotate config, you can check the status of the last logrotate run over the whole bunch of logrotate configs. On Debian / Ubuntu and other deb-based Linux run:

cat /var/lib/logrotate/status

Or on RHEL, Fedora, CentOS Linux


cat /var/lib/logrotate.status

logrotate state -- version 2

 

"/var/log/syslog" 2016-4-9
"/var/log/dpkg.log" 2016-4-1
"/var/log/unattended-upgrades/unattended-upgrades.log" 2012-9-20
"/var/log/unattended-upgrades/unattended-upgrades-shutdown.log" 2013-5-17
"/var/log/apache2/mailadmin.www.pc-freak.net-access.log" 2012-9-19
"/var/log/snort/portscan.log" 2012-9-12
"/var/log/apt/term.log" 2016-4-1
"/var/log/squid/access.log" 2015-3-21
"/var/log/mysql/mysql-slow.log" 2016-4-9
"/var/log/debug" 2016-4-3
"/var/log/mysql.log" 2016-4-9
"/var/log/squid/store.log" 2015-3-21
"/var/log/apache2/mailadmin.www.pc-freak.net-error.log" 2012-9-19
"/var/log/daemon.log" 2016-4-3
"/var/log/munin/munin-update.log" 2016-4-9
"/var/log/unattended-upgrades/unattended-upgrades*.log" 2013-5-16
"/var/log/razor-agent.log" 2015-2-19
"/var/log/btmp" 2016-4-1
"/var/log/squid/*.log" 2014-11-24
"/var/log/munin/munin-graph.log" 2016-4-9
"/var/log/mysql/mysql.log" 2012-9-12
"/var/log/munin/munin-html.log" 2016-4-9
"/var/log/clamav/freshclam.log" 2016-4-3
"/var/log/munin/munin-node.log" 2016-1-23
"/var/log/mail.info" 2016-4-3
"/var/log/apache2/other_vhosts_access.log" 2016-4-3
"/var/log/exim4/rejectlog" 2012-9-12
"/var/log/squid/cache.log" 2015-3-21
"/var/log/messages" 2016-4-3
"/var/log/stunnel4/stunnel.log" 2012-9-19
"/var/log/apache2/php_error.log" 2012-10-21
"/var/log/ConsoleKit/history" 2016-4-1
"/var/log/rsnapshot.log" 2013-4-15
"/var/log/iptraf/*.log" 2012-9-12
"/var/log/snort/alert" 2012-10-17
"/var/log/privoxy/logfile" 2016-4-3
"/var/log/auth.log" 2016-4-3
"/var/log/postgresql/postgresql-8.4-main.log" 2012-10-21
"/var/log/apt/history.log" 2016-4-1
"/var/log/pm-powersave.log" 2012-11-1
"/var/log/proftpd/proftpd.log" 2016-4-3
"/var/log/proftpd/xferlog" 2016-4-1
"/var/log/zabbix-agent/zabbix_agentd.log" 2016-3-25
"/var/log/alternatives.log" 2016-4-7
"/var/log/mail.log" 2016-4-3
"/var/log/kern.log" 2016-4-3
"/var/log/privoxy/errorfile" 2013-5-28
"/var/log/aptitude" 2015-5-6
"/var/log/apache2/access.log" 2016-4-3
"/var/log/wtmp" 2016-4-1
"/var/log/pm-suspend.log" 2012-9-20
"/var/log/snort/portscan2.log" 2012-9-12
"/var/log/mail.warn" 2016-4-3
"/var/log/bacula/log" 2013-5-1
"/var/log/lpr.log" 2012-12-12
"/var/log/mail.err" 2016-4-3
"/var/log/tor/log" 2016-4-9
"/var/log/fail2ban.log" 2016-4-3
"/var/log/exim4/paniclog" 2012-9-12
"/var/log/tinyproxy/tinyproxy.log" 2015-3-25
"/var/log/munin/munin-limits.log" 2016-4-9
"/var/log/proftpd/controls.log" 2012-9-19
"/var/log/proftpd/xferreport" 2012-9-19
"/var/spool/qscan/qmail-queue.log" 2013-5-15
"/var/log/user.log" 2016-4-3
"/var/log/apache2/error.log" 2016-4-3
"/var/log/exim4/mainlog" 2012-10-16
"/var/log/privoxy/jarfile" 2013-5-28
"/var/log/cron.log" 2016-4-3
"/var/log/clamav/clamav.log" 2016-4-3

 

The timestamp date next to each rotated service log is when the respective log was last rotated.

It is also handy to rotate only a certain service's logs, let's say clamav-server, mysql-server, apache2 or nginx:
 


logrotate /etc/logrotate.d/clamav-server
logrotate /etc/logrotate.d/mysql-server
logrotate /etc/logrotate.d/nginx

How to set a crontab to execute commands on a seconds time interval on GNU / Linux and FreeBSD

Sunday, October 30th, 2011

crontab-execute-cron-jobs-every-second-on-linux-cron-logo
Have you ever needed to execute some commands scheduled via crontab every, let's say, 5 seconds? Naturally this is not possible with crontab alone; however, adding a small shell script that loops and executes a command or commands every 5 seconds, and setting it up to execute once a minute through crontab, makes this possible.
Here is an example shell script that executes commands every 5 seconds:
Here is an example shell script that does execute commands every 5 seconds:

#!/bin/bash
command1_to_exec='/bin/ls';
command2_to_exec='/bin/pwd';
# 11 iterations x 5 seconds of sleep cover the one-minute window between two cron runs
for i in {1..11}; do
    sleep 5;
    $command1_to_exec; $command2_to_exec;
done

This script will issue a sleep every 5 seconds and execute the two commands defined as $command1_to_exec and $command2_to_exec

Copy paste the script to a file or fetch exec_every_5_secs_cmds.sh from here

The script can easily be modified to execute at any seconds-interval delay; the record to put in cron to use this script should look something like:

# echo '* * * * * /path/to/exec_every_5_secs_cmds.sh' | crontab -

Where of course /path/to/exec_every_5_secs_cmds.sh needs to be modified to a proper script name and path location.

Another way to schedule a program / command on a number-of-seconds basis without using cron at all is setting up an endless loop to run / respawn via /etc/inittab, with a number of predefined commands inside. An example endless loop script to run via inittab would look something like this:

#!/bin/bash
while [ 1 ]; do
    /bin/ls
    sleep 5;
done

To run the above sample never-ending script via inittab, one needs to add to the end of inittab a line like:

mine:234:respawn:/path/to/script_name.sh

A quick way to add the line from the console would be with echo:

echo 'mine:234:respawn:/path/to/script' >> /etc/inittab

Of course the proper script path should be put in.

Then, to load up the newly added inittab line, inittab needs to be reloaded with the command:

# init q

I've also read about some other methods suggested to run programs on a periodic seconds basis using just cron; what I found proposed as a solution in a stackoverflow.com thread is:

* * * * * /foo/bar/your_script
* * * * * sleep 15; /foo/bar/your_script
* * * * * sleep 30; /foo/bar/your_script
* * * * * sleep 45; /foo/bar/your_script

One guy even suggested a shorter way with cron (note that this six-field syntax with a leading seconds column is not supported by the standard Vixie cron found on most Linux distros, only by some other cron implementations):

0/15 * * * * * /path/to/my/script

Yesterday, Today, Tomorrow

Sunday, December 2nd, 2007

Yesterday I spent a lot of time outside with Lily. We went to the fountain and we watched a film at home. The film was called "Wild Hogs"; it was supposed to be a fun comedy (only supposed to be). This week is going to be a tough one. We have to present a project at Marketing Research.

I have to write a 600-word resume about International Enterprise; also we have to make a presentation in Culture. Today in the morning I was at a Liturgy again. God's grace is here! The week passed without serious server issues (thanks God). Today I checked some logs on one of the servers and observed oddities there. I checked the crontab and realized it's because of a cron job: the dumped database is a HUGE one, 2.6G (bzipped).

I asked in irc.freenode.net #mysql, and the guys there pointed me to a similar issue which was supposed to be a MySQL bug when dumping large databases. Since the dumped databases were of type MyISAM, I of course could have used mysqlhotcopy.

But in the end the solution to the problem was removing the "--opt" option from the backup opts of mysqldump and passing "--skip-opt" to it (I suspect this slows the dumping process a lot). But I don't care; a slow dump is much better than hanging the whole webserver and interrupting the site's visibility over the Internet.

Btw I started playing Quake 2; it's cool but a little annoying: there are too many tunnels, and very often after I kill most of the bad guys I spend a lot of time searching for keys and stuff .. :)

Day After Day

Thursday, February 22nd, 2007

I forgot I had German in the morning, so I got to sleep at 3:00 a.m. In the morning my mother woke me up to ask whether I had lectures. I said yes, at 10:00, and then I remembered about the German. I tried to stand up as fast as possible, wash my teeth and go to the college. After the German we had some Introduction to Management classes. One of our programmers, Pesho, called me and said we have a problem with a few of the scripts set up to run from crontab on one of the servers; then the other programmer, Milen, also called and said he had a problem with some other script which loops on one of the webservers … having problems is really unpleasant. At least I'm not as sleepy as I should be for the small amount of time I slept in the night. We went for a coffee with Narf and Nomen in the lunch break, and after that Management again. And now I'm doing some stuff on the servers. I'm a little tired now and maybe I'll sleep. Praise the Lord, for I'm spiritually okay most of the time today. Thanks God :]

How to fix Thinkpad R61i trackpoint (mouse pointer) hang ups in GNU / Linux

Wednesday, February 1st, 2012

Earlier I blogged on How to Work Around periodically occurring TrackPoint Thinkpad R61 issues on GNU / Linux. Actually I thought the fix I suggested there was working, but I was wrong, as the problems with the trackpoint reappeared twice or thrice a day.

My suggested fix was the use of a script that periodically changes the trackpoint speed and sensitivity to certain values.

The fix script to the trackpoint hanging issue is here

Originally I wrote that the script has to be set to execute through crontab with a period like:

0,30 * * * * /usr/sbin/restart_trackpoint.sh >/dev/null 2>&1

Actually the correct values for the crontab if you use my restart_trackpoint.sh script are:

0,5,10,15,20,25,30,35,40,45,50,55,58 * * * * /usr/sbin/restart_trackpoint.sh >/dev/null 2>&1

i.e. the script has to be set to run every 5 minutes to minimize the possibility of the Thinkpad trackpoint hang-up issue.

One other thing that helps if the trackpoint gets stuck is setting in /etc/rc.local the psmouse module to be reloaded with the resetafter= parameter:

echo '/sbin/rmmod psmouse; /sbin/modprobe psmouse resetafter=30' >> /etc/rc.local
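To verify that the parameter actually took effect after the module reload, the current value can be read back from sysfs:

cat /sys/module/psmouse/parameters/resetafter
30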

 

Where does Debian GNU / Linux Apache + PHP stores session files?

Tuesday, November 22nd, 2011

In order to debug some PHP session problems on Debian, I needed to check the count of existing session files.
When PHP is compiled from source, sessions are by default usually stored in the /tmp directory; however, this is not the case on Debian.

Debian’s PHP session directory is different, there the sessions are stored in the directory:

/var/lib/php5

I’ve discovered the session directory location by reading Debian’s cron shell script, which delete session files on every 30 minutes.

Here is the file content:

debian~# cat /etc/cron.d/php5
# /etc/cron.d/php5: crontab fragment for php5
# This purges session files older than X, where X is defined in seconds
# as the largest value of session.gc_maxlifetime from all your php.ini
# files, or 24 minutes if not defined. See /usr/lib/php5/maxlifetime

# Look for and purge old sessions every 30 minutes
09,39 * * * *     root   [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete
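The /usr/lib/php5/maxlifetime helper simply prints, in minutes, the largest session.gc_maxlifetime found in the php.ini files, which the find command above then uses as its -cmin threshold; with the stock php.ini (session.gc_maxlifetime = 1440 seconds) it prints 24:

debian:~# /usr/lib/php5/maxlifetime
24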

To check the number of existing open PHP session files:

debian:~# ls -1 /var/lib/php5|wc -l
14049

How to add cron jobs from command line or bash scripts / Add crontab jobs in a script

Saturday, July 9th, 2011

I’m currently writting a script which is supposed to be adding new crontab jobs and do a bunch of other mambo jambo.

By so far I’ve been aware of only one way to add a cronjob non-interactively like so:

                  linux:~# echo '*/5 * * * * /root/myscript.sh' | crontab -

Though using | crontab - would work, it has one major pitfall which I had completely forgotten: | crontab - OVERWRITES THE CURRENT CRONTAB! with whatever is passed in by the echo command.
One must be extremely careful when deciding to use the above example, as you might lose your existing crontab definitions permanently!

Thankfully it seems there is another way to add crontabs non-interactively via a script. As I couldn't find any good blog which explained something different from the classical example with a pipe to crontab -, I dropped by the good old irc.freenode.net to consult the bash gurus there 😉

So I entered IRC and asked how I can add a crontab entry via a bash shell script without overwriting my old existing crontab definitions; less than a minute later a guy with the nickname geirha was kind enough to explain to me how to get around the annoying overwriting.

The solution to the overwrite problem was the expected one: first you use crontab to dump the current crontab lines to a file, then you append the new cron job as a new record to that file, and finally you ask the crontab program to read and install the crontab definitions from the newly created file.
So here is the exact code one could run inside a script to include new crontab jobs, next to the already present ones:

linux:~# crontab -l > file; echo '*/5 * * * * /root/myscript.sh >/dev/null 2>&1' >> file; crontab file

The above, as you can read, makes the new record */5 * * * * /root/myscript.sh >/dev/null 2>&1 get added next to the existing scheduled crontab jobs.
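If the script that adds the job may run more than once, it is also worth filtering out a possibly already present copy of the record before appending, so the same line does not pile up in the crontab; a small sketch following the same temp-file approach:

crontab -l 2>/dev/null | grep -vF '/root/myscript.sh' > file; echo '*/5 * * * * /root/myscript.sh >/dev/null 2>&1' >> file; crontab file; rm -f file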

Now I’ll continue with my scripting, in the mean time I hope this will be of use to someone out there 😉