Improve WordPress admin password encryption security with WordPress Unique Authentication Keys and Salts


October 9th, 2020

wordpress-improve-security-logo-linux

Having a WordPress blog or website with an administrator login accessible only via a secured SSL channel is common nowadays. However, there are plenty of SSL encryption vulnerabilities already out there, and many of them are either slow to be patched, or the hosting company does not care enough to patch the libssl Linux libraries / webserver level on time. Taking that into consideration, many websites hosted on unmaintained, rarely updated Linux servers are still vulnerable, and it might happen that, if you paid for some shared hosting in the past and the provider has since forgotten about the machine, your WordPress installation is still living on one of these SSL-vulnerable hosts.

In situations like that, malicious hackers could break the SSL security up to some level, or, even if the SSL itself is secure, use a MITM (Man In The Middle) attack: simulating the SSID name of a WiFi network you trust and redirecting the network traffic you use (via a transparent SSL proxy) while you connect to the WordPress administrator dashboard via https://your-domain.com/wp-admin. Once your traffic is going through the malicious hax0r, it does not matter whether you type the password each time or have it saved in the browser: WordPress admin panel authentication is achieved via a cookie, and the cookies generated and used by the WordPress site could be stolen once and reused later by the vicious 1337 h4x0r, who can try to reverse the hash with an interceptor tool and log in to your WordPress …

Therefore, to improve WordPress site security, it is very important to have configured WordPress Unique Authentication Keys and Salts (also known as the WordPress security keys).

They're used by the WordPress installation to generate a unique key and salt (different from the defaults) for every opened WordPress blog / site admin session.

So what are the Authentication Unique Keys and Salts and why are they used?

Like with almost any other web application, when a PHP session is opened to WordPress, the code creates a number of cookies stored locally on your computer.

Two of the cookies created are called:

 wordpress_[hash]
wordpress_logged_in_[hash]

The first cookie is used only in the admin pages (WordPress dashboard), while the second cookie is used throughout WordPress to determine if you are logged in to WordPress or not. Note: [hash] is a random hashed value typically assigned to your session; in reality the cookie would be named something like wordpress_ffc02f68bc9926448e9222893b6c29a9.

WordPress session stores your authentication details (i.e. WordPress username and password) in both of the above mentioned cookies.

The authentication details are hashed, hence it is almost impossible for anyone to reverse the hash and guess your password through a cookie should it be stolen. By almost impossible it also means that with today’s computers it is practically unfeasible to do so.

WordPress security keys are made up of four authentication keys and four hashing salts (random generated data) that when used together they add an extra layer to your cookies and passwords. 

The authentication details in these cookies are hashed using the random pattern specified in the WordPress security keys. I will not get into too much detail, but as you might have heard, in cryptography salts and keys are important – an in-depth explanation of cryptographic salts is available here. A good read for those who want to know more about how salt-based authentication works is on StackExchange.

How to Set up Salt and Key Authentication on WordPress
 

To be used by WordPress, the salts and keys should be configured in wp-config.php; usually they look like so:

wordpress-website-blog-salts-keys-wp-config-screenshot-linux

!!! Note !!!  When generating the definition strings (manually or via a random generator program), you have to use random string values of more than 60 characters to prevent predictability.

The default on any newly installed WordPress website is to have the four _KEY definitions and the four _SALT definitions set to unconfigured strings, looking something like:

default-WordPress-security-keys-and-salts-entries-in-wordPress-wp-config-php-file

Most people never take a look at wp-config.php, as only the web GUI is used for any maintenance tasks, so there is a great chance that, unless you heard about them from some WordPress security expert forum or from a security plugin (such as WP Titan Anti Spam & Security) that reports the WP KEYs / SALTs, you might have never noticed them in the config.

There are 8 WordPress security keys in current WP installs, but not all of them were introduced at the same time.
Historically they were introduced in the following WP versions:

WordPress 2.6: AUTH_KEY, SECURE_AUTH_KEY, LOGGED_IN_KEY
WordPress 2.7: NONCE_KEY
WordPress 3.0: AUTH_SALT, SECURE_AUTH_SALT, LOGGED_IN_SALT, NONCE_SALT

Setting custom randomly generated values is an easy task, as there is already an online WordPress security key random generator.
You can visit the above address and you will get automatically generated random values which can be copied / pasted straight into your wp-config.php.
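If the server has curl, a fresh set of randomly generated defines, ready to paste into wp-config.php, can also be fetched straight from the official WordPress.org secret-key API endpoint:

curl -s https://api.wordpress.org/secret-key/1.1/salt/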

However, if you're paranoid about the guessability of the random generator algorithm, I would advise you to use the generator and then change some random characters yourself on each of the 8 lines; the end result in the configuration should be something similar to:

 

define('AUTH_KEY',         '|w+=W(od$V|^hy$F5w)g6O-:e[WI=NHY/!Ez@grd5=##!;jHle_vFPqz}D5|+87Q');
define('SECURE_AUTH_KEY',  'rGReh.<%QBJ{DP )p=BfYmp6fHmIG~ePeHC[MtDxZiZD;;_OMp`sVcKH:JAqe$dA');
define('LOGGED_IN_KEY',    '%v8mQ!)jYvzG(eCt>)bdr+Rpy5@t fTm5fb:o?@aVzDQw8T[w+aoQ{g0ZW`7F-44');
define('NONCE_KEY',        '$o9FfF{S@Z-(/F-.6fC/}+K 6-?V.XG#MU^s?4Z,4vQ)/~-[D.X0<+ly0W9L3,Pj');
define('AUTH_SALT',        ':]/2K1j(4I:DPJ`(,rK!qYt_~n8uSf>=4`{?LC]%%KWm6@j|aht@R.i*ZfgS4lsj');
define('SECURE_AUTH_SALT', 'XY{~:{P&P0Vw6^i44Op*nDeXd.Ec+|c=S~BYcH!^j39VNr#&FK~wq.3wZle_?oq-');
define('LOGGED_IN_SALT',   '8D|2+uKX;F!v~8-Va20=*d3nb#4|-fv0$ND~s=7>N|/-2]rk@F`DKVoh5Y5i,w*K');
define('NONCE_SALT',       'ho[<2C~z/:{ocwD{T-w+!+r2394xasz*N-V;_>AWDUaPEh`V4KO1,h&+c>c?jC$H');

 


Wordpress-auth-key-secure-auth-salt-Linux-wordpress-admin-security-hardening

Once the above defines are set, do not forget to comment out or remove the old AUTH_KEY / SECURE_AUTH_KEY / LOGGED_IN_KEY / NONCE_KEY / AUTH_SALT / SECURE_AUTH_SALT / LOGGED_IN_SALT / NONCE_SALT lines.

The values are configured one time and never have to be changed; WordPress automatic updates or installed WP plugins will not tamper with them over time.
You should never expose your privately generated keys to anyone, otherwise they could be used to hack your website.
It is also a good security practice to change these keys, especially if you suspect someone has somehow stolen your wp-config keys.
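If the site is managed with WP-CLI, rotating the keys does not even require editing wp-config.php by hand; a minimal example, assuming WP-CLI is installed and run from the WordPress root directory:

# regenerates all 8 keys and salts in wp-config.php in one go,
# invalidating all existing login cookies (everyone has to re-login)
wp config shuffle-salts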
 

Closure

Having the AUTH KEYs and SALTs properly configured is an essential step to improve your WordPress site security. Any time you have doubts about a hijacked browser session (for example if you have logged in to your /wp-admin via an unsecured public computer with a chance of stolen site cookies), you should reset the keys / salts to new random values. Setting the auth keys is not a panacea: frequent WP site core and plugin updates should also be made to secure your install, and doing frequent audits of the WP websites you own with a tool such as WPScan is essential to keep your WP website unhacked.

 

 

Deny DHCP Address by MAC on Linux


October 8th, 2020

Deny DHCP addresses by MAC – how to make dhcpd ignore a MAC so it never gets leased an IP on GNU / Linux

I have not blogged for a long time due to being on a few weeks vacation and being at home with a small cute baby. However, as a hardcore and a bit of a dumb system administrator, I have spent some of my vacation working on bringing up www.pc-freak.net and the other hosted websites as high availability ones, living on 2 webservers running a Master-to-Master MySQL replication backend database. This is all hosted on servers set to run as round robin DNS hosts: one old Lenovo ThinkCentre Edge71 as well as a brand new real Lenovo server, a ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure a good degree of Internet connectivity, and to make sure the websites hosted on both machines are not going to die if one of the 2 configured Fiber Optics Internet Providers (Bergon.NET) has some issues, I've rented another Internet provider line from VIVACOM Mobile Fiber Internet – a 1 Gigabit Fiber Optics line.
Next to that, to guarantee that the Database, Webserver, Mail Server, Memcached and the other running services do not hit downtimes due to electricity power outages, two powerful FSP Fortron Uninterruptible Power Supplies (UPS) are connected, each of which can keep the machines and the connected switches running for up to 1 hour.

The machines are configured to use dhcpd to distribute IP addresses, and the Main Node is set to distribute the IPs. However, as there is a local LAN network with a number of personal work PCs, wireless devices, testing computers and a few virtual machines on the network, the IPs are being distributed in a sequential manner via an ISC DHCP server.

As always, to make everything work properly, I had again a bit of a weird non-standard requirement: to give some of the computers within the network static IP addresses while the others receive their IPs via DHCP (Dynamic Host Configuration Protocol), and to add a filter for the MAC addresses of the machines configured with static IPs, to prevent the DHCP server (daemon) from automatically reassigning IPs to these machines.

After a bit of googling and pondering I've done it. Therefore, to save others the effort of looking around for how to set certain computers / servers network card (interface) MAC addresses on the LAN to use static IPs, and how to instruct the DHCP server to ignore any broadcast IP address lease requests destined to a set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:

 

option domain-name "pcfreak.lan";
option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;
max-lease-time 891200;
authoritative;
class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;/etc/dhcpd/dhcpd.conf
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;
subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
}
host think-server {
        hardware ethernet 70:e2:85:13:44:12;
        fixed-address 192.168.0.200;
}
default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To spare you the copy / paste efforts, a file with the Deny DHCP Address by MAC Linux configuration is here.
Of course I have dummied out the MAC addresses to avoid a data leak, but I guess the idea behind the MAC ADDR ignore is quite clear.

The main configuration doing the trick to ignore certain MAC addresses reachable on the connected hardware switch is like so (match substring (hardware, 1, 6) takes bytes 1..6 of the hardware field, i.e. the 6-byte MAC address itself, skipping the leading hardware-type byte):

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;


The Deny DHCP Address by MAC approach is described on the isc.org distribution lists here, but it seems the documentation on the topic of how to Deny / IGNORE DHCP addresses by MAC address on Linux is quite obscure and limited online.

As you can see in the above config, the time after which an IP is freed up and a new IP lease is handed out by the server is severely maximized: DHCP servers often use a max-lease-time of about 1 hour (3600 seconds). The reason for increasing the lease time to about 10 days is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases:

default-lease-time 691200;
max-lease-time 891200;


As you see, to guarantee that name resolving always works as expected, I have configured Google Public DNS and OpenDNS IPs:

option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;


One hint to make: after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process.

 

root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
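
If the syntax test passes without errors, it is safe to restart the daemon so the new ignore rules take effect. On Debian 10 the ISC DHCP daemon runs as the isc-dhcp-server systemd unit:

root@pcfreak: ~# systemctl restart isc-dhcp-server
root@pcfreak: ~# systemctl status isc-dhcp-server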
 

That's all folks: with this sample config, the IPs under subclass "black-hole", which are local LAN static IP addresses, will never be offered leases anymore by the ISC DHCP server.
Hope this stuff helps someone. Enjoy, and in case you need colocation of a server or a website hosting for a really cheap price on these newly set up High Availability machines, open an inquiry on https://web.pc-freak.net.

 

Set all logs to log to physical console /dev/tty12 (tty12) on Linux


August 12th, 2020

tty linux-logo how to log everything to last console terminal tty12

Those who have administered servers since the days of the birth of Linux, and who have actively used GNU / Linux over the years or any other UNIX, know how practical it can be to configure logging of all running services / kernel messages / errors and warnings on a physical console.

Traditionally, from the days I was learning Linux basics, I was shown how to do this on an old Debian Sarge 3.0 Linux without systemd, and on all the Linux distributions (Red Hat 9.0 / Calderas and Mandrakes) I've used either as home systems or for servers. I've always configured the output of all messages to go to the last, easy to access console /dev/tty12 (for those who never use it, console switching under Linux in plain text console mode is done with the key combination CTRL + ALT + F1 .. F12).

In recent times, however, with the introduction of systemd, things changed quite a bit, as consoles are no longer handled by /etc/inittab, which was used to add and refresh the physical consoles tty1, tty2 … tty7 (the default number added on Linux was usually 7); back then I had to manually include more respawn lines, one per console, in /etc/inittab.
Nowadays, as of year 2020, in Linux distros /etc/inittab is no longer there, being obsoleted, and the console print out of INPUT / OUTPUT messages is handled by systemd.
 

1. Enable Physical TTYs from TTY8 till TTY12 etc.


The number of default consoles existing in most Linux distributions I've seen is still from tty1 to tty7. Hence, to add more tty consoles and be able to switch not only up to tty7 but up to tty12 once you're connected to the server via a remote ILO (Integrated Lights Out) / iDRAC (Dell Remote Access Controller) / IPMI / IMM (Integrated Management Module), you have to tell systemd by issuing the systemctl commands below:
 

 

# systemctl enable getty@tty8.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty8.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty9.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty9.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty10.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty10.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty11.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty11.service -> /lib/systemd/system/getty@.service.

# systemctl enable getty@tty12.service

Created symlink /etc/systemd/system/getty.target.wants/getty@tty12.service -> /lib/systemd/system/getty@.service.
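
The symlinks make the new ttys come up on the next boot; to also spawn the login gettys immediately without rebooting, a small loop like the one below should do (a minimal sketch re-using the same getty@ unit template):

# for i in $(seq 8 12); do systemctl start getty@tty$i.service; done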


Once the TTYs tty8 to tty12 are enabled, you will be able to switch to these consoles, either if you have a physical LCD / CRT monitor or KVM switch connected to the machine mounted on the rack shelf once you're in the Data Center, or once connected remotely via the Management IP interface (ILO) remote console.
 

2. Taking screenshot of the physical console TTY with fbcat


For example below is a screenshot of the 10th enabled tty10:

tty10-linux-screenshot-fbcat-how-to-screenshot-console

As you can see in the screenshot, I've used the nice tool fbcat, which can be used to make a screenshot of the console. This is very useful especially if remote access via an SSH client such as PuTTY / MobaXterm is not available and you only have physically attached monitor access, e.g. in DCs that are under a heavy firewall preventing anyone from reaching the system remotely. It is handy, for example, for screenshotting the physical console in case a major hardware failure occurs and you need to dump a hardware error message to a flash drive that will later be handed to technicians to analyze before exchanging the broken server hardware part.

Screenshotting the CLI with fbcat is possible across most Linux distributions.

In Debian you have to first install the tool via:
 

# apt install --yes fbcat


and on RedHats / CentOS / Fedoras

# yum install -y fbcat


Once the tool is on the server, taking a screenshot of whatever you have printed on the console is as easy as:

# fbcat > tty_name.ppm


Note that you might want to convert the created .ppm picture to png with any converter, such as imagemagick's convert command, or, if you have a GUI, perhaps with the GNU Image Manipulation Program (GIMP).
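
For example, with ImageMagick's convert (package imagemagick on both Debian and CentOS) the conversion is a one liner:

# convert tty_name.ppm tty_name.png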

3. Enabling every rsyslog handled message to log to Physical TTY12


To make everything such as errors, notices, debug and warning messages instantly log to the above newly added /dev/tty12, open /etc/rsyslog.conf and append the below lines to the end of the file:
 

daemon,mail.*;\
   news.=crit;news.=err;news.=notice;\
   *.=debug;*.=info;\
   *.=notice;*.=warn   /dev/tty12


To make rsyslog load its new config, restart it and check that it is running fine:

 

# systemctl restart rsyslog

# systemctl status rsyslog

 

 

 

rsyslog.service – System Logging Service
   Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-08-10 04:09:36 EEST; 2 days ago
     Docs: man:rsyslogd(8)
           https://www.rsyslog.com/doc/
 Main PID: 671 (rsyslogd)
    Tasks: 4 (limit: 4915)
   Memory: 12.5M
   CGroup: /system.slice/rsyslog.service
           └─671 /usr/sbin/rsyslogd -n -iNONE

 

Aug 12 00:00:05 pcfreak rsyslogd[671]:  [origin software="rsyslogd" swVersion="8.1901.0" x-pid="671" x-info="https://www.rsyslo
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

 



That's all folks. Navigate by pressing simultaneously CTRL + ALT + F12 to get to TTY12, or use ALT + LEFT / ALT + RIGHT ARROW (console switch keys) till you get to the console, where everything should now be logged.

Enjoy, and if you like this article, share it with your sysadmin friends to tell them about this nice hack! 🙂

 

 

 

Check server Internet connectivity Speedtest from Linux terminal CLI


August 7th, 2020

check-server-console-speedtest

If you are a system administrator of a dedicated server with no access to an Xserver graphical environment (GNOME / KDE etc.), you wonder how you can track the bandwidth / connectivity speed of the remote system to the Internet, and you happen to run a modern Linux distribution, here are a few ways to do a speedtest.
 

1. Use speedtest-cli command line tool to test connectivity

 


speedtest-cli is a tiny tool written in Python, so to use it you need to have Python installed on the server.
It is available both for Red Hat Linux distros and Debians / Ubuntus etc. in the list of standard installable packages.

a) Install speedtest-cli on Fedora / CentOS / RHEL
 

On CentOS / RHEL / Scientific Linux lower than ver 8:

 

 

$ sudo yum install python

On CentOS 8 / RHEL 8, type the following command to install Python 3 or 2:

 

 

$ sudo yum install python3
$ sudo yum install python2

 

 

 


On Fedora Linux version 22+

 

 

$ sudo dnf install python
$ sudo dnf install python3

 


Once Python is in place, download speedtest.py, or, in case the link is not reachable, download the mirrored version of speedtest.py on pc-freak.net here:
 

 

 

$ wget -O speedtest-cli https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py
$ chmod +x speedtest-cli

 


speedtest-screenshot-linux-terminal-console-cli-cmd

Then it is time to run the script and test the enabled bandwidth on the server:

 

 

$ python speedtest-cli


b) Install speedtest-cli on Debian

On the latest Debian 10 Buster, speedtest is available out of the box in the regular .deb repositories, so fetch it with apt:
 

 

# apt install --yes speedtest-cli

 


You can now give speedtest-cli a try with the --bytes argument to get speed values in bytes instead of bits, or, if you want to generate an image with the test results just like it would appear if you used speedtest.net inside a GUI browser, use the --share option.
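
For example, a single run combining both options (the URL of the generated result image is printed at the end of the output):

$ speedtest-cli --bytes --share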

speedtest-screenshot-linux-terminal-console-cli-cmd-options

 

 

 

2. Getting connectivity results of all defined speedtest test City Locations


Speedtest has a list of servers through which upload and download speed is tested. To run speedtest-cli testing against each and every server and get a better picture of what kind of connectivity to expect from your server towards the closest region capital cities, fetch the speedtest-servers.php list and use a small shell loop, as shown below:

 

 

 

 

 

root@pcfreak:~#  wget http://www.speedtest.net/speedtest-servers.php
--2020-08-07 16:31:34--  http://www.speedtest.net/speedtest-servers.php
Resolving www.speedtest.net (www.speedtest.net)… 151.101.2.219, 151.101.66.219, 151.101.130.219, …
Connecting to www.speedtest.net (www.speedtest.net)|151.101.2.219|:80… connected.
HTTP request sent, awaiting response… 301 Moved Permanently
Location: https://www.speedtest.net/speedtest-servers.php [following]
--2020-08-07 16:31:34--  https://www.speedtest.net/speedtest-servers.php
Connecting to www.speedtest.net (www.speedtest.net)|151.101.2.219|:443… connected.
HTTP request sent, awaiting response… 307 Temporary Redirect
Location: https://c.speedtest.net/speedtest-servers-static.php [following]
--2020-08-07 16:31:35--  https://c.speedtest.net/speedtest-servers-static.php
Resolving c.speedtest.net (c.speedtest.net)… 151.101.242.219
Connecting to c.speedtest.net (c.speedtest.net)|151.101.242.219|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 211695 (207K) [text/xml]
Saving to: ‘speedtest-servers.php’
speedtest-servers.php                  100%[==========================================================================>] 206,73K  --.-KB/s    in 0,1s
2020-08-07 16:31:35 (1,75 MB/s) - ‘speedtest-servers.php’ saved [211695/211695]

Once the file is there, with the below loop we extract all the server id=""s defined in the file and run a test against each of them:
 

root@pcfreak:~# for i in $(grep -Eo 'id="[0-9]{4}"' speedtest-servers.php | sed -e 's#id="##' -e 's#"##g'); do speedtest-cli --server $i; done
Retrieving speedtest.net configuration…
Testing from Vivacom (83.228.93.76)…
Retrieving speedtest.net server list…
Retrieving information for the selected server…
Hosted by Telecoms Ltd. (Varna) [38.88 km]: 25.947 ms
Testing download speed……………………………………………………………………..
Download: 57.71 Mbit/s
Testing upload speed…………………………………………………………………………………………
Upload: 93.85 Mbit/s
Retrieving speedtest.net configuration…
Testing from Vivacom (83.228.93.76)…
Retrieving speedtest.net server list…
Retrieving information for the selected server…
Hosted by GMB Computers (Constanta) [94.03 km]: 80.247 ms
Testing download speed……………………………………………………………………..
Download: 35.86 Mbit/s
Testing upload speed…………………………………………………………………………………………
Upload: 80.15 Mbit/s
Retrieving speedtest.net configuration…
Testing from Vivacom (83.228.93.76)…

…..

 


etc.

For better readability you might want to direct the output to a file, or even run it periodically from a cron job if you have some suspicion that your server's dedicated Internet lines die out towards some general locations sometimes.
 

3. Testing uplink speed by downloading a big file from a source location


In the past, a classical way to test the bandwidth connectivity of your Internet Service Provider was to fetch some big file. Linux guys should remember it was almost a standard to roll a download of a Linux kernel source .tar file with some text browser such as elinks / lynx / w3m,
speedtest-screenshot-kernel-org-shot1 speedtest-screenshot-kernel-org-shot2
or, if those were not at hand, to test connectivity on remote free shell servers with whatever file downloader was available, such as wget or curl.
An analogous method is still possible; for example, to use wget to get an idea about bandwidth connectivity, let it roll a 500 MB file from speedtest.wdc01.softlayer.com to /dev/null a few times:

 

$ wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip

$ wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip

$ wget --output-document=/dev/null http://speedtest.wdc01.softlayer.com/downloads/test500.zip

 

# wget -O /dev/null --progress=dot:mega http://cachefly.cachefly.net/10mb.test ; date
--2020-08-07 13:56:49--  http://cachefly.cachefly.net/10mb.test
Resolving cachefly.cachefly.net (cachefly.cachefly.net)… 205.234.175.175
Connecting to cachefly.cachefly.net (cachefly.cachefly.net)|205.234.175.175|:80… connected.
HTTP request sent, awaiting response… 200 OK
Length: 10485760 (10M) [application/octet-stream]
Saving to: ‘/dev/null’

     0K …….. …….. …….. …….. …….. …….. 30%  142M 0s
  3072K …….. …….. …….. …….. …….. …….. 60%  179M 0s
  6144K …….. …….. …….. …….. …….. …….. 90%  204M 0s
  9216K …….. ……..                                    100%  197M=0.06s

2020-08-07 13:56:50 (173 MB/s) – ‘/dev/null’ saved [10485760/10485760]

Fri 07 Aug 2020 01:56:50 PM UTC


To be sure you have a real picture of a remote machine's Internet speed, it is always a good idea to run downloads of random big files from a few locations that are well known to have very stable Internet bandwidth to the Internet backbone routers.

4. Using Simple shell script to test Internet speed


Fetch and use speedtest.sh

 


wget https://raw.github.com/blackdotsh/curl-speedtest/master/speedtest.sh && chmod u+x speedtest.sh && bash speedtest.sh

 

 

5. Using iperf to test connectivity between two servers 

 

iperf is another good tool worth mentioning that can be used to test the speed between a client and a server.

To use iperf, install it with apt and run the following on the server machine towards which bandwidth will be tested:

 

# iperf -s 

 

On the client machine do:

 

# iperf -c 192.168.1.1 

 

where 192.168.1.1 is the IP of the server where iperf was spawned to listen.
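
By default iperf runs a 10 second TCP test; with the classic iperf2 flags the run time and the number of parallel streams can be tuned, e.g. a 30 second test with 4 parallel streams:

# iperf -c 192.168.1.1 -t 30 -P 4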

6. Using Netflix fast to determine Internet connection speed on host


Fast

fast is a service provided by Netflix. Its web interface is located at Fast.com, and it has a command-line interface available through npm (npm is a package manager for nodejs), so if you don't have npm you will have to install it first with:

# apt install --yes npm

 

Note that if you run this on Debian it will install some 249 new nodejs packages, which you might not want to have on the system, so this is useful only for machines that already make use of nodejs.
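
With npm in place, the CLI itself is then fetched globally; the package is published on npm under the name fast-cli:

# npm install --global fast-cli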

 

$ fast

 

     82 Mbps ↓


The command returns your Internet download speed. To get your upload speed, use the -u flag:

 

$ fast -u

 

   ⠧ 80 Mbps ↓ / 8.2 Mbps ↑

 

7. Use speedometer / iftop to measure incoming and outgoing traffic on interface


If you're measuring connectivity on a live production server system, consider that the measured output might not be exactly correct, especially if you're measuring the uplink / downlink on a heavily loaded webserver / mail server / Samba or DNS server.
If this is the case, very useful tools to consider for extracting the traffic already in use on your incoming and outgoing (TX / RX) network interfaces
are speedometer and iftop; they're present and installable, depending on the OS, via yum / apt or the respective package manager.

 


To install on Debian server:

 

 

 

# apt install --yes iftop speedometer

 


The most basic use, to check the live received traffic in a nice ncurses-like text graph, is with:

 

 

 

 

# speedometer -r 


speedometer-check-received-transmitted-network-traffic-on-linux1

To generate a real time ASCII art graph of RX / TX traffic on a given interface do:

 

 

# speedometer -r eth0 -t eth0


speedometer-check-received-transmitted-network-traffic-on-linux

 

 

 

 

To watch per-connection traffic on interface eth0, with port numbers displayed (-P):

# iftop -P -i eth0

 

 


iftop-show-statistics-on-connections-screenshot-pcfreak

 

 

 

 

 

Reinstall all Debian packages with a copy of apt deb package list from another working Debian Linux installation


July 29th, 2020

Reinstall-all-Debian-packages-with-copy-of-apt-packages-list-from-another-working-Debian-Linux-installation

A few days ago, in a hurry in the small hours of the night, I did something extremely stupid. Wanting to move a .tar.gz binary copy of a qmail installation to /var/lib/qmail with all the dependent qmail items, instead of extracting it to the admin user's /root directory, I extracted it to the main operating system root directory /.
Not noticing this, I quickly executed rm -rf var with the idea to delete the whole directory tree under /root/var. Just 3 seconds later, I realized I was issuing rm -rf var in the wrong location, WITH the root user !!!! Scared of what I had done, I quickly pressed CTRL+C to immediately cancel the deletion of my /var.

wrong-system-var-rm-linux-dont-do-that-ever-or-your-system-will-end-up-irreversably-damaged

But as you can guess, since the machine has a Solid State Drive and SSD drives are much faster in I/O operations than the classical ATA / SATA disks, I was not quick enough to cancel the operation, and I noticed that some part of my /var had already been R.I.P-ped into the heaven of directories.

This was of course upsetting, so for a while I rethought the situation to get some ideas on what I could do to recover my system ASAP!!!, and of course I had the idea to try to reinstall all my installed .deb Debian packages to restore the system as close to normal as possible, before my stupid mistake.

Guess my unpleasant surprise when I realized dpkg, and respectively the apt-get / apt / aptitude package management tools, could no longer handle packages, as Debian Linux's package dependency database had been damaged due to the missing dpkg directory:

 

/var/lib/dpkg 

 

Oh man, that was unpleasant, especially since I've installed plenty of custom stuff on my MATE based desktop, and generally reinstalling it and updating the system to the latest Debian security updates etc. would be a time consuming and painful process I wanted to omit.

So of course the logical thing to do here was to try to somehow recover a database copy of /var/lib/dpkg, if that was possible. That of course led me to the idea to look for a way to recover /var/lib/dpkg from a backup, but since I did not maintain any backup copy of my OS anywhere, that was not really possible. So anyway, I wondered whether dpkg keeps some kind of database backups somewhere, in case something goes wrong with its database.
This led me to this nice Ubuntu thread, which pointed me to part of my root rm -rf dpkg DB disaster recovery solution.
Luckily, the .deb package management creators have thought about situations similar to mine, and to give the user a restore point for a damaged /var/lib/dpkg database:

/var/lib/dpkg is periodically backed up in /var/backups

A typical /var/lib/dpkg on Ubuntu and Debian Linux looks like so:
 

hipo@jeremiah:/var/backups$ ls -l /var/lib/dpkg
total 12572
drwxr-xr-x 2 root root    4096 Jul 26 03:22 alternatives
-rw-r--r-- 1 root root      11 Oct 14  2017 arch
-rw-r--r-- 1 root root 2199402 Jul 25 20:04 available
-rw-r--r-- 1 root root 2199402 Oct 19  2017 available-old
-rw-r--r-- 1 root root       8 Sep  6  2012 cmethopt
-rw-r--r-- 1 root root    1337 Jul 26 01:39 diversions
-rw-r--r-- 1 root root    1223 Jul 26 01:39 diversions-old
drwxr-xr-x 2 root root  679936 Jul 28 14:17 info
-rw-r----- 1 root root       0 Jul 28 14:17 lock
-rw-r----- 1 root root       0 Jul 26 03:00 lock-frontend
drwxr-xr-x 2 root root    4096 Sep 17  2012 parts
-rw-r--r-- 1 root root    1011 Jul 25 23:59 statoverride
-rw-r--r-- 1 root root     965 Jul 25 23:59 statoverride-old
-rw-r--r-- 1 root root 3873710 Jul 28 14:17 status
-rw-r--r-- 1 root root 3873712 Jul 28 14:17 status-old
drwxr-xr-x 2 root root    4096 Jul 26 03:22 triggers
drwxr-xr-x 2 root root    4096 Jul 28 14:17 updates

Before proceeding with the radical stuff of moving /var/lib/dpkg from another machine to the mistakenly removed /var, I tried to recover with the well known:

  • extundelete
  • foremost
  • recover
  • ext4magic
  • ext3grep
  • gddrescue
  • ddrescue
  • myrescue
  • testdisk
  • photorec

Linux file deletion recovery tools from a USB stick loaded with a Number of LiveCD distributions, i.e. tested recovery with:

  • Debian LiveCD
  • Ubuntu LiveCD
  • KNOPPIX
  • SystemRescueCD
  • Trinity Rescue Kit
  • Ultimate Boot CD


but unfortunately none of them could recover the deleted files …

The reason why the standard file recovery tools could not recover?

My assumption is: after I did rm -rf var; from sysroot, I issued the sync command (if you haven't used it, check out man sync; it synchronizes cached writes to persistent storage) and did a restart from the power-off PC button. This should have worked, as I've recovered like that in the past on a normal SysV system with a classic block filesystem such as EXT2, or any other filesystem without a journal. However, as the machine runs an EXT4 filesystem with a journal, this did not work, perhaps because something was not updated properly in the filesystem journal, which led to the situation that all recently deleted files were totally unrecoverable.

1. First step was to restore the directory skeleton of /var/lib/dpkg

# mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}

 

2. Recover missing /var/lib/dpkg/status  file

The main file that gives dpkg information about the existing packages and their statuses on Debian based systems is /var/lib/dpkg/status:

# cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
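
/var/backups typically keeps the latest copy as dpkg.status.0 plus older gzip rotated copies (dpkg.status.1.gz, dpkg.status.2.gz …), so if the newest copy happens to be damaged too, an older one can be restored instead, e.g.:

# zcat /var/backups/dpkg.status.1.gz > /var/lib/dpkg/status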

 

3. Reinstall dpkg package manager to make package management working again

Say a warm prayer to the Merciful God ! and do:

# apt-get download dpkg
# dpkg -i dpkg*.deb

 

4. Reinstall base-files .deb package which provides basis of a Debian system

Hopefully everything will be okay and your dpkg / apt pair will be in normal working state, next step is to:

# apt-get download base-files
# dpkg -i base-files*.deb

 

5. Do a package sanity and consistency check and try to update OS package list

Check whether packages have been installed only partially on your system, or have missing, wrong or obsolete control data or files. dpkg will suggest what to do with them to get them fixed.

# dpkg --audit

Then resynchronize (fetch) the package index files from their sources described in /etc/apt/sources.list

# apt-get update


Do an apt DB consistency check:

#  apt-get check


check is a diagnostic tool; it updates the package cache and checks for broken dependencies.
 

Take a deep breath ! …

Do :

# ls -l /var/lib/dpkg

and compare with the above list. If some -old file is not present, don't worry, it will be there tomorrow.

Next time, don't forget to do regular backups with a simple rsync backup script, or something like Bacula / Amanda / Time Vault or Clonezilla.
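
A minimal sketch of such an rsync backup, assuming a reachable machine named backup-host with SSH key access (the host and destination path are placeholders to adapt), dropped into root's crontab:

# keep a nightly off-site copy of the dpkg database and /etc
20 03 * * * rsync -az --delete /var/lib/dpkg /etc root@backup-host:/backups/jericho/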
 

6. Copy dpkg database from another Linux system that has a working dpkg / apt Database

Well, this was however not the end of the story … There were still many things missing from my /var/, and luckily I had another Debian 10 Buster install on another properly working machine with a similar set of .deb packages installed. Therefore, to make most of my programs work again, I copied over /var from the machine with the similar package set to my messed up machine with the missing deleted /var.

To do so …
On the functioning Debian 10 machine (the working host in the local network, with IP 192.168.0.50), I archived the content of /var:

linux:~# tar -czvf var_backup_debian10.tar.gz /var

Then I sftp-ed from the working host towards the broken machine with the deleted /var; in my case this machine's hostname is jericho, and luckily it still had the SSHD and SFTP processes loaded in memory:

jericho:~# sftp root@192.168.0.50
sftp> get var_backup_debian10.tar.gz

Now, before extracting the archive, it is a good idea to make a backup of the old /var remains somewhere, for example in /root,
just in case we need a copy of the dpkg backup dir /var/backups:

jericho:~# cp -rpfv /var /root/var_backup_damaged

 
jericho:~# cd /root
jericho:~# tar -zxvf var_backup_debian10.tar.gz
jericho:~# cp -rpf /root/var/* /var/

Then, to make /var/lib/dpkg contain the list of packages actually installed on my broken Linux install, I overwrote /var/lib/dpkg with the files backed up earlier, before the .tar.gz was extracted:

jericho:~# cp -rpfv /root/var_backup_damaged/lib/dpkg/ /var/lib/

 

7. Scripts to completely reinstall all Debian packages

 

I then tried to reinstall each and every package, first using aptitude; with aptitude this is done with:

# aptitude reinstall '~i'

However, as this failed, I tried using a simple shell loop like the one below (note it filters the dpkg -l output for installed (ii) packages only):

for i in $(dpkg -l | awk '/^ii/ { print $2 }'); do apt-get install --reinstall --yes $i; done

Alternatively, reinstalling all .deb packages is also possible with dpkg --get-selections and awk, using the below commands:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log;
awk '$1=$1' ORS=' ' list.log > newlist.log;
apt-get install --reinstall $(cat newlist.log)

It can also be run as one liner for simplicity:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log; awk '$1=$1' ORS=' ' list.log > newlist.log; apt-get install --reinstall $(cat newlist.log)

This produced a lot of warning messages, reporting "package has no files currently installed" (virtually for all installed packages), indicating a severe packages problem. Below is sample output produced after each and every package reinstall …:

dpkg: warning: files list file for package 'iproute' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'brscan-skey' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libapache2-mod-php7.4' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-readline' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'linux-headers-4.19.0-5-amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbonoboui2-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'gksu' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblogging-stdlog0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'python-gconf' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-cli' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mixer.app' missing; assuming package has no files currently installed

After some attempts I found a way to work around the warning message for each package, by simply reinstalling the package reporting the issue with:

apt install --reinstall $package_name


Though the reinstallation started well and many packages got reinstalled, unfortunately some packages, such as apache2-mod-php5.6 and other PHP related ones, started failing during reinstall, ending up in unfixable states right after the binaries from the packages were successfully placed in their expected locations on disk. The failures occurred during the package setup stage (dpkg --configure $packagename) …

The logical thing to do was a recovery attempt with the usual command, well known to any Debian admin:

apt-get install --fix-missing

As well, manually requesting reconfiguration (a re-run of the setup stage) of all installed packages also did not produce a positive result:

dpkg --configure -a

But many packages were still failing due to dpkg's inability to execute some post installation scripts from the respective .deb files.
To work around that and continue installing the rest of the packages, I had to manually delete all files related to the failing package located under the directory:

/var/lib/dpkg/info

For example, to omit the post installation failure of libapache2-mod-php5.6 and have a successful install of the package the next time I tried to reinstall it, I had to delete the /var/lib/dpkg/info/libapache2-mod-php5.6.postrm and /var/lib/dpkg/info/libapache2-mod-php5.6.postinst scripts, and sometimes even everything matching libapache2-mod-php5.6* present in the /var/lib/dpkg/info dir.

The problem with this solution, however, was that the package would then report as properly installed, but the post install script hooks were still not in place, and important things such as setting permissions of binaries after install or applying configuration changes right after install were missing, leading to programs failing to fully behave properly, or even breaking, even though they showed as finely installed …

The final solution to this problem was radical.
I've used the /var/lib/dpkg database (directory) from the other working Linux machine with an OK dpkg DB, found in var_backup_debian10.tar.gz (the linux:~# host with the working dpkg database), and then, based on the now correctly responding dpkg package list database on jericho:~#, reinstalled each and every package on the system using a Debian System Reinstaller script taken from the Internet.
The Debian System Reinstaller works, but while reinstalling many packages I was prompted again and again whether to overwrite the configuration or keep the present one for each package.
To omit the annoying [Y / N] text prompts, I made a slight modification to the script so it finally looked like this:
 

#!/bin/bash
# Debian System Reinstaller
# Copyright (C) 2015 Albert Huang
# Copyright (C) 2018 Andreas Fendt

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# ---
# This script assumes you are using a Debian based system
# (Debian, Mint, Ubuntu, #!), and have sudo installed. If you don't
# have sudo installed, replace "sudo" with "su -c" instead.

pkgs=`dpkg --get-selections | grep -w 'install$' | cut -f 1 | egrep -v '(dpkg|apt)'`

for pkg in $pkgs; do
    echo -e "\033[1m   * Reinstalling:\033[0m $pkg"    

    apt-get --reinstall -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install $pkg || {
        echo "ERROR: Reinstallation failed. See reinstall.log for details."
        exit 1
    }
done

 

debian-all-packages-reinstall.sh, the working modified version of Albert Huang's and Andreas Fendt's script, can also be downloaded here.

Note! Omitting the text confirmation prompts asking whether to install the newest config or keep the maintainer configuration is handled by the argument:

 

-o Dpkg::Options::="--force-confold"


However, I still got a few ncurses console selection prompts during the reinstall of the about 3200+ .deb packages, so even with this mod the reinstall was not completely automatic.

Note! During the reinstall, a few of the packages from the list failed due to being old unsupported packages; these were ejabberd, ircd-hybrid and 2 / 3 more.
This failure was easily solved by completely purging those packages with the usual:

# dpkg --purge $packagename

and re-running debian-all-packages-reinstall.sh for each of the failing packages.

Note! The failing packages were just old ones left over from Debian 8 and Debian 9, from before the apt-get dist-upgrade towards 10 Buster.
Eventually, by God's grace, after a few hours of pains and trials I got success, ending up with a working package database and a complete set of freshly reinstalled packages.

The only thing I had to do finally was 2 hours of figuring out why GNOME did not automatically boot after the system reboot due to a failing gdm.
Until I fixed that, I temporarily used lightdm (x-display-manager); to switch / reconfigure display managers I ran:

# dpkg-reconfigure gdm3

lightdm-x-display-manager-screenshot-gdm3-reconfige

To work around this I had to also reinstall a few libraries, reinstall the xorg-server, reinstall gdm and reinstall the metapackage for GNOME, using the below set of commands:
 

apt-get install --reinstall libglw1-mesa libglx-mesa0
apt-get install --reinstall libglu1-mesa-dev
apt install --reinstall gsettings-desktop-schemas
apt-get install --reinstall xserver-xorg-video-intel
apt-get install --reinstall xserver-xorg
apt-get install --reinstall xserver-xorg-core
apt-get install --reinstall task-desktop
apt-get install --reinstall task-gnome-desktop

 

As some packages did not end up reinstalled on the system, because the original host from where the /var/lib/dpkg DB was copied did not have them, I eventually had to manually trigger a reinstall for those too:

 

apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes thunderbird
apt-get install --reinstall --yes audacity
apt-get install --reinstall --yes gajim
apt-get install --reinstall --yes slack remmina
apt-get install --yes k3b
apt-get install --yes gbgoffice
apt-get install --reinstall --yes skypeforlinux
apt-get install --reinstall --yes libcurl3-gnutls libcurl3-nss
apt-get install --yes virtualbox-5.2
apt-get install --reinstall --yes alsa-tools-gui
apt-get install --reinstall --yes gftp
apt install ./teamviewer_15.3.2682_amd64.deb --yes

 

Note that some of the above packages require properly configured third party repositories; other people might have other packages missing from the dpkg list that need to be reinstalled, so just decide according to your own case, looking for working binaries present on the system that don't belong to any dpkg installed package.

After a bit of struggle everything is back to normal Thanks God! 🙂 !
 

 

Send TCP / UDP strings and commands to Local and Remote Applications without netcat with Bash


July 24th, 2020

bash-open-network-tcp-udp-connections-from-shell-gnu-bash-shell-shell-logo

Have you ever needed to manually send TCP / UDP packets to send commands to local or remote applications, while having a fully functional Bash shell but without the luxury of having NC (netcat – the Swiss Army Knife of networking)?
This can happen if you have some Linux based embedded device such as an Arduino, or a Linux server with high security PCI requirements which can't afford to have netcat in place, or another piece of portable hardware with a Linux kernel that needs to communicate over UDP for some reason, but you don't want to waste an additional 28KB, or you physically have access to a Linux device that doesn't have netcat but you still want to be able to send UDP externally …

Since some time, newer GNU Bash releases support TCP / UDP data sending, as described in Bash's manual. It is not as good as you might expect, but for small things it can save the day.

The syntax to use it is:
 

/dev/PROTOCOL/HOST/PORT   (where PROTOCOL is tcp or udp)


To open a new socket connection to a UDP / TCP service with bash, you simply open a new file descriptor (let's say 3) pointing to e.g.:

 

/dev/tcp/your-url.com/80


or

/dev/udp/83.228.93.76/53

 

1. Get GOOGLE HTML Source with simple BASH / Getting URL Index with bash sockets


If you happen to have access to a machine where no network download tool or text browser such as curl, wget, lynx or links is available, but you want to dump the content of an index.html or any other URL with bash alone, you can do it like so:
 

exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n" >&3

cat <&3

If you need to open a connection to an Internet domain with bash and store the output into a separate .html file:

exec 3<>/dev/tcp/www.google.com/80
echo -e "GET / HTTP/1.1\r\nHost: www.google.com\r\nConnection: close\r\n\r\n" >&3

cat <&3 | tee -a output.html


Note that this will work only if you're logged into an interactive shell.
If you want to do it from a shell script instead (and omit the usage of wget etc.), use something like the basic bash_sockets_google_connect.sh script:

#!/bin/bash
exec 3<>/dev/tcp/www.google.com/80
printf "GET /\r\n\r\n" >&3
while IFS= read -r -u3 -t2 line || [[ -n "$line" ]]; do echo "$line"; done
exec 3>&-
exec 3<&-

 

2. Sending UDP protocol data via bash socket


To send test variables or command data to a UDP listening service on localhost (127.0.0.1):
 

echo 'TEST COMMAND' > /dev/udp/127.0.0.1/538

 

echo "Any UDP data" > /dev/udp/127.0.0.1/3000


If you happen to have netcat, or you are running a bash shell that doesn't properly support TCP / UDP sending, you can always do it the netcat way:

echo "Command" | nc -u -w0 127.0.0.1 3000


Of course this little hack is useful just for simple things; eventually, for more complex stuff and scripting, you would like to use a fully functional W3C compliant web browser or a proper downloader tool.
Still, for quick and dirty stuff, Bash socketing from the console rocks pretty much! 🙂

 

How to install GUI on CentOS 7 Minimal and set Gnome Graphical Environment to automatically load on system boot


July 22nd, 2020

centos-linux-logo

I have installed CentOS 7.7 Minimal Server Linux in a VirtualBox virtual environment as a test bed machine.

The system installed easily and successfully with the standard CentOS Python based graphical installer; however, I needed to place various software on it which was not there,
and for that of course I needed to have networking enabled.

To make the network work, instead of the default NAT network configuration for the virtual machine, I needed the network to be Attached to a Bridged Adapter, in order to make
my Windows machine provide network (and Internet) access to the VirtualMachine.

virtualbox-virtualmachine-bridged-networking-configuration-screenshot

Then, to make networking work after booting into CentOS, I had to manually fetch an IP via the DHCP protocol with the command:
 

[root@centos :~]# dhclient enp0s3

 


ethernet0-interface-dhclient-get-ip-linux

To make the setting permanent, I had of course to also modify the /etc/sysconfig/network-scripts/ifcfg-enp0s3 file and change

 

ONBOOT=no

 

 

 

to 

ONBOOT=yes


enable-dhclient-centos-linux-shot

On the next reboot, CentOS boots normally with networking, as expected.

By default, CentOS Minimal does not provide any graphical environment; however, I needed one in my VM in order to be able to use VBoxLinuxAdditions.run (the VirtualBox Guest Additions), which enables the CentOS operating system to show in VirtualBox in fullscreen and makes the Copy / Paste buffers work between the hypervisor host (Windows in this case) and the guest VM (the CentOS VM).

In CentOS terminology, metapackages (a group of packages under a certain name alias) are called simply groups. There is a "GNOME Desktop" group that can be used to install the GNOME graphical environment from that point on with yum, like so:
 

[root@centos :~]# yum -y groups install "GNOME Desktop"
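
As a side note, if you want to check the exact group names available from your repositories / install media before picking one, yum can list them:

[root@centos :~]# yum group list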


In a while the graphical environment will be in place. The command will install about 1300+ RPM packages and will take about 5 minutes or so, depending on your bandwidth connectivity. Once all is installed and configured successfully, you can use the good old startx command to launch GNOME.

 

 

[root@centos :~]# startx


centos7-linux-graphical-environment-screenshot

This of course will make the Xserver and GNOME run one time only; on the next reboot you will end up in a plain text mode environment again. So you will probably want to make the GNOME environment launch automatically on each boot. In CentOS, just like in most modern Linux distributions that use SYSTEMD to handle runlevels, you configure that by changing the systemd default target via systemctl:

 

[root@centos :~]# systemctl list-units --type target | egrep "eme|res|gra|mul"
graphical.target       loaded active active Graphical Interface
multi-user.target      loaded active active Multi-User System

 

[root@centos :~]# systemctl get-default
multi-user.target

[root@centos :~]# systemctl set-default graphical.target

[root@centos :~]# systemctl get-default
graphical.target


The next step was to enable the Guest Additions. To do so, I had to install 2 RPM packages in advance, kernel-headers and kernel-devel:


[root@centos :~]# yum install -y kernel-headers kernel-devel

Then I had to mount the Guest Additions CD image and run the VBoxLinuxAdditions.run script to enable them, i.e.:

 

[root@centos :~]# mkdir /mnt/cdrom
[root@centos :~]# mount /dev/cdrom /mnt/cdrom

[root@centos :~]# cd /mnt/cdrom/
[root@centos :~]# sh ./VBoxLinuxAdditions.run

 


virtualbox-linux-additions-install-screenshot-centos-7-linux

 

 

 

Get daily E-Mail Reports statistics on postfix Linux mail server


July 14th, 2020

https://pc-freak.net/images/Postfix-email-server-logo.svg-1

I had a task at work today: to monitor the emails sent and received by a Postfix mail server (MAIL FROM / RCPT TO) and get simple statistics on what kind of emails are coming in and going out of the Postfix SMTP on the server.

Below I shortly explain how I did it, plus you will learn how to use something more advanced to get server mail counts, delivery status, errors etc. on a daily basis.
 

1. Using a simple script to process /var/log/maillog

For that I made a small script to do the trick; the script simply extracts the mail delivery information logged in /var/log/maillog, processes and sorts it a bit and writes it into a separate log file daily.

#!/bin/sh
# Process /var/log/maillog, extract the from= and to= mail addresses,
# sort them and log the result to $LOGF
# Author Georgi Georgiev 14.07.2020

DATE_FORM=$(date +'%m_%d_%y_%H_%M_%S');
LOG='/home/gge/mail_from_to-mails';
LOGF="$LOG.$DATE_FORM.log";
CUR_DATE=$(date +'%m_%d_%y_%T');
echo "Processing /var/log/maillog";
echo "Processing /var/log/maillog" > $LOGF;
echo >>$LOGF
echo "!!! $CUR_DATE # Sent MAIL FROM: addresses: !!!" >> $LOGF;
# after splitting on '=' field 8 holds the <address> token in the default maillog format
grep -E 'from=' /var/log/maillog|sed -e 's#=# #g'|awk '{ print $8 }'|sed -e 's#<# #g' -e 's#># #g' -e 's#\,##'|sort|uniq >> $LOGF;

echo "!!! $CUR_DATE # Received RCPT TO: addresses !!!" >>$LOGF;
grep -E 'to=' /var/log/maillog|sed -e 's#=# #g'|awk '{ print $8 }'|sed -e 's#<# #g' -e 's#># #g' -e 's#\,##'|sort|uniq >> $LOGF;


You can get a copy of the mail_from_to_collect_mails_postfix.sh script here.
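
To give the script a one-off test run before scheduling it (the paths are an assumption based on my setup, adjust to wherever you saved the script):

# sh /home/gge/mail_from_to_collect_mails_postfix.sh
# less /home/gge/mail_from_to-mails.*.log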

I've set the script to run via a crond scheduled job once early in the morning and I'll leave it like that for 5 days or so to get a good idea on what the mailboxes receiving incoming mail are.

The cron I've set to use is as follows:

# crontab -u root -l 
05 03 * * *     sh /home/gge/mail_from_to.sh >/dev/null 2>&1

 

This will be necessary later for a planned Email Server migration to relay its mail via another MTA host.

 

2. Getting More Robust Postfix Mail Statistics from logs


My little script is of course far from the best solution for getting postfix mail statistics out of the logs.

If you want something more professional and you need a daily report on mails sent to the mail server and mails sent from the MTA, with information about the Email delivery queue status, the number of successful and failed emails per sender / recipient and a whole bunch of other useful info, you can use something more advanced such as the pflogsumm perl script to get daily / weekly / monthly mail delivery statistics.

What can pflogsumm do for you ?

Pflogsumm is a log analyzer / summarizer for the Postfix MTA. It is
designed to provide an overview of Postfix activity, with just enough
detail to give the administrator a "heads up" for potential trouble
spots and help with fixing any SMTP and email related issues.

Pflogsumm generates summaries and, in some cases, detailed reports of
mail server traffic volumes, rejected and bounced email, and server
warnings, errors, and panics.

At the time of writing this article it lives on jimsun.linxnet.com; just in case pflogsumm.pl's official download location disappears at some point in the future, here is a pflogsumm-1.1.3.tar.gz mirror stored on pc-freak.net.

– Install pflogsumm

Use of pflogsumm is pretty straightforward: you download and unarchive the script, move it to some location such as /usr/local/bin/pflogsumm, add the executable flag and run it to create a Postfix Mail Log statistics report for you:

# mkdir -p /usr/local/src/
# cd /usr/local/src/
# wget http://jimsun.linxnet.com/downloads/pflogsumm-1.1.3.tar.gz -O /usr/local/src/pflogsumm-1.1.3.tar.gz
# tar -zxvf pflogsumm-1.1.3.tar.gz
# cd pflogsumm-1.1.3/

# mv /usr/local/src/pflogsumm-1.1.3/pflogsumm.pl /usr/local/bin/pflogsumm
# chmod a+x /usr/local/bin/pflogsumm


That's all; assuming you have perl with the standard modules installed on the system, we're now good to go:

To give it a test run and print a report to the command line:

# /usr/local/bin/pflogsumm -d today /var/log/maillog

pflogsumm-log-summary-screenshot-linux-received-forwarded-bounced-rejected

To generate a mail server usage report and send it to an email of choice do:

# /usr/local/bin/pflogsumm -d today /var/log/maillog | mail -s Mailstats your-mail@your-domain.com


To make pflogsumm report every day various interesting stuff such as message deferrals, message bounce details, smtp delivery failures, fatal errors, recipients by message size etc., add some cronjob like the one below to the server (here scheduled daily at 07:00, adjust the time to taste; the path will be /usr/local/bin/pflogsumm if you installed it manually as above):

00 07 * * *     /usr/sbin/pflogsumm -d yesterday /var/log/maillog | mail -s Mailstats your-mail@your-domain.com

If you need GUI graphical mail monitoring in a Web Browser, you will need to install a webserver with perl / CGI support, RRDTool and MailGraph (a sketch install command follows below).

linux-monitoring-mail-server-with-mailgraph.cgi
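
Covering the MailGraph setup is out of the scope of this post, but on a Debian based host something like the following should pull in the basic pieces (the package names are an assumption and may differ per distribution):

# apt-get install --yes rrdtool mailgraph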

How to check how many processors and volume groups an IBM AIX eServer has


July 13th, 2020

how-many-cpus-are-on-commands-Linux-sysadmin-and-user-show-know-AIX-logo
In my daily sysadmin duties I have usually been administrating GNU / Linux or FreeBSD servers.
However, recently I have been assigned to do some minor sysadmin activities on a few IBM AIX eServer UNIX machines.

As the eServers were completely unknown to me, when I logged in for the first time I needed a way to get an idea what kind of hardware I was logging into, so I wanted to get information about the Central Processing Units (CPUs) on the host.

On Linux I'm used to doing a cat /proc/cpuinfo or running dmidecode etc. to get the number of CPUs; however AIX does not have /proc/cpuinfo and has its own way to provide information about the system hardware.
As I've read in IBM's AIX RedBook, to get system information on AIX there is the lscfg command.
 

aix:/# lscfg
INSTALLED RESOURCE LIST

The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
*   = Diagnostic support not available.

  Model Architecture: chrp
  Model Implementation: Multiple Processor, PCI bus

+ sys0                                                            System Object
+ sysplanar0                                                      System Planar
* vio0                                                            Virtual I/O Bus
* vscsi3           U8205.E6B.068D6AP-V4-C21-T1                    Virtual SCSI Client Adapter
* vscsi2           U8205.E6B.068D6AP-V4-C20-T1                    Virtual SCSI Client Adapter
* vscsi1           U8205.E6B.068D6AP-V4-C11-T1                    Virtual SCSI Client Adapter
* hdisk1           U8205.E6B.068D6AP-V4-C11-T1-L8100000000000000  Virtual SCSI Disk Drive
* vscsi0           U8205.E6B.068D6AP-V4-C10-T1                    Virtual SCSI Client Adapter
* hdisk0           U8205.E6B.068D6AP-V4-C10-T1-L8100000000000000  Virtual SCSI Disk Drive
* ent3             U8205.E6B.068D6AP-V4-C5-T1                     Virtual I/O Ethernet Adapter (l-lan)
* ent2             U8205.E6B.068D6AP-V4-C4-T1                     Virtual I/O Ethernet Adapter (l-lan)
* ent1             U8205.E6B.068D6AP-V4-C3-T1                     Virtual I/O Ethernet Adapter (l-lan)
* ent0             U8205.E6B.068D6AP-V4-C2-T1                     Virtual I/O Ethernet Adapter (l-lan)
* vsa0             U8205.E6B.068D6AP-V4-C0                        LPAR Virtual Serial Adapter
* vty0             U8205.E6B.068D6AP-V4-C0-L0                     Asynchronous Terminal
+ L2cache0                                                        L2 Cache
+ mem0                                                            Memory
+ proc0                                                           Processor
+ proc4                                                           Processor


To get just the number of processors on the host I had to use:

 

aix:/# lscfg|grep -i proc
  Model Implementation: Multiple Processor, PCI bus
+ proc0                                                           Processor
+ proc4                                                           Processor


Another way to get the CPU number is with:

aix:/# lsdev -C -c processor
proc0 Available 00-00 Processor
proc4 Available 00-04 Processor

 

aix:/# lsattr -EH -l proc4
attribute   value          description           user_settable

 

frequency   3720000000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 4              Processor SMT threads False
state       enable         Processor state       False
type        PowerPC_POWER7 Processor type        False

aix:/# lsattr -EH -l proc0
attribute   value          description           user_settable

 

frequency   3720000000     Processor Speed       False
smt_enabled true           Processor SMT enabled False
smt_threads 4              Processor SMT threads False
state       enable         Processor state       False
type        PowerPC_POWER7 Processor type        False


As you can see, the machine has two processors (proc0 and proc4) and each of them runs four SMT threads. To get the overall number of CPUs on the system, including the threaded Virtual CPUs:

aix:/# bindprocessor -q
The available processors are:  0 1 2 3 4 5 6 7


This specific machine has an overall of 8 logical CPUs.
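
To cross-check the arithmetic (2 processors x 4 SMT threads = 8 logical CPUs) you can script it; a small sketch assuming the lsdev / lsattr output formats shown above:

aix:/# procs=$(lsdev -C -c processor | grep -c Available)
aix:/# threads=$(lsattr -El proc0 -a smt_threads -F value)
aix:/# echo "$procs processors x $threads SMT threads = $((procs * threads)) logical CPUs"
2 processors x 4 SMT threads = 8 logical CPUs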

lscfg can be used to get various other useful info about the iron:

aix:/# lscfg -s
INSTALLED RESOURCE LIST

 

The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
*   = Diagnostic support not available.

  Model Architecture: chrp
  Model Implementation: Multiple Processor, PCI bus

+ sys0
        System Object
+ sysplanar0
        System Planar
* vio0
        Virtual I/O Bus
* vscsi3           U8305…………….
        Virtual SCSI Client Adapter
* vscsi2           U8305…………….
        Virtual SCSI Client Adapter
* vscsi1           U8305…………….
        Virtual SCSI Client Adapter
* hdisk1           U8305…………….
        Virtual SCSI Disk Drive
* vscsi0           U8305……………..
        Virtual SCSI Client Adapter
* hdisk0           U8305…………….
        Virtual SCSI Disk Drive
* ent3             U8305…………….
        Virtual I/O Ethernet Adapter (l-lan)
* ent2             U8305.E6B…………….
        Virtual I/O Ethernet Adapter (l-lan)
* ent1             U8305.E6B…………….
        Virtual I/O Ethernet Adapter (l-lan)
* ent0             U8305.E6B…………….
        Virtual I/O Ethernet Adapter (l-lan)
* vsa0             U8305.E7B…………….
        LPAR Virtual Serial Adapter
* vty0             U8305.E7B…………….
        Asynchronous Terminal
+ L2cache0
        L2 Cache
+ mem0
        Memory
+ proc0
        Processor
+ proc4
        Processor

aix:/# lscfg -p
INSTALLED RESOURCE LIST

The following resources are installed on the machine.

  Model Architecture: chrp
  Model Implementation: Multiple Processor, PCI bus

  sys0                                                            System Object
  sysplanar0                                                      System Planar
  vio0                                                            Virtual I/O Bus
  vscsi3           U8305.E7B…………….V6-C40-T1                    Virtual SCSI Client Adapter
  vscsi2           U8305.E7B…………….V6-C40-T1                     Virtual SCSI Client Adapter
  vscsi1           U8305.E7B…………….V6-C40-T1                    Virtual SCSI Client Adapter
  hdisk1           U8305.E7B…………….V6-C40-T1-L8500000000000000  Virtual SCSI Disk Drive
  vscsi0           U8305.E7B…………….V6-C40-T1                    Virtual SCSI Client Adapter
  hdisk0           U8305.E7B…………….V6-C40-T1-L8500000000000000  Virtual SCSI Disk Drive
  ent3             U8305.E7B…………….V6-C40-T1                     Virtual I/O Ethernet Adapter (l-lan)
  ent2             U8305.E7B…………….V6-C40-T1                     Virtual I/O Ethernet Adapter (l-lan)
  ent1             U8305.E7B…………….V6-C40-T1                     Virtual I/O Ethernet Adapter (l-lan)
  ent0             U8305.E7B…………….V6-C40-T1                     Virtual I/O Ethernet Adapter (l-lan)
  vsa0             U8305.E7B.069D7AP-V5-C1                        LPAR Virtual Serial Adapter
  vty0             U8305.E7B.069D7AP-V5-D1-L0                     Asynchronous Terminal
  L2cache0                                                        L2 Cache
  mem0                                                            Memory
  proc0                                                           Processor
  proc4                                                           Processor

  PLATFORM SPECIFIC

  Name:  IBM,8305-E7B
    Model:  IBM,8305-E7B
    Node:  /
    Device Type:  chrp

  Name:  openprom
    Model:  IBM,AL730_158
    Node:  openprom

  Name:  interrupt-controller
    Model:  IBM, Logical PowerPC-PIC, 00
    Node:  interrupt-controller@0
    Device Type:  PowerPC-External-Interrupt-Presentation

  Name:  vty
    Node:  vty@30000000
    Device Type:  serial
    Physical Location: …………………………………………..

  Name:  l-lan
    Node:  l-lan@30000002
    Device Type:  network
    Physical Location: …………………………………………..

  Name:  l-lan
    Node:  l-lan@30000003
    Device Type:  network
    Physical Location: …………………………………………..

  Name:  l-lan
    Node:  l-lan@30000004
    Device Type:  network
    Physical Location: …………………………………………..

  Name:  l-lan
    Node:  l-lan@30000005
    Device Type:  network
    Physical Location: …………………………………………..

  Name:  v-scsi
    Node:  v-scsi@3000005a
    Device Type:  vscsi
    Physical Location: …………………………………………..

  Name:  v-scsi
    Node:  v-scsi@3000005b
    Device Type:  vscsi
    Physical Location: …………………………………………..

  Name:  v-scsi
    Node:  v-scsi@30000014
    Device Type:  vscsi
    Physical Location: ………………………………..

  Name:  v-scsi
    Node:  v-scsi@30000017
    Device Type:  vscsi
    Physical Location: …………………………………

 


Another useful command I found lists the AIX equivalent of Linux's LVM Logical Volumes, the Physical Volumes configured on the system; below is how:

aix:/# lspv
hdisk0          00f68c6a84acb0d5                    rootvg          active
hdisk1          00f69d6a85400468                    dsvg            active

To get more info on a physical volume and the volume group it belongs to:

aix:/# lspv hdisk0
PHYSICAL VOLUME:    hdisk0                   VOLUME GROUP:     rootvg
PV IDENTIFIER:      00f68d6a85acb0d5 VG IDENTIFIER     00f68d6a00004c0000000131353444a5
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            32 megabyte(s)           LOGICAL VOLUMES:  12
TOTAL PPs:          959 (30688 megabytes)    VG DESCRIPTORS:   2
FREE PPs:           493 (15776 megabytes)    HOT SPARE:        no
USED PPs:           466 (14912 megabytes)    MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  191..00..00..110..192
USED DISTRIBUTION:  01..192..191..82..00
MIRROR POOL:        None


You can also check which locally configured partition (logical volume) resides on which Physical Volume (PV):

aix:/# lspv -l hdisk0
hdisk0:
LV NAME               LPs     PPs     DISTRIBUTION          MOUNT POINT
lg_dumplv             64      64      00..64..00..00..00    N/A
hd8                   1       1       00..00..01..00..00    N/A
hd6                   16      16      00..16..00..00..00    N/A
hd2                   166     166     00..45..89..32..00    /usr
hd4                   29      29      00..11..18..00..00    /
hd3                   40      40      00..04..04..32..00    /tmp
hd9var                55      55      00..00..37..18..00    /var
hd10opt               74      74      00..37..37..00..00    /opt
hd1                   8       8       00..07..01..00..00    /home
hd5                   1       1       01..00..00..00..00    N/A
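
And since the article title promises volume groups: to list the volume groups themselves and the logical volumes inside one, there is lsvg; a quick sketch (rootvg / dsvg are the VGs visible in the lspv output above):

aix:/# lsvg
rootvg
dsvg

aix:/# lsvg -l rootvg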

Linux Send Monitoring Alert Emails without Mail Server via relay SMTP with ssmtp / msmtp


July 10th, 2020

ssmtp-linux-server-sending-email-without-a-local-mail-server-mta-relay-howto

Say you have to set up a new Linux server where certain locally running daemons must be monitored with custom scripts (Nagios / Zabbix / Grafana etc.) that should notify you when a certain criteria is matched, or you simply want the existing local UNIX accounts to be able to send outbound Emails to the Internet.

Then usually you would install a fully functional SMTP Email server (Sendmail or Qmail in the old days of the early 21st century, usually Postfix or Exim nowadays) and configure it to use some kind of SMTP as a relay mail server.

The common relay SMTP setting would be something such as Google's smtp.gmail.com, Yahoo!'s smtp.mail.yahoo.com relay host, mail.com, an externally configured physical MTA server with proper PTR / MX records, or an SMTP server hosted on a virtual machine living in Amazon's AWS or m$ Azure that is capable of delivering EMails to the Internet.

Configuring the locally installed Mail Transport Agent (MTA) to use a relay server is a relatively easy task, but why should you run a fully stacked MTA service with a number of unnecessary components such as an Email Queue, locally created mailboxes, Firewall rules, DNS records, SMTP Auth, DKIM keys etc., and even the ability to accept mails back, if you just want to simply send-and-forget with a confirmation that the remote email was sent successfully?

This is often the case for some machines, and especially with the inclusion of technologies such as Kubernetes / Clustered environments / Virtual Machines, small proggies such as ssmtp / msmtp that can send mail without a fully functional mail server installed on localhost ( 127.0.0.1 ) are true gems.

The ssmtp program, a Simple Send-only sendMail emulator, has been around in Debian GNU / Linux, Ubuntu, CentOS and mostly all Linuxes for quite some time, but recently the Debian package has been orphaned, so on newer deb based server hosts you need to use msmtp instead.
 

1. Install ssmtp on CentOS / Fedora / RHEL Linux

On RPM based distributions you can't install it until the epel-release repository is enabled:

[root@centos:~]# yum --enablerepo=extras install epel-release

[root@centos:~]# yum install ssmtp


2. Install ssmtp / msmtp on Debian / Ubuntu Linux

If you run an older version of a Debian based distribution the package to install is ssmtp, e.g.:

root@debian:~# apt-get install --yes ssmtp


On newer Debians, as of Debian 10.0 Buster onwards, install instead:

root@debian:~# apt install --yes msmtp-mta

Using msmtp can save you a lot of effort, sparing you from keeping an eye on a separate MTA hanging around and running as a local service, eating up resources that could be spared.
 

3. Configure Relay host for ssmtp


A simple configuration to make ssmtp use the gmail.com SMTP servers as a relay host is below:

linux:~# cat << EOF > /etc/ssmtp/ssmtp.conf
# /etc/ssmtp/ssmtp.conf
# The user that gets all the mails (UID < 1000, usually the admin)
root=user@host.name
# The full hostname.  Must be correctly formed, fully qualified domain name or GMail will reject connection.
hostname=host.name
# The mail server (where the mail is sent to), both port 465 or 587 should be acceptable
# See also https://support.google.com/mail/answer/78799
mailhub=smtp.gmail.com:587
#mailhub=smtp.host.name:465

# The address where the mail appears to come from for user authentication.
rewriteDomain=gmail.com
# Can the email 'From:' header override the default domain?

FromLineOverride=YES

# Username/Password
AuthUser=username@gmail.com
AuthPass=password
AuthMethod=LOGIN
# Use SSL/TLS before starting negotiation
UseTLS=YES
UseSTARTTLS=YES

EOF

This configuration is very basic and it is only suitable if you don't need to receive delivered mails back on the host, i.e. a send-only setup, which is all most people need anyway.

One downside of ssmtp is that the mail password is stored in plain text, so make sure you set proper restrictive permissions on /etc/ssmtp/ssmtp.conf.
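
For example (the root:mail ownership is an assumption based on Debian's packaging defaults, adjust to your distribution):

linux:~# chown root:mail /etc/ssmtp/ssmtp.conf
linux:~# chmod 640 /etc/ssmtp/ssmtp.conf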
 

– If your Gmail account is secured with two-factor authentication, you need to generate a unique App Password to use in ssmtp.conf. You can do so on your App Passwords page. Use Gmail username (not the App Name) in the AuthUser line and use the generated 16-character password in the AuthPass line, spaces in the password can be omitted.

– If you do not use two-factor authentication, you need to allow access for less secure apps in your Google account settings.
 

4. Configuring different msmtp relays for separate user profiles


msmtp is capable of respecting multiple relays for different local UNIX users, assuming each of them has a separate home under /home/your-username.

To set a certain user, let's say georgi, to relay SMTP-sent emails with the mail or mailx command, create ~/.msmtprc:

 

linux:~# vim ~/.msmtprc


Append configuration like:

# Set default values for all following accounts.
defaults
port 587
tls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
account gmail
host smtp.gmail.com
from <user>@gmail.com
auth on
user <user>
passwordeval gpg --no-tty -q -d ~/.msmtp-gmail.gpg
# Set a default account

account default : gmail


To add it for any different user, modify the respective fields and set a different mail hostname etc.
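
The passwordeval line above expects the SMTP password stored GPG-encrypted in ~/.msmtp-gmail.gpg; a sketch of preparing it (your-key-id is a placeholder for your own GPG key id), plus tightening the config permissions msmtp is picky about:

linux:~$ echo "your-smtp-password" | gpg --encrypt --recipient your-key-id -o ~/.msmtp-gmail.gpg
linux:~$ chmod 600 ~/.msmtprc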
 

5. Using mail address aliases


msmtp also supports mail aliases; to make them work you will need to have in /etc/msmtprc the line:
 

aliases               /etc/aliases


Standard aliases should then work:

linux:~# cat /etc/aliases
# Example aliases file
     
# Send root to Joe and Jane
root: georgi_georgiev@example.com, georgi@example.com
   
# Send everything else to admin
default: admin@domain.example

 

6. Get updated when your Debian servers have new packages to update 

msmtp can be used for multiple things; one example use would be together with cron, to get daily notifications if there are new Debian security or errata updates pending. To do so you can use the apticron shell script.

To use it on Debian install the apticron package:
 

root@debian:~# apt-get install --yes apticron

apticron has the capability to:

 * send daily emails about pending upgrades in your system;
 * give you the choice of receiving only those upgrades not previously notified;
 * automatically integrate to apt-listchanges in order to give you by email the
   new changes of the pending upgrade packages;
 * handle and warn you about packages put on hold via aptitude/dselect,
   avoiding unexpected package upgrades (see #137771);
 * give you all these stuff in a simple default installation;

 

To configure it you have to put a config in place; copy the one from /usr/lib/apticron/apticron.conf to /etc/apticron/apticron.conf.

The only important value to modify in the config is the EMAIL field, the address to which the apt-listchanges info about newly installable debs (as offered by the apt-get dist-upgrade command) will be sent.
 

EMAIL="<your-user@email-addr-domain.com>"


The timing at which the offered new pending package update reminder will be sent is controlled by /etc/cron.d/apticron
 

debian:~# cat /etc/cron.d/apticron
# cron entry for apticron

48 * * * * root if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi

apticron will use the previously installed ssmtp / msmtp program to deliver the report to the configured mailbox.
To manually trigger apticron run:
 

root@debian:~# if test -x /usr/sbin/apticron; then /usr/sbin/apticron --cron; else true; fi


7. Test whether local mail send works to the Internet

To test mail sending we can use either the mail / mailx or the sendmail command, or some more advanced mailer such as alpine or mutt.

Below are a few examples.

linux:~$ echo -e "Subject: this is the subject\n\nthis is the body" | mail user@your-recipient-domain.com

To test that sending mail content read from a file also works, run (note the file ends up as the message body, not as a real attachment):

linux:~$ mail -s "Subject" recipient-email@domain.com < mail-content-to-attach.txt

or

Prepare the mail you want to send and send it with sendmail

linux:~$ vim test-mail.txt
To: username@example.com
From: youraccount@gmail.com
Subject: Test Email

This is a test mail.

linux:~$ sendmail -t < test-mail.txt

Sending uuencoded attachments is also possible, but you will need the sharutils Deb / RPM package installed.

To attach, let's say, a simple text file uuencoded (uuencode's second argument is the name under which the attachment will appear):

linux:~$ uuencode file.txt myfile.txt | sendmail user@example.com

linux:~$ echo "To: username@domain.com
From: username@gmail.com
Subject: A test

Hello there." > test.mail

linux:~$ cat test.mail | msmtp -a default <username>@domain.com


That's all folks, hope you learned something. If you know of some better stuff like ssmtp, please share it.