Archive for the ‘Curious Facts’ Category

Improve SSL security: Generate and add Diffie Hellman key to SSL certificate for stronger line encryption

Wednesday, June 10th, 2020

Diffie–Hellman key exchange (DH) is a method of securely exchanging cryptographic keys over a public channel and was one of the first public-key protocols as conceived by Ralph Merkle and named after Whitfield Diffie and Martin Hellman. DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography.

Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trusted courier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher.

DH has been widely used on the Internet to improve the encryption between communicating parties. One note: it is only useful if both communication sides A and B are under your control, as what DH does is strengthen the already established connection between client A and B; it does not protect from Man in the Middle attacks. If some malicious user could connect to B pretending to be A, the encryption will still be established.

diffie-hellman-explained

Alternatively, the Diffie-Hellman key exchange can be combined with an algorithm like the Digital Signature Standard (DSS) to provide authentication, key exchange, confidentiality and check the integrity of the data. In such a situation, RSA is not necessary for securing the connection.

TLS, which is a protocol that is used to secure much of the internet, can use the Diffie-Hellman exchange in three different ways: anonymous, static and ephemeral. In practice, only ephemeral Diffie-Hellman should be implemented, because the other options have security issues.

Anonymous Diffie-Hellman – This version of the Diffie-Hellman key exchange doesn’t use any authentication, leaving it vulnerable to man-in-the-middle attacks. It should not be used or implemented.

Static Diffie-Hellman – Static Diffie-Hellman uses certificates to authenticate the server. It does not authenticate the client by default, nor does it provide forward secrecy.

Ephemeral Diffie-Hellman – This is considered the most secure implementation because it provides perfect forward secrecy. It is generally combined with an algorithm such as DSA or RSA to authenticate one or both of the parties in the connection.

Ephemeral Diffie-Hellman uses different key pairs each time the protocol is run. This gives the connection perfect forward secrecy, because even if a key is compromised in the future, it can’t be used to decrypt all of the past messages.

diffie-hellman-dh-revised

DH parameters can be generated with the openssl command, using a 1024 / 2048 or 4096 bit length depending on your preference.
Of course it is best to use the strongest practical size, i.e. 4096.
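For example, generating the stronger 4096-bit parameters is the same command shown further below, just with a different bit length (expect it to take noticeably longer):

# openssl dhparam -out dhparams4096.pem 4096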

The Logjam attack 

The Diffie-Hellman key exchange was designed on the basis of the discrete logarithm problem being difficult to solve. The most effective publicly known mechanism for finding the solution is the number field sieve algorithm.

The capabilities of this algorithm were taken into account when the Diffie-Hellman key exchange was designed. By 1992, it was known that for a given group, G, three of the four steps involved in the algorithm could potentially be computed beforehand. If this progress was saved, the final step could be calculated in a comparatively short time.

This wasn’t too concerning until it was realized that a significant portion of internet traffic uses the same groups that are 1024 bits or smaller. In 2015, an academic team ran the calculations for the most common 512-bit prime used by the Diffie-Hellman key exchange in TLS.

They were also able to downgrade 80% of TLS servers that supported DHE-EXPORT, so that they would accept a 512-bit export-grade Diffie-Hellman key exchange for the connection. This means that each of these servers is vulnerable to an attack from a well-resourced adversary.

The researchers went on to extrapolate their results, estimating that a nation-state could break a 1024-bit prime. By breaking the single most-commonly used 1024-bit prime, the academic team estimated that an adversary could monitor 18% of the one million most popular HTTPS websites.

They went on to say that a second prime would enable the adversary to decrypt the connections of 66% of VPN servers, and 26% of SSH servers. Later in the report, the academics suggested that the NSA may already have these capabilities.

“A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.”

Despite this vulnerability, the Diffie-Hellman key exchange can still be secure if it is implemented correctly. As long as a 2048-bit key is used, the Logjam attack will not work. Updated browsers are also secure from this attack.

Is the Diffie-Hellman key exchange safe?

While the Diffie-Hellman key exchange may seem complex, it is a fundamental part of securely exchanging data online. As long as it is implemented alongside an appropriate authentication method and the numbers have been selected properly, it is not considered vulnerable to attack.

The Diffie-Hellman key exchange was an innovative method for helping two unknown parties communicate safely when it was developed in the 1970s. While we now implement newer versions with larger keys to protect against modern technology, the protocol itself looks like it will continue to be secure until the arrival of quantum computing and the advanced attacks that will come with it.

Here is how easy it is to add this extra encryption to make the SSL tunnel between A and B stronger.

On a Linux / Mac / BSD OS machine install and use openssl client like so:
 

# openssl dhparam -out dhparams1.pem 2048
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
....................+..............+..........................................+
...............................++*++*

Be aware that the Diffie-Hellman key exchange would be insecure with parameters much smaller than the ones generated here: 2048 bits is considered the practical minimum nowadays, and 512 or 1024 bit groups should not be used.

# cat dhparams1.pem
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAwG85wZPoVAVhwR23H5cF81Ml4BZTWuEplrmzSMOR9UNMnKjURf10
JX9xe/ZaqlwMxFYwZLyqtFQB2zczuvp1j+tKkSi4/TbD6Qm6gtsTeRghqunfypjS
+c4dNOVSbo/KLuIB5jDT31iMUAIDJF8OBUuqazRsg4pmYVHFm1KLHCcgcTk5kXqh
m8vXoCTlaLlmicC9pRTgQLuAQRXAF8LnVLCUvGlsyynTdc0yUFePWkmeYHMYAmWo
aBS6AMFNDvOxCubWv9cULkOouhPzd8k0wWYhUrrxMJXc1bSDFCBA7DiRCLPorefd
kCcNJFrh7rgy1lmu00d3I5S9EPH/EyoGSwIBAg==
-----END DH PARAMETERS-----


Copy the generated DH PARAMETERS block to the end of your combined .PEM certificate file and save it:
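If you prefer to do that non-interactively instead of editing the file by hand, appending the parameters file works just as well (a minimal example, using the same certificate path as in the editing example below):

# cat dhparams1.pem >> /etc/haproxy/cert/ssl-cert.pem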

# vim /etc/haproxy/cert/ssl-cert.pem
….
-----BEGIN DH PARAMETERS-----
MIIBCAKCAQEAwG85wZPoVAVhwR23H5cF81Ml4BZTWuEplrmzSMOR9UNMnKjURf10
JX9xe/ZaqlwMxFYwZLyqtFQB2zczuvp1j+tKkSi4/TbD6Qm6gtsTeRghqunfypjS
+c4dNOVSbo/KLuIB5jDT31iMUAIDJF8OBUuqazRsg4pmYVHFm1KLHCcgcTk5kXqh
m8vXoCTlaLlmicC9pRTgQLuAQRXAF8LnVLCUvGlsyynTdc0yUFePWkmeYHMYAmWo
aBS6AMFNDvOxCubWv9cULkOouhPzd8k0wWYhUrrxMJXc1bSDFCBA7DiRCLPorefd
kCcNJFrh7rgy1lmu00d3I5S9EPH/EyoGSwIBAg==
-----END DH PARAMETERS-----

…..

Restart the WebServer or Proxy service where the Diffie-Hellman parameters were installed and voila, you should be a bit more secure.
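For instance, on a systemd based system where HAProxy is the service using the certificate (as in the path example above), the restart would simply be:

# systemctl restart haproxy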

Find when cron.daily cron.weekly and cron.monthly run on Redhat / CentOS / Debian Linux and systemd-timers

Wednesday, March 25th, 2020

Find-when-cron.daily-cron.monthly-cron.weekly-run-on-Redhat-CentOS-Debian-SuSE-SLES-Linux-cron-logo

The problem – Apache restart at random times


I've noticed today something that has been occurring for quite some time but was out of my scope for quite long, as I'm not directly involved in our alert monitoring at my daily job as sys admin. Interestingly, an Apache HTTPD webserver was triggering an alarm twice a day for a short downtime that lasts about 9 seconds.

I've decided to investigate what is triggering the WebServer restart at such seemingly random times, checked the system for any background running scripts and reviewed the system logs. As I couldn't find anything there, the only logical place to check was cron jobs.
The usual
 

crontab -u root -l


Had no configured cron job scripts, so I dug further to check whether there aren't cron job records for a script triggering the reload of Apache in /etc/crontab, /var/spool/cron/root and /var/spool/cron/httpd.
Nothing was found there, and since there was no anacron service running but /usr/sbin/crond, the other expected place to look for a trigger event was /etc/cron*

1. Configured default cron execution times, every day, every hour every month

# ls -ld /etc/cron.*
drwxr-xr-x 2 root root 4096 feb 27 10:54 /etc/cron.d/
drwxr-xr-x 2 root root 4096 dec 27 10:55 /etc/cron.daily/
drwxr-xr-x 2 root root 4096 dec  7 23:04 /etc/cron.hourly/
drwxr-xr-x 2 root root 4096 dec  7 23:04 /etc/cron.monthly/
drwxr-xr-x 2 root root 4096 dec  7 23:04 /etc/cron.weekly/

After a look into each of the above directories, I finally found the expected logrotate shell script set to execute from /etc/cron.daily/logrotate, and inside it, after the log files were set to be gzipped and rotated, a WebServer restart was triggered with:

systemctl reload httpd 

My first reaction was to ponder seriously why the script is invoking systemctl reload httpd instead of the good oldschool

apachectl -k graceful

But it seems that on systemd based Redhat and CentOS (RHEL / CentOS version 7.X onwards) systemctl reload httpd is supposed to be identical to and a substitute for apachectl -k graceful.
Okay, the craziness of innovation continued, as obviously the reload was causing a downtime to be visible in the Zabbix HTTPD port monitoring graph …
Now that the problem was identified, the next logical question popped up: how to find out the exact scheduled timing that runs the script at those unusual, seemingly random times?
 

2. Find out cron scripts timing Redhat / CentOS / Fedora / SLES

The execution method of scripts placed in /etc/cron.{daily,monthly,weekly} has changed over the years, causing a bit of chaos, like many standard Linux things, due to the inclusion of systemd and some other OS design changes. The result is what is explained above: scripts running at strange, unexpected times. One thing that was introduced was anacron, which also executes commands periodically with a preset frequency. It is, however, considered more trustworthy than the crond daemon, because anacron does not assume the machine is continuously running; if the machine is down due to a shutdown or a failure (if it is a Virtual Machine), or crond simply dies, a cron job necessary for the overall environment or application might not run. What anacron guarantees is that even if crond is in a defunct, non-working state, the preset scheduled scripts will still be executed.
anacron's default configuration file location is /etc/anacrontab.

A standard /etc/anacrontab looks like so:
 

[root@centos ~]:# cat /etc/anacrontab
# /etc/anacrontab: configuration file for anacron
 
# See anacron(8) and anacrontab(5) for details.
 
SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22
 
#period in days   delay in minutes   job-identifier   command
1    5    cron.daily        nice run-parts /etc/cron.daily
7    25    cron.weekly        nice run-parts /etc/cron.weekly
@monthly 45    cron.monthly        nice run-parts /etc/cron.monthly

START_HOURS_RANGE: this variable sets the time frame in which the jobs may be started.
The jobs will start only during the 3-22 (3AM-10PM) hours.

  • cron.daily will run at 3:05 (After Midnight) A.M. i.e. run once a day at 3:05AM.
  • cron.weekly will run at 3:25 AM i.e. run once a week at 3:25AM.
  • cron.monthly will run at 3:45 AM i.e. run once a month at 3:45AM.

If the RANDOM_DELAY variable is set, a random value between 0 and RANDOM_DELAY minutes will be added to the start up delay of the anacron served jobs.
For instance a RANDOM_DELAY equal to 45 would add, randomly, between 0 and 45 minutes to the user defined delay.

For the above cron.daily, cron.weekly and cron.monthly records that means, e.g. for cron.daily, a delay of 5 minutes plus RANDOM_DELAY after anacron starts, i.e. roughly 03:05 plus 0-45 minutes.

A full, detailed explanation on automating system tasks on Redhat Enterprise Linux is worth reading here.

!!! Note !!! The listed jobs run in a queue: after one finishes, the next one starts.
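To verify when anacron last executed each of these job sets, you can check its timestamp files, which on Red Hat style systems live in /var/spool/anacron and simply contain the date of the last run:

# ls -l /var/spool/anacron/
# cat /var/spool/anacron/cron.daily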
 

3. SuSE Enterprise Linux cron jobs not running at desired times why?


In SuSE it is much more complicated to get the right timing for the standard default cron jobs that come preinstalled with a service.

In older SLES release /etc/crontab looked like so:

SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# run-parts
01 * * * * root run-parts /etc/cron.hourly
02 4 * * * root run-parts /etc/cron.daily
22 4 * * 0 root run-parts /etc/cron.weekly
42 4 1 * * root run-parts /etc/cron.monthly


At the time of writing this article it looks like so:

SHELL=/bin/sh
PATH=/usr/bin:/usr/sbin:/sbin:/bin:/usr/lib/news/bin
MAILTO=root
#
# check scripts in cron.hourly, cron.daily, cron.weekly, and cron.monthly
#
-*/15 * * * *   root  test -x /usr/lib/cron/run-crons && /usr/lib/cron/run-crons >/dev/null 2>&1


This runs any scripts placed in /etc/cron.{hourly, daily, weekly, monthly} but it may not run them when you expect them to run. 
/usr/lib/cron/run-crons compares the current time to the /var/spool/cron/lastrun/cron.{time} file to determine if those jobs need to be run.

For hourly, it checks if the current time is greater than (or exactly) 60 minutes past the timestamp of the /var/spool/cron/lastrun/cron.hourly file.

For weekly, it checks if the current time is greater than (or exactly) 10080 minutes past the timestamp of the /var/spool/cron/lastrun/cron.weekly file.

Monthly uses a calculation to check the time difference, but it is the same type of check to see if it has been one month after the last run.

Daily has a couple of variations available. By default it checks whether more than (or exactly) 1440 minutes have passed since the last run.
If DAILY_TIME is set in the /etc/sysconfig/cron file (again a SuSE specific innovation), then that is the time (within 15 minutes) when the daily jobs will run.

For systems that are powered off at DAILY_TIME, daily tasks will run at the DAILY_TIME, unless it has been more than x days since the last run; if so, they run at the next invocation of run-crons (default 7 days; a shorter time can be set in /etc/sysconfig/cron).
Because of these changes, the first time you place a job in one of the /etc/cron.{time} directories, it will run the next time run-crons runs, which is every 15 minutes (xx:00, xx:15, xx:30, xx:45); that time becomes the lastrun timestamp and the normal schedule for future runs. Note that there is the potential for your schedules to drift by 15 minute increments.
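To see what run-crons last did on a SLES system, the timestamp files it compares against, as well as the DAILY_TIME setting, can be inspected directly (paths as referenced above):

# ls -l /var/spool/cron/lastrun/
# grep DAILY_TIME /etc/sysconfig/cron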

As you see this is very complicated stuff, and since God is in the simplicity it is much better not to use /etc/cron.* for your scripts at all, but to manually schedule each system cron job and custom script with cron at specific times.


4. Debian Linux time start schedule for cron.daily / cron.monthly / cron.weekly timing

As for the last many years most of the servers I've managed were running Debian GNU / Linux, my first place to check was /etc/crontab, which is the standard cron jobs file setting up the { daily, monthly, weekly } crons.

 debian:~# ls -ld /etc/cron.*
drwxr-xr-x 2 root root 4096 фев 27 10:54 /etc/cron.d/
drwxr-xr-x 2 root root 4096 фев 27 10:55 /etc/cron.daily/
drwxr-xr-x 2 root root 4096 дек  7 23:04 /etc/cron.hourly/
drwxr-xr-x 2 root root 4096 дек  7 23:04 /etc/cron.monthly/
drwxr-xr-x 2 root root 4096 дек  7 23:04 /etc/cron.weekly/

debian:~# cat /etc/crontab 
# /etc/crontab: system-wide crontab
# Unlike any other crontab you don't have to run the `crontab'
# command to install the new version when you edit this file
# and files in /etc/cron.d. These files also have username fields,
# that none of the other crontabs do.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Example of job definition:
# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed
17 *    * * *    root    cd / && run-parts --report /etc/cron.hourly
25 6    * * *    root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6    * * 7    root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6    1 * *    root    test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )

What the above does is:

- Run cron.hourly once every hour at minute 17.
- Run cron.daily once a day at 6:25 am.
- Run cron.weekly once a week on Sunday at 6:47 am.
- Run cron.monthly once a month on the 1st at 6:52 am.

As you can see, if anacron is present on the system the jobs are run via it, otherwise they are run via the run-parts binary command, which reads and executes one by one all scripts inside /etc/cron.hourly, /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly.
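On Debian, to preview which scripts run-parts would pick up from a given directory without actually executing them, the --test flag is handy:

# run-parts --test /etc/cron.daily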

anacron – few more words

Anacron is the canonical way to run at least the jobs from /etc/cron.{daily,weekly,monthly} after startup, even when their execution was missed because the system was not running at the given time. Anacron does not handle any cron jobs from /etc/cron.d, so any package that wants its /etc/cron.d cronjob to be executed by anacron needs to take special measures.

If anacron is installed, regular processing of /etc/cron.{daily,weekly,monthly} is omitted by code in /etc/crontab and handled by anacron via /etc/anacrontab instead. Anacron's execution of these job lists has changed multiple times in the past:

debian:~# cat /etc/anacrontab 
# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
HOME=/root
LOGNAME=root

# These replace cron's entries
1    5    cron.daily    run-parts --report /etc/cron.daily
7    10    cron.weekly    run-parts --report /etc/cron.weekly
@monthly    15    cron.monthly    run-parts --report /etc/cron.monthly

In wheezy and earlier, anacron is executed via an init script on startup and via /etc/cron.d at 07:30. This causes the jobs to be run in order, if scheduled, beginning at 07:35. If the system is rebooted between midnight and 07:35, the jobs run after five minutes of uptime.
In stretch, anacron is executed via a systemd timer every hour, including the night hours. This causes the jobs to be run in order, if scheduled, between midnight and 01:00, which is a significant change to the previous behavior.
In buster, anacron is executed via a systemd timer every hour with the exception of midnight to 07:00, where anacron is not invoked. This brings back a bit of the old timing, with the jobs run in order, if scheduled, between 07:00 and 08:00. Since anacron is also invoked once at system startup, a reboot between midnight and 08:00 also causes the jobs to be scheduled after five minutes of uptime.
anacron also hasn't had an upstream release in nearly two decades and is currently orphaned in Debian.

As of 2019-07 (right after buster's release) it is planned to have cron and anacron replaced by cronie.

cronie: Cronie was forked by Red Hat from ISC Cron 4.1 in 2007 and is the default cron implementation in Fedora and Red Hat Enterprise Linux at least since version 6. cronie seems to have an active upstream, but is currently missing some of the things that Debian has added to vixie cron over the years. With the finishing of cron's conversion to quilt (3.0), effort can begin to add the Debian extensions to Vixie cron to cronie.

Because cronie doesn't have all the Debian extensions yet, it is not yet suitable as a cron replacement, so it is not in Debian.
 

5. systemd-timers – The new crazy systemd stuff for script system job scheduling


Timers are systemd unit files with a suffix of .timer. systemd timers were introduced with systemd, so older Linux OS-es do not have them.
 Timers are like other unit configuration files and are loaded from the same paths, but include a [Timer] section which defines when and how the timer activates. Timers are defined as one of two types:

 

  • Realtime timers (a.k.a. wallclock timers) activate on a calendar event, the same way that cronjobs do. The option OnCalendar= is used to define them.
  • Monotonic timers activate after a time span relative to a varying starting point. They stop if the computer is temporarily suspended or shut down. There are a number of different monotonic timers but all have the form: OnTypeSec=. Common monotonic timers include OnBootSec and OnActiveSec.

    For each .timer file, a matching .service file exists (e.g. foo.timer and foo.service). The .timer file activates and controls the .service file. The .service does not require an [Install] section as it is the timer units that are enabled. If necessary, it is possible to control a differently-named unit using the Unit= option in the timer’s [Timer] section.

    systemd timers are complex stuff and I'll not get into much detail; the idea here was just to raise awareness of their existence. For more info check the manual: man systemd.timer
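    As a minimal illustration (a hypothetical cleanup job, not something shipped by default), a realtime timer and its matching service unit could look like this:

# /etc/systemd/system/mycleanup.timer
[Unit]
Description=Run mycleanup once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

# /etc/systemd/system/mycleanup.service
[Unit]
Description=Clean up temporary files

[Service]
Type=oneshot
ExecStart=/usr/local/bin/mycleanup.sh

    The timer would then be enabled with systemctl enable --now mycleanup.timer and inspected with systemctl status mycleanup.timer.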

Its most basic use is to list all configured systemd.timers, below is from my home Debian laptop
 

debian:~# systemctl list-timers --all
NEXT                         LEFT         LAST                         PASSED       UNIT                         ACTIVATES
Tue 2020-03-24 23:33:58 EET  18s left     Tue 2020-03-24 23:31:28 EET  2min 11s ago laptop-mode.timer            lmt-poll.service
Tue 2020-03-24 23:39:00 EET  5min left    Tue 2020-03-24 23:09:01 EET  24min ago    phpsessionclean.timer        phpsessionclean.service
Wed 2020-03-25 00:00:00 EET  26min left   Tue 2020-03-24 00:00:01 EET  23h ago      logrotate.timer              logrotate.service
Wed 2020-03-25 00:00:00 EET  26min left   Tue 2020-03-24 00:00:01 EET  23h ago      man-db.timer                 man-db.service
Wed 2020-03-25 02:38:42 EET  3h 5min left Tue 2020-03-24 13:02:01 EET  10h ago      apt-daily.timer              apt-daily.service
Wed 2020-03-25 06:13:02 EET  6h left      Tue 2020-03-24 08:48:20 EET  14h ago      apt-daily-upgrade.timer      apt-daily-upgrade.service
Wed 2020-03-25 07:31:57 EET  7h left      Tue 2020-03-24 23:30:28 EET  3min 11s ago anacron.timer                anacron.service
Wed 2020-03-25 17:56:01 EET  18h left     Tue 2020-03-24 17:56:01 EET  5h 37min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service

8 timers listed.


N.B.! If a timer gets out of sync, it may help to delete its stamp-* file in /var/lib/systemd/timers (or ~/.local/share/systemd/ in case of user timers). These are zero length files which mark the last time each timer was run. If deleted, they will be reconstructed on the next start of their timer.

Summary

In this article, I've shortly explained the logic behind debugging weird restart events of Linux configured services such as Apache, caused by scripts set to run as predefined scheduled jobs. I explained how to figure out why preinstalled default cron jobs such as logrotate (the service that archives and truncates system logs) run at a certain time. I shortly explained the mechanism behind cron.{daily, monthly, weekly} and its execution via anacron, a runner program similar to crond that never misses a scheduled job even if a system downtime occurs due to a crashed Docker container etc. The use of the run-parts command was shortly explained as well. Finally, a short look at systemd timers was made, which are now an essential part of almost every new Linux release and are often used by system scripts for scheduling time based maintenance tasks.

Check when Windows Active Directory user expires and set user password expire to Never

Thursday, January 9th, 2020

micorosoft-windows-10-logo-net-user-command-check-expiry-dates

If you're working for a company that follows high security / PCI Security Standards and you're using an m$ Windows OS that belongs to the domain, it is useful to know when your user account is set to expire,
i.e. how many days are left until you'll be forced to change your Windows AD password.
In this short article I'll explain how to check the Windows AD last password set date and password expiry date, how to list expiry dates for other users, and finally how to set your password expiry to Never
to get rid of the annoying password change every 90 days.

 

1. Query domain Username for Password set / Password Expires set dates

To get this info you need the password expiration date for the Active Directory user account; to check it, just open the Command Line Prompt cmd.exe

And run command:
 

NET USER Your-User-Name /domain


net-user-domain-command-check-AD-user-expiry

Note that many companies, for security reasons, only connect you to AD over a VPN connection with something like Cisco AnyConnect Secure Mobility Client or whatever VPN tool is used to encrypt the traffic between you and the corporate DMZ-ed network.

Below is basic NET USER command usage args:

Net User Command Options
 

Item          Explanation

net user    Execute the net user command alone to show a very simple list of every user account, active or not, on the computer you're currently using.

username    This is the name of the user account, up to 20 characters long, that you want to make changes to, add, or remove. Using username with no other option will show detailed information about the user in the Command Prompt window.

password    Use the password option to modify an existing password or assign one when creating a new username. The minimum characters required can be viewed using the net accounts command. A maximum of 127 characters is allowed.
*    You also have the option of using * in place of a password to force the entering of a password in the Command Prompt window after executing the net user command.

/add    Use the /add option to add a new username on the system.
options    See Additional Net User Command Options below for a complete list of available options to be used at this point when executing net user.

/domain    This switch forces net user to execute on the current domain controller instead of the local computer.

/delete    The /delete switch removes the specified username from the system.

/help    Use this switch to display detailed information about the net user command. Using this option is the same as using the net help command with net user: net help user.
/?    The standard help command switch also works with the net user command but only displays the basic command syntax. Executing net user without options is equal to using the /? switch.

 

2. Listing all Active Directory users last set date / never expires and expiration dates


If you have the respective Active Directory rights and you have the Remote Server Administration Tools for Windows (RSAT Tools), you are able to do also other interesting stuff,

such as

- Using PowerShell to list all users' password last set dates; to do so, open PowerShell and issue:
 

get-aduser -filter * -properties passwordlastset, passwordneverexpires |ft Name, passwordlastset, Passwordneverexpires


get-aduser-properties-passwordlastset-passwordneverexpires1

This should show you info such as the password last set date and whether password expiration is set for the account.

- Using PowerShell to get only the password expiration dates for all existing AD users:

Get-ADUser -filter {Enabled -eq $True -and PasswordNeverExpires -eq $False} -Properties "DisplayName", "msDS-UserPasswordExpiryTimeComputed" |
Select-Object -Property "Displayname",@{Name="ExpiryDate";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed")}}


If you need the output data to be stored in CSV delimited file format, you can append to the above PS commands:
 

| export-csv YOUR-OUTPUT-FILE.CSV

3. Setting a user password to never expire

If the user was created with the NET USER command, by default it will have been created with password expiration enabled.
However, if you need to create new users for yourself (assuming you have the rights) with passwords that never expire, on let's say Windows Server 2016 (if you don't care about security so much), use:
 

NET USER "Username" /Add /Active:Yes

WMIC USERACCOUNT WHERE "Name='Username'" SET PasswordExpires=False

NET-USER-ADD_Active-yes-Microsoft-Windows-screenshot

NET-USER-set-password-policy-to-Never-expiry-MS-Windows

To view the general password policies, type the following:
 

NET ACCOUNTS


NET-ACCOUNTS-view-default-Microsoft-Windows-password-policy
 

Check weather forecast from console (terminal) on GNU / Linux and FreeBSD howto

Friday, August 23rd, 2019

how to get weather forecast prognosis from command line text terminal / console on Linux and FreeBSD

Doing everything in the Linux console / terminal is something perhaps every Linux / BSD hacker wants, as a Graphical User Interface and web searching are an unneeded complexity, plus googling or duckduckgoing for the weather to check your next vacation destination city has become more and more of a terrible experience (for me), as I'm not a big fan of using the OS in a GUI.
In that manner of thoughts, as a Linux console geek and hard core ASCII art fan, I was recently happy to find that it is possible to check the weather forecast in a tty console or Linux terminal, in a beautiful ASCII art way, easily through wttr.in, a web application weather forecast service that supports displaying the current weather and a few days of forecast, either in the browser as plain text or from the command line by simply accessing it with your favourite web access / transfer tool such as
wget / curl, any of your favourite text browsers elinks / lynx / w3m, or, if on *BSDs, the fetch command.

Install the curl data transfer tool if it is not installed already


Wget is installed by default across most Linux distributions and fetch is present by default on BSDs. Displaying the forecast in a text browser will perhaps rarely be used, but if you decide to give it a try, prefer elinks (to get colorful output); w3m and lynx will display black and white results.

In case if you miss curl, install it:

On Debian distro

aptitude install -y curl


or Fedora

yum install -y curl


Of course, as wttr.in is an Internet based Weather Forecast service, the minimum requirement is an Internet connection on your Linux / BSD desktop computer.

Text based Weather Forecast Web App currently supports:

display the current weather as well as a 3-day weather forecast, split into morning, noon, evening and night

  • Temperature is displayed for morning, noon, evening and night (includes temperature range, wind speed and direction, viewing distance, precipitation amount and probability)
  • Provide results for Weather based on City / town / village location
  • Supports display of Moon Phases Forecast in calendar days
  • Supports multilingual location names (Bulgarian phonetic Cyrillic / Russian and other exotic UTF-8 encodings such as Chinese and Japanese); 50+ languages are currently supported
  • Can produce a prognosis for a hostname (domain) location based on its GeoIP location on the globe
  • Geographical locations / landmarks such as lakes / mountains etc. can be easily queried
  • Query result metrics can be configured, e.g. USCS units or the EU and rest of world accepted (SI) metric units
  • Displayed results can be either in ANSI (if queried from terminal / console), HTML (if queried from a browser) or PNG (if needed)

Where wttr.in could be useful ?

The best application uses I can think of are for server (shell) / perl scripting automation purposes. It could be especially useful for TOO HOT, TOO COLD or TOO WET locations, e.g. Linux monitoring hosts in small and middle sized Data Centers, Green Energy (solar panel) parks or wind energy sites, to track possible problems with overheating or overcooling of servers due to abnormal temperatures, such as the ones we experienced this summer all across Europe, or hot DC locations such as deserts in African countries and Saudi Arabia, or cold ones such as Chukotka or Siberia in Russia.
Another application is as a backup option to other normal weather report services used by PHP or Python scripts that fetch data from multiple places.
Of course, since this is a third party controlled service, in case of excessive connection requests the service could get flooded and stop working; but I guess for any commercial use, wttr.in creator Igor Chubin would be happy to sell a specifically crafted service to any end user candidates.
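As a trivial sketch of such scripting use (the city name and log file below are just example values), the current conditions can be fetched by cron once a day and appended to a log for later review, using only the view options documented further down in this article:

#!/bin/bash
# fetch-weather.sh: log the current weather for a city using wttr.in
CITY=Sofia
LOG=/var/log/weather-$CITY.log
# 0 = current weather only, q = quiet version, T = no terminal colors
date >> "$LOG"
curl -s "http://wttr.in/$CITY?0qT" >> "$LOG"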


Here are a few examples of the beautiful ASCII art formatted output returned by wttr.in.
 

1. Getting a three days Weather Forecast prognosis for city / town location

To get the current weather in my current city of living, Sofia, Bulgaria, just pass the city to the URL address:

curl http://wttr.in/Sofia

text-console-wttr.in-Weather-forecast-Sofia-for-Linux

links http://wttr.in/Dobrich

curl-Linux-show--Dobrich-Weather-forecast-in-lynx-text-browser


The default links (Linux) www text browser produces ugly black and white output.

2. Displaying Weather forecast with wget

wget -O- -q http://wttr.in


getting-weather-forecast-on-linux-terminal-console-with-wget-command

If you're lazy you can even omit the http:// as wget will assume the HyperText Transfer Protocol by itself.

wget -O- -q wttr.in

3. Getting Forecast results for a Tourist Destination


Lets get the weather forecast for the popular tourist Bulgarian destination of the Seven Rila Lakes (near Rila Monastery), situated in the Rila Mountain BG.

curl http://wttr.in/Seven+Rila+Lakes

Console-terminal-Weather-forecast-Linux-Seven-Rila-Lakes

4. Display Forecast for a specific server IP


Displaying information for a specific server IP address as currently situated in the GeoIP database could of course be not really accurate, as the IP could be just a Load Balancer or a router that does NAT to some internal DMZ-ed server, but anyways it is a cool feature.

Lets get information on what the weather is at Google's Global Public DNS server IP 8.8.8.8, so commonly used to guarantee Internet connectivity for Windows and Linux desktop client machines.
 

curl wttr.in/@8.8.8.8

wttr.in-Linux-text--forecast-service-curl-screenshot Google Public DNS location weather forecast

5. Download PNG image picture from wttr.in service


Lets say you want to get a 3 day standard weather forecast for the popular Black Sea resort town of Pomorie in Bulgaria (a beautiful sea city which even has a functioning monastery of 5 monks, the Pomorie Monastery, situated near the sea coast).

curl http://wttr.in/Pomorie.png
 

--2019-08-22 20:15:51--  http://wttr.in/Pomorie.png
Resolving wttr.in (wttr.in)... 5.9.243.187
Connecting to wttr.in (wttr.in)|5.9.243.187|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 42617 (42K) [image/png]
Saving to: 'Pomorie.png'

Pomorie.png                                     100%[=======================================================================================================>]  41.62K  --.-KB/s    in 0.07s   

2019-08-22 20:15:52 (586 KB/s) - 'Pomorie.png' saved [42617/42617]

Note: The generated .png is again the ASCII art produced by a direct text fetch, but in picture format.

6. Displaying Current Moon Phase


If you want to enjoy a text based Moon phase picture through wttr.in 🙂

wget -O- -q wttr.in/Moon


Display-current-Phase-of-Moon-in-terminal-console-Linux-wttr.in-service

You can also get a Moon Phase prognosis for a future date or a previous date:

curl wttr.in/moon@2019-09-15

Full-Moon-Weather-forecast-text-console-reporting-via-wttr.in-on-Gnu_Linux


Full Moon Madness !! Vampires are out, beware, and enjoy the ultra kewl ASCII colorful art 🙂
 

7. Getting help for wttr.in terminal Weather Forecast results

$ curl wttr.in/:help
Usage:

    $ curl wttr.in          # current location
    $ curl wttr.in/muc      # weather in the Munich airport

Supported location types:

    /paris                  # city name
    /~Eiffel+tower          # any location
    /Москва                 # Unicode name of any location in any language
    /muc                    # airport code (3 letters)
    /@stackoverflow.com     # domain name
    /94107                  # area codes
    /-78.46,106.79          # GPS coordinates

Special locations:

    /moon                   # Moon phase (add ,+US or ,+France for these cities)
    /moon@2016-10-25        # Moon phase for the date (@2016-10-25)

Units:

    m                       # metric (SI) (used by default everywhere except US)
    u                       # USCS (used by default in US)
    M                       # show wind speed in m/s

View options:

    0                       # only current weather
    1                       # current weather + 1 day
    2                       # current weather + 2 days
    A                       # ignore User-Agent and force ANSI output format (terminal)
    F                       # do not show the "Follow" line
    n                       # narrow version (only day and night)
    q                       # quiet version (no "Weather report" text)
    Q                       # superquiet version (no "Weather report", no city name)
    T                       # switch terminal sequences off (no colors)

PNG options:

    /paris.png              # generate a PNG file
    p                       # add frame around the output
    t                       # transparency 150
    transparency=…        # transparency from 0 to 255 (255 = not transparent)

Options can be combined:

    /Paris?0pq
    /Paris?0pq&lang=fr
    /Paris_0pq.png          # in PNG the file mode are specified after _
    /Rome_0pq_lang=it.png   # long options are separated with underscore

Localization:

    $ curl fr.wttr.in/Paris
    $ curl wttr.in/paris?lang=fr
    $ curl -H "Accept-Language: fr" wttr.in/paris

Supported languages:

    af da de el et fr fa hu id it nb nl pl pt-br ro ru tr uk vi (supported)
    az be bg bs ca cy cs eo es fi ga hi hr hy is ja jv ka kk ko ky lt lv mk ml nl fy nn pt pt-br sk sl sr sr-lat sv sw th te uz zh zu he (in progress)

Special URLs:

    /:help                  # show this page
    /:bash.function         # show recommended bash function wttr()
    /:translation           # show the information about the translators


 

8. Comparing two cities weather from command line


One useful use of wttr.in, if you plan to travel from city A to city B, is to compare the temperatures with a simple bash one liner:

diff -Naur <(curl -s http://wttr.in/Sofia ) <(curl -s http://wttr.in/Beograd )

9. Using the ansiweather command to get Weather Temperature / Wind / Humidity in one line of beautiful text


If you go and install the ansiweather Linux package

apt-get install --yes ansiweather


You will get a shell script wrapper with ANSI colors and Unicode symbols support. Weather data comes from OpenWeatherMap; this is useful if wttr.in is not working due to some malfunction (e.g. the service is DoS-ed) etc.

ansiweather -l Atina

ansiweather-Atina-weather-forecast-result-linux-text-console

Lets use ansiweather to print the weather prognosis for the upcoming 5 days for the port of Burgas, BG
 

ansiweather -F -l Burgas

ansiweather-print-weather-forecast-prognosis-for-5-days-in-Linux-text-terminal

10. Get all Weather current forecast for each Capital in the world


You can download and use this simple plain text file list of all country capitals in the world (country-capitals-all-world.txt) with ansiweather and a bash loop to display the current day's weather forecast for every capital in the world, here is how:

while read line; do ansiweather -l $line; sleep 3; done < country-capitals-all-world.txt


ansiweather-all-countires-capitals-result

As you can see, some of the more exotic third world capitals do not return data, so 'ERROR: Cannot fetch weather data' is returned.


You can also substitute ansiweather with curl wttr.in/$line to get the beautiful ASCII art 3 day weather forecast via wttr.in

while read line; do curl http://wttr.in/$line; sleep 3; done < country-capitals-all-world.txt


I'll be happy to learn of other nice ASCII art supporting web services to enjoy from a text terminal on Linux, no matter whether useful or just funny, joyful maniacal pranks, such as watching the text ASCII version remake of the classic Star Wars movie by simply telnetting to towel.blinkenlights.nl (if you haven't yet, just telnet and enjoy the streamed ASCIIs ! 🙂 )

telnet towel.blinkenlights.nl

watch-star-wars-ascii-art-version-remake-online-with-telnet-on-linux-console-terminal

Talking about fun and ASCII, it is worth mentioning the hollywood Linux package.

hipo@jeremiah:~/Desktop$ apt-cache show hollywood|grep -i desc -A 3
Description-en: fill your console with Hollywood melodrama technobabble
 This utility will split your console into a multiple panes of genuine
 technobabble, perfectly suitable for any Hollywood geek melodrama.
 It is particularly suitable on any number of computer consoles in the


Description-md5: 768f44c76220ea2b35f855ea34c8bc35
Homepage: http://launchpad.net/hollywood
Section: games
Priority: optional


Once installed on Debian with:

aptitude install -y hollywood

You can rapidly get plenty of tmux (a screen like virtual console emulator) split screen statistics about your notebook / workstation / server: CPU usage, mlocate.db status, info about plugged in machine voltage, Speedometer (statistics about network bandwidth usage), system load average (CPU count, memory utilization) and some other random info coming out of the dmesg kernel log and more. The information displayed in the split windows changes rapidly, and (assuming you run it on your home desktop with a soundblaster and not remotely) a James Bond Agent 007 soundtrack is played in the background, which brings up one's adrenaline and makes it look even cooler.

hollywood-melodrama-technobubble-split-console-multiple-panes-for-genuine-technobubble

To give you an idea what to expect, here is a shot of /usr/games/hollywood (the program's binary location) running on Debian GNU / Linux. Enjoy! 🙂
 

Why du and df report different usage on a filesystem / How to fix the inconsistency between used space on the FS and the disk showing up full

Wednesday, July 24th, 2019

linux-why-du-and-df-shows-different-result-inconsincy-explained-filesystem-full-oddity

If you're a sysadmin in a large server environment, such as a couple of hundred Virtual Machines running Linux OS on either physical hosts or OpenXen / VmWare hosted guest Virtual Machines, you might sometimes end up with an odd case where some mounted partition reports its file usage differently when checked with the df command than when checked with the du command, for example:
 

root@sqlserver:~# df -hT /var/lib/mysql
Filesystem   Type  Size Used Avail Use% Mounted On
/dev/sdb5      ext4    19G  3,4G    14G  20% /var/lib/mysql

Here the '-T' argument is used to show us the filesystem type.

root@sqlserver:~# du -hsc /var/lib/mysql
0K    /var/lib/mysql/
0K    total

1. Simple debug on what might be the root cause for df / du inconsistency reporting

Of course the basic thing to do when in that weird situation is (after being totally shocked that this is possible) to investigate which are the biggest first level sub-directories that eat up the space on the mounted location, with du:

# du -hkx --max-depth=1 /var/lib/mysql/|uniq|sort -n
4       /var/lib/mysql/test
8       /var/lib/mysql/ezmlm
8       /var/lib/mysql/micropcfreak
8       /var/lib/mysql/performance_schema
12      /var/lib/mysql/mysqltmp
24      /var/lib/mysql/speedtest
64      /var/lib/mysql/yourls
144     /var/lib/mysql/narf
320     /var/lib/mysql/webchat_plus
424     /var/lib/mysql/goodfaithair
528     /var/lib/mysql/moonman
648     /var/lib/mysql/daniel
852     /var/lib/mysql/lessn
1292    /var/lib/mysql/gallery

The given output is in kilobytes, so it is a little bit hard to read; if you're used to megabytes instead, do:

 # du -hmx --max-depth=1 /var/lib/mysql/|uniq|sort -n|less

I've also investigated on the complete /var directory contents sorted by size with:

 # du -akx ./ | sort -n
5152564    ./cache/rsnapshot/hourly.2/localhost
5255788    ./cache/rsnapshot/hourly.2
5287912    ./cache/rsnapshot
7192152    ./cache


Even after finding out the bottleneck dirs and trying to clear up a bit, I continued facing that inconsistency between the two commands. If you're as stunned as I was, you're likely to try moving some files to a different filesystem to free up space or inodes, with the hope that the inconsistent output will be fixed, as it might be caused by some kernel / FS caching that a refresh of the mounted FS would clear …

But unfortunately, if you try it, you'll figure out that clearing up a couple of megas or gigas will make no difference in the command output.

In my exact case /var/lib/mysql is a separately mounted ext4 filesystem, however the same issue was also present on a Network Filesystem (NFS), and thus my first thought that this is caused by a network failure or NFS bug turned out to be wrong.

After further short investigation on the inodes on the Filesystem, it was clear enough inodes are available:
 

# df -i /var/lib/mysql
Filesystem       Inodes  IUsed   IFree IUse% Mounted on
/dev/sdb5      1221600  2562 1219038   1% /var/lib/mysql

So the assumed issue of a filled up inode count has also been rejected.
P.S. if you're not well familiar with inodes, read the manual, i.e. man 7 inode.
 

- Remounting the mounted filesystem

To make sure the shown inconsistency between du and df is not due to some hanging network mount or bug, the first logical thing I did was to remount the filesystem showing the differing size; in my case this was done with:
 

# mount -o remount,rw -t ext4 /var/lib/mysql

For machines with NFS remote mounted storage locations, used:

# mount -o remount,rw -t nfs /var/www


The FS remount did not solve it, so I continued to ponder what the oddity was. Of course I thought of a workaround: in case the issue is caused by a kernel bug or OS library issue, a reboot might be the solution. Unfortunately, however, restarting the VMs was not an easy, desirable solution, so I continued investigating what is wrong …

The next step of course was to check what kind of network connections are opened to the affected hosts with:
 

# netstat -tupanl


That did not show anything that might point me to the reported megabytes difference, so the next step was to check the situation with the files currently opened by running processes on the systems with the weird df / du reports, using lsof, and boom, there I observed an oddity: multiple deleted files still held open.

# lsof -nP | grep '(deleted)'

COMMAND   PID   USER   FD   TYPE DEVICE    SIZE NLINK  NODE NAME
mysqld   2588  mysql    4u   REG 253,17      52     0  1495 /var/lib/mysql/tmp/ibY0cXCd (deleted)
mysqld   2588  mysql    5u   REG 253,17    1048     0  1496 /var/lib/mysql/tmp/ibOrELhG (deleted)
mysqld   2588  mysql    6u   REG 253,17       777884290     0  1497 /var/lib/mysql/tmp/ibmDFAW8 (deleted)
mysqld   2588  mysql    7u   REG 253,17       123667875     0 11387 /var/lib/mysql/tmp/ib2CSACB (deleted)
mysqld   2588  mysql   11u   REG 253,17       123852406     0 11388 /var/lib/mysql/tmp/ibQpoZ94 (deleted)

Notice that there were plenty of files in '(deleted)' state still held open, an overall count of 438:

# lsof -nP | grep '(deleted)' |wc -l
438


As I learned a bit online about the problem, I found it is also possible to list only the deleted but still open (unlinked) files without any greps, with lsof arguments only:

# lsof +L1|less


The SIZE field (seventh column in the output above) shows files that are really large and are kept open on the filesystem and in memory, totally messing up the filesystem accounting. In my case these are temp files created by the MYSQLD daemon, but depending on the service the server provides this might be apache's www-data, some custom perl / bash script executed via a cron job, stalled rsync jobs etc.
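To get a rough total of how much space those still-open deleted files hold, a small awk sum over the SIZE column can help (a sketch only; the column position may differ between lsof versions, so adjust the field number if needed):

# lsof -nP +L1 | awk 'NR>1 {sum+=$7} END {printf "%.1f MB held by deleted open files\n", sum/1024/1024}'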
 

2. Check the list of open files held by the mysql / root user as part of the server filesystem inconsistency debugging:

- Grep opened files on the server by user

# lsof |grep mysql
mysqld    1312                       mysql  cwd       DIR               8,21       4096          2 /var/lib/mysql
mysqld    1312                       mysql  rtd       DIR                8,1       4096          2 /
mysqld    1312                       mysql  txt       REG                8,1   20336792   23805048 /usr/sbin/mysqld
mysqld    1312                       mysql  mem       REG               8,21      24576         20 /var/lib/mysql/tc.log
mysqld    1312                       mysql  DEL       REG               0,16                 29467 /[aio]
mysqld    1312                       mysql  mem       REG                8,1      55792   14886933 /lib/x86_64-linux-gnu/libnss_files-2.28.so

# lsof | grep root
COMMAND    PID   TID TASKCMD          USER   FD      TYPE             DEVICE   SIZE/OFF       NODE NAME
systemd      1                        root  cwd       DIR                8,1       4096          2 /
systemd      1                        root  rtd       DIR                8,1       4096          2 /
systemd      1                        root  txt       REG                8,1    1489208   14928891 /lib/systemd/systemd
systemd      1                        root  mem       REG                8,1    1579448   14886924 /lib/x86_64-linux-gnu/libm-2.28.so

Another command that helped to track the discrepancy between the df and du reported file usage on the FS is:
 

# du -hxa  / | egrep '^[[:digit:]]{1,1}G[[:space:]]*'
 

3. Fixing large files kept in memory filesystem problem


What is the real reason for ending up with these file handles kept open by backgrounded programs running on the Linux OS?
There could be multiple, but most likely it is due to excessive server / client interactions, filling up RAM or the HDD drive by writing plenty of logs to the FS without ever releasing the occupied space, or programming library bugs in a hung service that leave the file handles open on storage.

What is the solution to the problem of deleted files being kept open on the filesystem?

The best solution is to first fix the custom script or hung service and then, if possible, to simply restart the server so the kernel / services reload, or, if this is not possible, to just restart the processes causing the problem.

Once the offending process is identified, like in my case MySQL on newer systemd enabled OS distros, just do:

# systemctl restart mysqld.service


or on older init.d system V ones:

# /etc/init.d/service restart


For custom hung scripts listed in ps axuwef, you can grep the pid and do a kill -HUP (if the script is written in a good way to recognize -HUP and restart its sub-processes properly; BE EXTRA CAREFUL IF YOU'RE RESTARTING BROKEN SCRIPTS as this might cause disruptions to your running services …).

# pgrep -l script.sh
7977 script.sh


# kill -HUP PID
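As an alternative, when restarting the offending process is not an option at all, the space held by a specific deleted-but-open file can sometimes be reclaimed by truncating it through its /proc file descriptor entry. This is only a workaround sketch, using the PID and FD numbers from the lsof output above as an example; the process must tolerate its temp file being emptied:

# ls -l /proc/2588/fd | grep deleted
# : > /proc/2588/fd/6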

Now, finally, this should either mitigate or, in the best case, completely solve the reported disagreement between df and du, after which the calculated / reported disk space should be back to normal and both show approximately the same (note that the size changes a bit between the two checks as the mysql service keeps writing data).

# df -hk /var/lib/mysql; du -hskc /var/lib/mysql
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/sdb5        19097172 3472744 14631296  20% /var/lib/mysql
3427772    /var/lib/mysql
3427772    total

What did we learn?

What I've explained in this article is why and how it comes about that 'zombie' files reside on a filesystem,
appearing to eat disk space on a mounted local or network partition, giving strange inconsistent
reports, leading to system service disruptions and the impossibility to show correct information on used
disk space on the mounted drive.

I went through some standard logic for debugging service / filesystem / inode issues, which led me to the finding that deleted files kept open on the filesystem were producing the strange sizes, showing the filesystem as incorrectly filled even after it was extended with tune2fs and was supposed to have an extra 50GB.

Finally it was explained shortly how to HUP / restart hanging script / service to fix it.

A few good readings that helped to fix the issue:

What to do when du and df report different usage is here
df in linux not showing correct free space after file removal is here
Why do “df” and “du” commands show different disk usage?
 

Reset gnome forgotten keyring password – Fix annoying reoccurring keyring password prompt

Wednesday, September 26th, 2018

gnome-keyring-password-error-fix-solution-howto-gnupg-error-after-changing-user-password-linux-desktop-user

If you're on Debian Linux, have a user account and change its password, you might be unpleasantly surprised by a constantly occurring prompt to re-input the old password stored in the keyring.
You might be wondering how to reset the gnome keyring password to stop that annoying pop-up prompt from embittering your days.
The simplest fix is to delete all stored passwords and reset the keyring stored values. That's in case you don't have other important passwords saved.

Start by simply creating a backup of the old keyring, in case you have something important stored in it; you can do that with:

cd ~/.local/share/keyrings/
cp login.keyring login.keyring.backup


Then delete the keyring store file:

rm  -f ~/.local/share/keyrings/login.keyring

Under some GNU / Linux distributions such as Linux Mint, deleting the keyring file will not work; on such systems an alternative method is to use seahorse, a frontend program to GnuPG (GNU Privacy Guard) that does key management for GNOME desktop users.

hipo@jericho:~$ seahorse

seahorse-gnu-gpg-and-password-management-gui-tool-gnome-desktop-environment-debian-linux-screenshot
 

For older Linux distributions like Ubuntu 12.10 e.g. in GNOME 2, the correct path to keyring file is ~/.gnome2/keyrings/
 

rm -f ~/.gnome2/keyrings/*

Lenovo ThinkCentre Edge as an External monitor with Debian Linux on Thinkpad T420 / How to use ThinkCentre Edge as external display on Linux

Saturday, September 22nd, 2018

thinkcentre-use-as-external-monitor-on-debian-gnu-linux-ubuntu

Here is a case: I brought my Thinkpad T420 notebook to the office and there were plenty of free monitors, but all were quite modern and had support only for DisplayPort / DVI and HDMI (High Definition Multimedia Interface), i.e. there was no monitor supporting the regular VGA port …

I've connected my Debian 9 Stretch Linux with a DisplayPort cable to one of the LG monitors, but the external monitor did not come up; the screen stayed black just like nothing was connected to it.

So I did a quick research online to see whether and how I can make the display port on the Lenovo Thinkpad T420 work with Linux. After consulting a few resources online, e.g. Hacksr's Display Port on Thinkpad T420 and Nvidia Optimus, this post in Ubuntu Forums, as well as the official documentation about DisplayPort and Linux on Thinkpad's official documentation ThinkWiki, it turned out the T420 Thinkpads have issues with DisplayPort on Linux, and the only solution proposed was to use the second Nvidia Optimus card instead of the integrated display adapter. I am, however, not sure whether my notebook has this Nvidia card at all and did not have time for too much hacking to make it work.

I decided to take a simpler approach and try to use the good old school VGA port with one of the 2 ThinkCentre Edge m93z stations that were hanging around in the office. In case you never heard of the ThinkCentre Edge, this is an integrated computer and monitor in a thin display, a kind of cheap PC alternative from Lenovo to iMacs (the all in one Macintosh desktop computer).

ThinkCentre_Edge_computer-in-monitor-desktop-like-iMac-from_Lenovo

I saw some skeptical looks from colleagues, but with my usual stubbornness I gave it a try and after a bit of quick research I got it working on Linux ! 🙂
 

If you're wondering whether the ThinkCentre Edge can be used over its VGA port as an external display for a Linux-powered laptop, the answer is YES !!! 🙂


To make it work,
all you have to do is configure it as an external display from the ThinkCentre's on-screen display (OSD) menu.

thinkcentre-menus-osd-menu-screenshot

But wait, the joy was not complete: even though the ThinkCentre displayed my GNOME FlashBack background picture on its screen, it did not show my actual GNOME menus (Applications, Places and Desktop), just as shown in the screenshot below ..

Debian-9-1-Stretch-Linux-background-default-picture

I could see the ThinkCentre monitor showing up fine in the xrandr command, which is the tool to check first if you're new to Linux and want to inspect your notebook's or desktop PC's external display settings; the output of xrandr is below.
 

hipo@jericho:~$ xrandr

Screen 0: minimum 320 x 200, current 1024 x 768, maximum 4096 x 4096
LVDS1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 0mm x 0mm
   1920x1200     60.0 +
   1600x1200     60.0
   1680x1050     60.0
   1280x1024     76.0  75.0  72.0  60.0
   1440x900      75.0  59.9
   1152x864      75.0
   1024x768      60.0*+
   800x600       60.3  56.2
   640x480       59.9
VGA1 connected 1024x768+0+0 (normal left inverted right x axis y axis) 519mm x 324mm
   1920x1200     60.0 +
   1600x1200     60.0
   1680x1050     60.0
   1280x1024     76.0  75.0  72.0  60.0
   1440x900      75.0  59.9
   1152x864      75.0
   1024x768      75.1  70.1  60.0*
   832x624       74.6
   800x600       72.2  75.0  60.3
   640x480       72.8  75.0  66.7  60.0
   720x400       70.1

I also gave a try to arandr which is a simple GUI interface to xrandr, i.e.:
 

root@jericho:~# apt-get install --yes -qq arandr
root@jericho:~# exit
hipo@jericho:~$  arandr


arandr-debian-gnu-linux-screenshot

Unfortunately, trying to turn the VGA monitor on / off using arandr (shown in the above screenshot) and saving did not produce any positive results; the ThinkCentre Edge used as an external monitor kept showing only my Debian Linux background.

During my attempts to make it work I stumbled upon driconf, a configuration applet for the Direct Rendering Infrastructure (DRI).

driconf-screenshot-linux

I also gave a try to the default gnome-control-center Monitor settings tool, but I couldn't turn off the notebook display in order to make the ThinkCentre Edge my primary display.

Finally, after some more investigation online, I found how to switch off the notebook display (so that the external ThinkCentre becomes the active one) by running the xrandr command below:
 

xrandr --output LVDS1 --off


Just in case you need to re-enable the internal panel later, use the output name exactly as xrandr reports it (LVDS1 in the listing above):

xrandr -d :0 --output LVDS1 --auto
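If you prefer a single command, a hedged one-liner that does both steps at once (using the output names VGA1 and LVDS1 exactly as they appear in the xrandr listing above) would be:

xrandr --output VGA1 --auto --primary --output LVDS1 --off

It makes the VGA-attached ThinkCentre the primary and only active display, and it can be reverted with the re-enable command above.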

Create user and password on Linux non interactive and add it to sudo a tiny Dev Ops script

Thursday, September 20th, 2018

Bash-Final-the-Bourne-again-shell-logo
A common task for sysadmins who manage a multitude of servers remotely via Secure Shell is to add a user and assign a password using a script; this was sometimes necessary to set up system users and create access for university users on 10 / 20 testing Linux servers.

Nowadays this task of adding a user to a list of remote servers and granting the new user superuser permissions through /etc/sudoers is practiced heavily by the so-called DevOps (just another business word for senior system administrators with good scripting skills and a little bit of development experience; same game, different name).

The DevOps / system integration engineers use this kind of non-interactive user creation via SSH in cloud environments to prepare a superuser (root-permissioned through /etc/sudoers) account, which is later used for, let's say, deployment across a few hundred servers of a LAMP (Linux + Apache + MySQL + PHP) or LEMP (Linux + NGINX + MySQL + PHP) stack, or an HAProxy software load balancer in front of MySQL clusters / Nginx application servers / JIRAs etc., through a playbook run by a deployment automation tool such as Ansible.

Well, enough talk; here are the few lines of code which create the user locally:
 

linux:~# apt-get install --yes sudo
linux:~# useradd devops --home /home/devops -s /bin/bash
linux:~# mkdir /home/devops
linux:~# chown -R devops:devops /home/devops
linux:~# echo 'devops:testpass' | chpasswd


Though these lines could easily be passed as arguments via ssh, it is often unhandy to run them directly on a remote host, because some of the remote hosts they are executed against might already have the user existing with sudo permissions granted.

Thus a much better way is to use the script below: first upload it to the remote servers and run it by invoking scp and ssh in a loop:

while read -r host; do
    scp create_user_noninteractive_and_add_to_sudoers.sh root@$host:/root/
    ssh root@$host "bash /root/create_user_noninteractive_and_add_to_sudoers.sh"
done < servers_list.txt


Where servers_list.txt contains a list of remote IPs (one per line) and create_user_noninteractive_and_add_to_sudoers.sh is the following script:

#!/bin/bash
# Create new user/group and add nopasswd login to sudoers
# Author: Georgi Georgiev
# has to be run as root
# hipo@pc-freak.net

u_id='devops';
g_id='devops';
pass='testpass';
sudoers_f='/etc/sudoers';

check_install_sudo ()  {
# install sudo only if it is not present on the host yet
if ! dpkg --get-selections | cut -f1 | grep -qE '^sudo$'; then
        apt-get install --yes sudo
else
        printf "Nothing to do, sudo is already installed\n";
fi
}

check_install_user () {

if [ "$(sed -n "/^$u_id:/p" /etc/passwd | wc -l)" -eq 0 ]; then
useradd $u_id --home /home/$u_id -s /bin/bash
mkdir /home/$u_id
chown -R $u_id:$g_id /home/$u_id
echo "$u_id:$pass" | chpasswd
cp -rpf /etc/bash.bashrc /home/$u_id
if [ "$(sed -n "/$u_id/p" $sudoers_f | wc -l)" -eq "0" ]; then
echo "$u_id ALL=(ALL) NOPASSWD: ALL" >> $sudoers_f
else
        echo "$u_id is already present in $sudoers_f. Exiting ..";
        exit 1;
fi

else
        echo "Will do nothing because $u_id exists";
fi

}

check_install_sudo;
check_install_user;
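If you already use a configuration management tool, the same thing can be done without the scp / ssh loop. Here is a hedged one-liner sketch using Ansible's ad-hoc script module; it assumes Ansible is installed on the control host and that servers_list.txt, with one host per line, is acceptable as a minimal inventory:

# copy the local script to each host, run it there as root, then clean it up (what the script module does)
ansible all -i servers_list.txt -u root -m script -a "create_user_noninteractive_and_add_to_sudoers.sh"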


By the way, this task was the simplest one given to me by a company where I applied for a DevOps System Engineer position, so I hope this will help someone else too.

P.S. If you prefer shell scripts (even though they are much harder and more time consuming) as a means of automation, as an alternative to Ansible / Chef, I suggest you check out and perhaps try to do the task with http://fuckingshellscripts.org 🙂

Install Slack and Mattermost clients for Start Up Business communication on Linux

Wednesday, September 19th, 2018

install-slack-and-mattermost-clients-for-start-up-business-communication-on-Linux
Many businesses nowadays are looking for alternatives to the Microsoft-dominated communication market, i.e. the Skype / Skype for Business chat, audio and video desktop clients.
The two are the de facto standard for most corporate businesses and are heavily used across most of the largest corporations (companies) such as IBM / Xerox / DXC / CSC / Oracle / SAP / Microsoft / Amazon / Adobe … the list goes on and on.

However, even though Skype is so easy to use across Microsoft domain-joined computers, many of today's start-up companies try to avoid it. The reason: Skype is totally proprietary and non-transparent, and by using it you probably get spied on by Microsoft, the CIA and who knows how many other state agencies. Besides that, Skype historically had frequent problems with audio (Linux microphone settings) and video in the Free Software world (Linux, FreeBSD etc.), and even though nowadays the situation is improving and Skype video / audio runs fine on GNU / Linux, Skype for Business has no working Linux release from Microsoft. That has left Free Software users, and starting business companies of 20 to 1000 people that choose Linux as their main desktop, to look for other ways to communicate internally within the company and with clients.

The Jabber XMPP communication protocol has been one alternative for a long time, and historically many companies that steered away from Skype ran small, internally hosted Jabber servers. However, as the development of Jabber communication clients such as Gajim has been lagging seriously over the last 7 years and has proved quite buggy, many businesses were looking for ways to avoid it.

Slack_Technologies_-corporate-communication-alternative-to-skype-Logo.png


Slack is multi-platform just like Skype and has versions for Linux / Windows / macOS, but much of its strength comes from the fact that most of its users use it via the Slack web client in a browser (while Skype is primarily a desktop app and much less used in a web browser).

slack-web-communication-client-in-webbrowser-screenshot

Slack is reminiscent in a number of ways of the good old IRC chats and has channels in a similar fashion; it supports audio conversations but unfortunately, at the time of writing, did not support video.

The emergence of the new age of computing and the quick adoption of clouds as a way to cut business costs pushed Jabber completely out of the game and into a niche, and in August 2013 Slack (team messaging) rose onto the scene. Slack is an acronym for Searchable Log of All Conversations and Knowledge: a cloud-based set of proprietary team collaboration tools and services founded by Stewart Butterfield, originally built for the purpose of a now-defunct online game called Glitch.

asana-for-slack-integration-2018-2-linux
The problem with Slack is that it is a freemium product, whose main paid feature is the ability to search beyond the 10,000 most recent archived messages (the ordinary free version only lets the user search within the latest 10,000 messages of chat history); the paid Slack plans also add unlimited apps and integrations and a theoretically unlimited number of users (though this is seriously doubtful).

slack-10000-messages-per-user-limitation-shot

One very handy feature of Slack is its integration with "The World's Leading Software Development Platform" – GitHub .

To work around the limited amount of searchable Slack chat history, many start-up companies use Slack as the communication medium with clients and use another very popular open source cloud messaging software, Mattermost, for internal communication. Mattermost deploys to cloud infrastructure that remains under the control of your own IT (or your hired support) rather than a third-party vendor, which makes it a great communication tool for small and mid-sized companies that want to avoid purchasing a dedicated server and hiring (or paying) an admin to support it all the time, and instead deploy it directly in their existing cloud account.

Mattermost lets you reach anyone, anywhere, on any device. From the airport to the data center, it safely connects teams with EMM apps, hybrid cloud deployment and enterprise-grade flexibility to meet the unique needs of the enterprise.

It can integrate with existing applications, build new workflows and empower your teams (especially operations and DevOps) to perform faster and more effectively, as Mattermost's own marketing puts it.


To install Slack on Linux:

Go and download Slack from the Slack Linux download page (the DEB / RPM 64-bit package).

As of the time of writing this article, the latest Slack desktop packages are slack-desktop-3.3.1-amd64.deb and slack-3.3.1-0.1.fc21.x86_64.rpm.

Depending on your Linux distribution, install it with dpkg or rpm.

1. Installing Slack Desktop client on Debian / Ubuntu Linux

On Debian / Ubuntu / Mint install Slack with:

root@ubuntu:~# dpkg -i slack-desktop-*.deb


For Ubuntu users there is also an unofficial third-party Slack app, ScudCloud.

It integrates well with the Ubuntu Unity desktop (which I personally dislike 🙂 ) and gives you some extra goodies such as an unread message count shown the Unity way, notification bubbles, Unity quicklists for fast switching between Slack channels, etc.

2. Installing Slack Desktop client on Redhat / Fedora / CentOS Linux

On Redhat / Fedora / CentOS install it with:

[root@fedora ~]:# rpm -ivh slack-*.rpm


3. Installing Mattermost Desktop client on Linux

mattermost-open-source-communication-in-the-cloud

Download Mattermost Linux package from download URL here

As of the time of writing, the DEB versions are mattermost-desktop-4.1.2-linux-amd64.deb and mattermost-desktop-4.1.2-linux-i386.deb, and there is no official RPM package for Fedora / CentOS users; however, I guess the .deb package can easily be converted to .rpm with the alien tool.
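A hedged sketch of such a conversion, run as root on the RPM-based box (I have not verified the resulting package myself, so treat it as an experiment):

# convert the Debian package to an RPM, preserving the maintainer scripts
alien --to-rpm --scripts mattermost-desktop-4.1.2-linux-amd64.deb
# then install the produced package
rpm -ivh mattermost-desktop-*.rpm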

To install Mattermost on Debian (at the moment of writing, September 2018):

root@debian:~# wget https://releases.mattermost.com/desktop/4.1.2/mattermost-desktop-4.1.2-linux-amd64.deb

root@debian:~# dpkg -i mattermost-desktop-4.1.2-linux-amd64.deb


mattermost-linux-client-2-screenshot

Mattermost supports sending file attachments and video previews (you can play shared web videos directly within the Mattermost client), and at an experimental level it even supports video and audio calls.

mattermost-linux-client-screenshot-1

mattermost-markdown-help-linux-screenshot

One nice feature of Mattermost for those who love coding is the ability to format messages using Markdown tags.

There are plenty of Mattermost features; among the best ones are the private cloud open source integrations (Jira, Jenkins, bots, clients), support for webhooks, RESTful APIs, a CLI, and public cloud connections via Zapier, a service to connect and automate workflows (e.g. it gives you the ability to move information between web apps automatically).

Mail send from command line on Linux and *BSD servers – useful for scripting

Monday, September 10th, 2018

mail-send-email-from-command-line-on-linux-and-freebsd-operating-systems-logo

Historically, email sending was very different from what most people use in the office today: there were no heavy email clients such as Outlook Express, no MS Exchange, no e-mail client capabilities for calendars and meeting schedules as in most modern corporate offices that depend on products such as Office 365 (I would call it a connectedHell 365 days a year !).

There were no free webmail and POP3 / IMAP providers such as Mail.Yahoo.com, Gmail.com, Hotmail.com, Yandex.com, RediffMail, Mail.com; the innumerable list goes on and on.
Back in the day email did what it was originally supposed to do, like the postal service in real life: simply send and receive messages.

For those who remember those charming times, people used to use BBSes (which were basically shared home systems set up as servers), some of the few university-internal student email accounts, or accounts set up by crazy sysadmins who received their notifications and warning logs about daemon (service) messages via local DMZ-ed network email servers, and it was common to read the email directly with the mail (mailx) text command or custom written scripts … It was also not uncommon for mailx to be used heavily to send notification messages on events triggered from logs. Oh, life was simple and clear back then, and even though today email can still be used in a similar fashion by hard-core old-school sysadmins and DevOps for simple shell scripting tasks or cron job reports, such usage is already deep history.

The number of ways one could send email in text form directly from a GNU / Linux / *BSD server to another remote MTA node (assuming a properly configured relay server, be it Exim or Postfix) was plenty.

In this article I will try to rewind some of that UNIX history by pinpointing a few of the most common ways one used to send quick emails directly from a remote server terminal session (let's say a cheap few-cents VPS reached over SSH or Telnet).
 

1. Using the mail command client (part of bsd-mailx on Debian).
 

In my previous article Linux: "bash mail command not found" error fix
I ended with a short explanation of how this is done, but I will repeat it one more time here for the sake of clarity.

root@linux:~# echo "Your Sample Message Body" | mail -s "Whatever … Message Subject" remote_receiver@remote-server-email-address.com


The mail command will connect to TCP port 25 of the locally configured MTA and send through it. If the local MTA is misconfigured, doesn't have proper MX / PTR DNS records, or is not configured as an SMTP relay, the mail will not get delivered remotely; otherwise the sent email should be properly delivered to the remote recipient address.
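Before digging into DNS, it is worth a quick check that a local MTA is listening on port 25 at all; a minimal sketch (ss is part of the iproute2 package, netstat -ltn works similarly):

root@linux:~# ss -ltn | grep ':25 '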

How do you send HTML formatted emails using the mailx command in a Linux console / terminal shell on a remote server over SSH?

Connect to remote SSH server (VPS), dedicated server, home Linux router etc. and run:

root@linux:~# mailx -a 'Content-Type: text/html' \
      -s "This is advanced mailx indeed!" < email_content.html \
      "first_email_to_send_to@gmail.com, mail_recipient_2@yahoo.com"


email_content.html should be properly formatted (at best w3c standard compliant) HTML.

Here is an example email_content.html (skeleton file)

    To: your_customer@gmail.com
    Subject: This is an HTML message
    From: marketing@your_company.com
    Content-Type: text/html; charset="utf8"

    <html>
    <body>
    <div style="
        background-color:
        #abcdef; width: 300px;
        height: 300px;
        ">
    </div>
Whatever text mixed with valid email HTML tags here.
    </body>
    </html>


The above command sends to two email addresses; however, if you have a text file with a list of recipients you can easily use it with a bash loop and send to multiple addresses read from, let's say, email_addresses_list.txt (see the sketch right below).
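A minimal sketch, assuming email_addresses_list.txt holds one recipient address per line and mirroring the mailx syntax used above:

while read -r recipient; do
    # send the same HTML body to each address from the list
    mailx -a 'Content-Type: text/html' -s "This is advanced mailx indeed!" "$recipient" < email_content.html
done < email_addresses_list.txt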

To advance the one-liner further, you may also want to provide an email attachment, let's say the file email_archive.rar, by using the -A email_archive.rar argument.

root@linux:~# mailx -a 'Content-Type: text/html' \
      -s "This is advanced mailx indeed!" -A ~/email_archive.rar < email_content.html \
      "first_email_to_send_to@gmail.com, mail_recipient_2@yahoo.com"

For those familiar with Dan Bernstein's qmail MTA (which, even though a bit obsolete, is still a security and stability beast among email servers), the mailx command had to be substituted with a qmail-specific one in order to send via the qmail MTA daemon.
 

2. Using sendmail command to send email
 

Do you remember that heavy, hard-to-configure MTA monster sendmail ? It was, and to this very day is, the default Mail Transfer Agent for Slackware Linux.

Here is how we were supposed to send mail with it:

[root@sendmail-host ~]# vim email_content_to_be_delivered.txt

Content of file should be something like:

Subject: This Email is sent from UNIX Terminal Email

Hi this Email was typed in a file and send via sendmail console email client
(part of the sendmail mail server)

It is really fun to go back in the pre-history of Mail Content creation 🙂

[root@sendmail-host ~]# sendmail -v user_name@remote-mail-domain.com  < /tmp/email_content_to_be_delivered.txt

The -v argument makes the communication between your mail transfer agent and the remote mail server visible.
 

3. Using ssmtp command to send mail
 

The ssmtp MTA and its bundled shell command were used historically because they are pretty straightforward: you launch it on the command line, type your subject and message, and ship it by pressing the CTRL + D key combination.

To give it a try you can do:

root@linux:~# apt-get install ssmtp
Reading package lists… Done
Building dependency tree       
Reading state information… Done
The following additional packages will be installed:
  libgnutls-openssl27
The following packages will be REMOVED:
  exim4-base exim4-config exim4-daemon-heavy
The following NEW packages will be installed:
  libgnutls-openssl27 ssmtp
0 upgraded, 2 newly installed, 3 to remove and 1 not upgraded.
Need to get 239 kB of archives.
After this operation, 3,697 kB disk space will be freed.
Do you want to continue? [Y/n] Y
Get:1 http://ftp.us.debian.org/debian stretch/main amd64 ssmtp amd64 2.64-8+b2 [54.2 kB]
Get:2 http://ftp.us.debian.org/debian stretch/main amd64 libgnutls-openssl27 amd64 3.5.8-5+deb9u3 [184 kB]
Fetched 239 kB in 2s (88.5 kB/s)         
Preconfiguring packages …
dpkg: exim4-daemon-heavy: dependency problems, but removing anyway as you requested:
 mailutils depends on default-mta | mail-transport-agent; however:
  Package default-mta is not installed.
  Package mail-transport-agent is not installed.
  Package exim4-daemon-heavy which provides mail-transport-agent is to be removed.

(Reading database … 169307 files and directories currently installed.)
Removing exim4-daemon-heavy (4.89-2+deb9u3) …
dpkg: exim4-config: dependency problems, but removing anyway as you requested:
 exim4-base depends on exim4-config (>= 4.82) | exim4-config-2; however:
  Package exim4-config is to be removed.
  Package exim4-config-2 is not installed.
  Package exim4-config which provides exim4-config-2 is to be removed.
 exim4-base depends on exim4-config (>= 4.82) | exim4-config-2; however:
  Package exim4-config is to be removed.
  Package exim4-config-2 is not installed.
  Package exim4-config which provides exim4-config-2 is to be removed.

Removing exim4-config (4.89-2+deb9u3) …
Selecting previously unselected package ssmtp.
(Reading database … 169247 files and directories currently installed.)
Preparing to unpack …/ssmtp_2.64-8+b2_amd64.deb …
Unpacking ssmtp (2.64-8+b2) …
(Reading database … 169268 files and directories currently installed.)
Removing exim4-base (4.89-2+deb9u3) …
Selecting previously unselected package libgnutls-openssl27:amd64.
(Reading database … 169195 files and directories currently installed.)
Preparing to unpack …/libgnutls-openssl27_3.5.8-5+deb9u3_amd64.deb …
Unpacking libgnutls-openssl27:amd64 (3.5.8-5+deb9u3) …
Processing triggers for libc-bin (2.24-11+deb9u3) …
Setting up libgnutls-openssl27:amd64 (3.5.8-5+deb9u3) …
Setting up ssmtp (2.64-8+b2) …
Processing triggers for man-db (2.7.6.1-2) …
Processing triggers for libc-bin (2.24-11+deb9u3) …

As you can see from the above output, the local default Debian Linux Exim MTA gets removed …

Lets send a simple test email …

hipo@linux:~# ssmtp user@remote-mail-server.com
Subject: Simply Test SSMTP Email
This Email was send just as a test using SSMTP obscure client
via SMTP server.
^d

What is notable about ssmtp is that, even though it is considered obsolete today, it supports STARTTLS (encrypted mail transport), which is configured via its config file (see the sample sketch below):

/etc/ssmtp/ssmtp.conf
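A minimal configuration sketch for relaying through a STARTTLS-enabled smarthost (the mail hub, user and password below are placeholders, not values from this post):

# overwrite /etc/ssmtp/ssmtp.conf with a STARTTLS relay configuration (run as root)
cat > /etc/ssmtp/ssmtp.conf << 'EOF'
root=postmaster@example.com
mailhub=smtp.example.com:587
UseSTARTTLS=YES
AuthUser=your_user@example.com
AuthPass=your_password
FromLineOverride=YES
EOF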

4. Send Email from terminal using Mutt client
 

Mutt was and still is one of the most used console text email clients, along with Alpine and Fetchmail; to learn more about it read here.

Mutt supports reading / sending mail from multiple mailboxes, can fetch mail over the IMAP and POP3 protocols, and was a serious step forward over mailx. Its syntax pretty much resembles the mailx commands.

root@linux:~# mutt -s "Test Email" user@example.com < /dev/null

Send an email including an attachment, e.g. a 15 megabyte MySQL backup of the SquirrelMail webmail:

root@linux:~# mutt  -s "This is last backup small sized database" -a /home/backups/backup_db.sql user@remote-mail-server.com < /dev/null


5. Using simple telnet to test and send email (verify the existence of a mailbox on a remote SMTP server)
 

As a mail server sysadmin this is one of my favourite ways to test whether a server is properly configured, and sometimes, just for the fun of it, I used it as a hack to send my mail 🙂
telnet is and will always be a great tool for troubleshooting SMTP issues.
 

It is very useful to test whether remote SMTP TCP port 25 is open or whether a local / remote server firewall prevents connections to the MTA.

Below is a connect-and-send example using telnet against my local SMTP on pc-freak.net (QMail powered (R) 🙂 )

sending-email-using-telnet-command-howto-screenshot

root@pcfreak:~# telnet localhost 25
Trying 127.0.0.1…
Connected to localhost.
Escape character is '^]'.
220 This is Mail Pc-Freak.NET ESMTP
HELO mail.pc-freak.net
250 This is Mail Pc-Freak.NET
MAIL FROM:<hipo@pc-freak.net>
250 ok
RCPT TO:<roots_bg@yahoo.com>
250 ok
DATA
354 go ahead
Subject: This is a test subject

This is just a test mail send through telnet
.
250 ok 1536440787 qp 28058
^]
telnet>

Note that the returned messages are native to qmail; a Postfix server would return slightly different content. Here is another test example against a remote SMTP server running Sendmail or Postfix.

root@pcfreak:~# telnet mail.servername.com 25
Trying 127.0.0.1…
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
220 mail.servername.com ESMTP Sendmail 8.13.8/8.13.8; Tue, 22 Oct 2013 05:05:59 -0400
HELO yahoo.com
250 mail.servername.com Hello mail.servername.com [127.0.0.1], pleased to meet you
mail from: systemexec@gmail.com
250 2.1.0 hipo@pc-freak.net… Sender ok
rcpt to: hip0d@yandex.ru
250 2.1.5 hip0d@yandex.ru… Recipient ok
data
354 Enter mail, end with "." on a line by itself
Hey
This is test email only

Thanks
.
250 2.0.0 r9M95xgc014513 Message accepted for delivery
quit
221 2.0.0 mail.servername.com closing connection
Connection closed by foreign host.


It is handy if you want to know whether a remote MTA server has a certain mailbox or not: with telnet you simply try to send to that address and check the output the email server returns at the RCPT TO stage. Note that the returned message depends on the remote MTA version, and many qmail installations are configured not to reveal this information during the initial SMTP handshake, instead returning a MAILER DAEMON failure later to your sender address. Some MX servers are still vulnerable to this kind of mailbox enumeration; historically dreamhost.com was. The screenshot below was taken before dreamhost.com fixed the brute-force email enumeration issue.

Terminal-Verify-existing-Email-with-telnet

6. Using simple netcat TCP/IP Swiss Army Knife to test and send email in console

netcat-logo-a-swiff-army-knife-of-the-hacker-and-security-expert-logo
Another tool besides telnet for testing a remote / local SMTP is netcat (a tool for reading and writing data across TCP and UDP connections).

The procedure is analogous, but since netcat is not present on most Linux OSes by default, you need to install it through the package manager first, be it apt or yum etc.

# apt-get --yes install netcat


 

First let's create a new file, test_email_content.txt, using bash's echo command.
 

# echo 'EHLO hostname
MAIL FROM: hip0d@yandex.ru
RCPT TO:   solutions@pc-freak.net
DATA
From: A tester <hip0d@yandex.ru>
To:   <solutions@pc-freak.net>
Date: date
Subject: A test message from test hostname

Delete me, please
.
QUIT
' >>test_email_content.txt

# netcat -C localhost 25 < test_email_content.txt

220 This is Mail Pc-Freak.NET ESMTP
250-This is Mail Pc-Freak.NET
250-STARTTLS
250-SIZE 80000000
250-PIPELINING
250 8BITMIME
250 ok
250 ok
354 go ahead
451 See http://pobox.com/~djb/docs/smtplf.html.

Because of its simplicity and the fact that it has a few more capabilities for reading / writing data over the network, it is no surprise that netcat is among the favourite tools not only of crackers and penetration testers but also a precious debugging tool for the average sysadmin. netcat's advantage over telnet is that you can push non-interactive input over the remote SMTP port (25) and pull the responses back.


7. Using openssl to connect and send email via encrypted channel

root@linux:~# openssl s_client -connect smtp.gmail.com:465 -crlf -ign_eof

    ===
               Certificate negotiation output from openssl command goes here
        ===

        220 smtp.gmail.com ESMTP j92sm925556edd.81 – gsmtp
            EHLO localhost
        250-smtp.gmail.com at your service, [78.139.22.28]
        250-SIZE 35882577
        250-8BITMIME
        250-AUTH LOGIN PLAIN XOAUTH2 PLAIN-CLIENTTOKEN OAUTHBEARER XOAUTH
        250-ENHANCEDSTATUSCODES
        250-PIPELINING
        250-CHUNKING
        250 SMTPUTF8
            AUTH PLAIN *passwordhash*
        235 2.7.0 Accepted
            MAIL FROM: <hipo@pcfreak.org>
        250 2.1.0 OK j92sm925556edd.81 – gsmtp
            rcpt to: <systemexec@gmail.com>
        250 2.1.5 OK j92sm925556edd.81 – gsmtp
            DATA
        354  Go ahead j92sm925556edd.81 – gsmtp
            Subject: This is openssl mailing

            Hello nice user
            .
        250 2.0.0 OK 1339757532 m46sm11546481eeh.9
            quit
        221 2.0.0 closing connection m46sm11546481eeh.9
        read:errno=0


8. Using the cURL (URL transfer) tool to send emails over an SSL / TLS encrypted channel via Gmail / Yahoo servers and the MailGun email API service


Using curl, the advanced URL transfer tool, to send email may come as shocking news to many, as its whole idea is just to transfer data from or to a server.
curl is mostly used in conjunction with PHP website scripts, as it has a native PHP binding, and many PHP based websites widely use it for download / upload of user data.
Interestingly, besides its support for HTTP and FTP it supports the POP3 and SMTP email protocols as well.
If you don't have it installed on your server and you want to give it a try, install it first with apt:
 

root@linux:~# apt-get install curl


To learn more about curl's capabilities make sure you check the curl --manual output.
 

root@linux:~# curl --manual

a) Sending Emails via Gmail and other Mail Public services

curl is capable of sending emails from the terminal using the Gmail and Yahoo Mail services, if you want to give that a try.

gmail-settings-google-allow-less-secure-apps-sign-in-to-google-screenshot

Go to the myaccount.google.com URL, log in from the web interface, choose Sign-in & security and switch Allow less secure apps to ON to enable access for less secure apps in Gmail. Though I have not tested it myself with Yahoo! Mail so far, I suppose it has a similar security setting somewhere.

Here is how to use curl to send email via Gmail.

Gmail-password-Allow-less-secure-apps-ON-screenshot-howto-to-be-able-to-send-email-with-text-commands-with-encryption-and-outlook

root@linux:~# curl --url 'smtps://smtp.gmail.com:465' --ssl-reqd \
  --mail-from 'your_email@gmail.com' --mail-rcpt 'remote_recipient@mail.com' \
  --upload-file mail.txt --user 'your_email@gmail.com:your_account_password'


b) Sending Emails using Mailgun.com (Transactional Email Service API for developers)

To use Mailgun to script automated email sending, go to Mailgun.com, create an account and generate a new API key.

Then use curl in a similar way like below example:

curl -sv --user 'api:key-7e55d003b…f79accd31a' \
    https://api.mailgun.net/v3/sandbox21a78f824…3eb160ebc79.mailgun.org/messages \
    -F from='Excited User <developer@yourcompany.com>' \
    -F to=sandbox21a78f824…3eb160ebc79.mailgun.org \
    -F to=user_acc@gmail.com \
    -F subject='Hello' \
    -F text='Testing Mailgun service!' \
    --form-string html='<h1>EDMdesigner Blog</h1><br /><cite>This tutorial helps me understand email sending from Linux console</cite>' \
    -F attachment=@logo_picture.jpg

The -F option, heavily used in the above command, makes curl emulate a filled-in HTML form in which the user has pressed the submit button.
For more info of the options check out man curl.
 

9. Using the swaks command to send emails from the terminal

root@linux:~# apt-cache show swaks|grep "Description" -B 10
Package: swaks
Version: 20170101.0-1
Installed-Size: 221
Maintainer: Andreas Metzler <ametzler@debian.org>
Architecture: all
Depends: perl
Recommends: libnet-dns-perl, libnet-ssleay-perl
Suggests: perl-doc, libauthen-sasl-perl, libauthen-ntlm-perl
Description-en: SMTP command-line test tool
 swaks (Swiss Army Knife SMTP) is a command-line tool written in Perl
 for testing SMTP setups; it supports STARTTLS and SMTP AUTH (PLAIN,
 LOGIN, CRAM-MD5, SPA, and DIGEST-MD5). swaks allows one to stop the
 SMTP dialog at any stage, e.g to check RCPT TO: without actually
 sending a mail.
 .
 If you are spending too much time iterating "telnet foo.example 25"
 swaks is for you.
Description-md5: f44c6c864f0f0cb3896aa932ce2bdaa8

 

root@linux:~# apt-get install --yes swaks

root@linux:~# swaks --to mailbox@example.com -s smtp.gmail.com:587 \
      -tls -au <user-account> -ap <account-password>


The -tls argument is needed in order to use Gmail's encrypted STARTTLS channel on port 587.

If you want to avoid typing the password on the command line (so that it does not end up in your shell history), you can pass -ap without a value; swaks should then prompt for the password interactively.
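A hedged example of that, under the assumption that swaks prompts when the password argument is left out:

root@linux:~# swaks --to mailbox@example.com -s smtp.gmail.com:587 -tls -au <user-account> -ap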

10. Using qmail-inject on Qmail mail servers to send simple emails

Create new file with content like:
 

root@qmail:~# vim email_file_content.text
To: user@mail-example.com
Subject: Test


This is a test message.
 

root@qmail:~# cat email_file_content.text | /var/qmail/bin/qmail-inject


qmail-inject is part of an ordinary qmail installation, so it is very simple: it gives almost no feedback on the command line and just hands whatever content it is given to the local qmail for delivery to the remote MTA.
If the Linux host where you invoke it has a properly configured qmail installation, the email will get delivered immediately. The advantage of qmail-inject over the other tools is that it is really lightweight and will deliver a simple message more quickly than the prior, heavier tools, but again it is more of a quick debugging aid for when the MTA is misbehaving than something for daily email writing.

It is also very useful for a quick check that email sending works at all, without composing any body content (I used qmail-inject to test that local email delivery works like so):
 

root@linux:~# echo 'To: mailbox_acc@mail-server.com' | /var/qmail/bin/qmail-inject

11. Debugging why an email sent with a text tool does not reach the remote recipient

If you use some of the above-described methods and the email is not delivered to the remote recipient address, check /var/log/mail.log (the general mail log on many Linux distributions and on Postfix MTAs), /var/log/messages, /var/log/qmail (on qmail installations) or /var/log/exim4/ (on servers running Exim as MTA).

https://pc-freak.net/images/linux-email-log-debug-var-log-mail-output
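A couple of hedged helper commands for that kind of log digging (the log path and the recipient address are placeholders; adjust them to your MTA and distribution):

# follow delivery attempts for a particular recipient live
root@linux:~# tail -f /var/log/mail.log | grep -i 'recipient@remote-server.com'
# show the most recent rejections, deferrals and bounces
root@linux:~# grep -Ei 'reject|defer|bounce' /var/log/mail.log | tail -n 20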

 Closure

The ways to send email from the Linux terminal are practically innumerable, as there are plenty of scripted tools in various programming languages, and I am surely also missing a lot of pre-bundled installable distro packages in this article. If you know other interesting ways / tools to send email from the terminal, I would like to hear about them.

Hope you enjoyed, happy mailing !