Posts Tagged ‘cmd’

How to RIP audio CD and convert to MP3 format on Linux

Thursday, April 11th, 2024

I've been given a very tedious task: copy music from an Audio CD drive to MP3 / MP4 file format and then copy the content to an external Flash drive.
Doing so is pretty trivial. You just need to have a CD / DVD ROM on your computer, something that becomes rare nowadays, and then you need to have a bunch of software installed, if you don't already have it, as I've pointed out in my previous article Howto create Music Audio CD from MP3 files, create playable WAV format audio CD Albums from MP3s.

Creating an Audio CD from an MP3 collection is exactly the opposite of what is aimed at now (copying the content of a CD to a computer and preparing it for a Car MP3 player).

1. Ripping audio CDs to WAV and converting to MP3 from the terminal
 

On Linux there are many ways to do it and many tools that can do it for you, both graphical and command line.
But as I prefer the command line to do stuff, in this article I'll mention the quickest and most elementary one, which is done in 2 steps.

1. Use a tool to dump the CD Audio music to Tracks in WAV format
2. Convert the WAV to MP3 format

We'll need the cdparanoia tool installed as well as ffmpeg.

If you don't have them installed do:

# apt-get install --yes cdparanoia dvd+rw-tools cdw cdrdao audiotools cdlabelgen wodim ffmpeg lame normalize-audio libavcodec58

Next create the directory where you want to dump the .wav files.

# mkdir /home/hipo/audiorip/cd1
# cd /home/hipo/audiorip/cd1

Next, assuming the Audio CD is plugged in the CD reader, dump its full content into track*.wav files with the cmd:

# cdparanoia -B

This will produce the dumped songs as .wav files.

hipo@noah:~/audiorip/cd1$ ls -al *.wav
-rw-r--r-- 1 root root  10278284 Mar 25 22:49 track01.cdda.wav
-rw-r--r-- 1 root root  21666668 Mar 25 22:50 track02.cdda.wav
-rw-r--r-- 1 root root  88334108 Mar 25 22:53 track03.cdda.wav
-rw-r--r-- 1 root root  53453948 Mar 25 22:55 track04.cdda.wav
-rw-r--r-- 1 root root 100846748 Mar 25 22:58 track05.cdda.wav
-rw-r--r-- 1 root root  41058908 Mar 25 22:59 track06.cdda.wav
-rw-r--r-- 1 root root 105952940 Mar 25 23:02 track07.cdda.wav
-rw-r--r-- 1 root root  50074124 Mar 25 23:03 track08.cdda.wav
-rw-r--r-- 1 root root  92555948 Mar 25 23:06 track09.cdda.wav
-rw-r--r-- 1 root root  61939964 Mar 25 23:07 track10.cdda.wav
-rw-r--r-- 1 root root   8521340 Mar 25 23:07 track11.cdda.wav

Then you can use a simple for loop with the ffmpeg command to convert the .wav files to .mp3s.

hipo@noah:~/audiorip/cd1$ for i in *.wav; do ffmpeg -i "$i" "$i.wav.mp3"; done
 

ffmpeg version 1.2.12 Copyright (c) 2000-2015 the FFmpeg developers
  built on Feb 12 2015 18:03:16 with gcc 4.7 (Debian 4.7.2-5)
  configuration: –prefix=/usr –extra-cflags='-g -O2 -fstack-protector –param=ssp-buffer-size=4 -Wformat -Werror=format-security ' –extra-ldflags='-Wl,-z,relro' –cc='ccache cc' –enable-shared –enable-libmp3lame –enable-gpl –enable-nonfree –enable-libvorbis –enable-pthreads –enable-libfaac –enable-libxvid –enable-postproc –enable-x11grab –enable-libgsm –enable-libtheora –enable-libopencore-amrnb –enable-libopencore-amrwb –enable-libx264 –enable-libspeex –enable-nonfree –disable-stripping –enable-libvpx –enable-libschroedinger –disable-encoder=libschroedinger –enable-version3 –enable-libopenjpeg –enable-librtmp –enable-avfilter –enable-libfreetype –enable-libvo-aacenc –disable-decoder=amrnb –enable-libvo-amrwbenc –enable-libaacplus –libdir=/usr/lib/x86_64-linux-gnu –disable-vda –enable-libbluray –enable-libcdio –enable-gnutls –enable-frei0r –enable-openssl –enable-libass –enable-libopus –enable-fontconfig –enable-libpulse –disable-mips32r2 –disable-mipsdspr1 –dis  libavutil      52. 18.100 / 52. 18.100
  libavcodec     54. 92.100 / 54. 92.100
  libavformat    54. 63.104 / 54. 63.104
  libavdevice    54.  3.103 / 54.  3.103
  libavfilter     3. 42.103 /  3. 42.103
  libswscale      2.  2.100 /  2.  2.100
  libswresample   0. 17.102 /  0. 17.102
  libpostproc    52.  2.100 / 52.  2.100
[wav @ 0x66c900] max_analyze_duration 5000000 reached at 5015510 microseconds
Guessed Channel Layout for  Input Stream #0.0 : stereo
Input #0, wav, from 'track01.cdda.wav':
  Duration: 00:00:23.19, bitrate: 1411 kb/s
    Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Output #0, mp3, to 'track01.cdda.wav.wav.mp3':
  Metadata:
    TSSE            : Lavf54.63.104
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s16le -> libmp3lame)
Press [q] to stop, [?] for help
size=     363kB time=00:00:23.19 bitrate= 128.2kbits/s    
video:0kB audio:363kB subtitle:0 global headers:0kB muxing overhead 0.058402%
ffmpeg version 1.2.12 Copyright (c) 2000-2015 the FFmpeg developers
  built on Feb 12 2015 18:03:16 with gcc 4.7 (Debian 4.7.2-5)
  configuration: –prefix=/usr –extra-cflags='-g -O2 -fstack-protector –param=ssp-buffer-size=4 -Wformat -Werror=format-security ' –extra-ldflags='-Wl,-z,relro' –cc='ccache cc' –enable-shared –enable-libmp3lame –enable-gpl –enable-nonfree –enable-libvorbis –enable-pthreads –enable-libfaac –enable-libxvid –enable-postproc –enable-x11grab –enable-libgsm –enable-libtheora –enable-libopencore-amrnb –enable-libopencore-amrwb –enable-libx264 –enable-libspeex –enable-nonfree –disable-stripping –enable-libvpx –enable-libschroedinger –disable-encoder=libschroedinger –enable-version3 –enable-libopenjpeg –enable-librtmp –enable-avfilter –enable-libfreetype –enable-libvo-aacenc –disable-decoder=amrnb –enable-libvo-amrwbenc –enable-libaacplus –libdir=/usr/lib/x86_64-linux-gnu –disable-vda –enable-libbluray –enable-libcdio –enable-gnutls –enable-frei0r –enable-openssl –enable-libass –enable-libopus –enable-fontconfig –enable-libpulse –disable-mips32r2 –disable-mipsdspr1 –dis  libavutil      52. 18.100 / 52. 18.100
  libavcodec     54. 92.100 / 54. 92.100
  libavformat    54. 63.104 / 54. 63.104
  libavdevice    54.  3.103 / 54.  3.103
  libavfilter     3. 42.103 /  3. 42.103
  libswscale      2.  2.100 /  2.  2.100
  libswresample   0. 17.102 /  0. 17.102
  libpostproc    52.  2.100 / 52.  2.100
[wav @ 0x66c900] max_analyze_duration 5000000 reached at 5015510 microseconds
Guessed Channel Layout for  Input Stream #0.0 : stereo
Input #0, wav, from 'track02.cdda.wav':
  Duration: 00:02:21.28, bitrate: 1411 kb/s
    Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
Output #0, mp3, to 'track02.cdda.wav.wav.mp3':
  Metadata:
    TSSE            : Lavf54.63.104
    Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p
Stream mapping:
  Stream #0:0 -> #0:0 (pcm_s16le -> libmp3lame)
Press [q] to stop, [?] for help
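
If you want control over the MP3 bitrate and cleaner output file names, a slightly different loop along these lines should also work (a minimal sketch; the 192 kbit/s bitrate is an arbitrary choice):

hipo@noah:~/audiorip/cd1$ for i in *.wav; do ffmpeg -i "$i" -b:a 192k "${i%.cdda.wav}.mp3"; done

This produces track01.mp3, track02.mp3 and so on instead of the double .wav.mp3 extension.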


Finally remove the old unneeded .wav files and enjoy the mp3s with vlc / mplayer / mpg123 or whatever player you like.

hipo@noah:~/audiorip/cd1$  rm -f *.wav


Now mount the flash drive and copy the files onto it.

# mkdir /media/usb-drive
# mount /dev/sdc1 /media/usb-drive/
# mkdir -p /media/usb-drive/cd1
# mount |grep -i sdc1

/dev/sdc1 on /media/usb-drive type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=utf8,shortname=mixed,errors=remount-ro)

# cp -rpf ~/audiorip/cd1/*.mp3 /media/usb-drive/cd1
# umount /dev/sdc1


2. Ripping an audio CD on Linux with the rip-audio-cds-linux.sh script
 

#!/bin/bash
# A simple shell script to rip audio cd and create mp3 using lame
# and cdparanoia utilities.
# ----------------------------------------------------------------------------
# Written by Vivek Gite <http://www.cyberciti.biz/>
# (c) 2006 nixCraft under GNU GPL v2.0+
# ----------------------------------------------------------------------------
read -p "Starting in 5 seconds ( to abort press CTRL + C ) " -t 5
cdparanoia -B
for i in *.wav
do
    lame --vbr-new -b 320 "$i" "${i%%.cdda.wav}.mp3"
    rm -f "$i"
done


If you need to automate the task of dumping audio CDs to WAV and converting them to MP3s, you can do it via a small shell script like the one above provided by cyberciti.biz, which uses the cdparanoia and lame commands in a shell script loop. The script rip-audio-cds-linux.sh is here

3. Dump Audio CD to MP3 with Graphical program ( ripperx ) 

By default most modern Linux distributions, including the Debian GNU / Linux based ones, have ripperx in the default repositories; the tool is also downloadable and compilable from source from sourceforge.net.

ripping-audio-cds-linux-graphical-program-ripperx-tool

# apt-cache show ripperx|grep -i descript -A3 -B3
Architecture: amd64
Depends: cdparanoia, vorbis-tools (>= 1.0beta3), libatk1.0-0 (>= 1.12.4), libc6 (>= 2.14), libcairo2 (>= 1.2.4), libfontconfig1 (>= 2.12.6), libfreetype6 (>= 2.2.1), libgcc1 (>= 1:3.0), libgdk-pixbuf2.0-0 (>= 2.22.0), libglib2.0-0 (>= 2.16.0), libgtk2.0-0 (>= 2.8.0), libpango-1.0-0 (>= 1.14.0), libpangocairo-1.0-0 (>= 1.14.0), libpangoft2-1.0-0 (>= 1.14.0), libstdc++6 (>= 5.2), libtag1v5 (>= 1.9.1-2.2~)
Suggests: sox, cdtool, mpg321, flac, toolame
Description-en: GTK-based audio CD ripper/encoder
 ripperX is a graphical interface for ripping CD audio tracks (using
 cdparanoia) and then encoding them into the Ogg, FLAC, or MP2/3
 formats using the vorbis tools, FLAC, toolame or other available
 MP3 encoders.
 .
 It includes support for CDDB lookups and ID3v2 tags.
Description-md5: cdeabf4ef72c33d57aecc4b4e2fd5952
Homepage: http://sourceforge.net/projects/ripperx/
Tag: hardware::storage, hardware::storage:cd, interface::graphical,
 interface::x11, role::program, scope::application, uitoolkit::gtk,

# apt install --yes ripperx


https://www.pc-freak.net/images/ripperx-linux-gui-rip-audio-cds-tool

That's all folks.
Enjoy !

How to count number of ESTABLISHED state TCP connections to a Windows server

Wednesday, March 13th, 2024

count-netstat-established-connections-on-windows-server-howto-windows-logo-debug-network-issues-windows

Even if you have the background of a Linux system administrator, sooner or later you will have to deal with some Windows hosts. Thus, in this article I'll blog shortly on how to count the established TCP connections, in case it happens that you have to administrate a Windows host or help a Windows sysadmin noobie 🙂

In Linux it is pretty easy to check the number of established connections, thanks to the wonderful command wc (word count), with a simple command like:
 

$ netstat -etna |wc -l


Then you will get the number of active TCP connections to the machine, and based on that you can get an idea of how busy the server is.
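
Note that plain netstat output also includes a couple of header lines and connections in other states, so if you want to count only the connections really in ESTABLISHED state (closer to what we will do on Windows below), a stricter variant might look like this (a minimal sketch; ss is the newer replacement for netstat on recent distros):

$ netstat -tna | grep -c ESTABLISHED

$ ss -t state established | tail -n +2 | wc -l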

But what if you have to deal with, let's say, a Microsoft Windows 2012 / 2016 / 2019 or 2022 Server? Assume you logged in as Administrator and you see the machine is quite loaded and runs multiple common native Windows services such as IIS, Active Directory, Failover Clustering, a Proxy server etc.
How can you identify the number of established connections via a simple command in cmd.exe?

1. Count ESTABLISHED TCP connections from the Windows Command Line

Here is the answer: simply use the native Windows netstat command and combine it with find, like below, using the /i (ignore the case of characters when searching the string) and /c (count lines containing the string) options:

C:\Windows\system32>netstat -p TCP -n|  find /i "ESTABLISHED" /c
1268

Voila, here is the number of established connections, only 1268, which is relatively low.
However if you manage Windows servers and you get some kind of hang ups as part of the monitoring, it is a good idea to set up a script based on this simple command, at least in Windows Task Scheduler (the equivalent of Linux's crond service), to log peaks in established connections and see whether server crashes are related to a high rise in established connections.
Even better, if the company uses Zabbix / Nagios, OpenNMS or other old legacy monitoring stuff like Joschyd (even as of today, 2024, used in some of the TOP IT companies such as SAP, which was still using it about 4 years ago for their SAP HANA Cloud), you can set the script to run and build a Monitoring template or Alerting rules to draw graphs and trigger alerts if your connections hit a peak; then you at least might know your Windows server is under a "Hackers" Denial of Service attack or there is something happening on the network, like Cisco Network Infrastructure Switch flapping or whatever.
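
If you go for the Task Scheduler route, a tiny batch file like the one below can be scheduled to run every few minutes and append a timestamped count to a log file (a rough sketch; the log path and file name are made up for the example):

@echo off
rem count ESTABLISHED TCP connections and append the value with a timestamp
for /f %%c in ('netstat -p TCP -n ^| find /c /i "ESTABLISHED"') do set CNT=%%c
echo %date% %time% %CNT% >> C:\admin\established_connections.log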

Perhaps an example script you can use, if you decide to implement the little netstat established connections check monitoring in Zabbix, is the one I've written about in the previous article "Calculate established connection from IP address with shell script and log to zabbix graphic".

2. A few useful netstat options for the Windows system admin
 

C:\Windows\System32> netstat -bona


netstat-useful-arguments-for-the-windows-system-administrator

Cmd.exe will list the executable files, local and external IP addresses and ports, and the connection state in list form. You immediately see which programs have created connections or are listening, so that you can find offenders quickly.

b – displays the executable involved in  creating the connection.
o – displays the owning process ID.
n – displays address and port numbers.
a – displays all connections and listening ports.

As you can see in the screenshot, by using netstat -bona you get which process has bound to which local address together with its Process ID (PID), which is pretty useful when debugging stuff.

3. Use a Third Party GUI tool to debug more interactively connection issues

If you need to keep an eye on things in interactive mode, sometimes when there are issues the CurrPorts tool can be of great help.

currports-windows-network-connections-diagnosis-cports

CurrPorts Tool own Description

CurrPorts is network monitoring software that displays the list of all currently opened TCP/IP and UDP ports on your local computer. For each port in the list, information about the process that opened the port is also displayed, including the process name, full path of the process, version information of the process (product name, file description, and so on), the time that the process was created, and the user that created it.
In addition, CurrPorts allows you to close unwanted TCP connections, kill the process that opened the ports, and save the TCP/UDP ports information to HTML file , XML file, or to tab-delimited text file.
CurrPorts also automatically mark with pink color suspicious TCP/UDP ports owned by unidentified applications (Applications without version information and icons).

Sum it up

What we learned is how to calculate the number of established TCP connections from the command line (useful for scripting), how you can use netstat to display the Process ID and process name related to local / remote TCP connections in use, and how eventually you can connect this to some monitoring tool to periodically report high peaks of established TCP connections (usually an indicator of severe system issues).
 

How to disable Windows pagefile.sys and hiberfil.sys temporarily or permanently to save disk space if space is critically low

Monday, March 28th, 2022

howto-pagefile-hiberfil.sys-remove-reduce-increase-increase-size-windows-logo

Sometimes you have to work with Windows 7 / 8 / 10 PCs etc. that have a very small C:\ partition,
or other times, due to whatever reason, the disk got filled up with time and has only a few megabytes left,
and this totally breaks the Windows performance, as the OS becomes terribly sluggish and even
simple things such as opening an Internet Browser (Chrome / Firefox / Opera) or Windows Explorer stall the PC.

You might of course try to use something like the Spacesniffer tool (a great tool to find lost data space on a PC; a short description of it is found in my previous article how to
delete temporary Internet Files and Folders to speed up and free disk space) or use CCleaner to clean up the PC a bit.
Sometimes this is not enough though, or it is not possible at all because the free space on the main
partition disk C:\ is anyhow too low (only 30-50MB available on the HDD), or the Physical or Virtual Machine containing the OS is filled with important data
and you can't risk removing anything, including Internet Temporary files, browsing cookies … whatever.

Let's say you are the fate-chosen sysadmin to face this uneasy situation and have no easy
way to add disk space from another partition with free space, and you could not add a new SATA hard drive or
SSD drive. What should you do?
 

The solution: wipe off pagefile.sys and hiberfil.sys

Usually every Windows installation has a pagefile.sys and hiberfil.sys.

  • pagefile.sys – is the default file that is used as a swap file as soon as the machine runs out of memory. For Unix / Linux users, pagefile.sys is the equivalent of Linux's swap partition. Of course, as the pagefile is a file and not a separate partition, swapping in Windows is perhaps generally worse than in Linux.
  • hiberfil.sys – is used to store data from the machine on machine Hibernation (for those who use the feature)


Pagefile.sys, depending on the configured RAM memory on the OS, could take up to 5 – 8 GB, just hanging around doing nothing but occupying space. Thus a temporary workaround that could free you some space, even though it will degrade performance (so on servers and production machines this is not a good solution, only on user machines where you temporarily need to free space for some other important task), is
to seriously reduce the preconfigured default size of pagefile.sys (which usually is 1.5 times the active memory on the OS – hence if you have 4GB you would have a 6 Gigabyte pagefile.sys).

Other possibility especially on laptop and movable devices running Win OS is to disable hiberfil.sys, read below how this is done.


The temporary solution here is to simply free space by either reducing the size of pagefile.sys or completely disabling it.


1. Disable pagefile.sys on Windows XP, Windows 7 / 8 / 10 / 11


The GUI interface to disable pagefile across all NT based Windows OS-es is quite similar, the only difference is newer versions of Windows has slightly more options.


1.1 Disable pagefile on Windows XP


Quickest way is to find pagefile.sys settings from GUI menus

1. Computer (My Computer) – right click mouse
2. Properties (System Properties will appear)
3. Advanced (tab) 
4. Settings
5. Advanced (tab)
6. Change button

windows-xp-pagefile-disable-screenshot

1.2 Disable pagefile on Windows 7

 

advanced-system-settings-control-panel-system-and-security-screenshot

windows-system-properties-screenshot-properties-advanced-change-Virtual-memory-pagefile-screenshot

system-properties-performance-options
 

Once applied you'll be required to reboot the PC

How-to-turn-off-Virtual-Memory-Paging_File-in-Windows-7-restart

 

1.3 Disable Increase / Decrease pagefile.sys on Windows 10 / Win 11
 

open-system-properties-advanced-win10

win10-performance-options-menu-screenshot

configure-virtual-memory-win-10-screenshot


1.4 Make Windows clear pagefile.sys on shutdown

On home PCs it might be a useful thing to clear up (nullify) pagefile.sys on shutdown; that could save you some disk space on every reboot, until the file again grows to its configured maximum.

Run

regedit

Modify registry key at location

 

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

windows-clean-up-pagefile-sys-file-on-shutdown-or-reboot-registry-editor-value-screenshot

You can apply the value also via a registry file; you can get the Enable Clearpagefile at shutdown .reg file here.
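
For reference, what that .reg file presumably boils down to is setting the well-known ClearPageFileAtShutdown DWORD value to 1 under the key above; a minimal sketch of such a file would look like:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"ClearPageFileAtShutdown"=dword:00000001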
 

2. Manipulating pagefile.sys size and deleting the file from the command line with the wmic tool

For scripting purposes you might want to use the wmic pagefile command, which can increase / decrease or delete the file without the GUI; that is very helpful if you have to administer a Windows Domain (Active Directory).
 

[hipo.WINDOWS-PC] ➤ wmic pagefile /?

PAGEFILE - Virtual memory file swapping management.

HINT: BNF for Alias usage.
(<alias> [WMIObject] | <alias> [<path where>] | [<alias>] <path where>) [<verb clause>].

USAGE:

PAGEFILE ASSOC [<format specifier>]
PAGEFILE CREATE <assign list>
PAGEFILE DELETE
PAGEFILE GET [<property list>] [<get switches>]
PAGEFILE LIST [<list format>] [<list switches>]

 

[hipo.WINDOWS-PC] ➤ wmic pagefile
AllocatedBaseSize  Caption          CurrentUsage  Description      InstallDate                Name             PeakUsage  Status  TempPageFile
4709               C:\pagefile.sys  499           C:\pagefile.sys  20200912061902.938000+180  C:\pagefile.sys  525                FALSE

 

[hipo.WINDOWS-PC] ➤ wmic pagefile list /format:list

AllocatedBaseSize=4709
CurrentUsage=499
Description=C:\pagefile.sys
InstallDate=20200912061902.938000+180
Name=C:\pagefile.sys
PeakUsage=525
Status=
TempPageFile=FALSE

wmic-pagefile-command-line-tool-for-windows-default-output-screenshot

 

  • To change the Initial Size or Maximum Size of Pagefile use:
     

➤ wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=2048,MaximumSize=2048

  • To move the pagefile / change the location of the pagefile to a less occupied disk drive partition (i.e. the D:\ drive)

     

     

    Sometimes you might have multiple drives on the PC and some of them might have multitudes of gigabytes free while the main drive C:\ is fully occupied due to bad initial drive organization; in that case a good workaround to save you space, so you can work normally with the server, is just to temporarily or permanently move the pagefile to another drive.

wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=2048,MaximumSize=2048
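
Note that the command above only resizes an already existing D:\pagefile.sys. If the pagefile does not exist on D:\ yet, the full move typically also involves disabling automatic pagefile management, removing the old file and creating the new one; a rough sketch (a reboot is required afterwards):

➤ wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False
➤ wmic pagefileset where name="C:\\pagefile.sys" delete
➤ wmic pagefileset create name="D:\\pagefile.sys"
➤ wmic pagefileset where name="D:\\pagefile.sys" set InitialSize=2048,MaximumSize=2048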


!! CONSIDER !!! 

That if you have the option to move pagefile.sys, for best performance it is advisable to place the file on another physical disk, preferably a Solid State Drive; SATA disks are too slow and the reduced Input / Output disk operations will lead to degraded performance if there is a lack of memory (i.e. pagefile.sys is actively being read and written).
 

  • To delete pagefile.sys 
     

➤ wmic pagefileset where name="C:\\pagefile.sys" delete

 

If for some reason you prefer to not use wmic but simple del command you can delete pagefile.sys also by:

Removing the file's default "Hidden" and "system" attributes – set for security reasons, as the file is a system file usually not touched by the user. This will save you from "permission denied" errors:
 

➤ attrib -s -h %systemdrive%\pagefile.sys


Delete the file:
 

➤ del /a /q %systemdrive%\pagefile.sys


3. Disable hibernation on Windows 7 / 8 and Win 10 / 11

Disabling the hibernation file hiberfil.sys can also free up some space, especially if hibernation has been actively used before and the file is filled with data. Of course, that is more common on notebooks.
Windows hibernation has significantly improved over time, though I didn't have a very pleasant experience with it in the past and I prefer to disable it just in case.
 

3.1 Disable Windows 7 / 8 / 10 / 11 hibernation from GUI 

Disable it through:

Control Panel -> All Control Panel Items -> Power Options -> Edit Plan Settings -> Change advanced power settings


 like shown in below screenshot:

Windqows-power-options-Advanced-settings-Allow-Hybrid-sleep-option-menu-screenshot

 

3.2 Disable Windows 7 / 8 / 10 / 11 hibernation from command line

Disabling hibernation is done in the same way through the powercfg.exe command; to disable it
if you're short of disk space and want to reclaim the space used by hiberfil.sys:

run as Administrator in Command Line Windows (cmd.exe)
 

powercfg.exe /hibernate off

If you later need to switch hibernation back on:
 

powercfg.exe /hibernate on


disable-hiberfile-windows-screenshot

3.3 Disable Windows hibernation on legacy Windows XP

On XP to disable hibernation open

1. Power Options Properties
2. Select Hibernate
3. Select Enable Hibernation to clear the checkbox and disable Hibernation mode. 
4. Select OK to apply the change.

Close the Power Options Properties box. 

enable-disable-hibernate-windows-xp-menu-screenshot

To sum it up

We have learned some basics on Windows swapping and hibernation and I've tried to give some insight on how these files, if misconfigured, could lead to degraded Win OS performance. In any case, using an SSD (as of 2022) to store both files is a best practice; for machines that have plenty of memory you can even try to completely disable / remove the files. It was shown how to manage pagefile.sys and hiberfil.sys across different Windows Operating System versions, both from the GUI and via the command line, as well as how you can configure pagefile.sys to be cleared up on PC reboot.
 

Fix Out of inodes on Postfix Linux Mail Cluster. How to clean up filesystem running out of Inodes, Filesystem inodes on partition is 100% full

Wednesday, August 25th, 2021

Inode_Entry_inode-table-content

Recently we have faced a strange issue with one of our Clustered Postfix Mail servers (the cluster has 2 nodes, each with a configured Postfix daemon mail server, running in an OpenVZ virtualized environment,
plus a heartbeat that checks the liveability of the cluster and switches nodes in case one of the two gets broken for some reason), pretty much a standard SMTP cluster.

So far so good, but since the cluster is kind of abandoned, pretty much legacy nowadays and used just for some monitoring emails from different scripts and systems on servers, it was not really checked thoroughly for years, and logically, all of a sudden the alarming email content sent via the cluster stopped working.

The normal sysadmin job here was to analyze what is going on with the cluster and fix it ASAP. After some very basic analyzing we caught that the problem was caused by an "inodes full" (100% of available inodes occupied) condition, e.g. the file system ran out of inodes on both machines, perhaps due to a pengine heartbeat process bug leading to a high number of .bz2 pengine recovery archive files stored in /var/lib/pengine

Below are the few steps taken to analyze and fix the problem.
 

1. Finding out about the system running out of inodes problem


After logging on to the system and not immediately finding what is wrong, all I could see from crm_mon was that the cluster was broken.
Plenty of emails were left inside the postfix mail queue, visible with the standard command

[root@smtp1: ~ ]# postqueue -p

It took me a while to find out the problem is with inodes, because a simple df -h was showing the systems have enough space but still the cluster quorum was not complete.
A bit of further investigation led me to a simple df -i reporting that the inodes on the local filesystems on both our SMTP1 and SMTP2 got all occupied.

[root@smtp1: ~ ]# df -i
Filesystem            Inodes   IUsed   IFree IUse% Mounted on
/dev/simfs            500000   500000  0   100% /
none                   65536      61   65475    1% /dev

As you can see, the number of inodes on the Virtual Machine is unfortunately depleted.

The next step was to check the directories occupying most inodes, as this is the place from where files could be temporarily moved to a remote server filesystem or to another partition with space on the server's locally attached drives.
The below command gives an ordered list of the directories under the root filesystem / with their respective number of files / inodes;
the more files under a directory, the more inodes are occupied by the files on the filesystem.

 

run-out-if-inodes-what-is-inode-find-out-which-filesystem-or-directory-eating-up-all-your-system-inodes-linux_inode_diagram.gif
1.1 Getting which directory consumes most of the inodes on the systems

 

[root@smtp1: ~ ]# { find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n; } 2>/dev/null
….
…..

…….
    586 /usr/lib64/python2.4
    664 /usr/lib64
    671 /usr/share/man/man8
    860 /usr/bin
   1006 /usr/share/man/man1
   1124 /usr/share/man/man3p
   1246 /var/lib/Pegasus/prev_repository_2009-03-10-1236698426.308128000.rpmsave/root#cimv2/classes
   1246 /var/lib/Pegasus/prev_repository_2009-05-18-1242636104.524113000.rpmsave/root#cimv2/classes
   1246 /var/lib/Pegasus/prev_repository_2009-11-06-1257494054.380244000.rpmsave/root#cimv2/classes
   1246 /var/lib/Pegasus/prev_repository_2010-08-04-1280907760.750543000.rpmsave/root#cimv2/classes
   1381 /var/lib/Pegasus/prev_repository_2010-11-15-1289811714.398469000.rpmsave/root#cimv2/classes
   1381 /var/lib/Pegasus/prev_repository_2012-03-19-1332151633.572875000.rpmsave/root#cimv2/classes
   1398 /var/lib/Pegasus/repository/root#cimv2/classes
   1696 /usr/share/man/man3
   400816 /var/lib/pengine

Note, the above command orders the directories from the least to the most contained files, and obviously the bottleneck directory that is over-eating filesystem inodes with an exceeding amount of files is
/var/lib/pengine
 

2. Backup the old multitude of files, just in case something goes wrong with the cluster after some files are wiped out


The next logical step of course is to check what is going on inside /var/lib/pengine, just to find that a very, very large amount of pe-input-*NUMBER*.bz2 files was suddenly produced.

 

[root@smtp1: ~ ]# ls -1 pe-input*.bz2 | wc -l
 400816


The files are produced by the pengine process, which is one of the processes controlling the heartbeat cluster state; presumably it is done by the running process:

[root@smtp1: ~ ]# ps -ef|grep -i pengine
24        5649  5521  0 Aug10 ?        00:00:26 /usr/lib64/heartbeat/pengine


Hence, in order to fix the issue and to prevent inconsistencies in the cluster due to the file deletion, I copied the whole directory to another mounted partition (you can mount one remotely with sshfs for example, or use a local one if you have it):

[root@smtp1: ~ ]# cp -rpf /var/lib/pengine /mnt/attached_storage


and proceeded to clean up the multitude of files that are older than 2 years (720 days):


3. Clean up /var/lib/pengine files that are older than two years with a short loop and the find command

 


First I made a list with all the files to be removed in an external text file and quickly reviewed it with less, like so:

[root@smtp1: ~ ]#  cd /var/lib/pengine
[root@smtp1: ~ ]# find . -type f -mtime +720 | grep -v pe-error.last | grep -v pe-input.last | grep -v pe-warn.last > /home/myuser/pengine_older_than_720days.txt
[root@smtp1: ~ ]# less /home/myuser/pengine_older_than_720days.txt


Once the list was reviewed, you can use the command below as a final dry run: it only echoes all files older than 2 years that are different from pe-error.last / pe-input.last / pe-warn.last (which might be needed for proper cluster operation); replace echo with rm -f to actually delete them.

[root@smtp1: ~ ]#  for i in $(find . -type f -mtime +720 -exec echo '{}' \;|grep -v pe-error.last | grep -v pe-input.last |grep -v pe-warn.last); do echo $i; done
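
Alternatively, the whole dry-run / delete cycle can be done with find alone, excluding the three .last files, which is a bit safer with odd file names (a sketch; run it first without the -delete at the end to review what would go away):

[root@smtp1: ~ ]# find . -type f -mtime +720 ! -name 'pe-error.last' ! -name 'pe-input.last' ! -name 'pe-warn.last' -delete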


Another approach to the situation is to simply review all the files inside /var/lib/pengine and delete files based on the year of creation; for example, to delete all files in /var/lib/pengine from 2010 you can run something like:
 

[root@smtp1: ~ ]# for i in $(ls -al|grep -i ' 2010 ' | awk '{ print $9 }' |grep -v 'pe-warn.last'); do rm -f $i; done


4. Monitor inode freeing in real time

While doing the clearance of the old unnecessary pengine heartbeat archives you can open another ssh console to the server and watch how the inodes get freed up with a command like:

 

# watch the used inode count decreasing as files get deleted

[root@csmtp1: ~ ]# watch 'df -i'


5. Restart basic Linux services producing pid files, logs etc. to make them workable again (some services might not notice that inodes on the hard drive were freed up)

Because the filesystem on the system was full, some services started misbehaving and /var/log logging was impacted, so I had to also restart them; in our case this is the heartbeat itself,
which checks cluster node availability, as well as the logging daemon service rsyslog.

 

# restart rsyslog and heartbeat services
[root@csmtp1: ~ ]# /etc/init.d/heartbeat restart
[root@csmtp1: ~ ]# /etc/init.d/rsyslog restart

The systems also had a legacy data integrity service, samhain, so I had to restart this service as well to force the /var/log/samhain log file to again continuously get data written to the HDD.

# Restart samhain service init script 
[root@csmtp1: ~ ]# /etc/init.d/samhain restart


6. Check that enough inodes are freed up with df

[root@smtp1 log]# df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/simfs 500000 410531 19469 91% /
none 65536 61 65475 1% /dev


I had to repeat the same process on the second Postfix cluster node smtp2, and afterwards check the status of the smtp2 node and the postfix queue as below; following the same procedure brought the second smtp2 cluster member back to normal, as expected 🙂

 

7. Check the cluster node quorum is complete, e.g. postfix cluster is operating normally

 

# Test if email cluster is ok with pacemaker resource cluster manager – lt-crm_mon
 

[root@csmtp1: ~ ]# crm_mon -1
============
Last updated: Tue Aug 10 18:10:48 2021
Stack: Heartbeat
Current DC: smtp2.fqdn.com (bfb3d029-89a8-41f6-a9f0-52d377cacd83) – partition with quorum
Version: 1.0.12-unknown
2 Nodes configured, unknown expected votes
4 Resources configured.
============

Online: [ smtp2.fqdn.com smtp1.fqdn.com ]

failover-ip (ocf::heartbeat:IPaddr2): Started csmtp1.ikossvan.de
Clone Set: postfix_clone
Started: [ smtp2.fqdn.com smtp1.fqdn.com ]
Clone Set: pingd_clone
Started: [ smtp2.fqdn.com smtp1.fqdn.com ]
Clone Set: mailto_clone
Started: [ smtp2.fqdn.com smtp1.fqdn.com ]

 

8. Force resend the few hundred thousand emails left in the email queue


After some inodes got freed up due to the file deletion, I forced the queued emails a couple of times to be immediately resent to the remote mail destinations with the cmd:

 

# force emails in the queue to be resent with postfix

[root@smtp1: ~ ]# sendmail -q


– It was useful to watch in real time how the queued emails quickly decrease (queued mails are successfully sent to the destination addresses) with:

 

# Monitor the decreasing size of the email queue
[root@smtp1: ~ ]# watch 'postqueue -p | grep -c "@"'

Linux: Howto Fix “N: Repository ‘http://deb.debian.org/debian buster InRelease’ changed its ‘Version’ value from ‘10.9’ to ‘10.10’” error to resolve apt-get release update issue

Friday, August 13th, 2021

Linux's surprises and disorganization are continuously growing day by day and I start to realize it is becoming mostly impossible to easily support this piece of hackware bundled together.
Usually, during the last 5 – 7 years, I rarely had any general issues with using:

 apt-get update && apt-get upgrade && apt-get dist-upgrade 

to raise a server's working stable Debian Linux version packages, e.g. version X.Y to version X.Z (for example bumping the release of Debian Jessie from 8.1 to 8.2).

Today I just tried to follow this well known and established procedure that, of course, nowadays is better done with the newer "apt" command instead of the legacy "apt-get".
And the set of

 

# apt-get update && apt-get upgrade && apt-get dist-upgrade

 

has triggered below shitty error:
 

root@zabbix:~# apt-get update && apt-get upgrade
Get:1 http://security.debian.org buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [122 kB]
Get:3 http://security.debian.org buster/updates/non-free Sources [688 B]
Get:4 http://repo.zabbix.com/zabbix/5.0/debian buster InRelease [7096 B]
Get:5 http://security.debian.org buster/updates/main Sources [198 kB]
Get:6 http://security.debian.org buster/updates/main amd64 Packages [300 kB]
Get:7 http://security.debian.org buster/updates/main Translation-en [157 kB]
Get:8 http://security.debian.org buster/updates/non-free amd64 Packages [556 B]
Get:9 http://deb.debian.org/debian buster/main Sources [7836 kB]
Get:10 http://repo.zabbix.com/zabbix/5.0/debian buster/main Sources [1192 B]
Get:11 http://repo.zabbix.com/zabbix/5.0/debian buster/main amd64 Packages [4785 B]
Get:12 http://deb.debian.org/debian buster/non-free Sources [85.7 kB]
Get:13 http://deb.debian.org/debian buster/contrib Sources [42.5 kB]
Get:14 http://deb.debian.org/debian buster/main amd64 Packages [7907 kB]
Get:15 http://deb.debian.org/debian buster/main Translation-en [5968 kB]
Get:16 http://deb.debian.org/debian buster/main amd64 Contents (deb) [37.3 MB]
Get:17 http://deb.debian.org/debian buster/contrib amd64 Packages [50.1 kB]
Get:18 http://deb.debian.org/debian buster/non-free amd64 Packages [87.7 kB]
Get:19 http://deb.debian.org/debian buster/non-free Translation-en [88.9 kB]
Get:20 http://deb.debian.org/debian buster/non-free amd64 Contents (deb) [861 kB]
Fetched 61.1 MB in 22s (2774 kB/s)
Reading package lists… Done
N: Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '10.9' to '10.10'


As I have come to realize nowadays, since Linux started originally as a 'Hackers' operating system, its legacy is just one big hack and everything from simple maintenance up to the higher and more sophisticated things requires a workaround 'hack'.

 

This time the hack to resolve the error:
 

N: Repository 'http://deb.debian.org/debian buster InRelease' changed its 'Version' value from '10.9' to '10.10'


boils down to running the cmd:
 

debian-server:~# apt-get update --allow-releaseinfo-change
Hit:1 http://ftp.de.debian.org/debian buster-backports InRelease
Hit:2 http://ftp.debian.org/debian stable InRelease
Hit:3 http://security.debian.org stable/updates InRelease
Get:5 https://packages.sury.org/php buster InRelease [6837 B]
Get:6 https://download.docker.com/linux/debian stretch InRelease [44,8 kB]
Get:7 https://packages.sury.org/php buster/main amd64 Packages [317 kB]
Ign:4 https://attic.owncloud.org/download/repositories/production/Debian_10  InRelease
Get:8 https://download.owncloud.org/download/repositories/production/Debian_10  Release [964 B]
Get:9 https://packages.sury.org/php buster/main i386 Packages [314 kB]
Get:10 https://download.owncloud.org/download/repositories/production/Debian_10  Release.gpg [481 B]
Err:10 https://download.owncloud.org/download/repositories/production/Debian_10  Release.gpg
  The following signatures were invalid: DDA2C105C4B73A6649AD2BBD47AE7F72479BC94B
Err:11 https://ookla.bintray.com/debian generic InRelease
  403  Forbidden [IP: 52.39.193.126 443]
Reading package lists… Done
N: Repository 'https://packages.sury.org/php buster InRelease' changed its 'Suite' value from '' to 'buster'
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://download.owncloud.org/download/repositories/production/Debian_10  Release: 

apt-get-update-allow-releaseinfo-change-debian-linux-screenshot

Onwards, to upgrade the system to the latest .deb packages, as usual run:

# apt-get -y update && apt-get upgrade -y

 

and updates should be applied as usual, with some prompts on whether you prefer to keep or replace existing service configuration and some information on general changes that might affect your installed services. In a few minutes and a few prompts, hopefully, your Debian OS should be up to the latest stable.
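
If this keeps biting you on unattended systems, the same acceptance can be made permanent through an apt configuration drop-in instead of passing the flag each time; a minimal sketch (the file name 99releaseinfochange is an arbitrary choice):

# cat /etc/apt/apt.conf.d/99releaseinfochange
Acquire::AllowReleaseInfoChange::Suite "true";
Acquire::AllowReleaseInfoChange::Version "true";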

Start Stop Restart Microsoft IIS Webserver from command line and GUI

Thursday, April 17th, 2014

start-stop-restart-microsoft-iis-howto-iis-server-logo
For a decommissioning project just recently I had the task to stop Microsoft IIS on a Windows Server system.
If you have been into security for a while you know well how many vulnerabilities the Microsoft IIS (Internet Information Services) Webserver used to have. Nowadays things with IIS are better, but anyway it is better not to use it if possible …

No matter what the reason, if you need to make IIS stop serving web pages here is how to do it via the command line:

At Windows Command Prompt, type:

net stop WAS

If the command returns an error message, to stop it type:

net stop W3SVC

stop-microsoft-IIS-webservice
Just in case you have to start it again, run:

net start W3SVC

start-restart-IIS-webserver-screenshot
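
Alternatively, on most IIS installations the whole web stack can be stopped and started with the iisreset utility from an elevated command prompt (a quick sketch):

iisreset /stop
iisreset /start
iisreset /restart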

For those who prefer to do it from GUI interface, launch services.msc command from Windows Run:

> services.msc

services-msc-stop-microsoft-iis-webserver

In the list of services look for
IIS Admin Service and HTTP SSL
a) (Click over it with right mouse button -> Properties)
b) Set Startup type to Manual
c) Click Stop Button

You're done, now IIS is stopped. To make sure it is stopped you can run from cmd.exe:

telnet localhost 80

When it is not working you should get 'Could not open connection to the host. on port 80: Connection failed', like shown in the screenshot above.

Report haproxy node switch script useful for Zabbix or other monitoring

Tuesday, June 9th, 2020

zabbix-monitoring-logo
For those who administer a corosync clustered haproxy and need monitoring in case the main configured Haproxy node in the cluster is changed, I've developed a small script to be integrated with the zabbix-agent installed on the nodes, reporting to a central zabbix server via a zabbix proxy.
The script is very simple: it assumes the DC1 variable is the default used haproxy node and DC2 and DC3 are 2 backup nodes. The script is made to use crm_mon, which is not installed by default on each server, so if you'll be using it you'll have to install it first, but anyway the script can easily be adapted to use the pcs cmd instead.

Below is the bash shell script:

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }'); do ((f++)); DC[$f]="$i"; done; \
DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); \
if [ "$DC" == "${DC[1]}" ]; then echo "1 Default DC Switched to ${DC[1]}"; elif [ "$DC" == "${DC[2]}" ]; then \
echo "2 Default DC Switched to ${DC[2]}"; elif [ "$DC" == "${DC[3]}" ]; then echo "3 Default DC: ${DC[3]}"; fi


To configure it with zabbix monitoring, it can be hooked in via a UserParameter script.

The way I configured it in Zabbix is like so:


1. Create the userparameter_active_node.conf

The below script is for a 3 node Haproxy cluster

# cat > /etc/zabbix/zabbix_agentd.d/userparameter_active_node.conf

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }'); do ((f++)); DC[$f]="$i"; done; \
DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); \
if [ "$DC" == "${DC[1]}" ]; then echo "1 Default DC Switched to ${DC[1]}"; elif [ "$DC" == "${DC[2]}" ]; then \
echo "2 Default DC Switched to ${DC[2]}"; elif [ "$DC" == "${DC[3]}" ]; then echo "3 Default DC: ${DC[3]}"; fi

Once pasted, to save the file press CTRL + D
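
For the agent to pick up the newly added UserParameter, the zabbix agent service usually has to be restarted (a sketch, assuming a systemd based distro):

# systemctl restart zabbix-agent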


A slightly improved version of the script for 2 nodes is like so:
 

UserParameter=active.dc,f=0; for i in $(sudo /usr/sbin/crm_mon -n -1|grep -i 'Node ' |awk '{ print $2 }' | sed -e 's#:##g'); do DC_ARRAY[$f]="$i"; ((f++)); done; GET_CURR_DC=$(sudo /usr/sbin/crm_mon -n -1 | grep 'Current DC' | awk '{ print $1 " " $2 " " $3}' | awk '{ print $3 }'); if [ "$GET_CURR_DC" == "${DC_ARRAY[0]}" ]; then echo "1 Default DC ${DC_ARRAY[0]}"; fi; if [ "$GET_CURR_DC" == "${DC_ARRAY[1]}" ]; then echo "2 Default Current DC Switched to ${DC_ARRAY[1]} Please check "; fi; if [ -z "$GET_CURR_DC" ] || [ -z "${DC_ARRAY[1]}" ]; then printf "Error something might be wrong with HAProxy Cluster on  $HOSTNAME "; fi;


The haproxy_active_DC_zabbix.sh script, with a bit more comments as explanations, is available here.

2. Configure access to /usr/sbin/crm_mon for the zabbix user in sudoers

 

# vim /etc/sudoers

zabbix          ALL=NOPASSWD: /usr/sbin/crm_mon


3. Configure in Zabbix for active.dc key Trigger and Item

active-node-switch1
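
Before building the Item and Trigger it is handy to verify the key straight from the Zabbix server or proxy with zabbix_get (a sketch; the hostname and the default agent port 10050 are assumptions):

# zabbix_get -s haproxy-node1.example.com -p 10050 -k active.dc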

Export / Import PuTTY Tunnels SSH Sessions from one to another Windows machine howto

Thursday, January 31st, 2019

Putty-copy-ssh-tunnels-howto-from-one-to-another-windows-machine-3

As I started on a new job position (Linux Architect) in November 2018 at Itelligence AG as a contractor (External Service), a great German company who hires the best IT specialists out there and offers flexible time schedules for employees doing various very cool advanced IT operations and strategic advancement of SAP's Cloud Technology and Services for SAP SE (SAP S4HANA and HEC, HANA Enterprise Cloud), I was given as work hardware a shiny Lenovo Thinkpad 500 Laptop with Windows 10 OS (SAP pre-installed), and I needed to make some SSH Tunnels to machines (Hop Stations / Jump hosts). After some experimenting with MobaXterm Free (Personal Edition 11.0), the presumable tunnel limitations of the free client, as well as my laziness to add the multiple ssh tunnels to different ssh / rdp / vnc etc. servers by hand, I finally decided to just copy all the tunnels from a colleague who runs Putty and again use the good old Putty, the old school Winblows SSH Terminal Client, but just for creating the SSH tunnels, and for the rest use MobaXterm, just like in the old times while still employed at Hewlett Packard. For that reason I had to copy the Tunnels from my dear German colleague Henry Beck (a good hearted colleague who works in the field of Storage, dealing with NetApp filer Clusters, QNap etc.).

Till that moment I had no idea how copying saved SSH Tunnel definitions is possible. I did a quick research just to find out this is done not with the Putty interface itself but instead by dumping the Putty stored Windows Registry records into a file, transferring it to the PC where the Tunnels need to be imported, and then loading it into the registry (either by double clicking the registry file or by using the Windows registry editor command line interface reg). Here is how:
 

1. Export

 

Run cmd.exe (note: the below command requires an elevated Run as Administrator prompt):

Only sessions:

regedit /e "%USERPROFILE%\Desktop\putty-sessions.reg" HKEY_CURRENT_USER\Software\SimonTatham\PuTTY\Sessions

All settings:

regedit /e "%USERPROFILE%\Desktop\putty.reg" HKEY_CURRENT_USER\Software\SimonTatham

Powershell:

If you have PowerShell installed on the machine, to dump:

Only sessions:

 

reg export HKCU\Software\SimonTatham\PuTTY\Sessions ([Environment]::GetFolderPath("Desktop") + "\putty-sessions.reg")

All settings:

reg export HKCU\Software\SimonTatham ([Environment]::GetFolderPath("Desktop") + "\putty.reg")


2. Import

Double-click on the 

*.reg

 file and accept the import.

 

Alternative ways:

 

cmd.exe (requires an elevated command prompt):

regedit /i putty-sessions.reg
regedit /i putty.reg

PowerShell:

reg import putty-sessions.reg
reg import putty.reg



Below are some things to consider:

Note! Do not replace SimonTatham with your username.

 

Note! It will create a .reg file on the Desktop of the current user (for a different location modify the path).

 

Note! It will not export your related (old system stored) SSH keys.

What to expect next?

Putty-Tunnels-SSH-Sessions-screenshot-Windows

The result is that in Putty you will have the Tunnel sessions loadable when you launch the (portable or installed) Putty version.
Press the Load button over the required saved Tunnel in the sessions list and there you go; under

 

Connection SSH -> Tunnels 

 

you will see all the copied tunnels.

Enjoy!

Ansible Quick Start Cheatsheet for Linux admins and DevOps engineers

Wednesday, October 24th, 2018

ansible-quick-start-cheetsheet-ansible-logo

Ansible is widely used nowadays (a configuration management, deployment, and task execution system) for mass service deployments on multiple servers and clustered environments like Kubernetes clusters (with multiple pod replicas), virtual swarms running XEN / IPKVM virtualization hosting multiple nodes etc.

Ansible can be used to configure or deploy GNU / Linux tools and services such as Apache / Squid / Nginx / MySQL / PostgreSQL etc. It is pretty much like Puppet (a server / services lifecycle management tool), except it is less complicated to start with, which often makes it the tool of choice for mass deployment (devops) automation.

Ansible is used for multi-node deployments and remote task execution on groups of servers; the big pro of it is that it does all its stuff over simple SSH on the remote nodes (servers) and does not require extra services or listening daemons like Puppet. Combined with Docker containerization it is used a lot for deploying inside Cloud environments such as Amazon AWS / Google Cloud Platform / SAP HANA / OpenStack etc.

Ansible-Architechture-What-Is-Ansible-Edureka

0. Instaling ansible on Debian / Ubuntu Linux


Ansible is a python script and because of that depends heavily on python, so to make it run you will need to have a working python installed on the local and remote servers.

Ansible is as easy to install as running the apt cmd:

 

# apt-get install --yes ansible
 

The following additional packages will be installed:
  ieee-data python-jinja2 python-kerberos python-markupsafe python-netaddr python-paramiko python-selinux python-xmltodict python-yaml
Suggested packages:
  sshpass python-jinja2-doc ipython python-netaddr-docs python-gssapi
Recommended packages:
  python-winrm
The following NEW packages will be installed:
  ansible ieee-data python-jinja2 python-kerberos python-markupsafe python-netaddr python-paramiko python-selinux python-xmltodict python-yaml
0 upgraded, 10 newly installed, 0 to remove and 1 not upgraded.
Need to get 3,413 kB of archives.
After this operation, 22.8 MB of additional disk space will be used.

# apt-get install --yes sshpass

 

Installing Ansible on Fedora Linux is done with:

 

# dnf install -y ansible sshpass

 

On CentOS to install:
 

# yum install -y ansible sshpass

sshpass needs to be installed only if you plan to use ssh password prompt authentication with ansible.

Ansible is also installable via the python-pip tool; if you need to install a specific version of ansible you have to use pip instead, even though the package is available as an installable package on most linux distros.

Ansible has a lot of pros and cons and there are multiple articles already written by people for and against it in favour of Chef or Puppet. As I recently started learning Ansible, the most important thing I found to know about it is that though many things can be done directly using a simple command line, the tool is designed for remote installation of server services using specially prepared .yaml format configuration files. The power of Ansible comes from the use of Ansible Playbooks, which are yaml scripts that tell ansible how to do its activities step by step on the remote server. In this article, I'm giving a quick cheat sheet to start quickly with it.
 

1. Remote commands execution with Ansible
 

The first thing to do to start with it is to add the desired hostnames ansible will operate on. It can be done either globally (if you have a number of remote nodes to deploy stuff to periodically) by using /etc/ansible/hosts, or with a custom hosts file for each custom ansible script developed.

a. Ansible main config files

A common ansible /etc/ansible/hosts definition looks something like that:

 

# cat /etc/ansible/hosts
[mysqldb]
10.69.2.185
10.69.2.186
[master]
10.69.2.181
[slave]
10.69.2.187
[db-servers]
10.69.2.181
10.69.2.187
[squid]
10.69.2.184

The hosts to execute on can also be provided via the shell variable $ANSIBLE_HOSTS.
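
For example, a per-project inventory can be pointed to like this (a small sketch; the inventory path is made up, and newer ansible versions prefer the -i flag or $ANSIBLE_INVENTORY):

ANSIBLE_HOSTS=~/ansible/my_inventory ansible all -m ping

ansible all -i ~/ansible/my_inventory -m ping
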
b) Check if the remote hosts are reachable / execute commands on all remote hosts

To test whether your hosts are properly configured in /etc/ansible/hosts you can ping all defined hosts with:

 

ansible all -m ping


ansible-check-hosts-ping-command-screenshot

This makes ansible try to connect to the remote hosts (if you have properly configured SSH public key authorization); the command should return success statuses on every host.

 

ansible all -a "ifconfig -a"


If you don't have SSH keys configured you can also authenticate with a password prompt (assuming all hosts are configured with the same password) with:

 

ansible all -a "ip a show" -u hipo --ask-pass


ansible-show-ips-ip-a-command-screenshot-linux

If you have configured a group of hosts via the hosts file you can also run certain commands on just a certain host group, like so:

 

ansible <host-group> -a <command>

It is a good idea to always check /etc/ansible/ansible.cfg, which is the system global (main) ansible config file.

c) List defined host groups
 

ansible localhost -m debug -a 'var=groups.keys()'
ansible localhost -m debug -a 'var=groups'

d) Searching remote server variables

 

# Search remote server variables
ansible localhost -m setup -a 'filter=*ipv4*'

 

 

ansible localhost -m setup -a 'filter=ansible_domain'

 

 

ansible all -m setup -a 'filter=ansible_domain'

 

 

e) Install / remove packages via the yum and apt modules

# uninstall a package on RPM based distros
ansible centos -s -m yum -a "name=telnet state=absent"

# uninstall a package on an APT based distro
ansible localhost -s -m apt -a "name=telnet state=absent"

 

 

2. Debugging – Listing information about remote hosts (facts) and state of a host

 

# All facts for one host
ansible <hostname> -m setup

# Only ansible facts for one host
ansible <hostname> -m setup -a 'filter=ansible_eth*'

# Only facter facts but for all hosts
ansible all -m setup -a 'filter=facter_*'


To save the outputted information per host in separate files in, let's say, ~/ansible/host_facts:

 

ansible all -m setup --tree ~/ansible/host_facts

 

3. Playing with Playbooks deployment scripts

 

a) Syntax Check of a playbook yaml

 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --syntax-check


b) Get general info about a playbook, such as what the playbook would do on the remote hosts (tasks to run), and list the hosts defined for the playbook (like the pinging above).

 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --list-hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --list-tasks


To get an idea of what a yaml playbook looks like, here is an example from the official ansible docs that deploys a simple Apache webserver on the remotely defined hosts.
 


- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      name: httpd
      state: latest
  - name: write the apache config file
    template:
      src: /srv/httpd.j2
      dest: /etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service:
      name: httpd
      state: started
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted

To give it a quick try, save the file as webserver.yml and give it a run via the ansible-playbook command:
 

ansible-playbook -s playbooks/webserver.yml

 

The -s option instructs ansible to run the play on the remote server with super user (root) privileges.
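
On newer ansible releases the -s flag is deprecated in favour of the become options, so an equivalent invocation would be (a sketch):

ansible-playbook playbooks/webserver.yml --become --become-user root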

The power of ansible is in its modules, which are constantly growing over time; a complete set of Ansible supported modules is in its official documentation.

Ansible-running-playbook-Commands-Task-script-Successful-output-1024x536

There is a lot to say about playbooks; just to give the brief, they have their own language with templates, tasks and handlers, and a playbook could have one or multiple plays inside (for instance instructions for deployment of one or more services).

The downside of playbooks is that they are hard to write from scratch and edit, because yaml syntax is much stricter than a normal old school sysadmin configuration file.
I've been stuck with problems modifying and writing .yaml files, and I should say the community in #ansible on irc.freenode.net was very helpful in helping me debug the obscure errors.

yamllint (the YAML Linter tool) comes in handy at times when facing yaml syntax errors; to use it install via apt:
 

# apt-get install --yes yamllint


a) Running ansible in "dry run" mode to just show what ansible would do, without changing anything
 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --check


b) Running a playbook with a different user and a separate SSH key

 

ansible-playbook playbooks/your_playbook.yml --user ansible-user

ansible -m ping hosts --private-key=~/.ssh/keys/custom_id_rsa -u centos

 

c) Running an ansible playbook only for certain hostnames that are part of a bigger host group

 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2,host3"


d) Run Ansible on remote hosts in parallel

To run on a row of 10 hosts in parallel:
 

# Run 10 hosts parallel
ansible-playbook <File.yaml> -f 10            


e) Passing variables to .yaml scripts using commandline

Ansible has the ability to use variables pre-defined in .yml playbooks. These variables can later be passed from the shell cli; here is an example:

# Example of variable substitution passed from the command line; the vars in varsubst.yaml (if present) are defined / replaced
ansible-playbook playbooks/varsubst.yaml --extra-vars "myhosts=localhost gather=yes pkg=telnet"
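
For reference, a minimal varsubst.yaml that such a call could target might look roughly like this (a sketch; the variable names simply mirror the ones passed on the command line above):

- hosts: "{{ myhosts }}"
  gather_facts: "{{ gather }}"
  tasks:
    - name: ensure the requested package is present
      package:
        name: "{{ pkg }}"
        state: present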

 

4. Ansible Galaxy, a Docker Hub like large repository with playbook (script) files

 

Ansible Galaxy has about 10000 active users who contribute ansible automation playbooks in fields such as Development / Networking / Cloud / Monitoring / Database / Web / Security etc.

To install from ansible galaxy use ansible-galaxy

# install from galaxy the geerlingguy mysql playbook
ansible-galaxy install geerlingguy.mysql
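
Once installed, the role is referenced from a playbook more or less like this (a sketch; the host group name is taken from the example /etc/ansible/hosts above):

- hosts: db-servers
  become: yes
  roles:
    - geerlingguy.mysql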


The available packages you can use as templates for your purposes are not as many as with Puppet, since Ansible is younger and not corporate supported like Puppet; anyhow, they are a lot and do cover most basic sysadmin needs for mass deployments. Besides, there are plenty of other unofficial yaml ansible scripts in various github repos.