Posts Tagged ‘file’

How to redirect / forward all postfix emails to one external email address?

Thursday, October 29th, 2020

Postfix_mailserver-logo-howto-forward-email-with-regular-expression-or-maildrop

Let's say you're a sysadmin doing an email migration of a clustered SMTP setup and, for a while, you want to capture all incoming email traffic and redirect it (forward it) towards another single mailbox, where you can review the mail flow for a few hours and analyze it more deeply. This approach is useful on small or medium-sized mail servers, but won't be so useful on a mail server that handles hundreds of mails per hour. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to single external email address?

There are different ways to do this. If you don't want to just intercept the traffic and create a copy of it with the always_bcc built-in postfix option (as pointed out in my previous article postfix copy every email to a central mailbox), you can copy the mail flow via a custom written dispatcher script run by the MTA on each mail arrival, or use maildrop's filtering functionality. Below is a very simple maildrop example for the case where you want to filter out and deliver to an external email address only mail targeted at a specific domain.

If you use maildrop as the local delivery agent and want to copy email targeted at a specific domain to another defined email address, use a rule like:

if ( /^From:.*domain\.com/:h ) {
  cc "!someothermail@domain2.com"
}


To use maildrop to forward mail coming from a specific sender towards an existing local mailbox on the postfix server and on to an external email address, use something like:

if ( /^From: .*linus@mail.example.com.*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail linuxbox@collector.example.com"
        }
}

Then, to make the filter active, assuming the user has a physical UNIX mailbox, paste the above into the local user's $HOME/.mailfilter.
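If maildrop is not yet hooked in as the local delivery agent, a minimal sketch of how it is usually wired into /etc/postfix/main.cf is below (the /usr/bin/maildrop path is an assumption, check where your distribution installs it):

# /etc/postfix/main.cf - hand local deliveries over to maildrop so ~/.mailfilter rules are applied
mailbox_command = /usr/bin/maildrop -d ${USER}

Follow it with a postfix reload so the setting takes effect.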

What to do if the mail delivered via your Email-Server.com comes from monitoring and alarming scripts that send towards many mailboxes which no longer exist after the migration?

To capture all traffic that is attempted to be sent via the mail server and forward all served mails towards a single external mail address, we can use postfix's nice capability to understand PCRE (Perl compatible) regular expressions. Regular expressions in postfix of course have their specifics, so I recommend you take a look at the postfix regexp table documentation here, as well as check the Postfix Regex Tester / Debugger online tool – useful to validate a regexp you want to implement.

How to use a postfix regular expression to redirect all emails sent via your postfix mail relayhost towards an external mail address?

 

In /etc/postfix/main.cf include this line near the bottom or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

The hash: map defines the virtual file, where you can list any of the virtual domains you want the local postfix to handle; the regexp: map loads the file read by postfix where you can put the regular expression that is applied to every email coming in via SMTP port 25 or the encrypted submission ports 465 / 587 etc.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp

/.+@.+/ external-forward-email@gmail.com

Next, build the map file (this will generate /etc/postfix/virtual-regexp.db):
 

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run postmap again to generate the .db:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual


Note that in /etc/postfix/virtual you can add the mail domains for which you want the MTA to accept mail as local mail.
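For illustration only, a couple of made-up entries in /etc/postfix/virtual could look like this (domains and local users are placeholders, adjust to your own setup):

# /etc/postfix/virtual - map virtual addresses to local mailboxes
postmaster@mydomain.com    root
info@mydomain.com          hipo

Remember to run postmap /etc/postfix/virtual after every change so the .db file is rebuilt.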

In case you need to view all virtual domains the postfix server is configured to accept mail for locally:
 

$ postconf -n | grep virtual
virtual_alias_domains = mydomain.com myanotherdomain.com
virtual_alias_maps = hash:/etc/postfix/virtual


The regexp /.+@.+/ external-forward-email@gmail.com will start forwarding mails immediately after you restart the MTA with:

# systemctl restart postfix


If you want to exclude certain target mail domains from being captured by the above regexp, place in /etc/postfix/virtual-regexp:

/.+@exclude-domain1.com/ @exclude-domain1.com
/.+@exclude-domain2.com/ @exclude-domain2.com

Time for a test – send a test email and check that the forwarding works as expected:
 

# echo -e "Tseting body" | mail -s "testing subject" -r "testing@test.com" whatevertest-user@mail-recipient-domain.com

Deny DHCP Address by MAC on Linux

Thursday, October 8th, 2020

Deny DHCP addresses by MAC ignore MAC to not be DHCPD leased on GNU / Linux howto

I have not blogged for a long time because I was on a few weeks of vacation, at home with a small cute baby. However, as a hardcore (and a bit dumb) system administrator, I spent some of my vacation working on bringing up www.pc-freak.net and the other websites hosted here as high-availability ones, living on 2 webservers running a master-to-master MySQL replication backend database. All of this is set up as round-robin DNS hosts on 2 servers – one old Lenovo ThinkCentre Edge71 as well as a brand new real server, a Lenovo ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure a good degree of Internet connectivity and to ensure the websites hosted on both machines are not going to die if one of the 2 configured fiber optics Internet providers (Bergon.NET) has issues, I've rented another Internet line from the VIVACOM Mobile Fiber Internet provider – a 1 Gigabit fiber optics line.
Next to that, to guarantee that the database, webserver, mail server, memcached and other running services don't hit downtime due to an electricity power outage, two powerful FPS Fortron Uninterruptible Power Supplies (UPS) are connected, each of which can keep the machine and the connected switches and servers up for up to 1 hour.

The machines are configured to use dhcpd to distribute IP addresses, and the main node is set to distribute IPs. However, there is a local LAN network with a number of personal work PCs, wireless devices, testing computers and a few virtual machines, and their IPs are being distributed sequentially via an ISC DHCP server.

As always, to make everything work properly I had a slightly weird, non-standard requirement: some of the computers within the network should have static IP addresses, the others should receive their IPs via DHCP (Dynamic Host Configuration Protocol), and a filter should be added for the MAC addresses of the machines configured with static IPs, to prevent the DHCP server (daemon) from automatically reassigning IPs to those machines.

After a bit of googling and pondering I got it done for some of the machines. Therefore, to save others the effort of looking around for how to configure certain computers' / servers' network card (interface) MAC addresses on the LAN to use static IPs and instruct the DHCP server to ignore any broadcast IP address lease requests destined to the set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:

 

option domain-name "pcfreak.lan";
option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;
max-lease-time 891200;
authoritative;
class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;/etc/dhcpd/dhcpd.conf
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;
subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
}
host think-server {
        hardware ethernet 70:e2:85:13:44:12;
        fixed-address 192.168.0.200;
}
default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To spare you the copy-paste efforts, a file with the Deny DHCP Address by MAC Linux configuration is here.
Of course I have mangled the MAC addresses to avoid leaking data, but I guess the idea behind the MAC ADDR ignore is quite clear.

The main configuration doing the trick – ignoring certain MAC addresses that are reachable on the connected hardware switch – is like so:

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;


The Deny DHCP Address by MAC approach is described on the isc.org distribution lists here, but it seems the documentation on how to deny / ignore DHCP addresses by MAC address on Linux is quite obscure and limited online.

As you can see in the above config, the time after which an IP lease is freed up and a new lease is handed out by the server is severely maximized; DHCP servers often use a max-lease-time of about 1 hour (3600 seconds). The reason for increasing the lease time to about 10 days is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases.

default-lease-time 691200;
max-lease-time 891200;


As you see, to guarantee that name resolution always works as expected I have configured the Google Public DNS and OpenDNS IPs:

option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;


One hint to make: after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process.

 

root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
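If the test passes with no errors, restart the daemon so the new black-hole subclasses take effect (on Debian 10 the ISC DHCP service is usually named isc-dhcp-server):

root@pcfreak: ~# systemctl restart isc-dhcp-server
root@pcfreak: ~# systemctl status isc-dhcp-server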
 

That's all folks. With this sample config the IPs under subclass "black-hole", which are local LAN static IP addresses, will never be offered leases anymore by the ISC DHCP server.
Hope this stuff helps someone. Enjoy, and in case you need colocation of a server or website hosting for a really cheap price on the newly set up high-availability machines described above, open an inquiry on https://web.pc-freak.net.

 

Reinstall all Debian packages with a copy of apt deb package list from another working Debian Linux installation

Wednesday, July 29th, 2020

Reinstall-all-Debian-packages-with-copy-of-apt-packages-list-from-another-working-Debian-Linux-installation

A few days ago, in a hurry in the small hours of the night, I did something extremely stupid. Wanting to move a .tar.gz binary copy of a qmail installation to /var/lib/qmail with all the dependent qmail items, instead of extracting it to the admin user's home directory (/root), I extracted it to the main operating system root / directory.
Not noticing this, I quickly executed rm -rf var with the idea to delete the whole directory tree under /root/var. Just 3 seconds later I realized I was issuing rm -rf var in the wrong location WITH the root user !!!! Scared of what I had done, I quickly pressed CTRL+C to immediately cancel the deletion of my /var.

wrong-system-var-rm-linux-dont-do-that-ever-or-your-system-will-end-up-irreversably-damaged

But as you can guess, since the machine has a Solid State Drive, and SSDs are much faster in I/O operations than classical ATA / SATA disks, I was not quick enough to cancel the operation, and I noticed that some part of my /var had already been R.I.P-ped into the heaven of directories.

This was of course upsetting, so for a while I rethought the situation to get some ideas on what I could do to recover my system ASAP!!! The idea, of course, was to try to reinstall all my installed .deb Debian packages to restore the system as close as possible to normal, before my stupid mistake.

Guess my unpleasant surprise when I realized dpkg and, respectively, the apt-get / apt and aptitude package management tools could no longer handle packages, as Debian Linux's package dependency database had been damaged due to the missing dpkg directory

 

/var/lib/dpkg 

 

Oh man, that was unpleasant, especially since I have installed plenty of custom stuff on my MATE based desktop, and generally reinstalling it and updating the system to the latest Debian security updates etc. would be a time-consuming and painful process I wanted to avoid.

So of course the logical thing to do here was to try to somehow recover a copy of the /var/lib/dpkg database, if that was possible. That of course led me to the idea of recovering /var/lib/dpkg from a backup, but since I did not maintain any backup copy of my OS anywhere, that was not really possible. So anyways, I wondered whether dpkg keeps some kind of database backup somewhere in case something goes wrong with its database.
This led me to this nice Ubuntu thread which pointed me to part of my root rm -rf dpkg DB disaster recovery solution.
Luckily the .deb package management creators have thought about situations similar to mine and give the user a restore point for a damaged /var/lib/dpkg database:

/var/lib/dpkg is periodically backed up in /var/backups
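You can quickly check which automatic copies the system has kept – on a default Debian install there are rotated dpkg.status.N and similar files among the rest of the backups:

# ls -l /var/backups/ | grep dpkg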

A typical /var/lib/dpkg on Ubuntu and Debian Linux looks like so:
 

hipo@jeremiah:/var/backups$ ls -l /var/lib/dpkg
total 12572
drwxr-xr-x 2 root root    4096 Jul 26 03:22 alternatives
-rw-r--r-- 1 root root      11 Oct 14  2017 arch
-rw-r--r-- 1 root root 2199402 Jul 25 20:04 available
-rw-r--r-- 1 root root 2199402 Oct 19  2017 available-old
-rw-r--r-- 1 root root       8 Sep  6  2012 cmethopt
-rw-r--r-- 1 root root    1337 Jul 26 01:39 diversions
-rw-r--r-- 1 root root    1223 Jul 26 01:39 diversions-old
drwxr-xr-x 2 root root  679936 Jul 28 14:17 info
-rw-r----- 1 root root       0 Jul 28 14:17 lock
-rw-r----- 1 root root       0 Jul 26 03:00 lock-frontend
drwxr-xr-x 2 root root    4096 Sep 17  2012 parts
-rw-r--r-- 1 root root    1011 Jul 25 23:59 statoverride
-rw-r--r-- 1 root root     965 Jul 25 23:59 statoverride-old
-rw-r--r-- 1 root root 3873710 Jul 28 14:17 status
-rw-r--r-- 1 root root 3873712 Jul 28 14:17 status-old
drwxr-xr-x 2 root root    4096 Jul 26 03:22 triggers
drwxr-xr-x 2 root root    4096 Jul 28 14:17 updates

Before proceeding with the radical stuff of moving /var/lib/dpkg/info over from another machine to the mistakenly wiped /var, I tried to recover with the well known:

  • extundelete
  • foremost
  • recover
  • ext4magic
  • ext3grep
  • gddrescue
  • ddrescue
  • myrescue
  • testdisk
  • photorec

Linux file deletion recovery tools, booted from a USB stick loaded with a number of LiveCD distributions, i.e. I tested recovery with:

  • Debian LiveCD
  • Ubuntu LiveCD
  • KNOPPIX
  • SystemRescueCD
  • Trinity Rescue Kit
  • Ultimate Boot CD


but unfortunately none of them could recover the deleted files …

Why couldn't the standard file recovery tools recover anything?

My assumption is that after I did rm -rf var; from the system root, I issued the sync command (if you haven't used it, check out man sync – it synchronizes cached writes to persistent storage) and did a restart from the power-off button of the PC. This should have worked, as I've recovered like that in the past on a normal SysV system with a classic block filesystem such as EXT2, or any other filesystem without a journal. However, since this machine runs an EXT4 filesystem with a journal, it did not work – perhaps because the journal was not updated properly – which led to the situation where all recently deleted files were totally unrecoverable.

1. First step was to restore the directory skeleton of /var/lib/dpkg

# mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}

 

2. Recover missing /var/lib/dpkg/status  file

The main file that gives dpkg information about the existing packages and their statuses on a Debian based system is /var/lib/dpkg/status:

# cp /var/backups/dpkg.status.0 /var/lib/dpkg/status

 

3. Reinstall dpkg package manager to make package management working again

Say a warm prayer to the Merciful God ! and do:

# apt-get download dpkg
# dpkg -i dpkg*.deb

 

4. Reinstall base-files .deb package which provides basis of a Debian system

Hopefully everything will be okay and your dpkg / apt pair will be in normal working state, next step is to:

# apt-get download base-files
# dpkg -i base-files*.deb

 

5. Do a package sanity and consistency check and try to update OS package list

Check whether packages have been installed only partially on your system or have missing, wrong or obsolete control data or files. dpkg should suggest what to do with them to get them fixed.

# dpkg --audit

Then resynchronize (fetch) the package index files from their sources described in /etc/apt/sources.list

# apt-get update


Do an apt db consistency check:

#  apt-get check


check is a diagnostic tool; it updates the package cache and checks for broken dependencies.
 

Take a deep breath ! …

Do :

ls -l /var/lib/dpkg
and compare with the list above. If some -old file is not present, don't worry – it will be there tomorrow.

Next time don't forget to do regular backups with a simple rsync backup script or something like Bacula / Amanda / TimeVault or Clonezilla.
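As a reference, a tiny rsync sketch dropped in /etc/cron.daily/ is enough to keep a spare copy of the dpkg database (the destination directories below are just placeholders – point them to another disk or host):

#!/bin/sh
# /etc/cron.daily/dpkg-db-backup - keep a spare copy of the dpkg database and /etc
rsync -a --delete /var/lib/dpkg/ /root/backups/dpkg/
rsync -a --delete /etc/ /root/backups/etc/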
 

6. Copy dpkg database from another Linux system that has a working dpkg / apt Database

Well, this was however not the end of the story … There were still many things missing from my /var/, and luckily I had another Debian 10 Buster install on another properly working machine with a similar set of .deb packages installed. Therefore, to get most of my programs working again, I copied /var over from the machine with the similar package set to the messed up machine with the deleted /var.

To do so …
On the functioning Debian 10 machine (working host in the local network with IP 192.168.0.50), I archived the content of /var:

linux:~# tar -czvf var_backup_debian10.tar.gz /var

Then I sftp-ed from the working host towards the broken one with the deleted /var – in my case this machine's hostname is jericho, and luckily it still had the SSHD and SFTP processes loaded in memory:

jericho:~# sftp root@192.168.0.50
sftp> get var_backup_debian10.tar.gz

Now, before extracting the archive, it is a good idea to make a backup of the old /var remains somewhere, for example in /root, just in case we need a copy of the dpkg backup dir /var/backups:

jericho:~# cp -rpfv /var /root/var_backup_damaged

 
jericho:~# tar -zxvf /root/var_backup_debian10.tar.gz 
jericho:/# mv /root/var/ /

Then, to make my /var/lib/dpkg contain the list of packages from my broken Linux install, I overwrote /var/lib/dpkg with the files backed up earlier, before the .tar.gz was extracted.

jericho:~# cp -rpfv /root/var_backup_damaged/lib/dpkg/ /var/lib/

 

7. Scripts to completely reinstall all Debian packages

 

I then tried to reinstall each and every package, first using aptitude; with aptitude this is done with:

# aptitude reinstall '~i'

However, as this failed, I tried using a simple shell loop like the one below:

for i in $(dpkg -l |awk '{ print $2 }'); do echo apt-get install --reinstall --yes $i; done

Alternatively, reinstalling all .deb packages is also possible with dpkg --get-selections and awk, with the below commands:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log;
awk '$1=$1' ORS=' ' list.log > newlist.log;
apt-get install --reinstall $(cat newlist.log)

It can also be run as one liner for simplicity:

dpkg --get-selections | grep -v deinstall | awk '{print $1}' > list.log; awk '$1=$1' ORS=' ' list.log > newlist.log; apt-get install --reinstall $(cat newlist.log)

This produced a lot of warning messages reporting "package has no files currently installed" (virtually for all installed packages), indicating a severe packages problem. Below is sample output produced after each and every package reinstall … :

dpkg: warning: files list file for package 'iproute' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'brscan-skey' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libapache2-mod-php7.4' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libexpat1:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-readline' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'linux-headers-4.19.0-5-amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libgraphite2-3:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libbonoboui2-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libxcb-dri3-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblcms2-2:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpixman-1-0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'gksu' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'liblogging-stdlog0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mesa-vdpau-drivers:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libzvbi0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libcdparanoia0:i386' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'python-gconf' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'php5.6-cli' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'libpaper1:amd64' missing; assuming package has no files currently installed
dpkg: warning: files list file for package 'mixer.app' missing; assuming package has no files currently installed

After some attempts I found a way to work around the warning message for each package, by simply reinstalling the package reporting the issue with:

apt install --reinstall $package_name


Though reinstallation started well and many packages got reinstalled, unfortunately some packages such as apache2-mod-php5.6 and other PHP related ones started failing during reinstall, ending up in unfixable states right after the binaries from the packages were successfully placed in their expected locations on disk. The failures occurred during the package setup stage ( dpkg --configure $packagename ) …

The logical thing to do is a recovery attempt with something like the usual, well known to any Debian admin:

apt-get install --fix-missing

As well as manually requesting a reconfigure (re-setup) of all installed packages, which also did not produce a positive result:

dpkg --configure -a

But many packages were still failing due to dpkg's inability to execute some post installation scripts from the respective .deb files.
To work around that and continue installing the rest of the packages I had to manually delete all files related to the failing package located under the directory

/var/lib/dpkg/info

For example, to omit the post installation failure of libapache2-mod-php5.6 and have a successful install of the package the next time I tried to reinstall it, I had to delete all the /var/lib/dpkg/info/libapache2-mod-php5.6.postrm and /var/lib/dpkg/info/libapache2-mod-php5.6.postinst scripts, and sometimes even everything matching libapache2-mod-php5.6* present in the /var/lib/dpkg/info dir.
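In other words, something like the below (destructive – only do it for a package you are about to force-reinstall anyway):

# ls /var/lib/dpkg/info/libapache2-mod-php5.6.*
# rm -v /var/lib/dpkg/info/libapache2-mod-php5.6.*
# apt-get install --reinstall --yes libapache2-mod-php5.6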

The problem with this solution, however, was that the package then reported as installed properly, but the post install script hooks were still not in place, and important things like setting permissions of binaries after install or applying configuration changes right after install were missing, leading to programs failing to behave fully properly or even breaking, even though they showed as finely installed …

The final solution to this problem was radical.
I used the /var/lib/dpkg database (directory) from the other working Linux machine with an OK dpkg DB, found in var_backup_debian10.tar.gz (the linux:~# host with a working dpkg database), and then, based on the correct dpkg package list database now present on jericho:~#, reinstalled each and every package on the system using a Debian System Reinstaller script taken from the internet.
The Debian System Reinstaller works, but when reinstalling many packages I was prompted again and again whether to overwrite the configuration of a package or keep the present one.
To omit the annoying [Y / N] text prompts I made a slight modification to the script, so it finally looked like this:
 

#!/bin/bash
# Debian System Reinstaller
# Copyright (C) 2015 Albert Huang
# Copyright (C) 2018 Andreas Fendt

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.

# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

# —
# This script assumes you are using a Debian based system
# (Debian, Mint, Ubuntu, #!), and have sudo installed. If you don't
# have sudo installed, replace "sudo" with "su -c" instead.

pkgs=`dpkg --get-selections | grep -w 'install$' | cut -f 1 |  egrep -v '(dpkg|apt)'`

for pkg in $pkgs; do
    echo -e "\033[1m   * Reinstalling:\033[0m $pkg"    

    apt-get --reinstall -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" -y install $pkg || {
        echo "ERROR: Reinstallation failed. See reinstall.log for details."
        exit 1
    }
done

 

 debian-all-packages-reinstall.sh – the working modified version of Albert Huang's and Andreas Fendt's script – can also be downloaded here.

Note! Omitting the text confirmation prompts asking whether to install the newest config or keep the maintainer configuration is handled by the argument:

 

-o Dpkg::Options::="--force-confold"


I however still got a few ncurses console selection prompts during the reinstall of about 3200+ .deb packages, so even with this modification the reinstall was not completely automatic.

Note! During the reinstall a few of the packages from the list failed due to being old unsupported packages – this was ejabberd, ircd-hybrid and 2 / 3 more.
This failure was easily solved by completely purging those packages with the usual

# dpkg --purge $packagename

and rerunning debian-all-packages-reinstall.sh on each of the failing packages.

Note! The failing packages were just old ones left over from Debian 8 and Debian 9, before the apt-get dist-upgrade towards 10 Buster.
Eventually, by God's grace, after a few hours of pains and trials I ended up with a working package database and a complete set of freshly reinstalled packages.

The only thing I had to do finally was spend 2 hours figuring out why GNOME did not automatically start after the system reboot, due to a failing gdm.
Until I fixed that I temporarily used lightdm (x-display-manager); to re-setup gdm I ran:

dpkg-reconfigure gdm3

lightdm-x-display-manager-screenshot-gdm3-reconfige

 To work around this I had to also reinstall a few libraries, reinstall the xorg-server, reinstall gdm and reinstall the meta package for GNOME, using the below set of commands:
 

apt-get install --reinstall libglw1-mesa libglx-mesa0
apt-get install --reinstall libglu1-mesa-dev
apt install --reinstall gsettings-desktop-schemas
apt-get install --reinstall xserver-xorg-video-intel
apt-get install --reinstall xserver-xorg
apt-get install --reinstall xserver-xorg-core
apt-get install --reinstall task-desktop
apt-get install --reinstall task-gnome-desktop

 

As some packages did not end up reinstalled on the system, because the original host from which the /var/lib/dpkg DB was copied did not have them, I eventually had to manually trigger a reinstall for those too:

 

apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes thunderbird
apt-get install --reinstall --yes audacity
apt-get install --reinstall --yes gajim
apt-get install --reinstall --yes slack remmina
apt-get install --yes k3b
apt-get install --yes gbgoffice
apt-get install --reinstall --yes skypeforlinux
apt-get install --reinstall --yes vlc
apt-get install --reinstall --yes libcurl3-gnutls libcurl3-nss
apt-get install --yes virtualbox-5.2
apt-get install --reinstall --yes alsa-tools-gui
apt-get install --reinstall --yes gftp
apt install ./teamviewer_15.3.2682_amd64.deb --yes

 

Note that some of the above packages require properly configured third-party repositories. Other people might have other packages missing from the dpkg list that need to be reinstalled, so just decide according to your own case which binaries present on the working system don't belong to any dpkg installed package.

After a bit of struggle everything is back to normal Thanks God! 🙂 !
 

 

How to remove ‘active contents’ from PDF file on Linux / Strip Active Contents from PDF

Thursday, July 18th, 2019

how-to-remove-active-content-from-pdf-with-ghoscript-on-gnu-linux.svg

I'm updating my autobiography (CV) with my latest job employers, technology expertise and certifications, and usually use the EuroPassCV standard web service to update already generated PDF files. The service, as a web based application, allows easy editing from the web like most web services, which is quite handy, and then allows export to DOCX or PDF file format. So far so good, but today I faced a really weird problem: after I had used the EuroPassCV service successfully, downloaded the PDF to my computer and tried to submit my Curriculum Vitae to a newly created SAP SuccessFactors account, I got a weird error saying

"The system does not allow files with Active contents. Please …"

the-system-does-not-allow-files-with-active-contents-pdf-error-successfactors-errors

Of course, if this error message were received on some start-up's application upload form that would be fine, but come on, this is SAP's SuccessFactors – it cannot accept a standard PDF generated from EuroPass, which nowadays is a standard for CVs here in Europe and is hosted on the official European Union website europa.eu.

To me this is a clear signal that SAP needs experienced ICT specialists and Quality Assurance testers like me to fix their mess, and I would be willing to help them if they contact me before it is too late for them. But let me go back to the topic of this article, which was how to remove active contents from a PDF file 🙂

So first, let's make clear what active content in a file is.

Active content is content that includes programs like Internet polls, JavaScript applications, stock tickers, animated images, ActiveX applications, action items, streaming video and audio, weather maps, embedded objects and much more. Active content contains programs that trigger automatic actions on a web page without the user's knowledge or consent.
Active content (macros) can exist in many file formats that are used daily in most companies / organizations; it can be contained in documents such as MS Excel, Word, PDF, PowerPoint and so on.

So why do some applications disable document support for active content?

Well, simply for reasons of security: active content could often be some kind of malware or crapware, and it can mess with the web application (in case of bugs) or even mess with the server software if it is a complex worm-like piece of code exploiting some kind of vulnerability.
One thing to say about stripping active content from files on upload is that this practice can only be tolerated if the organization has already adopted security through obscurity, which most likely is the case with SAP's SuccessFactors and many other applications out there.

So the next question is: what is the panacea (resolution) for active content existing in a PDF file?

Assuming you have a GNU / Linux desktop or server with the ghostscript package installed (which is the case by default with virtually any modern Linux distribution), removing active content from a PDF, to make it possible to submit the file to the picky security-conscious application, takes a single command:
 

gs -dNOPAUSE -sDEVICE=pdfwrite -sOUTPUTFILE=CV-Georgi_Dimitrov_Georgiev-Europass-20190718-EN-noact-content.pdf -dBATCH CV-Georgi_Dimitrov_Georgiev-Europass-20190718-EN.pdf


After that, the PDF file with the stripped active content will upload to the web app successfully.
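If you want a rough sanity check that the JavaScript / action entries are gone, you can grep the rewritten file for the usual PDF action keywords (this is only a heuristic – compressed object streams may hide matches – but a hit means active content is still present):

strings CV-Georgi_Dimitrov_Georgiev-Europass-20190718-EN-noact-content.pdf | egrep -c '/JavaScript|/JS|/OpenAction'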
 

 

 

Where are Apache log files on my server – Apache log file locations on Debian / Ubuntu / CentOS / Fedora and FreeBSD ?

Tuesday, November 7th, 2017

apache-where-are-httpd-access-log-files

Where are Apache log files on my server?

1. Finding the Linux / FreeBSD operating system distribution and version

Before finding the location of the Apache log files it is useful to check what the remote / local Linux operating system version is.

The first thing to do when you log in to a remote Linux server is to check what kind of GNU / Linux you're dealing with:

cat /etc/issue
cat /etc/issue.net


On most GNU / Linux distributions this should give you enough information about the exact Linux distribution and version the remote server is running.

You will get outputs like

# cat /etc/issue
SUSE LINUX Enterprise Server 10.2 Kernel \r (\m), \l

or

# cat /etc/issue
Debian GNU/Linux 8 \n \l

If remote Linux is Fedora look for fedora-release file:

# cat /etc/fedora-release
Fedora release 7 (Moonshine)

The proposed freedesktop.org standard, adopted with the introduction of systemd across all Linux distributions, is

/etc/os-release

 

# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"


Once we know what kind of Linux distribution we're dealing with, we can proceed with looking up the standard location of the Apache logs:

2. Apache log file location for Fedora / CentOS / RHEL and other RPM based distributions

RHEL / Red Hat / CentOS / Fedora Linux Apache access log file location
 

/var/log/httpd/access_log


3. Apache log file location for Debian / Ubuntu and other deb based Linux distributions

Debian / Ubuntu Linux Apache access log file location

/var/log/apache2/access.log


4. Apache log file location for FreeBSD

FreeBSD Apache access log file location

/var/log/httpd-access.log


5. Finding custom Apache access log locations
 

If for some reason the system administrator on the remote server has changed the default path for any of the distributions, you can find the custom configured log files with:

a) On Debian / Ubuntu / deb distros:

debian:~# grep CustomLog /etc/apache2/apache2.conf


b) On CentOS / RHEL / Fedora Linux RPM based ones:

[root@centos:  ~]# grep CustomLog /etc/httpd/conf/httpd.conf


c) On FreeBSD OS

 

freebsd# grep CustomLog /usr/local/etc/apache24/httpd.conf
 # a CustomLog directive (see below).
    #CustomLog "/var/log/httpd-access.log" common
    CustomLog "/var/log/httpd-access.log" combined

How to enable Gravis UltraSound in DOSBox for enhanced music experience in DOS programs and Games

Tuesday, October 31st, 2017

DOSBox

Gravis UltraSound Classic

 

Gravis UltraSound

Gravis UltraSound or GUS is a sound card for the IBM PC compatible systems.
It was launched in 1992 and is notable for its ability to use real-world sound recordings (wavetable) of musical instruments rather than artificial computer-generated waveforms.
As one of my friends used to say back then: "it sounds like a CD".

To enable GUS in DOSBox all you need to do is:

1. Download the archive with the GUS files from https://alex.pc-freak.net/files/GUS/ULTRASND.zip. Extract the archive (there is already a directory in it, so you don't have to create one), preferably where you keep your DOSBox stuff (like games).

2. Find your DOSBox config file. Depending on the version or host OS, the dosbox conf file is located either inside the user profile folder or inside the same folder as dosbox.exe. In Windows 7 the config file is located at

"C:\Users\Fred\AppData\Local\VirtualStore\Program Files (x86)\dosbox.conf"

where "Fred" is your username.

In GNU/Linux it's in "/home/Fred/.dosbox/dosbox.conf" where "Fred" is your username.

The name of the conf file may also have dosbox version (for example –

"dosbox-0.74.conf").

Open it with a text editor like Notepad (Windows) or an equivalent for GNU/Linux (vi, Kate, gedit…). Locate the "[gus]" section (without the quotes) and edit it so it looks like this:

[gus]
#      gus: Enable the Gravis Ultrasound emulation.
#  gusrate: Sample rate of Ultrasound emulation.
#           Possible values: 44100, 48000, 32000, 22050, 16000, 11025, 8000, 49716.
#  gusbase: The IO base address of the Gravis Ultrasound.
#           Possible values: 240, 220, 260, 280, 2a0, 2c0, 2e0, 300.
#   gusirq: The IRQ number of the Gravis Ultrasound.
#           Possible values: 5, 3, 7, 9, 10, 11, 12.
#   gusdma: The DMA channel of the Gravis Ultrasound.
#           Possible values: 3, 0, 1, 5, 6, 7.
# ultradir: Path to Ultrasound directory. In this directory
#           there should be a MIDI directory that contains
#           the patch files for GUS playback. Patch sets used
#           with Timidity should work fine.

gus=true
gusrate=44100
gusbase=240
gusirq=5
gusdma=3
ultradir=C:\ULTRASND

Then save the dosbox conf file.

3. Start DOSBox and mount "ULTRASND" directory to "C:".

You can do that with

mount c (directory to ULTRASND)

For example if you have extracted the archive in "C:\Games" it has created "C:\Games\ULTRASND" and the command you will have to write in DOSBox is

mount c c:\Games

(example: if your game is in "C:\Games\Heroes2" and your GUS directory is "C:\Games\ULTRSND" (if you have extracted the archive "C:\Games\") then you "mount c c:\Games" and you are set)

or for GNU/Linux if you have extracted the archive in "/home/Fred/Games" it has created "/home/Fred/Games/ULTRASND" and the command you will have to write in DOSBox is

mount c /home/Fred/Games (where "Fred" is your user name).

(example: if your game is in "/home/Fred/Games/Heroes2" and your GUS directory is "/home/Fred/Games/ULTRSND" (if you have extracted the archive" /home/Fred/Games/") in  then you "mount c /home/Fred/Games" and you are set)

You can make this automatic, so you don't have to type it every time, by adding this command at the end (bottom) of your dosbox conf file and saving it.
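For example, the very bottom of the conf file could end up looking something like this (the path is just the example from above – on GNU/Linux the mount line would be "mount c /home/Fred/Games" instead):

[autoexec]
mount c c:\Games
c: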

You're practically ready. All you need to do now is select Gravis UltraSound in your game or application setup (for example with the file "setup.exe") with IO: 240, IRQ 5 and DMA 3. If you prefer your previous sound card you can select it again from the setup without disabling GUS in the dosbox conf file.

Happy listening!

Gravis Ultrasound

 

Article written by Alex

rc.local missing in Debian 8 Jessie and Debian 9 Stretch and newer Ubuntu 16, Fedora, CentOS Linux – Why is /etc/rc.local not working and how to make it work again

Monday, September 11th, 2017

rc.local-not-working-solve-fix-linux-startup-with-rc.local-explained-how-to-make-rc.local-working-again-on-newer-linux-distributions

If you have installed a newer version of Debian GNU / Linux such as Debian 8 Jessie or Debian 9 Stretch, or Ubuntu 16 Xenial Xerus, either on a server or on a personal desktop laptop, and you want to execute a number of extra commands at the finalization of system boot, just like we GNU / Linux users have been doing for the last 25+ years, you will be surprised that /etc/rc.local is no longer available (the file is completely missing!!!).

This kind of behaviour (avoiding use of /etc/rc.local and not shipping the file by default right after a Linux OS install) has been evident across many RedHack (RedHat) distributions such as Fedora and CentOS Linux for the last number of releases, and the tendency has been for it to happen in Debian based distros too, as it often does. However on these RPM based distros, as well as on the rest of the Linux distros, it was still possible to create /etc/rc.local manually to work around the missing file.

But NOoooo, the smart new generation GNU / Linux architects with large brains decided to completely wipe out the execution of /etc/rc.local from the boot finalization stage. SMART, isn't it??

For instance, if you had been eating a certain food for the last 25+ years and they suddenly prohibited you from eating it because they say it is not necessary anymore, how would you feel?? Crazy, isn't it??

Yes, I understand the idea to wipe out /etc/rc.local did have a reason, as the developers are striving to constantly improve the boot speed (and the introduction of systemd (system and service manager) in Debian 8 Jessie over the past years did significantly change how Linux boots – earlier it used SysV init and LSB (Linux Standard Base) init scripts), but come on guys, /etc/rc.local
doesn't slow the boot process down by minutes; including it adds just 2, 3 seconds extra to boot runtime, so why on earth did you decide to remove it??

What I really loved about Linux through the years was the high level of consistency and interoperability; most things worked just the same way across distributions and upgrades followed some logic. But lately this kind of behaviour is changing, so many of the new things in both the GUI and the text mode (console) way of interacting with a GNU / Linux PC are becoming messy, sadly …

So the smart guys who develop GNU / Linux distros said it's time to deprecate /etc/rc.local and prevent the user from being able to execute his set of finalization commands at the end of each booted multiuser runlevel.

The good news is you can bring back (resurrect) /etc/rc.local really easy:

To do so, just execute the following either on a physical /dev/tty console or in Gnome-Terminal (for GNOME users), or, for KDE GUI environment users, in KDE's terminal emulator konsole:

 

cat <<EOF >/etc/rc.local
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.

exit 0
EOF
chmod +x /etc/rc.local
systemctl start rc-local
systemctl status rc-local


I think the above is self-explanatory: the /etc/rc.local file is created, then to enable it we run systemctl start rc-local, and then we check the status of the just-started rc-local service with systemctl status rc-local.

You will get an output similar to below:
 

 

root@jericho:/home/hipo# systemctl start rc-local
root@jericho:/home/hipo# systemctl status rc-local
● rc-local.service – /etc/rc.local Compatibility
   Loaded: loaded (/lib/systemd/system/rc-local.service; static; vendor preset:
  Drop-In: /lib/systemd/system/rc-local.service.d
           └─debian.conf
   Active: active (exited) since Mon 2017-09-11 13:15:35 EEST; 6s ago
  Process: 5008 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/rc-local.service
Sep 11 13:15:35 jericho systemd[1]: Starting /etc/rc.local Compatibility…
Sep 11 13:15:35 jericho systemd[1]: Started /etc/rc.local Compatibility.

To test that /etc/rc.local is working as expected you can make it print any string on boot; right before the exit 0 command in /etc/rc.local

you can add for example:
 

echo "YES, /etc/rc.local IS NOW AGAIN WORKING JUST LIKE IN EARLIER LINUX DISTRIBUTIONS!!! HOORAY !!!!";


On CentOS 7 and Fedora 18 (codename Spherical Cow) or another RPM based Linux distro, if /etc/rc.local is missing you can follow a very similar procedure to get it enabled – make sure

/etc/rc.d/rc.local

is existing

and that /etc/rc.local is properly symlinked to /etc/rc.d/rc.local.
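If the symlink happens to be missing it can be recreated like so:

ln -sf /etc/rc.d/rc.local /etc/rc.local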

Also don't forget to check whether /etc/rc.d/rc.local is set as an executable file, with ls -al /etc/rc.d/rc.local.

If it is not executable, make it so by running the cmd:
 

chmod a+x /etc/rc.d/rc.local


If file /etc/rc.d/rc.local happens to be missing just create it with following content:

 

#!/bin/sh

# Your boot time rc.commands goes somewhere below and above before exit 0

exit 0


That's all folks, rc.local not working is solved –
enjoy /etc/rc.local working again 🙂

 

Converting .crt .cer .der to PEM, converting .PEM to .DER and convert .PFX PKCS#12 (.P12) to .PEM file using OpenSSL

Friday, September 1st, 2017

openssl_check_verify_crt_csr_key_certificate_consistency-with-openssl-command-openssl-logo

These commands allow you to convert certificates and keys to different formats to make them compatible with specific types of servers or software. For example, you can convert a normal PEM file that would work with Apache to a PFX (PKCS#12) file and use it with Tomcat or IIS.

  • Convert a DER file (.crt .cer .der) to PEM

     

    openssl x509 -inform der -in certificate.cer -out certificate.pem
    
  • Convert a PEM file to DER

     

    openssl x509 -outform der -in certificate.pem -out certificate.der
    
  • Convert a PKCS#12 file (.pfx .p12) containing a private key and certificates to PEM

     

    openssl pkcs12 -in keyStore.pfx -out keyStore.pem -nodes


    You can add -nocerts to only output the private key or add -nokeys to only output the certificates – see the examples right after this list.

  • Convert a PEM certificate file and a private key to PKCS#12 (.pfx .p12)

     

    openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key \
    -in certificate.crt -certfile CACert.crt
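
The -nocerts / -nokeys variants mentioned in the PKCS#12 item above would look like this:

# only the private key
openssl pkcs12 -in keyStore.pfx -out privateKey.pem -nodes -nocerts

# only the certificate(s)
openssl pkcs12 -in keyStore.pfx -out certs.pem -nokeys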

Disable Windows hibernate on a work notebook or Desktop Gamers PC – Save a lot of Space on Windows C Drive, delete hidefil.sys howto

Thursday, May 18th, 2017

how-to-to-disable-stop-hibernate-windows-8-10-to-save-disk-space-and-get-rid-of-hbierfil.sys-misteriously-occupying-space-improve-windows-performance

Some Windows laptop / desktop users prefer not to shut down their computers at the end of the day (especially those coming from a Mac OS background) but to hibernate them instead.

Hibernate is a great thing, but historically we know well that hibernation on Windows works much worse than on Macs, and it is common that after multiple hibernates you will face problems with missing C: drive space, as it "mysteriously" decreases in a way that degrades PC performance while the hidden C:\hiberfil.sys file occupies some 16 Gigabytes or so (the space occupied by hiberfil.sys roughly matches the installed RAM memory of the computer, so if your PC has 16 Gigabytes of RAM, hiberfil.sys will be approximately 15 Gigabytes).

However, many users never use hibernate and might never use it in a lifetime, especially those on desktop Windows PCs. I use Windows as a workstation as an employee of DXC (the ex Hewlett Packard, or Hewlett Packard Enterprise, that merged with CSC), but to be honest I've used the hibernate function very rarely on the notebook, thus I find hibernate a more or less useless feature, especially because many times when I try to wake the PC up after hibernation the computer boots but the display stays dark, and I have to restart the computer before I can go back to normal work. Of course my Windows 7 hibernation issues might be caused by the corporate software installed on my PC or by the fact that the hard drive is encrypted, but no matter that, in my case, and I guess in the case of many, the hibernate function on Windows 7 / 8 / 10 might be totally useless.
 


A few words on the hiberfil.sys file and why you might want to completely disable / delete it


On Windows 7 / 8 / 10 the hiberfil.sys file is used to store the PC's current state at the time of hibernation, so if you have to move from place to place within an organization / university / office without a charger, hibernation is a really nice way to save battery power without later wasting time on an additional PC boot (where a lot of power is wasted for the operating system to load, re-opening the opened browser etc.).

So in short, putting the PC to sleep with the hibernate function causes the computer to write into C:\hiberfil.sys all data stored at that moment in the PC RAM (memory), which is then powered down while the computer is hibernated.
Once the computer receives a wake-up call from hibernation, in order to present the Desktop in the same state, the information stored in hiberfil.sys is read and transferred back into RAM, so the memory is again filled with the same bits it had right before the hibernation.

Because hiberfil.sys is a system file, it has the hidden attribute, it can only be written / read by an Administrator Windows account, and usually it is not a good idea to touch it.

Some people haven't shut down Windows for 20-30 days, and especially if Windows updates are disabled, some users keep using the hibernate function for weeks on end (re-hibernating and waking up a thousand times), so the effect is that hiberfil.sys can become gigantic. If you take the time to check which file or directory is wasting all your C: drive space with, let's say, WinDirStat or SpaceSniffer, you will notice the, let's say, 15 Gigabytes being eaten by hiberfil.sys.

Disabling hiberfil.sys is also a great tip for gamers' desktop PCs, as most gamers won't use the hibernate function at all.

I. How to Disable Hibernate Mode in Windows 10, 8, 7, or Vista


In order to get rid of the file across Windows 7 / 8 / 10:

Open a command prompt as an Administrator (right click on Command Prompt / cmd.exe and choose Run as Administrator) and issue the below cmd:

disable-hibernate-on-windows-7-8-10-powercfg-off-screenshot

C:> powercfg -h off

If later you decide you need the hibernate function again active on the PC or notebook do issue:

C:> powercfg -h on

You’re likely reading this because you noticed a gigantic hiberfil.sys file sitting on your system drive and you’re wondering if you can get rid of it to free up some space. Here’s what that file is and how you can delete it if you want to.

 

II. Disable Hibernate Mode in Windows XP

The hibernate function command is not present on Windows XP, so in order to remove it on XP (hope you don't use XP any more and you're not a victim of the recent catastrophic crypto-ransomware WannaCry 🙂 )

disable-hibernate-mode-windows-xp-screenshot

Control Panel -> Power Options

In the Power Options properties window, switch to the “Hibernate” tab and disable the “Enable hibernation” option.

After you disable hibernate mode, restart the PC and manually delete the hiberfil.sys file.
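If the file does not vanish on its own after the reboot, it can usually be removed from an Administrator Command Prompt (hiberfil.sys lives in the root of the system drive, normally C:\):

C:\> attrib -s -h C:\hiberfil.sys
C:\> del C:\hiberfil.sys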

Now enjoy freeing up a few gigabytes of uselessly wasted C: hard drive space on your PC 🙂

Note: Removing hiberfil.sys is a precious thing to do on old Windows computers built with a small, let's say 40 Gigabyte, C: partition, where over time, due to user profile use and browsing caches, the C: drive has been left with let's say 1-2 Gigabytes of free space and the computer's overall performance has fallen by half or so.

This post is in memoriam of Chris Cornell (our generation grew up with grunge, and his music was one of those often listened to by me and our generation).

R.I.P: Chris Cornell, the frontman of Soundgarden and Audioslave, who passed away yesterday, right on the day when we in the Bulgarian Eastern Orthodox Church commemorate the memory of the great martyr Nicolay Sofijski (Great Martyr Nicholas of Sofia, martyred by the Ottoman Turks in the year 1555).

I found it a surprising fact that Chris Cornell converted to the Greek Eastern Orthodox faith under the influence of his Greek wife; below is a paste from his Wikipedia page:

"

Chris Cornell – Personal life (Rest in Peace, Chris)

Cornell was married to Susan Silver, the manager of Alice in Chains and Soundgarden.[123] They had a daughter, Lillian Jean, born in June 2000.[123] He and Silver divorced in 2004.[123] In December 2008, Cornell reported via his official website that he had finally won back his collection of 15 guitars after a four-year court battle with Silver.[124]

He was married to Vicky Karayiannis,[125] a Paris-based American publicist of Greek heritage. The union produced a daughter, Toni, born in September 2004, and a son, Christopher Nicholas, born in December 2005.[126] Cornell converted to the Greek Orthodox Church through her influence.[127]

When asked how Cornell beat all his addictions he stated, "It was a long period of coming to the realization that this way (sober) is better. Going through rehab, honestly, did help … it got me away from just the daily drudgery of depression and either trying to not drink or do drugs or doing them and you know, they give you such a simple message that any idiot can get and it's just over and over, but the bottom line is really, and this is the part that is scary for everyone, the individual kinda has to want it … not kinda, you have to want it and to not do that crap anymore or you will never stop and it will just kill you."[128]

In a 2011 interview,[129] Cornell said the major change with the reformed Soundgarden is a lack of alcohol: "The biggest difference I noticed … and we haven't even really talked about it: There are no bottles of Jack Daniel's around or beers. And we never talked about … it's just not there."


Enjoy!