Posts Tagged ‘use’

KVM Virtual Machine RHEL 8.3 Linux install on Redhat 8.3 Linux Hypervisor with custom tailored kickstart.cfg

Friday, January 22nd, 2021

kvm_virtualization-logo-redhat-8.3-install-howto-with-kickstart

If you haven't tried it yet: Red Hat, CentOS and other RPM-based Linux operating systems that use the Anaconda installer generate a kickstart file immediately after the OS installation completes, stored under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg}. Using this kickstart file as a template you can automate a Red Hat installation with exactly the same configuration as many times as you like, by directly loading your /root/original-ks.cfg file in the RHEL installer.

Here is the official description of Kickstart files from Redhat:

"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."


Kickstart files contain answers to all questions normally asked by the text / graphical installation program, such as what time zone the system should use, how the drives should be partitioned, or which packages should be installed. Providing a prepared kickstart file when the installation begins therefore allows you to perform the installation automatically, without any need for user intervention. This is especially useful when deploying a Red Hat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and in general pretty useful if you're into the field of so-called "DevOps" system administration and need to provision a certain OS to a multitude of physical servers, or to easily create and recreate virtual machines with a certain configuration.
 

1. Create /vmprivate storage directory where Virtual machines will reside

The first step on the Hypervisor host which will hold the future virtual machines is to create the location where they will reside:

[root@redhat ~]# lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]# mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mkdir /vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate

To view the current situation with Logical Volumes and VG group names:

[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0

 

  --- Logical volume ---
  LV Path                /dev/vg00/vmprivate
  LV Name                vmprivate
  VG Name                vg00
  LV UUID                VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
  LV Write Access        read/write
  LV Creation host, time lpgblu01f.ffm.de.int.atosorigin.com, 2021-01-20 17:26:11 +0100
  LV Status              available
  # open                 1
  LV Size                150.00 GiB


Note that you'll need to have that size available on a SAS / SSD hard drive physically connected to the Hypervisor host.

To make the Virtual Machine storage location directory permanently mounted, add it to /etc/fstab:

/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2

[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate  /vmprivate              ext4    defaults,nodev,nosuid 1 2' >> /etc/fstab
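To double-check the fstab entry before a future reboot, unmount and remount everything from fstab; if mount -a prints no errors the entry is fine:

[root@redhat ~]# umount /vmprivate
[root@redhat ~]# mount -a
[root@redhat ~]# df -h /vmprivate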

 

2. Install the needed set of RPM packages on the Hypervisor hardware host

[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y

3. Enable libvirtd on the host

[root@redhat ~]#  lsmod | grep -i kvm
[root@redhat ~]#  systemctl enable libvirtd
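To also start the daemon immediately and check the virtualization stack is functional (virsh should at least list the default network):

[root@redhat ~]# systemctl start libvirtd
[root@redhat ~]# systemctl status libvirtd
[root@redhat ~]# virsh net-list --all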

4. Configure network bridging br0 interface on Hypervisor


In /etc/sysconfig/network-scripts/ifcfg-eth0 you need to include:

NM_CONTROLLED=no

Next use nmcli, the Red Hat network configurator, to create the bridge (you could use the ip command instead), but since the tool is the Red Hat way to do it, let's do it their way.

[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
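To verify the bridge is up and holding the expected IP before proceeding:

[root@redhat ~]# nmcli connection show
[root@redhat ~]# ip addr show br0
[root@redhat ~]# ip route show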

5. Prepare a working kickstart.cfg file for VM


Below is a sample kickstart file I've used to build a working, fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa).

#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda

# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS

# Use text mode install
text
#graphical

# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8

# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted

# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname 
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain

# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint

# skipx
skipx

# Firewall configuration
firewall --disabled

# System timezone
timezone Europe/Berlin

# Clear the Master Boot Record
##zerombr

# Repositories
## Add RPM repositories from KS file if necessery
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos    --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos      --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream   --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS

# Disk partitioning information set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"

# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug

# reboot automatically
reboot

###

%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end

%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improve overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf

# Disable kdump
systemctl disable kdump.service
%end

An important note to make here is the hashed root password string in the (rootpw) line; this string can be generated with the openssl or mkpasswd commands:

Method 1: use the openssl cmd to generate an (md5, sha256, sha512) hashed pass string

[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0

Note: passing -1 will generate an MD5 hash, -5 a SHA256 hash and -6 a SHA512 hashed string (the latter logically recommended for better security)

Method 2: (md5, sha256, sha512)

[root@redhat ~]# mkpasswd --method=SHA-512 --stdin

The option --method accepts md5, sha-256 and sha-512.
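Method 3: in case mkpasswd is not present (on Debian it ships with the whois package), python3's crypt module produces the same SHA-512 crypt string; a minimal sketch:

[root@redhat ~]# python3 -c 'import crypt; print(crypt.crypt("test", crypt.mksalt(crypt.METHOD_SHA512)))'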
Theoretically there is also a kickstart file generator web interface on Red Hat's site here; however, I have never used it myself and instead use the above kickstart.cfg.
 

6. Install the new VM with virt-install cmd


To roll out the new preconfigured VM based on the above ks template file, use a one-liner command like below:
 

[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"

7. Use a tiny shell script to automate VM creation


For some clarity and better automation in case you plan to repeat VM creation you can prepare a tiny bash shell script:
 

#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram=$RAM --vcpus=$CPUS --location="$ISO_LOCATION" --disk path=$VM_IMG_FILE_LOC,bus=virtio,size=$VM_IMG_SIZE --graphics none --initrd-inject=/root/$KS_FILE --extra-args "console=ttyS0 ks=file:/$KS_FILE"
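Then make the script executable and run it:

chmod +x virt-install.sh
./virt-install.sh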


A copy of virt-install.sh script can be downloaded here

Wait for the installation to finish; its progress will be displayed on the console. If the installation went smoothly, you should get a login prompt. Use the password generated with the openssl tool and test the login, then disconnect from the machine by pressing CTRL + ] and try to log in again via the VM's console with:

[root@redhat ~]# virsh list --all
 Id   Name                     State
------------------------------------
 2    RHEL8_3-VirtualMachine   running

[root@redhat ~]#  virsh console RHEL8_3-VirtualMachine


redhat8-login-prompt

One last thing: I recommend you check the official documentation on Kickstart2 from the CentOS official website.

In case you later need to destroy the VM and the respective created image file, you can do it with:
 

[root@redhat ~]#  virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]#  virsh undefine RHEL8_3-VirtualMachine

Don't forget to celebrate the success and give this nice article credit by sharing this tutorial with a friend or by placing a link to it from your blog 🙂

 

 

Enjoy !

Configure rsyslog buffering on Linux to avoid message lost to Central Logging server

Wednesday, January 13th, 2021

rsyslog-Centralized-Logging-System-using-Rsyslog_logo

1. Rsyslog Buffering

One of the best practices in log management is to send syslog to a central server. However, a logging system should be capable of avoiding message loss in situations where the server is not reachable. To do so, unsent data needs to be buffered at the client while the central server is not available. You might have noticed that many servers forwarding log messages to a central server do not have buffering functionality activated. Thus I strongly advise you to have a look at this documentation to learn how to check your configuration: http://www.rsyslog.com/doc/rsyslog_reliable_forwarding.html

Rsyslog buffering with TCP/UDP configured

In rsyslog, every action runs on its own queue and each queue can be set to buffer data if the action is not ready. Of course, you must be able to detect that "the action is not ready", which means the remote server is offline. This can be detected with plain TCP syslog and RELP, but not with UDP. So you need to use either of the two. In this howto, we use plain TCP syslog.

– Version requirement

Please note that we are using rsyslog-specific features. They are required on the client, but not on the server. So the client system must run rsyslog (at least version 3.12.0), while on the server another syslogd may be running, as long as it supports plain TCP syslog.

How To Setup rsyslog buffering on Linux

First, you need to create a working directory for rsyslog. This is where it stores its queue files (should the need arise). You may use any location on your local system. Next, instruct rsyslog to use a disk queue and then configure your action. There is nothing else to do. With the following simple config file, you forward anything you receive to a remote server and have buffering applied automatically when it goes down. This must be done on the client machine.

# Example:
$ModLoad imuxsock             # local message reception
$WorkDirectory /rsyslog/work  # default location for work (spool) files
$ActionQueueType LinkedList   # use asynchronous processing
$ActionQueueFileName srvrfwd  # set file name, also enables disk mode
$ActionResumeRetryCount -1    # infinite retries on insert failure
$ActionQueueSaveOnShutdown on # save in-memory data if rsyslog shuts down
*.*       @@server:port
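To roughly test the buffering, restart rsyslog, generate a message and check the work directory; queue files appear there only while the central server is unreachable (a sketch assuming the /rsyslog/work directory from the config above):

# systemctl restart rsyslog
# logger "test message towards central logging"
# ls -l /rsyslog/work/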

How to check Linux server power supply state is Okay / How to find out a Linux Power Supply is broken

Wednesday, January 6th, 2021

2U-power-supplies-get-status-if-Power-supply-broken-information-linux-ipmitool

If you're a sysadmin managing Linux servers remotely, every now and then when a machine hangs without an obvious reason, it is useful to check the server's power supply state. I say that because often when a machine is mysteriously hanging, a standard Root Cause Analysis (RCA) on /var/log/messages, /var/log/dmesg, /var/log/boot etc. does not bring you to any conclusion. The next step, after you send a technician to reboot the machine, is to check on the Linux OS level whether the Power Supply Unit (PSU) hardware on the machine has issues.
As blogged earlier on how to use ipmitool to manage remote ILO boards etc., ipmitool can also be used to check the status of server PSUs.

Below is example output from a 2 PSU server whose power supplies are functioning normally.
 

[root@linux-server ~]# ipmitool sdr type "Power Supply"

PS Heavy Load    | 2Bh | ok  | 19.1 | State Deasserted
Power Supply 1   | 70h | ok  | 10.1 | Presence detected
Power Supply 2   | 71h | ok  | 10.2 | Presence detected
PS Configuration | 72h | ok  | 19.1 |
PS 1 Therm Fault | 75h | ok  | 10.1 | Transition to OK
PS 2 Therm Fault | 76h | ok  | 10.2 | Transition to OK
PS1 12V OV Fault | 77h | ok  | 10.1 | Transition to OK
PS2 12V OV Fault | 78h | ok  | 10.2 | Transition to OK
PS1 12V UV Fault | 79h | ok  | 10.1 | Transition to OK
PS2 12V UV Fault | 7Ah | ok  | 10.2 | Transition to OK
PS1 12V OC Fault | 7Bh | ok  | 10.1 | Transition to OK
PS2 12V OC Fault | 7Ch | ok  | 10.2 | Transition to OK
PS1 12Vaux Fault | 7Dh | ok  | 10.1 | Transition to OK
PS2 12Vaux Fault | 7Eh | ok  | 10.2 | Transition to OK
Power Unit       | 7Fh | ok  | 19.1 | Fully Redundant

Now if you have a server, let's say an old ProLiant DL360e Gen8 whose power supply is damaged, you will get output from ipmitool similar to:

[root@linux-server  systemd]# ipmitool sdr type "Power Supply"
Power Supply 1   | 30h | ok  | 10.1 | 100 Watts, Presence detected
Power Supply 2   | 31h | ok  | 10.2 | 0 Watts, Presence detected, Failure detected, Power Supply AC lost
Power Supplies   | 33h | ok  | 10.3 | Redundancy Lost
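Since the failure states show up as plain strings, a tiny cron-able sketch (based on the output strings above; the mail alert part assumes a working local mail setup) can notify you as soon as a PSU degrades:

#!/bin/sh
# alert root by mail if ipmitool reports a PSU failure or lost redundancy
if ipmitool sdr type "Power Supply" | grep -qiE 'Failure detected|Redundancy Lost'; then
    echo "PSU problem detected on $(hostname)" | mail -s "PSU ALERT: $(hostname)" root
fi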


If you don't have ipmitool installed (due to security policy or whatever) but you have the hardware detection tool dmidecode, you can use it too to get the power supply state:

[root@linux-server  systemd]# dmidecode -t chassis
# dmidecode 3.2
Getting SMBIOS data from sysfs.
SMBIOS 2.8 present.

 

Handle 0x0300, DMI type 3, 21 bytes
Chassis Information
        Manufacturer: HP
        Type: Rack Mount Chassis
        Lock: Not Present
        Version: Not Specified
        Serial Number: CZJ38201ZH
        Asset Tag:
        Boot-up State: Critical
        Power Supply State: Critical

        Thermal State: Safe
        Security Status: Unknown
        OEM Information: 0x00000000
        Height: 1 U
        Number Of Power Cords: 2
        Contained Elements: 0

To list only the power supply status info on a server with dmidecode:

# dmidecode --type 39

monitoring-power-supply-hardware-information-linux-ipmitool

Plug between the power supply and the mainboard voltage / coms ATX specification

This can also be done on a normal Linux desktop PC, which usually has a single power supply. On many Ubuntu and other Linux desktops where lshw (list hardware information) is installed, you can get the machine's PSU / battery status with lshw:

 root@ubuntu:~# lshw -c power
  *-battery               
       product: 45N1111
       vendor: SONY
       physical id: 1
       slot: Front
       capacity: 23200mWh
       configuration: voltage=11.1V


Finally, to get extensive information on the voltages of the power supply, you can use the good old lm_sensors.

# apt-get install lm-sensors
# sensors-detect 
# service kmod start

# sensors
# watch sensors


As manually monitoring power supplies and other hardware data is tedious, you might finally want to use some centralized monitoring. For one example of that, you may want to check my prior article on using Zabbix to Monitor Hardware Hard Drive / Temperature and Disk with lm_sensors / smartd on Linux.

How to redirect / forward all postfix emails to one external email address?

Thursday, October 29th, 2020

Postfix_mailserver-logo-howto-forward-email-with-regular-expression-or-maildrop

Let's say you're a sysadmin doing an email migration of a clustered SMTP service, and because of that you want to capture all incoming email traffic for a while and redirect (forward) it towards a single mailbox, where you can review the mail flow for a few hours and analyze it in more depth. This approach is useful if you have a small or middle-sized mail server and won't be so useful on a mail server that handles a few hundred mails hourly. In the article below I'll show you how.

How to redirect all postfix mail for a specific domain to single external email address?

There are different ways, but if you don't want to just intercept the traffic and create a copy of the email flow using the always_bcc integrated postfix option (as pointed out in my previous article postfix copy every email to a central mailbox), you can copy the email flow via some custom-written dispatcher script set to be run by the MTA on each mail arrival, or use maildrop's filtering functionality. Below is a very simple example with maildrop, in case you want to filter out and deliver to an external email address only email targeted at a specific domain.

If you use maildrop as the local delivery agent, to copy email targeted at a specific domain to another defined email address, use a rule like:

if ( /^From:.*domain\.com/:h ) {
  cc "!someothermail@domain2.com"
}


To use maildrop to forward email incoming from a specific sender towards a local existing email address on the postfix to an external email address, use something like:

if ( /^From: .*linus@mail.example.com.*/ )
{
        dotlock "forward.lock" {
          log "Forward mail"
          to "|/usr/sbin/sendmail linuxbox@collector.example.com"
        }
}

Then to make the filter active, assuming the user has a physical unix mailbox, paste the above into the local user's $HOME/.mailfilter.

What to do if the mails delivered via your Email-Server.com are sent from monitoring and alarm scripts towards many mailboxes that no longer exist after the migration?

To achieve capturing all traffic normally attempted to be sent via the mail server, and forward all served mails towards a single external mail address, you can use the nice capability of postfix to understand PCRE (Perl compatible) regular expressions. Regular expressions in postfix of course have their specifics, so I recommend you take a look at the postfix regexp table documentation here, as well as check the Postfix Regex / Tester / Debugger online tool – useful to validate a regexp you want to implement.

How to use postfix regular expression to do a redirect of all sent emails via your postfix mail relayhost towards external mail servers?

 

In /etc/postfix/main.cf include this line near the bottom or as the last line:

virtual_maps = hash:/etc/postfix/virtual, regexp:/etc/postfix/virtual-regexp

virtual_maps defines the virtual file, which can be used to define any of your virtual domains that you want to appear as present on the local postfix; regexp: loads the file read by postfix, where you can type the regular expression applied to every incoming email via SMTP port 25 or the encrypted submission ports 465 / 587.

So how to redirect all postfix mail to one external email address for later analysis?

Create file /etc/postfix/virtual-regexp

/.+@.+/ external-forward-email@gmail.com

Next, build the map file (this will generate /etc/postfix/virtual-regexp.db):
 

# postmap /etc/postfix/virtual-regexp

This also requires a virtual.db to exist. If it doesn't, create an empty file called virtual and run postmap (the postfix .db generator) again:

# touch /etc/postfix/virtual && postmap /etc/postfix/virtual


Note that in /etc/postfix/virtual you can add the postfix mail domains for which you want the MTA to accept mail as local mail.

In case you need to view all postfix-defined virtual domains configured to accept mail locally on the mail server:
 

$ postconf -n | grep virtual
virtual_alias_domains = mydomain.com myanotherdomain.com
virtual_alias_maps = hash:/etc/postfix/virtual


The applied regexp /.+@.+/ external-forward-email@gmail.com will start forwarding mails immediately after you restart the MTA with:

# systemctl restart postfix


If you want to exclude target mail domains from being captured by the above regexp, place in /etc/postfix/virtual-regexp:

/.+@exclude-domain1.com/ @exclude-domain1.com
/.+@exclude-domain2.com/ @exclude-domain2.com
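Before restarting, you can verify how the regexp table resolves a given address with postmap -q (querying the table the same way postfix would):

# postmap -q "someuser@anydomain.com" regexp:/etc/postfix/virtual-regexp
external-forward-email@gmail.com
# postmap -q "user@exclude-domain1.com" regexp:/etc/postfix/virtual-regexp
@exclude-domain1.com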

Time for a test: send a test email to verify the mail forwarding works as expected.
 

# echo -e "Testing body" | mail -s "testing subject" -r "testing@test.com" whatevertest-user@mail-recipient-domain.com
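To confirm the rewrite and forwarding actually happened, watch the mail log while sending (the exact log path is an assumption: on Debian it is typically /var/log/mail.log, on RPM based distros /var/log/maillog):

# tail -f /var/log/mail.log | grep -i 'external-forward-email@gmail.com'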

Deny DHCP Address by MAC on Linux

Thursday, October 8th, 2020

Deny DHCP addresses by MAC ignore MAC to not be DHCPD leased on GNU / Linux howto

I have not blogged for a long time due to being on a few weeks of vacation and being at home with a small cute baby. However, as a hardcore (and a bit of a dumb) system administrator, I have spent some of my vacation working on bringing up www.pc-freak.net and the other websites hosted as high availability ones, living on 2 webservers running on a Master to Master MySQL Replication backend database. This is all hosted on servers set to run as round robin DNS hosts: one old Lenovo ThinkCentre Edge71 as well as a brand new real Lenovo server, a ThinkServer SD350 with 24 CPUs and 32 GB of RAM.
To assure the Internet connectivity has a good degree of redundancy and ensure the websites hosted on both machines are not going to die if one of the 2 configured Fiber Optics Internet Providers (Bergon.NET) has issues, I've rented another Internet provider line from the VIVACOM Mobile Fiber Internet provider – a 1 Gigabit Fiber Optics line.
Next to that, to guarantee the Database, Webserver, MailServer, Memcached and other running services do not hit downtime due to electricity power outages, two powerful Uninterruptable Power Supply (UPS) FPS Fortron devices are connected, each of which can keep the machines and the connected switches up for about 1 hour.

The machines are configured to use dhcpd to distribute IP addresses, and the main node is set to distribute IPs; however, as there is a local LAN network with more personal work PCs, wireless devices, testing computers and a few virtual machines in the network, the IPs are being distributed in a sequential manner via an ISC DHCP server.

As always, to make everything work properly I had again a bit of a weird, non-standard requirement: to give some of the computers within the network static IP addresses, have the others receive their IPs via DHCP (Dynamic Host Configuration Protocol), and add a filter for the MAC addresses of the machines configured with static IPs, to prevent the DHCP server (daemon) from automatically reassigning IPs to these machines.

After a bit of googling and pondering I've done it on some of the machines. Therefore, to save others the effort of looking around for how to set certain computers' / servers' network card (interface) MAC addresses on the LAN to use static IPs, and instruct the DHCP server to ignore any broadcast IP address lease requests destined to a set of IGNORED MACs, I came up with this small article.

Here is the DHCP server /etc/dhcp/dhcpd.conf from my Debian GNU / Linux (Buster) 10.4:

 

option domain-name "pcfreak.lan";
option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;
max-lease-time 891200;
authoritative;
class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;
subclass "black-hole" 70:e2:81:13:44:11;
subclass "black-hole" 70:e2:81:13:44:12;
subclass "black-hole" 00:16:3f:53:5d:11;
subclass "black-hole" 18:45:9b:c6:d9:00;
subclass "black-hole" 16:45:93:c3:d9:09;
subclass "black-hole" 16:45:94:c3:d9:0d;/etc/dhcpd/dhcpd.conf
subclass "black-hole" 60:67:21:3c:20:ec;
subclass "black-hole" 60:67:20:5c:20:ed;
subclass "black-hole" 00:16:3e:0f:48:04;
subclass "black-hole" 00:16:3e:3a:f4:fc;
subclass "black-hole" 50:d4:f5:13:e8:ba;
subclass "black-hole" 50:d4:f5:13:e8:bb;
subnet 192.168.0.0 netmask 255.255.255.0 {
        option routers                  192.168.0.1;
        option subnet-mask              255.255.255.0;
}
host think-server {
        hardware ethernet 70:e2:85:13:44:12;
        fixed-address 192.168.0.200;
}
default-lease-time 691200;
max-lease-time 891200;
log-facility local7;

To spare you the copy-paste effort, a file with the Deny DHCP Address by MAC Linux configuration is here.
Of course, I have masked the MAC addresses to avoid leaking data, but I guess the idea behind the MAC ADDR ignore is quite clear.

The main configuration doing the trick to ignore certain MAC addresses reachable on the connected hardware switch is like so:

class "black-hole" {
    match substring (hardware, 1, 6);
    ignore booting;
}
subclass "black-hole" 18:45:91:c3:d9:00;


The Deny DHCP Address by MAC approach is described on the isc.org distribution lists here, but it seems documentation on the topic of how to Deny / IGNORE DHCP Addresses by MAC Address on Linux is quite obscure and limited online.

As you can see in the above config, the time after which an IP is freed up and a new IP lease is done from the server is greatly increased; DHCP servers often use a max-lease-time of around 1 hour (3600 seconds). The reason for increasing the lease time to about 10 days is that the IPs in my network change very rarely, so it is a waste of CPU cycles to do frequent leases.

default-lease-time 691200;
max-lease-time 891200;


As you can see, to guarantee resolving always works as expected, I have configured Google Public DNS and OpenDNS IPs:

option domain-name-servers 8.8.8.8, 8.8.4.4, 208.67.222.222, 208.67.220.220;


One hint to make: after setting up all the desired config in the standard config location /etc/dhcp/dhcpd.conf, it is always a good idea to test the configuration before reloading the running dhcpd process.

 

root@pcfreak: ~# /usr/sbin/dhcpd -t
Internet Systems Consortium DHCP Server 4.4.1
Copyright 2004-2018 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Config file: /etc/dhcp/dhcpd.conf
Database file: /var/lib/dhcp/dhcpd.leases
PID file: /var/run/dhcpd.pid
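If the syntax test passes cleanly, reload the running daemon to apply the config (on Debian the dhcpd service is typically named isc-dhcp-server):

root@pcfreak: ~# systemctl restart isc-dhcp-server
root@pcfreak: ~# systemctl status isc-dhcp-server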
 

That's all folks. With this sample config, the IPs under subclass "black-hole", which are local LAN static IP addresses, will never be offered leases anymore from the ISC DHCP server.
Hope this stuff helps someone. Enjoy, and in case you need colocation of a server or website hosting for a really cheap price on these newly set up high availability machines described above, open an inquiry on https://web.pc-freak.net.

 

How to build Linux logging bash shell script write_log, logging with Named Pipe buffer, Simple Linux common log files logging with logger command

Monday, August 26th, 2019

how-to-build-bash-script-for-logging-buffer-named-pipes-basic-common-files-logging-with-logger-command

Logging into a file on GNU / Linux and FreeBSD is as simple as redirecting the output, e.g.:
 

echo "$(date) Whatever" >> /home/hipo/log/output_file_log.txt


or by piping to the tee command

 

echo "$(date) Service has Crashed" | tee -a /home/hipo/log/output_file_log.txt


But what if you need to create a full-featured, robust logging bash shell script function that will run as a daemon continuously as a background process and will output
all content from itself to an external log file?
In the article below, I've given an example logging script in bash, as well as a small example of how a specially crafted Named Pipe buffer can be used that will later store to a file of choice.
Finally, I found it interesting to mention a few words about the logger command, which can be used to log anything to many of the common / general Linux log files stored under /var/log/ – i.e. /var/log/syslog /var/log/user /var/log/daemon /var/log/mail etc.
 

1. Bash script function for logging: write_log()


Perhaps the simplest method is just to use a small function routine in your shell script like this:
 

LOG_FILE='/root/log.txt';
write_log()
{
  while read text
  do
      LOGTIME=`date "+%Y-%m-%d %H:%M:%S"`
      # If log file is not defined, just echo the output
      if [ "$LOG_FILE" == "" ]; then
          echo $LOGTIME": $text";
      else
          LOG=$LOG_FILE.`date +%Y%m%d`
          touch $LOG
          if [ ! -f $LOG ]; then echo "ERROR!! Cannot create log file $LOG. Exiting."; exit 1; fi
          echo $LOGTIME": $text" | tee -a $LOG;
      fi
  done
}

 

  •  Using the function from within the script itself or externally to write out to the defined log file

 

echo "Skipping to next copy" | write_log

 

2. Use Unix named pipes to pass data – a small intro on what a Unix Named Pipe is.


Named Pipe –  a named pipe (also known as a FIFO (First In First Out) for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in OS/2 and Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" and lasts only as long as the process. A named pipe, however, can last as long as the system is up, beyond the life of the process. It can be deleted if no longer used.
Usually a named pipe appears as a file, and generally processes attach to it for IPC.

 

Now that named pipes have been shortly explained for those who hear of them for the first time, it's time to say that a named pipe in unix / linux is created with the mkfifo command; the syntax is straightforward:
 

mkfifo /tmp/name-of-named-pipe


Some older Linux systems with older bash versions and older bash shell scripts used mknod.
The idea behind the logging script is to use a simple named pipe to read input and use the date command to log the exact time the command was executed; here is the script.

 

#!/bin/bash
named_pipe='/tmp/output-named-pipe';
output_named_log='/tmp/output-named-log.txt';

if [ -p $named_pipe ]; then
    rm -f $named_pipe
fi
mkfifo $named_pipe

while true; do
    read LINE <$named_pipe
    echo $(date): "$LINE" >>$output_named_log
done


To get any other script output logged now with a nice current-date-stamped line, write the output content to the logging buffer like so:

 

echo 'Using Named pipes is so cool' > /tmp/output-named-pipe
echo 'Disk is full on a trigger' > /tmp/output-named-pipe

  • Getting the output with the date timestamp

# cat /tmp/output-named-log.txt
Mon Aug 26 15:21:29 EEST 2019: Using Named pipes is so cool
Mon Aug 26 15:21:54 EEST 2019: Disk is full on a trigger


If you wonder why it is better to use named pipes for logging: they perform better (are generally quicker) than Unix sockets.

 

3. Logging files to system log files with logger

 

If you need a quick one-off way to log any message of your choice with a standard logging timestamp, take a look at logger (part of the bsdutils Linux package), a command used to enter messages into the system log. To use it, simply invoke it with a message and it will log your specified output, by default to the common logfile /var/log/syslog.

 

root@linux:/root# logger 'Here we go, logging'
root@linux:/root # tail -n 3 /var/log/syslog
Aug 26 15:41:01 localhost CRON[24490]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:01 localhost CRON[24547]: (root) CMD (chown qscand:qscand -R /var/run/clamav/ 2>&1 >/dev/null)
Aug 26 15:42:20 localhost hipo: Here we go, logging

 

If you have taken some time to read any of the init.d scripts on Debian / Fedora / RHEL / CentOS Linux etc., you will notice the logger logging facility is heavily used.

With logger you can print out messages with different priorities; e.g. if you want to write an error message to the mail.* logs, you can do so with:
 

 logger -i -p mail.err "Output of mail processing script"


To log a normal non-error (notice priority) message with logger to the /var/log/mail.log system log:

 

 logger -i -p mail.notice "Output of mail processing script"


The whole list of valid facility names and priority levels supported by logger (as taken from its current Linux manual) is as follows:

 

FACILITIES AND LEVELS
       Valid facility names are:

              auth
              authpriv   for security information of a sensitive nature
              cron
              daemon
              ftp
              kern       cannot be generated from userspace process, automatically converted to user
              lpr
              mail
              news
              syslog
              user
              uucp
              local0
                to
              local7
              security   deprecated synonym for auth

       Valid level names are:

              emerg
              alert
              crit
              err
              warning
              notice
              info
              debug
              panic     deprecated synonym for emerg
              error     deprecated synonym for err
              warn      deprecated synonym for warning

       For the priority order and intended purposes of these facilities and levels, see syslog(3).

 


If you just want to log to the Linux main log file (be it /var/log/syslog or /var/log/messages, depending on the Linux distribution), just type it, even without any shell quoting:

 

logger The reason to reboot the server currently was a system security update

 

So what else is logger useful for?

In addition to being a good diagnostic tool, you can use logger to test whether all basic system logs and their respective priorities work as expected. This is especially useful, as I've seen on Cloud hosted OpenXEN based servers (working as a SAP consultant) that logging to the basic log files sometimes stops for months or even years due to syslog and syslog-ng problems, hangs caused by other third party scripts and programs.
To test all basic logging and priorities on system logs, use the following logger-test-all-basic-log-logging-facilities.sh shell script.

 

#!/bin/bash
for i in auth authpriv cron daemon kern lpr mail news syslog user uucp \
local0 local1 local2 local3 local4 local5 local6 local7
do
    for k in debug info notice warning err crit alert emerg
    do
        logger -p $i.$k "Test daemon message, facility $i priority $k"
    done
done

Note that on different Linux distribution versions the facility and priority names might differ, so if you get an error like

logger: unknown facility name: mark

check and set the proper naming as described in the logger man page.

 

4. Using a file descriptor that will output to a pre-set log file


Another way is to add the following code to the beginning of the script

#!/bin/bash
exec 3>&1 4>&2
trap 'exec 2>&4 1>&3' 0 1 2 3
exec 1>log.out 2>&1
# Everything below will go to the file 'log.out':

The code explained

          exec 3>&1 4>&2
  •     Saves the file descriptors so they can be restored to whatever they were before redirection, or used themselves to output to whatever they were before the following redirect.

          trap 'exec 2>&4 1>&3' 0 1 2 3
  •     Restores the file descriptors for particular signals. Not generally necessary since they should be restored when the sub-shell exits.

          exec 1>log.out 2>&1
  •     Redirects stdout to file log.out, then redirects stderr to stdout. Note that the order is important when you want them going to the same file: stdout must be redirected before stderr is redirected to stdout.

From then on, to see output on the console (maybe), you can simply redirect to &3. For example:

echo "$(date) : Do print whatever you want logging to &3 file handler" >&3


I initially found out about this very nice bash code from serverfault.com's post "how can I fully log all bash script actions" (but unfortunately on the latest Debian 10 Buster Linux, which comes prebundled with bash shell 5.0.3(1)-release, the code doesn't behave exactly as expected; on older bash versions it works fine).

Sum it up


To shortly summarize: there are plenty of ways to do logging from a shell script – the logger command, a function or a named pipe being the most classic. Sometimes, if a script is supposed to write user or other script output to a common file such as syslog, the logger command can be used, as it is present across most modern Linux distros.
If you have a better way, please drop a comment and I'll add it to this article.

 

Ansible Quick Start Cheatsheet for Linux admins and DevOps engineers

Wednesday, October 24th, 2018

ansible-quick-start-cheetsheet-ansible-logo

Ansible (a configuration management, deployment, and task execution system) is widely used nowadays for mass service deployments on multiple servers and clustered environments like Kubernetes clusters (with multiple pod replicas), virtual swarms running XEN / IPKVM virtualization hosting multiple nodes etc.

Ansible can be used to configure or deploy GNU / Linux tools and services such as Apache / Squid / Nginx / MySQL / PostgreSQL etc. It is pretty much like Puppet (a server / services lifecycle management tool), except it is less complicated to start with, which often makes it the tool of choice for mass deployment (devops) automation.

Ansible is used for multi-node deployments and remote task execution on groups of servers. Its big pro is that it does all its stuff over simple SSH on the remote nodes (servers) and does not require extra services or listening daemons as Puppet does. Combined with Docker containerization, it is used a lot for deployments inside Cloud environments such as Amazon AWS / Google Cloud Platform / SAP HANA / OpenStack etc.

Ansible-Architechture-What-Is-Ansible-Edureka

0. Installing ansible on Debian / Ubuntu Linux


Ansible is a python script and because of that depends heavily on python, so to make it run you will need to have a working python installed on the local and remote servers.

Ansible is as easy to install as running the apt cmd:

 

# apt-get install --yes ansible
 

The following additional packages will be installed:
  ieee-data python-jinja2 python-kerberos python-markupsafe python-netaddr python-paramiko python-selinux python-xmltodict python-yaml
Suggested packages:
  sshpass python-jinja2-doc ipython python-netaddr-docs python-gssapi
Recommended packages:
  python-winrm
The following NEW packages will be installed:
  ansible ieee-data python-jinja2 python-kerberos python-markupsafe python-netaddr python-paramiko python-selinux python-xmltodict python-yaml
0 upgraded, 10 newly installed, 0 to remove and 1 not upgraded.
Need to get 3,413 kB of archives.
After this operation, 22.8 MB of additional disk space will be used.

apt-get install --yes sshpass

 

Installing Ansible on Fedora Linux is done with:

 

# dnf install --yes ansible sshpass

 

On CentOS to install:
 

# yum install --yes ansible sshpass

sshpass needs to be installed only if you plan to use ssh password prompt authentication with ansible.

Ansible is also installable via the python-pip tool; if you need to install a specific version of ansible you have to use pip instead. The package is otherwise available as an installable package on most linux distros.

Ansible has a lot of pros and cons, and there are multiple articles already written by people for and against it in favour of Chef or Puppet, as I found when I recently started learning Ansible. The most important thing to know about Ansible is that, though many things can be done directly using a simple command line, the tool is designed for remote installation of server services using specially prepared .yaml format configuration files. The power of Ansible comes from the use of Ansible Playbooks, which are yaml scripts that tell ansible how to do its activities step by step on the remote server. In this article, I'm giving a quick cheat sheet to start quickly with it.
 

1. Remote commands execution with Ansible
 

The first thing to do is to add the desired hostnames ansible will operate on. This can be done either globally (if you have a number of remote nodes to deploy stuff to periodically) by using /etc/ansible/hosts, or with a custom hosts file for each custom ansible script developed.

a. Ansible main config files

A common ansible /etc/ansible/hosts definition looks something like this:

 

# cat /etc/ansible/hosts
[mysqldb]
10.69.2.185
10.69.2.186
[master]
10.69.2.181
[slave]
10.69.2.187
[db-servers]
10.69.2.181
10.69.2.187
[squid]
10.69.2.184

Hosts to execute on can also be provided via the shell variable $ANSIBLE_HOSTS.
b) Check whether remote hosts are reachable / execute commands on all remote hosts

To test whether your hosts are properly configured in /etc/ansible/hosts you can ping all defined hosts with:

 

ansible all -m ping


ansible-check-hosts-ping-command-screenshot

This makes ansible try to connect to the remote hosts; (if you have properly configured SSH public key authorization) the command should return a success status for every host.

 

ansible all -a "ifconfig -a"


If you don't have SSH keys configured, you can also authenticate with a password (assuming all hosts are configured with the same password) with:

 

ansible all -a "ip addr show" -u hipo --ask-pass


ansible-show-ips-ip-a-command-screenshot-linux

If you have configured a group of hosts via the hosts file, you can also run certain commands on just a certain host group, like so:

 

ansible <host-group> -a <command>
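For example, using the squid group defined in the hosts file above:

ansible squid -a "uptime"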

It is a good idea to always check /etc/ansible/ansible.cfg, which is the system global (main) ansible config file.

c) List defined host groups
 

ansible localhost -m debug -a 'var=groups.keys()'
ansible localhost -m debug -a 'var=groups'

d) Searching remote server variables

 

# Search remote server variables
ansible localhost -m setup -a 'filter=*ipv4*'

 

 

ansible localhost -m setup -a 'filter=ansible_domain'

 

 

ansible all -m setup -a 'filter=ansible_domain'

 

 

# uninstall package on RPM based distros
ansible centos -s -m yum -a "name=telnet state=absent"
# uninstall package on APT distro
ansible localhost -s -m apt -a "name=telnet state=absent"

 

 

2. Debugging – Listing information about remote hosts (facts) and state of a host

 

# All facts for one host
ansible <hostname> -m setup
# Only ansible facts for one host
ansible <hostname> -m setup -a 'filter=ansible_eth*'
# Only facter facts but for all hosts
ansible all -m setup -a 'filter=facter_*'


To save the outputted information per host in separate files in, let's say, ~/ansible/host_facts:

 

ansible all -m setup --tree ~/ansible/host_facts

 

3. Playing with Playbooks deployment scripts

 

a) Syntax Check of a playbook yaml

 

ansible-playbook <File.yaml> --syntax-check


b) Get general info about a playbook, such as what the playbook would do on the remote hosts (tasks to run), and list the hosts defined for a playbook (like the pinging above).

 

ansible-playbook <File.yaml> --list-hosts
ansible-playbook <File.yaml> --list-tasks


To get an idea of what a yaml playbook looks like, here is an example from the official ansible docs that deploys a simple Apache webserver on the remote defined hosts.
 


- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum:
      name: httpd
      state: latest
  - name: write the apache config file
    template:
      src: /srv/httpd.j2
      dest: /etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service:
      name: httpd
      state: started
  handlers:
    - name: restart apache
      service:
        name: httpd
        state: restarted

To give it a quick try, save the file as webserver.yml and give it a run via the ansible-playbook command:
 

ansible-playbook -s playbooks/webserver.yml

 

The -s option instructs ansible to run the play on the remote server with super user (root) privileges.

The power of ansible is in its modules, which are constantly growing over time; the complete set of Ansible supported modules is in its official documentation.

Ansible-running-playbook-Commands-Task-script-Successful-output-1024x536

There is a lot to say about playbooks. Just to give a brief idea: they have their own language constructs such as templates, tasks and handlers, and a playbook can have one or multiple plays inside (for instance, instructions for the deployment of one or more services).

The downside of playbooks is that they're hard to write from scratch and edit, because yaml syntax is much stricter than a normal oldschool sysadmin configuration file.
I got stuck with problems when modifying and writing .yaml files, and I should say the community in #ansible on irc.freenode.net was very helpful in debugging the obscure errors.

yamllint (The YAML Linter tool) comes in handy at times when facing yaml syntax errors; to use it, install via apt:
 

# apt-get install --yes yamllint


a) Running ansible in "dry run" mode – show what ansible would do, without changing anything
 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --check


b) Running playbook with different users and separate SSH keys

 

ansible-playbook playbooks/your_playbook.yml --user ansible-user
 
ansible -m ping hosts --private-key=~/.ssh/keys/custom_id_rsa -u centos

 

c) Running an ansible playbook only for certain hostnames that are part of a bigger host group

 

ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2,host3"


d) Run Ansible on remote hosts in parallel

To run on 10 hosts in parallel:
 

# Run 10 hosts parallel
ansible-playbook <File.yaml> -f 10            


e) Passing variables to .yaml scripts using commandline

Ansible has the ability to pre-define variables in .yml playbooks. These variables can later be passed from the shell cli; here is an example:

# Example of variable substitution: the vars in varsubst.yaml, if present, are defined / replaced from the command line
ansible-playbook playbooks/varsubst.yaml --extra-vars "myhosts=localhost gather=yes pkg=telnet"

 

4. Ansible Galaxy – a Docker Hub like large repository with playbook (script) files

 

Ansible Galaxy has about 10000 active users who contribute ansible automation playbooks in fields such as Development / Networking / Cloud / Monitoring / Database / Web / Security etc.

To install from ansible galaxy use ansible-galaxy

# install the geerlingguy mysql role from galaxy
ansible-galaxy install geerlingguy.mysql


The available packages you can use as templates for your purpose are not as many as with Puppet, as Ansible is younger and not corporate backed like Puppet; anyhow, there are a lot of them and they do cover most basic sysadmin needs for mass deployments. Besides that, there are plenty of other unofficial yaml ansible scripts in various github repos.

Install and use personal Own Cloud on Debian Linux for better shared data security – OwnCloud a Free Software replacement for Google Drive

Thursday, August 23rd, 2018

owncloud-self-hosted-cloud-file-sharing-and-storage-service-for-gnu-linux-howto-install-on-debian

Basically I am against the use of any Cloud type of service, but as nowadays Cloud usage is almost inevitable and most of the time you need some kind of service to store and access your data remotely from multiple devices (such as DropBox, Google Drive, iCloud etc.), and as using some kind of infrastructure to execute high-performance computing is unavoidable, just like the paid Private Cloud services online that are booming nowadays, I decided to research and test what is available as free software in the field of Clouding (your data) 🙂

Undoubtedly, it is a really nice fact that there are Free Software / Open Source alternatives to run your own personal Cloud to store your data from multiple locations in a single point.

The most popular and leading Cloud collaboration service (which is Open Source but unfortunately not under GPLv2 / GPLv3 – e.g. not fully free software) is OwnCloud.

ownCloud is a flexible self-hosted PHP and Javascript based web application used for data synchronization and file sharing (its remote file access capabilities are realized by Sabre/DAV, an open source WebDAV server).
OwnCloud allows the end user to easily store / manage files, calendars, contacts and to-do lists (with user and group administration via OpenID and LDAP); public URLs can easily be created; users can interact with a browser-based ODF (Open Document Format) word processor; and there is a bookmarking and URL shortening service integrated, a Gallery, an RSS feed and document viewer tools such as a PDF viewer etc., which makes it a great alternative to the popular Google Drive, iCloud, DropBox etc.

The main advantage of using a self-hosted Cloud is that your data is hosted and managed by you (on your server and your hard drives) and not by some God-knows-who third party provider such as the above-mentioned ones.
In other words, by using OwnCloud you manage your own data and you don't share it on demand with the security agencies, the CIA, MI6, Mossad … (as it is very likely most publicly offered Cloud storage services keep track of the data stored on them).

The other disadvantage of Cloud computing is that the data stored on such services is usually spread over multiple servers and you can never know for sure where your data is physically located, which in my opinion is way worse than the option of a self-hosted Cloud, where you know where your data resides and you can do whatever you want with it: keep it secret, delete it or share it on your demand.

OwnCloud has clients for the most popular mobile (smart phone) platforms: an Android client is available in the Google Play Store as well as one in Apple iTunes, besides the clients available for FreeBSD OS, the GNOME desktop integration package and Raspberry Pi.

For those who are looking for additional advanced features, an Enterprise version of OwnCloud is also available, aimed at business use and including software support.

Assuming you have a homebrew server or have hired a dedicated or VPS server (such as the ones we provide), installing OwnCloud on GNU / Linux is a relatively easy task and it will take no more than 15 minutes to 2 hours of your life.
In this article I am going to give you specific instructions on how to install on Debian GNU / Linux 9, but installing on RPM based distros is a similar and straightforward process.
 

1. Install MySQL / MariaDB database server backend
 

By default OwnCloud uses SQLite as its backend data storage, but as SQLite stores its data in a file, it quickly becomes slow and is, generally speaking, slower than relational databases such as MariaDB server (or the now almost obsolete MySQL Community server).
Hence in this article I will explain how to install OwnCloud with MariaDB as a backend.

If you don't have it installed already, e.g. it is a new dedicated server, install MariaDB with:
 

server:~# apt-get install --yes mariadb-server


Assuming you're installing on a brand new fresh Linux install, you might want to also run the following set of commands to start the service and secure it.

 

server:~# systemctl start mariadb
server:~# systemctl enable mariadb
server:~# mysql_secure_installation


mysql_secure_installation – is to finalize and secure MariaDB installation and set the root password.
 

2. Create the necessary database and user for OwnCloud on the database server
 

linux:~# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE owncloud CHARACTER SET utf8;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON owncloud.* TO 'owncloud'@'localhost' IDENTIFIED BY 'owncloud_passwd';
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> \q

 

3. Install the necessary Apache + PHP deb packages
 

As of the time of writing this article, on Debian 9.0 the required packages for a working Apache + PHP install for OwnCloud are as follows.

 

server:~# apt-get install --yes apache2 mariadb-server libapache2-mod-php7.0 \
openssl php-imagick php7.0-common php7.0-curl php7.0-gd \
php7.0-imap php7.0-intl php7.0-json php7.0-ldap php7.0-mbstring \
php7.0-mcrypt php7.0-mysql php7.0-pgsql php-smbclient php-ssh2 \
php7.0-sqlite3 php7.0-xml php7.0-zip php-redis php-apcu

 

4. Install Redis to use as a memory cache for an accelerated / better performing ownCloud service


Redis is an in-memory key-value database similar to Memcached, so OwnCloud can use it to cache stored data files. To install the latest redis-server on Debian 9:
 

server:~# apt-get install --yes redis-server

5. Install ownCloud software packages on the server

Unfortunately, the default package repositories on Debian 9 do not provide owncloud server packages, only some owncloud-client packages; that's perhaps because the packages issued by owncloud do not match the debian release cycle.

As of the time of writing this article, the latest available OwnCloud server version package for Debian is OC 10.

a) Add the necessary GPG keys

The repositories to use are provided by owncloud.org; to use them we need to first add the necessary gpg key to verify the binaries have a legit checksum.
 

server:~# wget -qO- https://download.owncloud.org/download/repositories/stable/Debian_9.0/Release.key | sudo apt-key add -

 

b) Add owncloud.org repositories in a separate sources.list file

 

server:~# echo 'deb https://download.owncloud.org/download/repositories/stable/Debian_9.0/ /' | sudo tee /etc/apt/sources.list.d/owncloud.list

 

c) Enable https transports for the apt install tool

 

server:~# apt-get install --yes apt-transport-https

 

d) Update the Debian apt cache list files and install the package

 

server:~# apt-get update

 

server:~# apt-get install --yes owncloud-files

 

By default the owncloud file store location is /var/www/owncloud, but on many servers that location is not really appropriate, because /var/www might be situated on a hard drive partition whose size is not big enough; if that's the case, just move the folder to another partition and create a symbolic link at /var/www/owncloud pointing to it.


6. Create the necessary Apache configuration to make your new self-hosted cloud accessible
 

a) Create Apache config file

 

server:~# vim /etc/apache2/sites-available/owncloud.conf

 

 

Alias /owncloud "/var/www/owncloud/"

<Directory /var/www/owncloud/>
Options +FollowSymlinks
AllowOverride All

<IfModule mod_dav.c>
Dav off
</IfModule>

SetEnv HOME /var/www/owncloud
SetEnv HTTP_HOME /var/www/owncloud

</Directory>
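Note that a config created under sites-available is not active until it is enabled; on Debian this is done with a2ensite (an easy step to forget):

server:~# a2ensite owncloud
server:~# systemctl reload apache2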

b) Enable Mod_Dav (WebDAV) if it is not enabled yet

 

server:~# cd /etc/apache2/mods-enabled
server:~# ln -sf ../mods-available/dav.load
server:~# ln -sf ../mods-available/dav_fs.conf
server:~# ln -sf ../mods-available/dav_fs.load
server:~# ln -sf ../mods-available/dav_lock.load
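Alternatively, instead of creating the symlinks by hand, the same modules can be enabled with the Debian helper a2enmod:

server:~# a2enmod dav dav_fs dav_lock
server:~# systemctl restart apache2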

c) Set proper permissions for /var/www/owncloud to make upload work properly

 

chown -R www-data: /var/www/owncloud/


d) Restart the Apache webserver (to make the new configuration effective)

 

 

server:~# /etc/init.d/apache2 restart


7. Finalize  OwnCloud Install
 

Access the OwnCloud web interface to finish the database creation and set the administrator password for the new self-hosted cloud:
 

http://Your_server_ip_address/owncloud/

By default the web interface is accessible over unencrypted (insecure) http://. It is a recommended practice (if you don't already have an HTTPS SSL certificate installed for the IP or the domain) to install one: either a self-signed certificate, or even better, use the LetsEncrypt CertBot to easily create a valid SSL certificate for your domain for free.
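For example, a minimal sketch with certbot and its Apache plugin (assuming the certbot and python-certbot-apache packages are installed, and substituting your real domain name):

server:~# certbot --apache -d your.domain.com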

 

installing-OwnCloud-Web-Config-User-Pass-interface-Owncloud-10-on-Debian-9-Linux-howto

Just fill in your desired user / pass and provide the database user / password / db name (if required, you can also set a location for the data directory different from the default /var/www/owncloud/data).

Click Finish Setup and That's all folks!

owncloud-server-web-ui-interface

OwnCloud is successfully installed on the server; you can now go and download a mobile app or desktop application for whatever OS you're using and start using it as a Dropbox replacement. At a certain moment you might also want to consult the official UserManual documentation, as you will probably need further information on how to manage your owncloud.

Enjoy !