Archive for the ‘System Administration’ Category

How to Easily Integrate AI Into Bash on Linux with ollama

Monday, January 12th, 2026


AI is entering the computing scene more and more, and that includes the realm of computer and network management. Many proficient admins are already starting to take advantage of it.
AI doesn’t need a GUI, a special cloud dashboard, or a fancy IDE, so for console geeks – sysadmins, system engineers, and DevOps – it can be integrated straight into bash / zsh and used to ease your daily sysadmin tasks a bit.

If you live in the terminal, the most powerful place to add AI is Bash itself. With a few tools and a couple of lines of shell code, you can turn your terminal into an AI-powered assistant that writes commands, explains errors, and helps automate everyday Linux tasks.

No magic. No bloat. Just Unix philosophy with a brain.

1. AI as a Command-Line Tool

Instead of treating AI like a chatbot, treat it like any other CLI utility:

  • stdin → prompt
  • stdout → response
  • pipes → integration

Hardware Requirements of Ollama Server

2. Minimum VM (It runs, but you’ll hate it)

Only use this to test that things work.

  • vCPU: 2
  • RAM: 4 GB
  • Disk: 20 GB (SSD mandatory)
  • Storage type: ( HDD / network storage = bad )
  • Models: phi3, tinyllama

Expect:

  • Very slow pulls
  • Long startup times
  • Laggy responses

a. Recommended VM (Actually usable)

This is the sweet spot for Bash integration and daily CLI use.

  • vCPU: 4–6 (modern host CPU)
  • RAM: 8–12 GB
  • Disk: 30–50 GB local SSD
  • CPU type: host-passthrough (important!)
  • NUMA: off (for small VMs)

Models that feel okay:

  • phi3
  • mistral
  • llama3:8b (slow but tolerable)
 

b. “Feels Good” VM (CPU-only but not painful)

If you want it to feel responsive.

  • vCPU: 8
  • RAM: 16 GB
  • Disk: NVMe-backed storage
  • CPU flags: AVX2 enabled
  • Hugepages: optional but nice

Models:

  • llama3:8b
  • codellama:7b
  • mixtral (slow, but usable)
 

Hypervisor-Specific Advice (Important)

c. KVM / Proxmox (Best choice)

  • CPU type: host
  • Enable AES-NI, AVX, AVX2
  • Use virtio-scsi
  • Cache: writeback
  • IO thread: enabled

d. If running VM on VMware platform

  • Enable Expose hardware-assisted virtualization
  • Use paravirtual SCSI
  • Reserve memory if possible

e. VirtualBox (Not recommended)

  • Poor CPU feature exposure
  • Weak IO performance

Avoid if you can

f. Local AI With Ollama (Recommended)

If you want privacy, low latency, and no API keys, Ollama is currently the easiest way to run LLMs locally on Linux.

3. Install Ollama

# curl -fsSL https://ollama.com/install.sh | sh


Start the service:

# ollama serve

Pull a model (lightweight and fast):

# ollama pull llama3

Test it:

ollama run llama3 "Explain what the ls command does"

If that works, you’re ready to integrate.

To check whether everything is properly set up after installing llama3:

root@haproxy2:~# ollama list
NAME             ID              SIZE      MODIFIED
llama3:latest    365c0bd3c000    4.7 GB    About an hour ago

root@haproxy2:~# systemctl status ollama
● ollama.service – Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; preset: enabled)
     Active: active (running) since Mon 2026-01-12 16:43:30 EET; 15min ago
   Main PID: 37436 (ollama)
      Tasks: 16 (limit: 6999)
     Memory: 5.0G
        CPU: 13min 5.264s
     CGroup: /system.slice/ollama.service
             ├─37436 /usr/local/bin/ollama serve
             └─37472 /usr/local/bin/ollama runner --model /usr/share/ollama/.ollama/models/blobs/sha256-6a0746a1ec1aef3e7ec53>

яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: Flash Attention was auto, set to enabled
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context:        CPU compute buffer size =   258.50 MiB
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: graph nodes  = 999
яну 12 16:45:34 haproxy2 ollama[37436]: llama_context: graph splits = 1
яну 12 16:45:34 haproxy2 ollama[37436]: time=2026-01-12T16:45:34.959+02:00 level=INFO source=server.go:1376 msg="llama runner>
яну 12 16:45:34 haproxy2 ollama[37436]: time=2026-01-12T16:45:34.989+02:00 level=INFO source=sched.go:517 msg="loaded runners>
яну 12 16:45:35 haproxy2 ollama[37436]: time=2026-01-12T16:45:35.000+02:00 level=INFO source=server.go:1338 msg="waiting for >
яну 12 16:45:35 haproxy2 ollama[37436]: time=2026-01-12T16:45:35.001+02:00 level=INFO source=server.go:1376 msg="llama runner>
яну 12 16:55:58 haproxy2 ollama[37436]: [GIN] 2026/01/12 - 16:55:58 | 200 |    3.669915ms |       127.0.0.1 | HEAD     "/"
яну 12 16:55:58 haproxy2 ollama[37436]: [GIN] 2026/01/12 - 16:55:58 | 200 |   42.244006ms |       127.0.0.1 | GET      "/api/>
root@haproxy2:~#

4. Turning AI Into a Bash Command

Let’s create a simple AI helper command called ai.

Add this to your ~/.bashrc or ~/.bash_aliases:

ai() {

  ollama run llama3 "$*"

}

Reload your shell:

$ source ~/.bashrc


Now you can do things like:

$ ai "Write a bash command to find large files"

 

$ ai "Explain this error: permission denied"

$ ai "Convert this sed command to awk"


At this point, AI is already a first-class CLI tool.

5. Using AI to Generate Bash Commands

One of the most useful patterns is asking AI to output only shell commands.

Example:

$ ai "Give me a bash command to recursively find .log files larger than 100MB. Output only the command."

Copy, paste, done.

You can even enforce this behavior with a wrapper:

aicmd() {

  ollama run llama3 "Output ONLY a valid bash command. No explanation. Task: $*"

}

Now:

$ aicmd "list running processes using more than 1GB RAM"

Danger note: always read commands before running them. AI is smart, not trustworthy.
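If you want a small guardrail around that, you can wrap aicmd in a helper that prints the suggested command and only executes it after explicit confirmation (a hypothetical airun sketch, building on the aicmd function above):

airun() {
  local cmd
  cmd=$(aicmd "$*")
  printf 'Suggested: %s\nRun it? [y/N] ' "$cmd"
  read -r answer
  [ "$answer" = "y" ] && eval "$cmd"
}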

6. AI for Explaining Commands and Logs

This is where AI shines.

Pipe output directly into it:

$ dmesg | tail -n 50 | ai "Explain what is happening here"

Or errors:

$ make 2>&1 | ai "Explain this error and suggest a fix"

You’ve just built a terminal-native debugger.
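Depending on your ollama version, piped stdin may or may not get combined with the prompt argument of the ai() helper above. If piping doesn't work for you, a small wrapper that captures stdin explicitly does the same job (a minimal sketch; aiexplain is just an example name):

aiexplain() {
  local context
  context=$(cat)                      # read everything piped in on stdin
  ollama run llama3 "$* : $context"   # prepend your question to the captured output
}

$ dmesg | tail -n 50 | aiexplain "Explain what is happening here"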

7. Smarter Bash History Search

You can even use AI to interpret your intent instead of remembering exact commands:

aih() {

  history | ai "From this bash history, find the best command for: $*"

}

Example:

$ aih "compress a directory into tar.gz"

It’s like Ctrl+R, but semantic.

8. Using AI With Shell Scripts

AI can help generate scripts inline:

$ ai "Write a bash script that monitors disk usage and sends a notification when it exceeds 90%"

You’re not replacing scripting skills – you’re accelerating them.

Think of AI as:

  • a junior sysadmin
  • a documentation search engine
  • a rubber duck that talks back

9. Where Ollama Stores Its Data

Depending on how it runs:

System service (most common)

Models live here:

/usr/share/ollama/.ollama/

Inside:

models/

blobs/

User-only install

~/.ollama/

Since you installed as root + systemd, use the first path.

See What’s Taking Space

# du -sh /usr/share/ollama/.ollama/*

Typical output:

  • models/ → metadata
  • blobs/ → the big files (GBs)

10. Remove Unused Models (Safe Way)

List models Ollama knows about:

# ollama list

Remove a model properly:

# ollama rm llama3

This removes metadata and unreferenced blobs.

Always try this first.

11. Full Manual Cleanup (Hard Reset)

If things are broken, stuck, or you want a clean slate:

Stop Ollama

# systemctl stop ollama

Delete all local models and cache

# rm -rf /usr/share/ollama/.ollama/models

# rm -rf /usr/share/ollama/.ollama/blobs

(Optional but safe)

# rm -rf /usr/share/ollama/.ollama/tmp

Start Ollama again

# systemctl start ollama

Ollama will recreate everything automatically.

Verify Cleanup Worked

# du -sh /usr/share/ollama/.ollama

# ollama list

You should see:

  • Very small disk usage
  • Empty model list

Prevent Disk Bloat (Highly Recommended)

Only pull small models on VMs

Stick to:

  • phi3
  • mistral
  • tinyllama

Remove models you don’t use

# ollama rm modelname

Set a custom data directory (optional)

If /usr is small, move Ollama data:

# systemctl stop ollama

# mkdir -p /opt/ollama-data

# chown -R ollama:ollama /opt/ollama-data

Edit service:

# systemctl edit ollama

Add:

[Service]

Environment=OLLAMA_MODELS=/opt/ollama-data

Then:

# systemctl daemon-reload

# systemctl start ollama

Quick “Nuke It” One-Liner (Use With Care)

Deletes everything Ollama-related:

# systemctl stop ollama && rm -rf /usr/share/ollama/.ollama && systemctl start ollama

API-Based Option (Cloud Models)

If you prefer cloud models (OpenAI, Anthropic, etc.), the pattern is identical:

  • Use curl
  • Pass prompt
  • Parse output

Once AI returns text to stdout, Bash doesn’t care where it came from.
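For the local Ollama case, the same pattern works against its HTTP API, which listens on 127.0.0.1:11434 by default. A minimal sketch with curl and jq (jq is assumed to be installed):

prompt="Explain what the ls command does"
curl -s http://localhost:11434/api/generate \
  -d "{\"model\": \"llama3\", \"prompt\": \"$prompt\", \"stream\": false}" \
  | jq -r .response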

12. Best Practices: Use Shell AI Without Overloading the Machine

Before you go wild:

  • Don’t auto-execute AI output
  • Don’t run AI as root
  • Treat responses as suggestions
  • Version-control important scripts
  • Keep prompts specific
     

AI is powerful — but Linux still assumes you know what you’re doing.

Sum it up

Adding AI to Bash isn’t about replacing skills.
It’s about removing friction.

When AI is easy to use from the command line, it is a great convenience for those who don't want to keep switching to the browser and copy / pasting like crazy.
AI as a command-line tool fits perfectly into the Linux console:

  • composable
  • scriptable
  • optional
  • powerful

Once you’ve used AI from inside your shell for a few hours, going back to browser-based AI chat – like querying ChatGPT – feels… slow and inefficient.
However, keep in mind that ollama is far from perfect and has plenty of downsides; ChatGPT / Grok / DeepSeek will often give you better results. But since ollama is fully isolated and does not depend on external sources, your private queries will not end up in a public AI history database and you won't be tracked.
So everything has its pros and cons. I'm pretty sure this tool and free AI tools like it have a good future ahead of them and will be heavily used by system admins and programmers in the years to come.
The terminal just got smarter. And it didn’t need a GUI to do it.

How to Optimize Debian Linux on old Computers to Get improved overall Speed, Performance and Stability

Tuesday, December 30th, 2025

tuning-debian-linux-to-work-quickly-and-smooth-on-old-pc-laptop-hardware

 

How to optimize Debian 12.12 Linux to stay responsive on old ThinkPad laptops – for example a 2008-era ThinkPad R61 – using Window Maker, zram, an SSD, etc.

Old computers aren't obsolete; they are well worth using if you don't want to spend money on extra hardware.

With the right setup, Debian Linux can run smoothly on hardware that’s more than a decade old. This article walks through a real-world, proven configuration using a classic ThinkPad R61 (Core 2 Duo, 4 GB RAM, SSD), but the principles apply to many older PCs as well.

Why use Debian Linux on old hardware?

Debian Stable is ideal for old hardware because it offers:

  • Low baseline resource usage
  • Long-term stability
  • Minimal background activity
  • Excellent support for lightweight desktops
  • Flexible and well organized and relatively easy to tune

Paired with a minimal window manager, Debian easily outperforms many “lightweight” distros that still ship heavy defaults.

Hardware Baseline PC setup

Test system:

  • Laptop: ThinkPad R61
  • CPU: Intel Core 2 Duo
  • RAM: 4 GB
  • Storage: SATA SSD
  • Graphics: Intel X3100 / NVIDIA NVS 140M
  • Desktop: Window Maker

This is a common configuration for late-2000s business laptops.

1. Desktop Environment: Keep It Simple

Heavy desktop environments is the main factor to slow down an old PC.
Where possible dont use the Desktop environment at all and stick to console.

Recommended:

  • Window Maker (used by myself)
  • Openbox
  • Fluxbox
  • IceWM

Avoid:

  • GNOME
  • KDE Plasma
  • Cinnamon

Window Maker is especially effective: no compositing, no animations, minimal memory usage.

2. Terminal Choice Matters

For console-based applications (games, tools, system utilities), use a terminal that correctly reports its size. Let's say you use xterm:

$ xterm

You can force a usable terminal size like this:

$ xterm -geometry 80x32 &

This avoids common issues with console applications failing due to incorrect terminal dimensions.

Install urxvt (best choice for terminal productivity)

Open a terminal and run:

# apt update
# apt install rxvt-unicode

Optional (if you want tabbed terminal use suckless):

# sudo apt install suckless-tools

  • rxvt-unicode-256color → main terminal (not available in Debian; has to be installed from a third party)
  • rxvt-unicode-256color-perl → Perl extensions (tabs, URL click, etc.) (not available in Debian; installable from a third party)
  • suckless-tools → includes tabbed, which can be used as an alternative for tabs

a. Configure .Xresources

Create or edit ~/.Xresources:

$ vim ~/.Xresources

Example for beautiful setup with tabs, transparency, and fonts:

! Basic appearance
! URxvt.font: xft:FiraCode Nerd Font Mono:size=12
 URxvt.background: [90]#1c1c1c
 URxvt.foreground: #c0c0c0
! URxvt.cursorColor: #ff5555
! URxvt.saveLines: 10000
! URxvt.scrollBar: false
! URxvt.borderLess: true

! Enable tabs using built-in tabbed extension
URxvt.perl-ext-common: default,tabbed

! Tab colors
URxvt.tabbed.tabbar-fg: 15
URxvt.tabbed.tabbar-bg: 0
URxvt.tabbed.tab-fg: 2
URxvt.tabbed.tab-bg: 8

! Keybindings for tabs
! Ctrl+Shift+N → new tab
URxvt.keysym.Control-Shift-N: perl:tabbed:new_tab
! Ctrl+Shift+W → close tab
URxvt.keysym.Control-Shift-W: perl:tabbed:close_tab
! Ctrl+Tab → next tab
URxvt.keysym.Control-Tab: perl:tabbed:next_tab
! Ctrl+Shift+Tab → previous tab
URxvt.keysym.Control-Shift-Tab: perl:tabbed:prev_tab
! Tabs keybindings
URxvt.keysym.Control-N: perl:tabbed:new_tab
URxvt.keysym.Control-W: perl:tabbed:close_tab

 

b. Apply .Xresources changes

Run:

$ xrdb ~/.Xresources

Then launch urxvt:

$ urxvt

  • Ctrl+Shift+N → new tab
  • Ctrl+Shift+W → close tab

c. Optional: Make it even cooler

  1. Install powerline fonts or Nerd Fonts (for fancy prompt icons):

# apt install fonts-firacode

  2. Enable URL clicking and clipboard (already enabled above)

  3. Combine with tmux for extra tabs/panes, session management, and more shortcuts.

3. Retain only last 500MB from journald

Retain only the past 500 MB:

# journalctl --vacuum-size=500M

This is extremely useful, as failing services can sometimes generate tons of unnecessary logs and flood the old machine's hard disk.

4. Reduce journal memory footprint

# vim /etc/systemd/journald.conf

Set Storage=volatile
Set RuntimeMaxUse=50M

# systemctl restart systemd-journald

5. Trim services boot times

# systemd-analyze blame
# systemd-analyze critical-chain

This tells you which services slow down your boot the most.

6. Disable Unnecessary Services

Old systems benefit massively from disabling unused background services.

Check what’s enabled:

# systemctl list-unit-files --state=enabled

Common candidates to disable (if not needed):

# systemctl disable bluetooth
# systemctl disable cups
# systemctl disable avahi-daemon
# systemctl disable ModemManager

Each disabled service saves RAM and CPU cycles.

7. Dirty Page Tuning (Reduces Freezes)

Defaults favor servers, not laptops.

Edit:

# vim /etc/sysctl.conf

Add:

vm.dirty_background_ratio=5
vm.dirty_ratio=10

This forces writeback earlier, preventing sudden stalls.

8. Memory Tuning: zram Done Right

Does zram make sense with 4 GB RAM and an SSD?

Yes it could, but only in moderation.

zram compresses memory in RAM and acts as fast swap. On a Core 2 Duo, compression overhead is small and the benefit is smoother multitasking.

Recommended zram configuration

Install zram-tools deb package:

# apt install zram-tools

Edit:

# vim /etc/default/zramswap

Set:

PERCENT=15

This creates ~600 MB of compressed swap — enough to absorb memory spikes without wasting RAM.
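To apply the change, restart the zramswap service (the unit name shipped by zram-tools on Debian) and check the device:

# systemctl restart zramswap

# zramctl

zramctl (from util-linux) lists the zram device together with its size and compression statistics.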

9. Keep Disk Swap (But Small)

Even with zram, disk swap is useful as a fallback.

Recommended:

  • 1–2 GB swap on SSD
  • zram should have higher priority than disk swap

Check:

# swapon --show

10. Swappiness and Cache Pressure

Tune the kernel to prefer RAM and zram first:

# vim /etc/sysctl.conf

Add:

vm.swappiness=10
vm.vfs_cache_pressure=50

Apply:

# sysctl -p

This prevents early swapping and keeps the system responsive.

11. CPU Governor: A Hidden Performance Win

Older ThinkPads often run conservative CPU governors.

Install tools:

# apt install cpufrequtils

Set a balanced governor:

# echo 'GOVERNOR="ondemand"' | sudo tee /etc/default/cpufrequtils

# systemctl restart cpufrequtils

This allows the CPU to ramp up quickly when needed.

12. Power and Thermal Management (ThinkPad-Specific)

Install TLP:

# apt install tlp
# systemctl enable tlp
# systemctl start tlp

TLP improves:

  • Battery life
  • Thermal behavior
  • SSD longevity

Defaults are usually perfect – no heavy tuning required.

13. Disable Watchdogs (If You Don’t Debug Kernels)

Watchdogs waste cycles on old CPUs.

Check:

# lsmod | grep watchdog

Disable:

# vim /etc/modprobe.d/blacklist.conf

Add:

blacklist iTCO_wdt
blacklist iTCO_vendor_support

Reboot.

14. Reduce systemd Noise

systemd logs aggressively by default.

Edit:

# vim /etc/systemd/journald.conf

Set:

Storage=volatile
RuntimeMaxUse=50M

Then:

# systemctl restart systemd-journald

Less disk I/O, faster boots.

15. Use tmpfs for caching and Volatile Junk

Put garbage in RAM, not SSD.

Edit:

# vim /etc/fstab

Add:

tmpfs /tmp tmpfs noatime,nosuid,nodev,mode=1777,size=256M 0 0

Optional:

tmpfs /var/tmp tmpfs noatime,nosuid,nodev,size=128M 0 0
# mount -a

16. IRQ Balance: Disable It (it might slow down the machine)

On single-socket old laptops, irqbalance can hurt.

Disable:

# systemctl disable irqbalance

Test performance; re-enable if needed.

17. Reduce systemd Timeout Delays

Old laptops often wait forever on dead hardware.

Edit:

# vim /etc/systemd/system.conf

Set:

DefaultTimeoutStartSec=10s
DefaultTimeoutStopSec=10s

18. Strip Kernel Modules You Don’t Use

If you don’t use:

  • FireWire
  • Bluetooth
  • Webcam

Blacklist them:

# vim /etc/modprobe.d/blacklist-extra.conf

Example:

blacklist firewire_ohci
blacklist firewire_core
blacklist uvcvideo
blacklist bluetooth

 

Faster boot, fewer interrupts.

19. X11 Performance Tweaks (Intel Graphics)

Create:

# vim /etc/X11/xorg.conf.d/20-intel.conf

Add:

Section "Device"
Identifier "Intel Graphics"
Driver "intel"
Option "TearFree" "false"
Option "AccelMethod" "sna"
EndSection

20. Disable IPv6 (if not used)

Saves a little RAM and startup time.

Edit:

# vim /etc/sysctl.conf

Add:

net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1

21. Lower Kernel Log Level verbosity

Stop kernel spam.

# dmesg -n 3

Make permanent:

# vim /etc/sysctl.conf

Add:

kernel.printk=3 4 1 3

22. Scheduler Latency (Advanced)

For desktop interactivity:

# vim /etc/sysctl.conf

Add:

kernel.sched_autogroup_enabled=1

Helps UI responsiveness under load.

23. Kill Browser Bloat (Biggest Win)

For Firefox ESR:

  • Disable telemetry
  • Enable tab unloading

browser.sessionstore.interval = 300000

No kernel tweak beats this.

 

a. Enable tab unloading (automatic tab discard)

  1. Open Firefox ESR.

  2. Type about:config in the address bar.

  3. Accept the warning: “This might void your warranty.”

  4. Search for the following preference:

browser.tabs.unloadOnLowMemory

  • Default: false
  • Set to: true
     

This enables Firefox to unload inactive tabs automatically when memory is low.

b. Optional tuning

Some other preferences you can tweak:

  • browser.tabs.maxSuspendedTabs – maximum number of tabs that can be suspended – suggested value: 10–20
  • browser.tabs.autoHide – auto-hide tabs while suspended (older ESR versions) – suggested value: true
  • browser.tabs.loadInBackground – background tabs load in a suspended state – suggested value: true
  • browser.sessionstore.interval – how often the session is saved (ms) – suggested value: 15000

These may vary slightly depending on ESR version.

24. Graphics Considerations

ThinkPad R61 models typically have:

  • Intel X3100 → works well out of the box
  • NVIDIA NVS 140M → use nouveau driver

Recommendations:

  • Avoid proprietary legacy NVIDIA drivers
  • No compositing
  • Simple themes only

25. For Extreme Geeks: Build a Custom Kernel (Optional)

Only if you have plenty of time, a developer's background, and maniacal tendencies 🙂

Benefits:

  • Smaller kernel
  • Faster boot
  • Fewer interrupts

Cost:

  • Maintenance burden

26. Application Choices Matter More Than Tweaks

Keep in mind that application choices matter more than tweaks.
Even the best-tuned system can be ruined by heavy applications.

Recommended software:

  • Browser: Firefox ESR
  • File manager: PCManFM
  • Terminal: xterm, rxvt
  • Editor: nano, geany

Limit browser tabs and disable unnecessary extensions.

27. Things Not to Do

Avoid:

  • Huge zram sizes (50%+)
  • Disabling swap entirely
  • Aggressive kernel “performance hacks”
  • Heavy desktop effects, if you choose to run MATE or a similar GUI environment

Stability beats micro-optimizations.

Final Recommended Configuration

For a ThinkPad R61 with 4 GB RAM and SSD perhaps the best Linux configuration would be:

  • Debian Stable
  • Window Maker
  • zram: 15%
  • SSD swap: 1–2 GB
  • swappiness: 10
  • TLP enabled
  • No compositor
     

This setup would deliver:

  • Smooth multitasking
  • No UI lag
  • Minimal CPU overhead
  • Long-term stability

Conclusion

Old PCs don't necessarily need to be fast, but they can be made to work noticeably faster within their limits if used properly – with the right software and without today's eye-candy nonsense – and then they can still be fully functional.

With Debian, a lightweight window manager, and sensible memory tuning, even 15-year-old hardware remains useful today for common daily tasks – and not only useful, but fun and refreshingly different, especially if you are a sysadmin or a developer who mostly needs a console and a browser.

It gives you another perspective on how to do your computing in a simpler and more minimalistic way.

Of course, don't expect an old PC to be the perfect station for YouTube maniacs, heavy gamers, or complete newbies who don't respect the old machine's limited resources and don't want to take a slightly experimental approach to the PC.

Anyway, by implementing the above-mentioned tweaks, old machines will reward you with reliability and simplicity – something modern, overcomplicated OSes and apps often lack.

Enjoy and Happy Christmas 2025 and Happy New Year 2026 soon ! 🙂

How to Deploy Central DNS on Linux with 3 Authoritative Servers and 1 Recursive Cache

Friday, December 19th, 2025

unbound-centrall-dns-deployment-3-linux-authoritative-servers-and-1-caching-DNS

Centralized DNS is one of those services that must be always UP, predictable, and fast. When it isn’t, everything breaks in strange and unpleasant ways.

This article describes a robust central DNS architecture for Linux environments using:

  • 3 authoritative DNS servers
  • 1 dedicated caching resolver
  • Clear separation between authoritative and recursive roles

Architecture Overview

Roles

Server

Role

Purpose

dns-auth-01

Authoritative

Primary (master)

dns-auth-02

Authoritative

Secondary (slave)

dns-auth-03

Authoritative

Secondary (slave)

dns-cache-01

Recursive / Cache

Internal resolution

Why Separate Roles?

Authoritative and recursive DNS have very different workloads:

  • Authoritative DNS: predictable, zone-based, read-only
  • Recursive DNS: bursty, cache-heavy, user-facing

Mixing them increases attack surface, complexity, and failure impact.

Software Choices

Recommended stack:

  • BIND9 or NSD for authoritative servers
  • Unbound for caching/recursive resolver

Reasons:

  • Mature, well-understood behavior
  • Clear separation of responsibilities
  • Excellent Linux support
  • Scriptable and observable

Network Layout

Example internal layout:

10.0.0.10   dns-auth-01 (master)

10.0.0.11   dns-auth-02 (slave)

10.0.0.12   dns-auth-03 (slave)

10.0.0.20   dns-cache-01 (recursive)

All Linux servers point only to dns-cache-01 as their resolver.

Authoritative DNS Configuration

Master Server (dns-auth-01)

Zones are managed only on the master.

Example BIND zone definition:

zone "example.internal" {

    type master;

    file "/etc/bind/zones/example.internal.zone";

    allow-transfer { 10.0.0.11; 10.0.0.12; };

    also-notify { 10.0.0.11; 10.0.0.12; };

};

Key points:

  • Zone transfers restricted by IP
  • NOTIFY enabled for fast propagation
  • No recursion enabled

Disable recursion:

options {

    recursion no;

    allow-query { any; };

};

Slave Servers (dns-auth-02 / dns-auth-03)

Example configuration:

zone "example.internal" {

    type slave;

    masters { 10.0.0.10; };

    file "/var/cache/bind/example.internal.zone";

};

Slaves:

  • Never edited manually
  • Automatically sync zones
  • Serve as HA and load distribution
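Whenever a zone is edited on the master, it pays off to validate and reload it before the slaves pick it up. A minimal sketch with the standard BIND tools (paths match the example above):

# named-checkconf

# named-checkzone example.internal /etc/bind/zones/example.internal.zone

# rndc reload example.internal

On a secondary you can force an immediate transfer with rndc retransfer example.internal.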

Caching Resolver (dns-cache-01)

Use Unbound as a dedicated recursive resolver.

Unbound Configuration

Minimal but effective setup:

server:

    interface: 0.0.0.0

    access-control: 10.0.0.0/24 allow

    do-ip6: no

    hide-identity: yes

    hide-version: yes

    prefetch: yes

    cache-min-ttl: 300

    cache-max-ttl: 86400

Forward Internal Zones to Authoritative Servers

forward-zone:

    name: "example.internal"

    forward-addr: 10.0.0.10

    forward-addr: 10.0.0.11

    forward-addr: 10.0.0.12

External Resolution

Either:

  • Use root hints (recommended for independence)
  • Or forward to trusted upstream resolvers

Example:

forward-zone:

    name: "."

    forward-addr: 9.9.9.9

    forward-addr: 1.1.1.1
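After any change, validate the Unbound config and run a quick resolution test through the cache (assuming Unbound runs as the standard systemd unit):

# unbound-checkconf

# systemctl restart unbound

# dig @10.0.0.20 example.internal SOA +short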

Client Configuration

All Linux servers use the caching resolver only:

/etc/resolv.conf

nameserver 10.0.0.20

Or via systemd-resolved:

# resolvectl dns eth0 10.0.0.20

Clients never query authoritative servers directly.

High Availability Considerations

Resolver Redundancy

For production environments:

  • Deploy two caching resolvers
  • Use DHCP or systemd-resolved fallback ordering

Example:

nameserver 10.0.0.20

nameserver 10.0.0.21

Zone Management

  • Store zone files in Git
  • Increment SOA serials automatically
  • Deploy via CI/CD or Ansible

DNS changes should be auditable, not ad-hoc.

Security Hardening

Minimum recommendations:

  • No recursion on authoritative servers
  • Firewall restricts TCP/UDP 53
  • TSIG for zone transfers (optional but recommended)
  • Disable version disclosure
  • Monitor query rates

Monitoring & Validation

Useful tools:

  • dig +trace
  • unbound-control stats
  • rndc status
  • Prometheus exporters for BIND/Unbound

DNS that isn’t monitored will fail silently.

Final Thoughts

This setup scales well, is easy to reason about, and avoids the most common DNS mistakes:

  • Mixing recursive and authoritative roles
  • Letting clients query everything directly
  • Overcomplicating zone management

DNS should be boring.
If it’s exciting, something is wrong.

Why SSH Login Is Slow on Linux and How to Fix It

Thursday, December 18th, 2025

openssh_debug-and-fix-slow-connections-to-linux-how-to-original-openssh-logo

Slow SSH logins are one of those problems that don’t look serious at first — until you realize every connection takes 20–30 seconds to respond. The shell eventually appears, but the delay is long enough to break automation, frustrate users, and make admins suspicious of deeper system issues.

This article walks through some common causes of slow SSH logins and how to diagnose them efficiently on Linux servers.

1. DNS Lookups: The Most Common Culprit

By default, sshd performs a reverse DNS lookup on the connecting IP address. If DNS is misconfigured or unreachable, SSH will wait.

How to Test

From the server (measure how many seconds it takes to do ssh to the machine):
 

$ time ssh localhost

If localhost logins are instant but remote logins are slow, suspect DNS.

Check /etc/ssh/sshd_config:

UseDNS yes

Fix

Disable DNS lookups (at least temporarily, to test):

UseDNS no

Then restart SSH:

# systemctl restart sshd

Note: This does not reduce security in most environments and is safe for the majority of servers.

2. Broken or Slow PAM Modules

PAM (Pluggable Authentication Modules) can introduce delays — especially if modules depend on:

  • LDAP
  • Kerberos
  • Network home directories
  • Smart card services

Debug with Verbose SSH

From the client:

$ ssh -vvv user@remote-server

Look for pauses during:

debug1: Authentications that can continue:

Test PAM Delay

Temporarily disable PAM in /etc/ssh/sshd_config:

UsePAM no

Restart SSH and test again.
If login becomes instant, inspect /etc/pam.d/sshd.

3. Entropy Shortage on Virtual Machines

Older kernels or low-activity VMs can run out of entropy, causing SSH key operations to block.

Check Entropy Level

# cat /proc/sys/kernel/random/entropy_avail

Values below 100 may cause delays.

Fix

Install an entropy daemon (on a Debian-based distro):

# apt install haveged

or on CentOS / RHEL / Fedora

# yum install rng-tools

Then start the service:

# systemctl enable --now haveged

4. GSSAPI Authentication Delay

SSH attempts Kerberos authentication even when not used.

Symptom

Delay occurs before password prompt appears.

Fix

Edit /etc/ssh/sshd_config:

GSSAPIAuthentication no

GSSAPICleanupCredentials no

Restart SSH afterward.

5. Slow Home Directory or Shell Initialization

Sometimes SSH is fast, but the shell is slow.

Test with a Minimal Shell

$ ssh user@server /bin/sh

If this is instant, check:

  • .bashrc
  • .profile
  • .bash_logout

Common mistakes:

  • Network calls (curl, wget)
  • Mounted NFS home directories
  • Broken PATH exports
  • Commands waiting on unavailable resources
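A quick way to check how much of the delay is shell initialization rather than sshd itself is to time an interactive shell startup on the server:

$ time bash -ic exit

If this takes several seconds, the problem is in your dotfiles, not in SSH.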

6. Logging and Timing the Login Process

Enable SSH debug logging in /etc/ssh/sshd_config:

LogLevel DEBUG

Then watch logs:

# journalctl -u sshd -f

or:

# tail -f /var/log/auth.log

This allows you to see exactly where the delay happens.

7. A Systematic Troubleshooting Checklist

  1. Disable DNS lookups (UseDNS no)
  2. Disable GSSAPI
  3. Test PAM
  4. Check entropy
  5. Test minimal shell
  6. Review auth logs

In practice, 90% of slow SSH issues are DNS or PAM related.

Conclusion

But wait – there might be much more behind the SSH slowness, such as misconfigured LDAP or other infrastructure in the middle.
Slow SSH logins are rarely "just SSH", and while this guide should help you with sporadic single-server issues, if the problem shows up across a complex infrastructure with multiple SSH servers, it is almost always a symptom of:

  • Network misconfiguration
  • Over-engineered authentication
  • Broken assumptions about system dependencies

Approaching the problem methodically saves hours of guesswork and restores SSH to what it is supposed to be: something that just works, without glitches.

How to Harden a Linux Server in 2025 – Practical Steps for Sysadmins to protect against hackers and bots

Thursday, December 11th, 2025

linux_server-hardening-practical-steps-for-sysadmins-protecting-machine-vs-hackers-and-bots-good-practices

Securing a Linux server has never been more important than it is these days.
With automated attacks, AI-driven exploits, and increasingly complex infrastructure, even a small misconfiguration can lead to a serious breach.
But you don't have to wait to get bumped by a random script kiddie. The good news is that you can mitigate attacks with just a few practical and pretty standard steps that can drastically increase your server's security.

Below is a straightforward, battle-tested hardening guide suitable for Debian, Ubuntu, CentOS, AlmaLinux, and most modern distributions.

1. Keep the System Updated (But Safely)

Outdated packages remain the #1 cause of server compromises.
On Debian/Ubuntu:

# apt update && apt upgrade -y

# apt install unattended-upgrades

On RHEL-based systems:
 

# dnf update -y

# dnf install dnf-automatic

Enable security-only auto-updates where possible. Full auto-updates may break production apps, so use them carefully.

2. Create a Non-Root User and Disable Direct Root Login
 

Attackers constantly brute-force “root”. Avoid letting them.
 

# adduser sysadmin

# usermod -aG sudo sysadmin

Then edit SSH:

# vim /etc/ssh/sshd_config

Set:

PermitRootLogin no

PasswordAuthentication no

And restart:

# systemctl restart sshd


Use SSH keys only.
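Before you set PasswordAuthentication no, make sure your key is already installed, or you will lock yourself out. A minimal sketch, run from your workstation (the hostname is just an example):

$ ssh-keygen -t ed25519

$ ssh-copy-id sysadmin@your-server

$ ssh sysadmin@your-server

Only restart sshd after confirming that key-based login works.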

3. Install a Firewall and Block Everything by Default

UFW (Debian/Ubuntu):

# ufw default deny incoming

# ufw default allow outgoing

# ufw allow ssh

# ufw enable

Firewalld (RHEL/AlmaLinux):

# systemctl enable firewalld --now

# firewall-cmd --permanent --add-service=ssh

# firewall-cmd --reload

Turn off any unneeded ports immediately.

4. Protect SSH with Fail2Ban

Fail2Ban watches log files for suspicious authentication attempts and blocks offenders.

# apt install fail2ban -y

or

# dnf install fail2ban -y

Enable:

# systemctl enable --now fail2ban

To harden the SSH jail, add to /etc/fail2ban/jail.local:

[sshd]

enabled = true

maxretry = 5

bantime = 1h

findtime = 10m
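Apply the jail and confirm it is active:

# systemctl restart fail2ban

# fail2ban-client status sshd

fail2ban-client shows the number of currently banned IPs for the sshd jail.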

5. Enable Kernel Hardening

Install sysctl rules that protect against common attacks:

Create /etc/sysctl.d/99-hardening.conf:

kernel.kptr_restrict = 2

kernel.sysrq = 0

net.ipv4.conf.all.rp_filter = 1

net.ipv4.tcp_synack_retries = 2

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.all.send_redirects = 0

net.ipv4.conf.all.log_martians = 1

Apply:

# sysctl --system

6. Install and Configure AppArmor or SELinux

Mandatory Access Control significantly limits damage if a service gets compromised.

  • Ubuntu / Debian uses AppArmor by default — ensure it's enabled.
  • RHEL, AlmaLinux, Rocky use SELinux — keep it in enforcing mode unless absolutely necessary.

Check SELinux:

# getenforce

You want:

Enforcing – but be prepared to configure all of your machine's services so that they behave and work correctly with SELinux enabled.

7. Scan the System with Lynis

Lynis is the best open-source Linux security auditing tool.

# apt install lynis

# lynis audit system

It provides a security score and actionable suggestions.

8. Use 2FA for SSH (Optional but Highly Recommended)

Use Two Factor Authentication:

a. For free, with the OATH Toolkit – you can read how in my previous article on how to set up 2FA free-software authentication on Linux

or

b. Install Google Authenticator:

# apt install libpam-google-authenticator

# google-authenticator

Enable in /etc/pam.d/sshd:

auth required pam_google_authenticator.so

And in SSH config:

ChallengeResponseAuthentication yes

Restart SSH.

9. Separate Services Using Containers or Systemd Isolation

Even simple servers can benefit from isolation.

Systemd sandbox options:

ProtectSystem=full

ProtectHome=true

ProtectKernelTunables=true

PrivateTmp=true

Add these inside a service file under:

/etc/systemd/system/yourservice.service

It prevents processes from touching parts of the system they shouldn’t.
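If you prefer not to edit the packaged unit file directly, a drop-in override keeps your hardening separate from the vendor unit. A minimal sketch (yourservice is a placeholder):

# systemctl edit yourservice

Then paste:

[Service]
ProtectSystem=full
ProtectHome=true
ProtectKernelTunables=true
PrivateTmp=true

Followed by:

# systemctl daemon-reload
# systemctl restart yourservice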

10. Regular Backups Are Part of Security

A secure server with no backups is a disaster waiting to happen.

Use:

  • rsync
  • borgbackup
  • restic
  • Cloud object storage with versioning

Always encrypt backups and test restore procedures.

Conclusion

Hardening a Linux server in 2025 requires vigilance, good practices, and layered security. No single tool will protect your system — but when you combine SSH security, firewalls, Fail2Ban, kernel hardening, and backups, you eliminate the majority of attack vectors.

 

Speed Up Linux: 10 Underrated Hacks for Faster Workstations and Servers

Tuesday, December 9th, 2025

make-linux-faster-underrated-hacks-cute-tux-penguin-toolbox-logo

 

Most Linux “performance tuning guides” recycle the same tips: disable services, add RAM, install a lighter desktop… and then you end up with a machine that doesn't really feel much quicker.

Below is a collection of practical, field-tested, sysadmin-grade hacks that actually change how responsive your system feels—on desktops, laptops, and even small VPS servers.

Most of these are rarely mentioned (or simply unknown) among novice sysadmins / sysops / system engineers / DevOps, but time has proven them extremely effective.

1. Enable zram (compressed RAM) instead of traditional swap

Most distros ship with a slow swap partition on disk. Enabling zram keeps swap in memory and compresses it on the fly. On low/mid RAM systems, the difference is night and day.

On Debian/Ubuntu:

# apt install zram-tools

# systemctl enable --now zramswap.service

Expect snappier multitasking, fewer freezes, and far less disk thrashing.

2. Speed up SSH connections with ControlMaster multiplexing

If you SSH multiple times into the same server, you can cut connection time to almost zero using SSH socket multiplexing:

Add to ~/.ssh/config:

Host *

    ControlMaster auto

    ControlPath ~/.ssh/cm-%r@%h:%p

    ControlPersist 10m

Your next ssh/scp commands will feel instant.

 

3. Replace grep, find, and ls with faster modern tools

 

The classic GNU tools are fine—until you try the modern replacements:

  • ripgrep (rg) → insanely faster grep
  • fd → user-friendly, colored alternative to find
  • exa or lsd → modern ls with icons, colors, Git info

# apt install ripgrep fd-find exa

You’ll be surprised how often you use them.
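Note that on Debian the fd-find package installs the binary as fdfind, and newer releases ship eza instead of exa, so a couple of aliases in ~/.bashrc smooth things over (a sketch – adjust to what your distro actually packages):

alias fd='fdfind'
alias ls='exa --icons'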

4. Improve boot time via systemd-analyze

Run cmd:

# systemd-analyze blame

This shows what’s delaying your boot. Disable useless services like:
 

# systemctl disable bluetooth.service

# systemctl disable ModemManager

# systemctl disable cups

Great for lightweight servers and old laptops.

5. Make your terminal 2–3× faster with GPU rendering (Kitty/Alacritty)

Gnome Terminal and xterm are CPU-bound and choke when printing thousands of lines.

Switch to a GPU-rendered terminal:

  • Kitty
  • Alacritty

They scroll like butter – even on old ThinkPads.

6. Use eatmydata when installing packages

APT and DPKG spend a lot of time syncing packages safely to disk. If you're working in a Docker container, VM, or test machine where safety is less critical, use:

# apt install eatmydata

# eatmydata apt install PACKAGE

It cuts install time by 30–70% !

7. Enable TCP BBR congestion control (massive network speed boost)

Google’s BBR can double upload throughput on many servers.

Check if supported:

# sysctl net.ipv4.tcp_available_congestion_control

Enable:

# echo "net.core.default_qdisc=fq" | sudo tee -a /etc/sysctl.conf

# echo "net.ipv4.tcp_congestion_control=bbr" | sudo tee -a /etc/sysctl.conf

# sysctl -p

On VPS hosts with slow TCP stacks, this is a cheat code.

8. Reduce SSD wear and speed up disk with noatime

Every time a file is read, Linux updates its “access time.” Useless on most systems.

Edit /etc/fstab and change:

defaults

to:

defaults,noatime

Fewer writes + slightly better I/O performance.
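The new options take effect after a remount (or a reboot):

# mount -o remount /

# findmnt -no OPTIONS /

noatime should now show up in the mount options for /.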

9. Use htop + iotop + dstat to diagnose “mystery lag”

When a Linux system hangs, it’s rarely the CPU. It’s almost always:

  • I/O wait
  • swap usage
  • blocked processes
  • misbehaving snaps or flatpaks

Install the golden server stats trio:

# apt install htop iotop dstat

Check:

  • htop → load, processes, swap
  • iotop → who’s killing your disk
  • dstat -cdlmn → real-time performance overview

Using them can solve 90% of slowdowns in minutes.

10. Speed up your shell prompt dramatically

Fancy PS1 prompts (Git branches, Python envs, emojis) look nice but slow down your shell launch.

Fix it by using async Git status tools:

  • starship prompt
  • powerlevel10k for zsh

Both load instantly even in large repos.

11. Another underrated improvement (if not already done): /home directory storage location

Put your $HOME on a separate partition or disk.

Why?

  • faster reinstalls
  • easier backups
  • safer experiments
  • fewer “oops, I nuked my system” moments – by me or by someone else with an account doing stuff on the system

Experienced sysadmins and well-run corporate servers still do this for a reason.

Final Thoughts

Performance isn’t just about raw hardware—it's about removing the subtle bottlenecks Linux accumulates over time. With the changes above, even a 10-year-old laptop or low-tier VPS feels like a new machine.

How to keep your Linux server Healthy for Years: Hard learned lessons

Friday, November 28th, 2025

how-to-keep-your-linux-servers-healthy-every-year-doctor_tux

I’ve been running Linux servers long enough to watch hardware die, kernels panic, filesystems fill up in the midnight hours, and network cards slowly burn out like old light bulbs.

Over time, you learn that keeping a server alive is less about “perfect architecture” and more about steady discipline – the small habits you build to manage the machines help prevent big disasters.

Here are some practical, battle-tested lessons that keep my boxes running for years with minimal downtime. Most of them were learned the hard way.

1. Monitor Before You Fix – and Fix Before It Breaks

Most Linux disasters come from things we should have noticed earlier – the lack of monitoring. There is a modern-day saying that should become your favourite if you are a sysadmin or DevOps engineer:

"Monitor everything!"

  • The disk that was at 89% yesterday will be at 100% tonight.
  • The log file that grew by 500 MB last week will explode this week.
  • The swap usage creeping from 1% → 5% → 20% means your next heavy task will choke.
  • The unseen failing BIOS CMOS battery
  • The RAID disks degradation etc.

You don't need enterprise monitoring to prevent this. Even simple tools like monit, a plain zabbix-agent -> zabbix-server setup, or any other simplistic scripted monitoring give you basic early warning of issues.

Even a simple cron-job shell one-liner can save you hours of further sh!t:

#!/bin/bash

df -h / | awk 'NR==2 { if($5+0 > 85) print "Disk Alert: / is at " $5 }' \
| mail -s "Disk Warning on $(hostname)" admin@example.com

2. Treat the /etc Directory as Sacred – Like an Expensive Gem

Every sysadmin eventually faces the nightmare of a broken config overwritten by a package update or a hasty command at 2 AM.

To avoid crying later, archive /etc automatically:

# tar czf /root/etc-$(date +%Y-%m-%d).tar.gz /etc


If you prefer the backup to be more sophisticated, you can use etc_backup.sh, my clone of dirs_backup.sh (an old script I wrote to ease backing up specific directories on the filesystem), which you can get here.
Run it weekly via cron.
This little trick has saved me more times than I can count — especially when migrating between Debian releases or recovering from accidental edits.

3. Automate Everything You Have to Do Repeatedly

If you find yourself doing something manually more than twice, script it and forget it.

Examples:

  • rotating logs for misbehaving apps
  • restarting services that occasionally get “stuck”
  • syncing backups between machines
  • cleaning temp directories

Here’s a small example I still use today:

#!/bin/bash

# Kill zombie PHP-FPM children that keep leaking memory

ps aux | grep php-fpm | awk '{if($6 > 300000) print $2}' | xargs -r kill -9

Dirty way to get rid of malfunctioning php-fpm?
Yes. But it works.

4. Backups Don’t Exist Unless You Test Them

It’s easy to feel proud when you write a backup script.
It’s harder – and far more important – to test the restore.

Once a month, or at least once every few months, try restoring a random backup to a dummy VM.
Sometimes a backup fails, or you get something different from what you originally expected – and by testing restores you can guarantee you will not end up crying helplessly later.

A broken backup doesn’t fail quietly – it fails on the day you need it most.

5. Don’t Ignore Hardware – It Ages like Everything Else

Linux might run forever, but hardware doesn’t.

Signs of impending doom:

  • dmesg spam with I/O errors
  • slow SSD response
  • increasing SMART reallocated sectors
  • random freezes without logs
  • sudden network flakiness

Run this monthly:
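A SMART health check with smartmontools covers most of the list above (assuming /dev/sda is your system disk; adjust accordingly):

# apt install smartmontools

# smartctl -H /dev/sda

# smartctl -A /dev/sda | egrep -i 'Reallocated|Pending|Offline_Uncorrectable'

Rising reallocated or pending sector counts mean the disk is on its way out – replace it before it replaces your weekend.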

6. Document Everything (Future You Will Thank Past You)

There are moments when you ask yourself:

“Why did I configure this machine like this?”

If you don’t document your decisions, you’ll have no idea one year later.

A simple markdown file inside /root/notes.txt or /root/README.md is enough.

Document:

  • installed software
  • custom scripts
  • non-standard configs
  • firewall rules
  • weird hacks you probably forgot already

This turns chaos into something you can actually maintain.

7. Keep Things Simple – Complexity Is the Enemy of Uptime

The longer I work with servers, the more I strip away:

  • fewer moving parts
  • fewer services
  • fewer custom patches
  • fewer “temporary” hacks that become permanent

A simple system is a reliable system.
A complex one dies at the worst possible moment.

8. Accept That Failure Will Still Happen

No matter how careful you are, servers will surely:

  • crash
  • corrupt filesystems
  • lose network connectivity
  • inexplicably freeze
  • reboot after a kernel panic

Don’t aim for perfection.Aim for resilience.

If you can restore the machine in under an hour, you're winning and in the clear.

Final Thoughts

Linux is powerful – but it rewards those who treat it with respect and perseverance.
Over many years, I’ve realized that maintaining servers is less about brilliance and more about humble, consistent care and hard work persistence.

I hope this article helps some sysadmins rethink and rebuild their server maintenance strategy in a way that avoids a server meltdown at night hours like 3 AM.

Cheers ! 

 

Migrating Server environments to Docker Containers a brief step-by-step guide

Wednesday, November 12th, 2025

migrating-server--applications-environment-to-docker-containers

In modern IT environments, containerization has become an essential strategy for improving application portability, scalability, and consistency. Docker, as a containerization platform, allows you to package applications and their dependencies into isolated containers that can be easily deployed across different environments. Migrating an existing server environment into Docker containers is a common scenario, and this guide will walk you through the key steps of doing so.

Why Migrate to Docker?

Before we dive into the specifics, let’s briefly understand why you might want to migrate your server environment to Docker:

  • Portability: Containers encapsulate applications and all their dependencies, making them portable across any system running Docker.
  • Scalability: Containers are lightweight and can be scaled up or down easily, offering flexibility to handle varying loads.
  • Consistency: With Docker, you can ensure that your application behaves the same in development, testing, and production.
  • Isolation: Docker containers run in isolation from the host system, minimizing the risk of configuration conflicts.
     

Steps to Migrate a Server Environment into Docker

Migrating an environment of servers into Docker typically involves several steps: evaluating the current setup, containerizing applications, managing dependencies, and orchestrating deployment. Here’s a breakdown:

1. Evaluate the Existing Server Environment

Before migrating, it's essential to inventory the current server environment to understand the following:

  • The applications running on the servers
  • Dependencies (e.g., databases, third-party services, libraries, etc.)
  • Networking setup (e.g., exposed ports, communication between services)
  • Storage requirements (e.g., persistent data, volumes)
  • Security concerns (e.g., user permissions, firewalls)
     

For example, if you're running a web server with a backend database and some caching layers, you'll need to break down these services into their constituent parts so they can be containerized.

2. Containerize the Application

The next step is to convert the services running on your server into Docker containers. Docker containers require a Dockerfile, which is a blueprint for how to build and run the container. Let's walk through an example of containerizing a simple web application.

Example: Migrating a Simple Node.js Web Application

Assume you have a Node.js application running on your server. To containerize it, you need to:

  • Create a Dockerfile
  • Build the Docker image
  • Run the containerized application

2.1. Write a Dockerfile

A Dockerfile defines how your application will be built within a Docker container. Here’s an example for a Node.js application:

# Step 1: Use the official Node.js image as a base image
FROM node:16

# Step 2: Set the working directory inside the container
WORKDIR /usr/src/app

# Step 3: Copy package.json and package-lock.json to the container
COPY package*.json ./

# Step 4: Install dependencies inside the container
RUN npm install

# Step 5: Copy the rest of the application code to the container
COPY . .

# Step 6: Expose the port that the application will listen on
EXPOSE 3000

# Step 7: Start the application
CMD ["npm", "start"]

This Dockerfile:

  1. Uses the official node:16 image as a base.
  2. Sets the working directory inside the container to /usr/src/app.
  3. Copies the package.json and package-lock.json files to the container and runs npm install to install dependencies.
  4. Copies the rest of the application code into the container.
  5. Exposes port 3000 (assuming that’s the port your app runs on).
  6. Defines the command to start the application.

2.2: Build the Docker Image

Once the Dockerfile is ready, build the Docker image using the following command:

# docker build -t my-node-app .

This command tells Docker to build the image using the current directory (.) and tag it as my-node-app.

2.3: Run the Docker Container

After building the image, you can run the application as a Docker container:

# docker run -p 3000:3000 -d my-node-app

This command:

  • Maps port 3000 from the container to port 3000 on the host machine.
  • Runs the container in detached mode (-d).
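You can confirm the container came up and inspect its output (the container ID or name comes from docker ps):

# docker ps

# docker logs <container-id>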

3. Handling Dependencies and Services

If your server environment includes multiple services (e.g., a database, caching layer, or message queue), you'll need to containerize those as well. Docker Compose can help you define and run multi-container applications.

Example: Dockerizing a Node.js Application with MongoDB

To run both the Node.js application and a MongoDB database, you’ll need a docker-compose.yml file.

Create a docker-compose.yml file in your project directory:

    version: '3'

    services:
      web:
        build: .
        ports:
          - "3000:3000"
        depends_on:
          - db
        networks:
          - app-network
      db:
        image: mongo:latest
        volumes:
          - db-data:/data/db
        networks:
          - app-network

    volumes:
      db-data:

    networks:
      app-network:
        driver: bridge

This docker-compose.yml file:

  1. Defines two services: web (the Node.js app) and db (the MongoDB container).
  2. The depends_on directive ensures the database service starts before the web application.
  3. Uses a named volume (db-data) for persistent data storage for MongoDB.
  4. Defines a custom network (app-network) for communication between the two containers.

3.1. Start Services with Docker Compose

To start the services defined in docker-compose.yml, use the following command:

# docker-compose up -d

This command will build the web service (Node.js app), pull the MongoDB image, create the necessary containers, and run them in the background.

4. Manage Data Persistence

Containers are ephemeral by design, meaning data stored inside a container is lost when it stops or is removed. To persist data across container restarts, you’ll need to use volumes.

In the example above, the MongoDB service uses a named volume (db-data) to persist the database data. Docker volumes allow you to:

  • Persist data on the host machine outside of the container.
  • Share data between containers.

To check if the volume is created and inspect its usage, use:

# docker volume ls
# docker volume inspect db-data

5. Networking Between Containers

In Docker, containers communicate with each other over a network. By default, Docker Compose creates a network for each application defined in a docker-compose.yml file. Containers within the same network can communicate with each other using container names as hostnames.

For example, in the docker-compose.yml above:

  • The web container can access the db container using db:27017 as the database URL (MongoDB's default port).

6. Scaling and Orchestrating with Docker Swarm or Kubernetes

If you need to scale your application to multiple instances or require orchestration, Docker Swarm and Kubernetes are the two most popular container orchestration platforms.

Docker Swarm:

Built into Docker, Swarm allows you to easily manage a cluster of Docker nodes and scale your containers across multiple machines. To initialize a swarm:

# docker swarm init

Kubernetes:

Kubernetes is a powerful container orchestration tool that provides high availability, automatic scaling, and management of containerized applications. If you’re migrating a more complex server environment, Kubernetes will offer additional features like rolling updates, automatic recovery, and more sophisticated networking options.

7. Security and Permissions

When migrating to Docker, it's important to pay attention to security best practices, such as:

 

  • Running containers with the least privileges (using the USER directive in the Dockerfile).
  • Using multi-stage builds to keep the image size small and reduce the attack surface.
  • Regularly scanning Docker images for known vulnerabilities using tools like Anchore, Trivy, or Clair.
  • Configuring network isolation for sensitive services.

Conclusion

Migrating a server environment into Docker containers involves more than just running an application in isolation. It requires thoughtful planning around dependencies, data persistence, networking, scaling, and security. By containerizing services with Docker, you can create portable, scalable, and consistent environments that streamline both development and production workflows.

By following the steps outlined in this guide—writing Dockerfiles, using Docker Compose for multi-container applications, and ensuring data persistence—you can successfully migrate your existing server environment into a Dockerized architecture. For larger-scale environments, consider leveraging orchestration tools like Docker Swarm or Kubernetes to manage multiple containerized services across a cluster.

How to Set Up SSH Two-Factor Authentication (2FA) on Linux Without Google Authenticator with OATH Toolkit

Wednesday, November 5th, 2025

install-2-factor-free-authentication-google-authentication-alternative-with-oath-toolkit-linux-logo

Most tutorials online on how to secure your SSH server with Two-Factor Authentication (2FA) will tell you to use Google Authenticator to secure SSH logins.

But what if you don’t want to depend on Google software – maybe for privacy, security, or ideological reasons ?

Luckily, you have a choice thanks to the free OATH Toolkit.
This free and self-hosted alternative ships its own PAM module, libpam-oath, to make 2FA work with the OpenSSH server.

OATH Toolkit is a free software toolkit for One-Time Password (OTP) authentication using the HOTP/TOTP algorithms. The software ships a small set of command-line utilities covering most OTP-related tasks.

In this guide, I'll show you how to implement 2-Factor Authentication (TOTP) for SSH on any Linux system using OATH Toolkit, compatible with privacy-friendly authenticator apps like FreeOTP, Aegis, or andOTP.

It is worth checking out the OATH Toolkit author's original post here, which will give you a bit more insight into the tool.

1. Install the Required Packages

For Debian / Ubuntu systems:

# apt update
# apt install libpam-oath oathtool qrencode
...

For RHEL / CentOS / AlmaLinux:
 

# dnf install pam_oath oathtool

The oathtool command lets you test or generate one-time passwords (OTPs) directly from the command line.

2. Create a User Secret File

libpam-oath uses a file to store each user’s secret key (shared between your server and your phone app).

By default, it reads from:

/etc/users.oath

Let's create it and set proper permissions to secure it:
 

# touch /etc/users.oath

# chmod 600 /etc/users.oath

Now, generate a new secret key for your user (replace hipo with your actual username):
 

# head -10 /dev/urandom | sha1sum | cut -c1-32

This generates a random 32-character key.
Example:

9b0e4e9fdf33cce9c76431dc8e7369fe

Add this to /etc/users.oath in the following format:

HOTP/T30 hipo - 9b0e4e9fdf33cce9c76431dc8e7369fe

HOTP/T30 means Time-based OTP with 30-second validity (standard TOTP).

Replace hipo with the Linux username you want to protect.

3. Add the Key to Your Authenticator App

Now we need to add that secret to your preferred authenticator app.

You can create a TOTP URI manually (to generate a QR code):

$ echo "otpauth://totp/hipo@jericho?secret=\
$(echo 9b0e4e9fdf33cce9c76431dc8e7369fe \
| xxd -r -p | base32)"

You can paste this URI into a QR code generator (e.g., https://qr-code-generator.com) and scan it using FreeOTP, Aegis, or any open TOTP app.
FreeOTP is my preferred app; you can install it via the Apple App Store or Google Play Store.

Alternatively, enter the Base32-encoded secret manually into your app:

# echo 9b0e4e9fdf33cce9c76431dc8e7369fe | xxd -r -p | base32

You can also use the nifty qrencode tool to generate the QR code in ASCII mode right in your terminal, then scan it with your phone’s FreeOTP / Aegis app to add the account and make it ready for use:

# qrencode --type=ANSIUTF8 otpauth://totp/hipo@jericho?secret=$( oathtool --verbose --totp 9b0e4e9fdf33cce9c76431dc8e7369fe --digits=6 -w 1 | grep Base32 | cut -d ' ' -f 3 )\&digits=6\&issuer=pc-freak.net\&period=30

qrencode-generation-of-scannable-QR-code-for-freeotp-or-other-TOTP-auth

qrencode will generate the code. We set the type to ANSI-UTF8 terminal graphics so you can generate this over an ssh login. It can also generate other formats if you were to incorporate this into a web interface; see the qrencode man page for more options.
The rest of the line is what gets encoded into the QR code: a URL of the otpauth type, using time-based one-time passwords (totp). The user is “hipo@jericho“, though PAM will ignore the @jericho part if you are not joined to a domain (I have not tested this with domains yet).

The parameters follow the ‘?‘, and are separated by ‘&‘.

otpauth uses a Base32 encoding of the secret password you created earlier. oathtool will generate the appropriate Base32 string inside the command substitution:

 $( oathtool --verbose --totp 9b0e4e9fdf33cce9c76431dc8e7369fe | grep Base32 | cut -d ' ' -f 3 )

We feed it the secret from earlier and grep for “Base32”. That line of the output contains the Base32 string we need:

Hex secret: 9b0e4e9fdf33cce9c76431dc8e7369fe
Base32 secret: E24ABZ2CTW3CH3YIN5HZ2RXP
Digits: 6
Window size: 0
Step size (seconds): 30
Start time: 1970-01-01 00:00:00 UTC (0)
Current time: 2022-03-03 00:09:08 UTC (1646266148)
Counter: 0x3455592 (54875538)

368784

From there we cut out the third field, “E24ABZ2CTW3CH3YIN5HZ2RXP“, and place it into the URI.

Next, we set the number of digits for the codes to be 6 digits (valid values are 6, 7, and 8). 6 is sufficient for most people, and easier to remember.

The issuer is optional, but useful to differentiate where the code came from.

We set the time period (in seconds) for how long a code is valid to 30 seconds.

Note that Google Authenticator ignores this parameter and uses 30 seconds whether you like it or not.
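Putting all of those pieces together, the fully expanded URI that qrencode ends up encoding looks roughly like this (using the Base32 secret from the example output above):

otpauth://totp/hipo@jericho?secret=E24ABZ2CTW3CH3YIN5HZ2RXP&digits=6&issuer=pc-freak.net&period=30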

4. Configure PAM to Use libpam-oath

Edit the PAM configuration for SSH:

# vim /etc/pam.d/sshd

At the top of the file, add:

auth required pam_oath.so usersfile=/etc/users.oath window=30 digits=6

This tells PAM to check OTP codes against /etc/users.oath.
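For orientation, on a stock Debian / Ubuntu box the top of /etc/pam.d/sshd would then look roughly like this (the @include line is the distribution default that handles the normal password check; only the pam_oath line is newly added):

auth required pam_oath.so usersfile=/etc/users.oath window=30 digits=6
@include common-auth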

5. Configure SSH Daemon to Ask for OTP

Edit the SSH daemon configuration file:
 

# vim /etc/ssh/sshd_config

Ensure these lines are set:
 

UsePAM yes
ChallengeResponseAuthentication yes
AuthenticationMethods publickey,keyboard-interactive
##KbdInteractiveAuthentication no
KbdInteractiveAuthentication yes

N.B.! The KbdInteractiveAuthentication yes directive is necessary on OpenSSH servers newer than version 8.2.
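If you are not sure which OpenSSH release you are running, a quick check is:

$ ssh -V

On most distributions the openssh client and server packages are built from the same release, so this is a reasonable proxy for the sshd version.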

In short, this setup means:
1. The user must first authenticate with their SSH key,
2. Then enter a valid one-time code generated by the TOTP app on their phone (plus whatever else your PAM stack prompts for, e.g. the local / LDAP password).

You can also use Match directives to enforce 2FA under certain conditions but not others.
For example, if you don’t want to be bothered with it when logging in from your LAN,
but do want it from any other network, you could add something like:

Match Address 127.0.0.1,10.10.10.0/8,192.168.5.0/24
    AuthenticationMethods publickey

6. Restart SSH and Test It

Apply your configuration:
 

# systemctl restart ssh
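A small safety net worth mentioning: sshd has a built-in configuration test mode, so you can validate /etc/ssh/sshd_config before (or right after) restarting and avoid locking yourself out over a typo:

# sshd -t

If it prints nothing, the file parses cleanly; otherwise it reports the offending line.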

Now, open a new terminal window and try logging in (don’t close your existing one yet, in case you get locked out):

$ ssh hipo@your-server-ip

You should see something like:

Verification code:

Enter the 6-digit code displayed in your FreeOTP (or similar) app.
If it’s correct, you’re logged in! Hooray ! 🙂

7. Test Locally and Secure the Secrets

If you want to test OTPs manually, you can feed oathtool the hex secret directly:

# oathtool --totp \
9b0e4e9fdf33cce9c76431dc8e7369fe

If the hex form is a bit confusing for starters, I recommend using the few lines below instead, which work with the Base32 form (the one your authenticator app uses):

$ secret_hex="9b0e4e9fdf33cce9c76431dc8e7369fe"
$ secret_base32=$(echo $secret_hex | xxd -r -p | base32)
$ oathtool --totp -b "$secret_base32"
156874

You’ll get the same 6-digit code your authenticator shows – useful for debugging.

If you rerun oathtool again (once the 30-second window has rolled over), you will get a different TOTP code, e.g.:

$ oathtool --totp -b "$secret_base32"
258158


Use this code as the 2FA TOTP value alongside your local user password (password + OTP pair) when you are prompted for it during login.

To prevent anyone with a local account on the system from reading the shared secrets (and thereby undermining the additional 2FA protection),
ensure the secrets file is well protected:

# chown root:root /etc/users.oath
# chmod 600 /etc/users.oath

How to Enable 2FA Only for Certain Users

If you want to force OTP only for admins, create a group ssh2fa:

# groupadd ssh2fa
# usermod -aG ssh2fa hipo

Then modify /etc/pam.d/sshd:

auth [success=1 default=ignore] pam_succeed_if.so \
user notingroup ssh2fa
auth required pam_oath.so usersfile=/etc/users.oath \
window=30 digits=6

Only users in ssh2fa will be asked for a one-time code.
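To double-check who will actually be prompted, list the group’s members (the GID shown here is just an example):

# getent group ssh2fa
ssh2fa:x:1005:hipo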

Troubleshooting

Problem: SSH rejects OTP
Check /var/log/auth.log or /var/log/secure for more details.
Make sure your phone’s time is in sync (TOTP depends on accurate time).
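Two quick diagnostics usually narrow this down – follow the auth log on the server while you attempt a login, and run the client in verbose mode to see which authentication methods are offered and tried:

# tail -f /var/log/auth.log
$ ssh -v hipo@your-server-ip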

Problem: Locked out after restart
Always keep one root session open until you confirm login works.

Problem: Everything seems configured fine, but the TOTP is still not accepted by the remote OpenSSH server.
– Check that the time on the phone / device generating the TOTP code is properly synced to an Internet time server.
– Check that the computer’s system clock is properly synchronized to an Internet time server (via ntpd / chronyd etc.); below is a sample:

hipo@jeremiah:~$ timedatectl status
                   Local time: Wed 2025-11-05 00:39:17 EET
               Universal time: Tue 2025-11-04 22:39:17 UTC
                     RTC time: Tue 2025-11-04 22:39:17
                    Time zone: Europe/Sofia (EET, +0200)
    System clock synchronized: yes
                  NTP service: n/a
              RTC in local TZ: no

Why Choose libpam-oath?

  • 100% Free Software (GPL)
  • Works completely offline / self-hosted
  • Compatible with any standard TOTP app (FreeOTP, Aegis, andOTP, etc.)
  • Doesn’t depend on Google APIs or cloud services
  • Lightweight (just one PAM module and a text file)

Conclusion

Two-Factor Authentication doesn’t have to rely on Google’s ecosystem.
With OATH Toolkit and libpam-oath, you get a simple, private, and completely open-source way to harden your SSH server against brute-force and stolen-key attacks.

Once configured, even if an attacker somehow steals your SSH key or password, they can’t log in without your phone’s one-time code – making your system dramatically safer.