If you haven't tried it yet: Red Hat, CentOS and the other RPM-based Linux operating systems that use the anaconda installer generate a Kickstart file immediately after the OS installation completes, under /root/{anaconda-ks.cfg,initial-setup-ks.cfg,original-ks.cfg}. Using this Kickstart file as a template, you can automate a Red Hat installation with exactly the same configuration as many times as you like, simply by loading your /root/original-ks.cfg file directly into the RHEL installer.
Here is the official description of Kickstart files from Redhat:
"The Red Hat Enterprise Linux installation process automatically writes a Kickstart file that contains the settings for the installed system. This file is always saved as /root/anaconda-ks.cfg. You may use this file to repeat the installation with identical settings, or modify copies to specify settings for other systems."
Kickstart files contain answers to all the questions normally asked by the text / graphical installation program, such as what time zone the system should use, how the drives should be partitioned, or which packages should be installed. Providing a prepared Kickstart file when the installation begins therefore allows you to perform the installation automatically, without any need for user intervention. This is especially useful when deploying a Red Hat based distro (RHEL / CentOS / Fedora …) on a large number of systems at once, and in general pretty useful if you're in the field of so-called "DevOps" system administration and need to provision a certain OS to a multitude of physical servers, or to easily create and recreate virtual machines with a certain configuration.
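For illustration, here is how a ready Kickstart file is typically handed to anaconda at install time (the URL and disk device below are placeholders, substitute your own): at the installer boot menu you append the inst.ks= option to the kernel command line:

# fetch the kickstart over the network (placeholder URL)
inst.ks=http://deploy.example.com/ks/rhel8-ks.cfg
# or read it from a local disk / USB partition (placeholder device)
inst.ks=hd:sdb1:/anaconda-ks.cfg

Later in this article we will instead inject the file straight into the installer initrd using virt-install's --initrd-inject option.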
1. Create /vmprivate storage directory where Virtual machines will reside
The first step on the Hypervisor host which will hold the future virtual machines is to create the location where they will be stored:
[root@redhat ~]# lvcreate --size 140G --name vmprivate vg00
[root@redhat ~]# mkfs.ext4 -j -b 4096 /dev/mapper/vg00-vmprivate
[root@redhat ~]# mkdir /vmprivate
[root@redhat ~]# mount /dev/mapper/vg00-vmprivate /vmprivate
To view the situation with the Logical Volumes and the VG group names:
[root@redhat ~]# vgdisplay -v|grep -i vmprivate -A7 -B7
Segments 1
Allocation inherit
Read ahead sectors auto
-- currently set to 8192
Block device 253:0
--- Logical volume ---
LV Path /dev/vg00/vmprivate
LV Name vmprivate
VG Name vg00
LV UUID VVUgsf-FXq2-TsMJ-QPLw-7lGb-Dq5m-3J9XJJ
LV Write Access read/write
LV Creation host, time lpgblu01f.ffm.de.int.atosorigin.com, 2021-01-20 17:26:11 +0100
LV Status available
# open 1
LV Size 150.00 GiB
Note that you'll need to have that size physically available on a SAS / SSD hard drive connected to the Hypervisor host.
To have the Virtual Machines storage location directory permanently mounted, add it to /etc/fstab:
/dev/mapper/vg00-vmprivate /vmprivate ext4 defaults,nodev,nosuid 1 2
[root@redhat ~]# echo '/dev/mapper/vg00-vmprivate /vmprivate ext4 defaults,nodev,nosuid 1 2' >> /etc/fstab
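To verify the new fstab line parses correctly without rebooting (a quick sanity check), remount everything and check the mount point:

[root@redhat ~]# mount -a
[root@redhat ~]# df -h /vmprivate

mount -a will complain immediately if the /etc/fstab entry is malformed, which is much nicer than finding out on the next boot.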
2. Install the following set of RPM packages on the Hypervisor hardware host
[root@redhat ~]# yum install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager libguestfs-tools virt-top -y
3. Enable libvirtd on the host
[root@redhat ~]# lsmod | grep -i kvm
[root@redhat ~]# systemctl enable libvirtd
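Assuming the packages above installed cleanly, you can also start the daemon right away and let libvirt sanity-check the host's virtualization capabilities with the virt-host-validate tool shipped with libvirt:

[root@redhat ~]# systemctl start libvirtd
[root@redhat ~]# virt-host-validate

If the kvm / kvm_intel (or kvm_amd) modules showed up in the lsmod output and virt-host-validate reports no FAILs, the host is ready to run KVM guests.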
4. Configure network bridging br0 interface on Hypervisor
In /etc/sysconfig/network-scripts/ifcfg-eth0 (or the ifcfg- file of whatever your physical NIC is named; in the example below it is eno3) you need to include:
NM_CONTROLLED=NO
Next use nmcli, the Red Hat network configurator, to create the bridge (you could use the ip command instead), but since this tool is the Red Hat way to do it, let's do it their way…
[root@redhat ~]# nmcli connection delete eno3
[root@redhat ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0
[root@redhat ~]# nmcli connection modify br0 ipv4.addresses 10.80.51.16/26 ipv4.method manual
[root@redhat ~]# nmcli connection modify br0 ipv4.gateway 10.80.51.1
[root@redhat ~]# nmcli connection modify br0 ipv4.dns 172.20.88.2
[root@redhat ~]# nmcli connection add type bridge-slave autoconnect yes con-name eno3 ifname eno3 master br0
[root@redhat ~]# nmcli connection up br0
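To confirm the bridge came up with the expected addressing (an optional check):

[root@redhat ~]# nmcli connection show
[root@redhat ~]# ip addr show br0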
5. Prepare a working kickstart.cfg file for VM
Below is a sample kickstart file I've used to build a working, fully functional Virtual Machine with Red Hat Enterprise Linux 8.3 (Ootpa).
#version=RHEL8
#install
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=vda
# Use network installation
#url --url=http://hostname.com/rhel/8/BaseOS
##url --url=http://171.23.8.65/rhel/8/os/BaseOS
# Use text mode install
text
#graphical
# System language
#lang en_US.UTF-8
keyboard --vckeymap=us --xlayouts='us'
# Keyboard layouts
##keyboard us
lang en_US.UTF-8
# Root password
rootpw $6$gTiUCif4$YdKxeewgwYCLS4uRc/XOeKSitvDJNHFycxWVHi.RYGkgKctTMCAiY2TErua5Yh7flw2lUijooOClQQhlbstZ81 --iscrypted
# network-stuff
# place ip=your_VM_IP, netmask, gateway, nameserver hostname
network --bootproto=static --ip=10.80.21.19 --netmask=255.255.255.192 --gateway=10.80.21.1 --nameserver=172.30.85.2 --device=eth0 --noipv6 --hostname=FQDN.VMhost.com --onboot=yes
# if you need just localhost initially configured uncomment and comment above
##network --device=lo --hostname=localhost.localdomain
# System authorization information
authconfig --enableshadow --passalgo=sha512 --enablefingerprint
# skipx
skipx
# Firewall configuration
firewall --disabled
# System timezone
timezone Europe/Berlin
# Clear the Master Boot Record
##zerombr
# Repositories
## Add RPM repositories from KS file if necessary
#repo --name=appstream --baseurl=http://hostname.com/rhel/8/AppStream
#repo --name=baseos --baseurl=http://hostname.com/rhel/8/BaseOS
#repo --name=inst.stage2 --baseurl=http://hostname.com ff=/dev/vg0/vmprivate
##repo --name=rhsm-baseos --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/BaseOS/
##repo --name=rhsm-appstream --baseurl=http://172.54.8.65/rhel/8/rhsm/x86_64/AppStream/
##repo --name=os-baseos --baseurl=http://172.54.9.65/rhel/8/os/BaseOS/
##repo --name=os-appstream --baseurl=http://172.54.8.65/rhel/8/os/AppStream/
#repo --name=inst.stage2 --baseurl=http://172.54.8.65/rhel/8/BaseOS
# Disk partitioning information - set proper disk sizing
##bootloader --location=mbr --boot-drive=vda
bootloader --append=" crashkernel=auto tsc=reliable divider=10 plymouth.enable=0 console=ttyS0 " --location=mbr --boot-drive=vda
# partition plan
zerombr
clearpart --all --drives=vda --initlabel
part /boot --size=1024 --fstype=ext4 --asprimary
part swap --size=1024
part pv.01 --size=30000 --grow --ondisk=vda
##part pv.0 --size=80000 --fstype=lvmpv
#part pv.0 --size=61440 --fstype=lvmpv
volgroup s pv.01
logvol / --vgname=s --size=15360 --name=root --fstype=ext4
logvol /var/cache/ --vgname=s --size=5120 --name=cache --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log --vgname=s --size=7680 --name=log --fstype=ext4 --fsoptions="defaults,nodev,noexec,nosuid"
logvol /tmp --vgname=s --size=5120 --name=tmp --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /home --vgname=s --size=5120 --name=home --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /opt --vgname=s --size=2048 --name=opt --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/log/audit --vgname=s --size=3072 --name=audit --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var/spool --vgname=s --size=2048 --name=spool --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
logvol /var --vgname=s --size=7680 --name=var --fstype=ext4 --fsoptions="defaults,nodev,nosuid"
# SELinux configuration
selinux --disabled
# Installation logging level
logging --level=debug
# reboot automatically
reboot
###
%packages
@standard
python3
pam_ssh_agent_auth
-nmap-ncat
#-plymouth
#-bpftool
-cockpit
#-cryptsetup
-usbutils
#-kmod-kvdo
#-ledmon
#-libstoragemgmt
#-lvm2
#-mdadm
-rsync
#-smartmontools
-sos
-subscription-manager-cockpit
%end

# post-install section
%post
# Tune Linux vm.dirty_background_bytes (IMAGE-439)
# The following tuning causes dirty data to begin to be background flushed at
# 100 Mbytes, so that it writes earlier and more often to avoid a large build
# up and improving overall throughput.
echo "vm.dirty_background_bytes=100000000" >> /etc/sysctl.conf
# Disable kdump
systemctl disable kdump.service
%end
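Before feeding the file to the installer it's worth validating its syntax with ksvalidator from the pykickstart package (assuming pykickstart is available in your repositories):

[root@redhat ~]# yum install -y pykickstart
[root@redhat ~]# ksvalidator /root/kickstart.cfg

ksvalidator catches typos in directives and options, though of course it cannot verify that disk sizes or network settings make sense for your machine.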
An important note to make here is the encrypted root password string in the rootpw line; this string can be generated with the openssl or mkpasswd commands:
Method 1: use openssl cmd to generate (md5, sha256, sha512) encrypted pass string
[root@redhat ~]# openssl passwd -6 -salt xyz test
$6$xyz$rjarwc/BNZWcH6B31aAXWo1942.i7rCX5AT/oxALL5gCznYVGKh6nycQVZiHDVbnbu0BsQyPfBgqYveKcCgOE0
Note: passing -1 will generate an MD5 password hash, -5 a SHA-256 hash, and -6 a SHA-512 hash (logically recommended for better security)
Method 2: (md5, sha256, sha512)
[root@redhat ~]# mkpasswd --method=SHA-512 --stdin
The option --method accepts md5, sha-256 and sha-512
Theoretically there is also a Kickstart file generator web interface on Red Hat's site here, however I have never used it myself; instead I use the kickstart.cfg above.
6. Install the new VM with virt-install cmd
To roll out the new preconfigured VM based on the above ks template file, use a one-liner command like the one below:
[root@redhat ~]# virt-install -n RHEL8_3-VirtualMachine --description "CentOS 8.3 Virtual Machine" --os-type=Linux --os-variant=rhel8.3 --ram=8192 --vcpus=8 --location=/vmprivate/rhel-server-8.3-x86_64-dvd.iso --disk path=/vmprivate/RHEL8_3-VirtualMachine.img,bus=virtio,size=70 --graphics none --initrd-inject=/root/kickstart.cfg --extra-args "console=ttyS0 ks=file:/kickstart.cfg"
7. Use a tiny shell script to automate VM creation
For some clarity and better automation, in case you plan to repeat the VM creation, you can prepare a tiny bash shell script:
#!/bin/sh
KS_FILE='kickstart.cfg';
VM_NAME='RHEL8_3-VirtualMachine';
VM_DESCR='CentOS 8.3 Virtual Machine';
RAM='8192';
CPUS='8';
# size is in Gigabytes
VM_IMG_SIZE='140';
ISO_LOCATION='/vmprivate/rhel-server-8.3-x86_64-dvd.iso';
VM_IMG_FILE_LOC='/vmprivate/RHEL8_3-VirtualMachine.img';

virt-install -n "$VM_NAME" --description "$VM_DESCR" --os-type=Linux --os-variant=rhel8.3 --ram="$RAM" --vcpus="$CPUS" --location="$ISO_LOCATION" --disk path="$VM_IMG_FILE_LOC",bus=virtio,size="$VM_IMG_SIZE" --graphics none --initrd-inject=/root/"$KS_FILE" --extra-args "console=ttyS0 ks=file:/$KS_FILE"
A copy of the virt-install.sh script can be downloaded here
Wait for the installation to finish; its progress will be shown on the console. If the installation goes smoothly you should get a login prompt: use the password generated with the openssl tool and test the login, then disconnect from the machine by pressing CTRL + ] and try to log in again via the TTY with:
[root@redhat ~]# virsh list --all
 Id    Name                      State
----------------------------------------
 2     RHEL8_3-VirtualMachine    running
[root@redhat ~]# virsh console RHEL8_3-VirtualMachine
One last thing: I recommend you check the official Kickstart2 documentation on the CentOS website.
In case you later need to destroy the VM and the respective created image file, you can do it with:
[root@redhat ~]# virsh destroy RHEL8_3-VirtualMachine
[root@redhat ~]# virsh undefine RHEL8_3-VirtualMachine
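Note that virsh undefine removes only the domain definition; the disk image stays on disk, so if you also want the space back, remove the image file used in this example explicitly:

[root@redhat ~]# rm -f /vmprivate/RHEL8_3-VirtualMachine.img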
Don't forget to celebrate the success and give this nice article credit by sharing this tutorial with a friend or by placing a link to it from your blog 🙂
Enjoy !
How to configure and enable Xen Linux dedicated server's Virtual machines Internet to work / Enable multiple real IPs and one MAC only in (SolusVM) through NAT routed and iptables
Saturday, June 4th, 2011

I've been hired as a consultant recently to solve a small task on a newly bought Xen based dedicated server.
The server had SolusVM installed on it.
The server was a good hard-iron machine running CentOS Linux with Xen virtualization support enabled.
The Data Center (DC) had provided the client with 4 public IP addresses, whereas the machine was assigned only one MAC address!
The original idea was for the dedicated server to use all 4 IP addresses assigned by the DC, while only one of the IPs had an external, Internet-connected ethernet interface with an assigned MAC address.
In that case using Xen's bridging capabilities was pretty much impossible, and therefore Xen's routing mode had to be used, plus iptables Network Address Translation (IP MASQUERADE).
Overall, the server was to contain 3 virtual machines inside Xen, each installed with a copy of Windows Server 2008.
The scenario I had to deal with is pretty much explained in Xen’s Networking wiki Two Way Routed Network
In this article I will describe as thoroughly as I can how I configured the server to be able to use the 3 qemu virtual machines (running inside Xen) with their respective real, Internet-visible public IP addresses.
1. Enable Proxyarp for the eth0 interface
To enable proxyarp for eth0 on boot time and in real time on the server issue the commands:
[root@centos ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
[root@centos ~]# echo 'net.ipv4.conf.all.proxy_arp = 1' >> /etc/sysctl.conf
2. Enable IP packet forwarding for eth interfaces
This is an important prerequisite in order to make the iptables NAT work.
[root@centos ~]# echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
[root@centos ~]# echo 'net.ipv6.conf.all.forwarding=1' >> /etc/sysctl.conf
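The /etc/sysctl.conf entries only take effect on the next boot; to apply them immediately and verify the runtime values:

[root@centos ~]# /sbin/sysctl -p
[root@centos ~]# /sbin/sysctl net.ipv4.ip_forward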
If you get errors during execution of /etc/init.d/xendomains, like for example:
[root@centos ~]# /etc/init.d/xendomains restart
/etc/xen/scripts/network-route: line 29: /proc/sys/net/ipv4/conf/eth0/proxy_arp: No such file or directory
/etc/xen/scripts/network-route: line 29: /proc/sys/net/ipv6/conf/eth0/proxy_arp: No such file or directory
in order to get rid of the message you will have to edit /etc/xen/scripts/network-route and comment out the lines:
echo 1 >/proc/sys/net/ipv4/conf/${netdev}/proxy_arp
echo 1 > /proc/sys/net/ipv6/conf/eth0/proxy_arp
e.g.
#echo 1 >/proc/sys/net/ipv4/conf/${netdev}/proxy_arp
#echo 1 > /proc/sys/net/ipv6/conf/eth0/proxy_arp
3. Edit /etc/xen/xend-config.sxp, disable ethernet bridging and enable eth0 routing (route mode) and NAT for Xen’s routed mode
Make absolutely sure that in /etc/xen/xend-config.sxp the lines related to bridging are commented.
The lines you need to comment out are:
(network-script network-bridge)
(vif-script vif-bridge)
make them look like:
#(network-script network-bridge)
#(vif-script vif-bridge)
Now that bridging is disabled, let's enable Xen routed network traffic as an alternative to bridged networking.
Find the commented (network-script network-route) and (vif-script vif-route) lines and uncomment them:
#(network-script network-route)
#(vif-script vif-route)
The above commented lines should become:
(network-script network-route)
(vif-script vif-route)
The next step is to enable NAT for routed traffic in Xen (necessary to make routed mode work).
The two commented lines below in /etc/xen/xend-config.sxp should be uncommented, e.g.:
#(network-script network-nat)
#(vif-script vif-nat)
Should become:
(network-script network-nat)
(vif-script vif-nat)
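After all the edits from this step, the networking-related lines of /etc/xen/xend-config.sxp should look roughly like this (only the relevant lines shown, as this walkthrough leaves them):

#(network-script network-bridge)
#(vif-script vif-bridge)
(network-script network-route)
(vif-script vif-route)
(network-script network-nat)
(vif-script vif-nat)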
4. Restart the Xen control daemon and reload the installed Xen Virtual Machine domains
To do so invoke the commands:
[root@centos ~]# /etc/init.d/xend restart
[root@centos ~]# /etc/init.d/xendomains restart
These two commands will probably take about 7 to 10 minutes (at least they took this serious amount of time in my case).
If you think this is too long, then to speed up the procedure of restarting Xen and the qemu-attached virtual machines, restart the whole Linux server, e.g.:
[root@centos ~]# reboot
5. Configure iptables NAT rules on the CentOS host
After the server boots up, you will have to issue the following ifconfig & iptables rules in order to make the iptables NAT work:
echo 1 > /proc/sys/net/ipv4/conf/tap1.0/proxy_arp
/sbin/ifconfig eth0:1 11.22.33.44 netmask 255.255.252.0
/sbin/ifconfig eth0:2 22.33.44.55 netmask 255.255.252.0
/sbin/ifconfig eth0:3 33.44.55.66 netmask 255.255.252.0
/sbin/iptables -t nat -A PREROUTING -d 11.22.33.44 -i eth0 -j DNAT --to-destination 192.168.1.2
/sbin/iptables -t nat -A PREROUTING -d 22.33.44.55 -i eth0 -j DNAT --to-destination 192.168.1.3
/sbin/iptables -t nat -A PREROUTING -d 33.44.55.66 -i eth0 -j DNAT --to-destination 192.168.1.4
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.2 -o eth0 -j SNAT --to-source 11.22.33.44
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.3 -o eth0 -j SNAT --to-source 22.33.44.55
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.4 -o eth0 -j SNAT --to-source 33.44.55.66
In the above ifconfig and iptables rules the IP addresses:
11.22.33.44, 22.33.44.55, 33.44.55.66 are real IP addresses visible from the Internet.
In the above rules eth0:1, eth0:2 and eth0:3 are virtual IPs assigned to the main eth0 interface.
This ifconfig and iptables setup assumes that the 3 Windows virtual machines running inside the Xen dedicated server will be configured to use (local) private network IP addresses:
192.168.1.2, 192.168.1.3 and 192.168.1.4
You will also have to substitute the 11.22.33.44, 22.33.44.55 and 33.44.55.66 with your real IP addresses.
To store the iptables rules permanently you can use the iptables-save command:
[root@centos ~]# /sbin/iptables-save
However I personally did not use this approach to save my inserted iptables rules for later boots; instead I use my small script set_ips.sh to add the virtual interfaces and iptables rules via /etc/rc.local invocation:
If you like the way I have integrated my virtual eths initiation and iptables kernel firewall inclusion, download my script and set it to run in /etc/rc.local, like so:
[root@centos ~]# cd /usr/sbin
[root@centos sbin]# wget https://pc-freak.net/bshscr/set_ips.sh
...
[root@centos sbin]# chmod +x /usr/sbin/set_ips.sh
[root@centos sbin]# echo '/usr/sbin/set_ips.sh' >> /etc/rc.local
Note that you will have to modify my set_ips.sh script to substitute the 11.22.33.44, 22.33.44.55 and 33.44.55.66 with your real IP address.
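In case the download is unavailable, here is a minimal sketch of what such a script boils down to (using the placeholder IPs from step 5; substitute your own public and private addresses):

#!/bin/sh
# set_ips.sh (sketch) - bring up the virtual IPs and insert the NAT rules
# 11.22.33.44 / 22.33.44.55 / 33.44.55.66 are placeholder public IPs
/sbin/ifconfig eth0:1 11.22.33.44 netmask 255.255.252.0
/sbin/ifconfig eth0:2 22.33.44.55 netmask 255.255.252.0
/sbin/ifconfig eth0:3 33.44.55.66 netmask 255.255.252.0
# DNAT: incoming traffic to each public IP goes to the respective VM
/sbin/iptables -t nat -A PREROUTING -d 11.22.33.44 -i eth0 -j DNAT --to-destination 192.168.1.2
/sbin/iptables -t nat -A PREROUTING -d 22.33.44.55 -i eth0 -j DNAT --to-destination 192.168.1.3
/sbin/iptables -t nat -A PREROUTING -d 33.44.55.66 -i eth0 -j DNAT --to-destination 192.168.1.4
# SNAT: outgoing traffic from each VM leaves with its public IP
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.2 -o eth0 -j SNAT --to-source 11.22.33.44
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.3 -o eth0 -j SNAT --to-source 22.33.44.55
/sbin/iptables -t nat -A POSTROUTING -s 192.168.1.4 -o eth0 -j SNAT --to-source 33.44.55.66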
So far so good; one might think that all this should be enough for the Windows virtual machines to be able to connect to the Internet and for Internet requests to reach the virtual machines, but no, it's not!
6. Debugging Limited Connectivity Windows LAN troubles on the Xen dedicated server
Even though the iptables rules were correct and vif-route and vif-nat were enabled inside the Xen node, and everything was correctly configured in the Windows 2008 virtual machines, the virtual machines' LAN cards could not connect properly to the Internet; the Windows LAN interface kept constantly showing Limited Connectivity!, and not even a ping was possible to the gateway configured for the Windows VM host (which in my case was: 192.168.1.1).
You can see the Limited Connectivity error inside Windows in the screenshot below:
Here is also a screenshot of my VNC connection to the virtual machine, with the correct IP settings in the (TCP/IPv4) Properties window:
This kind of Limited Connectivity VM Windows error was really strange and hard to diagnose, thus I started investigating what was wrong with this whole situation and why the virtualized Windows was unable to connect properly to the Internet through the iptables NAT inbound and outbound traffic redirection.
To diagnose the problem, I started by listing the exact network interfaces present on the Xen dedicated server:
[root@centos ~]# /sbin/ifconfig |grep -i 'Link encap' -A 1
eth0 Link encap:Ethernet HWaddr 00:19:99:9C:08:3A
inet addr:111.22.33.55 Bcast:111.22.33.255
Mask:255.255.252.0
--
eth0:1 Link encap:Ethernet HWaddr 00:19:99:9C:08:3A
inet addr:11.22.33.44 Bcast:11.22.33.255
Mask:255.255.252.0
--
eth0:2 Link encap:Ethernet HWaddr 00:19:99:9C:08:3A
inet addr:22.33.44.55 Bcast:22.33.44.255
Mask:255.255.252.0
--
eth0:3 Link encap:Ethernet HWaddr 00:19:99:9C:08:3A
inet addr:33.44.55.66 Bcast:33.44.55.255
Mask:255.255.252.0
--
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
--
tap1.0 Link encap:Ethernet HWaddr FA:07:EF:CA:13:31
--
vifvm101.0 Link encap:Ethernet HWaddr FE:FF:FF:FF:FF:FF
inet addr:111.22.33.55 Bcast:111.22.33.55
Mask:255.255.255.255
I started debugging the issue using the process of elimination.
In the ifconfig output concerning my interfaces, on eth0 I have my primary server IP address 111.22.33.55; this one was working for sure, as I was connected to the server through it at that very moment.
The other virtual IP addresses assigned to the virtual network interfaces eth0:1, eth0:2 and eth0:3 were also configured correctly, as I was able to ping these IPs over the Internet from my desktop machine.
The lo interface was also properly configured, as I could ping the loopback IP 127.0.0.1 without a problem.
The rest of the interfaces displayed by my ifconfig output were: tap1.0, vifvm101.0
After a bit of research, I figured out that they're virtual interfaces belonging to the Xen domains which run the qemu virtual machines with the Windows hosts.
I used tcpdump to check what kind of traffic flows through the tap1.0 and vifvm101.0 interfaces, like so:
[root@centos ~]# tcpdump -i vifvm101.0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vifvm101.0, link-type EN10MB (Ethernet), capture size 96 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
[root@centos ~]# tcpdump -i tap1.0
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on tap1.0, link-type EN10MB (Ethernet), capture size 96 bytes
^C
08:55:52.490249 IP 229.197.34.95.customer.cdi.no.15685 > 192.168.1.2.12857: UDP, length 42
As is also observable in the output of the two tcpdump commands above, I figured out that nothing flows through the vifvm101.0 interface, while some traffic was passing through the tap1.0 interface.
7. Solving the Limited Connectivity Windows Internet network connection problems
As the ifconfig output above reveals, there is no IP address assigned to the tap1.0 interface. Using some guidelines and suggestions from the guys in irc.freenode.net's #netfilter IRC channel, I decided to give it a go and set up an IP address of 192.168.1.1 on tap1.0.
I chose this IP for a reason: this IP address is configured as the Gateway IP address inside the emulated Windows 2008 hosts.
To assign the 192.168.1.1 to tap1.0, I issued:
[root@centos ~]# /sbin/ifconfig tap1.0 192.168.1.1 netmask 255.255.255.0
To test if there was a difference, I logged in to the virtual machine host with gtkvncviewer (which, by the way, is a very nice VNC client for Gnome) and noticed there was an established connection to the Internet inside the virtual machine 😉 I issued a ping to google, which was returned, and opened a browser to really test if everything was fine with the Internet.
Thank God! I could browse and everything was fine 😉
8. Making tap1.0's 192.168.1.1 (the VM hosts' gateway) be set automatically each time the server reboots
After rebooting the server, the tap1.0 assignment of 192.168.1.1 disappeared, thus I had to make the 192.168.1.1 be assigned automatically each time the CentOS server boots.
To give it a try, I decided to place /sbin/ifconfig tap1.0 192.168.1.1 netmask 255.255.255.0 into /etc/rc.local, but this did not work, as the tap1.0 interface gets initialized a while after all the xendomains get initialized.
I tried a few times to set some kind of sleep interval right before the /sbin/ifconfig tap1.0 … IP initialization, but this did not work out either, so I finally abandoned this methodology completely and made tap1.0 get initialized with an IP through the cron daemon.
For that purpose I've created a script to be invoked every two minutes via cron, which checks whether the tap1.0 interface is up and, if not, issues the ifconfig command to initialize the interface and assign the 192.168.1.1 IP to it.
Here is my set_tap_1_iface.sh shell script
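In case the download link below is unavailable, a minimal sketch of what the script needs to do (interface name and gateway IP as used throughout this article) is:

#!/bin/sh
# set_tap_1_iface.sh (sketch) - assign 192.168.1.1 to tap1.0 if missing;
# meant to be run every couple of minutes from cron
IFACE='tap1.0';
IP='192.168.1.1';
NETMASK='255.255.255.0';
# exit quietly if Xen has not (yet) created the interface
/sbin/ifconfig "$IFACE" >/dev/null 2>&1 || exit 0;
# assign the gateway IP only if it is not already configured
if ! /sbin/ifconfig "$IFACE" | grep -q "$IP"; then
    /sbin/ifconfig "$IFACE" "$IP" netmask "$NETMASK";
fi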
To set it up on your host in /usr/sbin issue:

[root@centos ~]# cd /usr/sbin/
[root@centos sbin]# wget https://pc-freak.net/bshscr/set_tap_1_iface.sh
...

In order to set it on cron, so that the tap1.0 initialization happens automatically every two minutes, use the command:

[root@centos ~]# crontab -u root -e
After the crontab editor opens up, place the set_tap_1_iface.sh cron invocation rule:
*/2 * * * * /usr/sbin/set_tap_1_iface.sh >/dev/null 2>&1
and save.
That's all; now your Xen dedicated server and the installed virtual machines with their public Internet IPs will work 😉
If this article helped you to configure your NAT routing in Xen drop me a thanks message, buy me a beer or hire me! Cheers 😉