Just a few weeks ago it was the feast of our temple here in Dobrich, the Holy Trinity. For this event a lot of priests from Dobrich and the district, as well as Metropolitan Kiril of Varna, attended. Father Veliko insisted on ordaining me with a church rank. The church rank I received grants me the opportunity to enter the holy altar and help in the church services (the holy liturgies). My sister captured a couple of snapshots, both video and pictures; the pictures I've uploaded can be seen below.
If you're using GNU/Linux on the desktop, you're already tired of creating backups through your own terminal hacks, and you want to make your life a little easier by automating the backup of your important files through a GUI program, take a look at luckyBackup.
luckyBackup is a GUI frontend to the famous rsync command line backup tool. luckyBackup is available as a package in almost all modern Linux distributions, it's very easy to set up, and it can save you a lot of time, especially if you have to manage a number of workplace / office Linux desktop computers.
luckyBackup is an absolute must-have program for Linux desktop beginners. If you're migrating from the Microsoft Windows realm and you're used to BackupPC, luckyBackup is probably the de facto BackupPC substitute on Linux.
The sad news for Linux GNOME desktop users is that luckyBackup is written in Qt, so using it will load up your notebook a bit.
It is not installed by default, so once a new Linux desktop is installed you will have to install it manually. On Debian and Ubuntu based Linuxes, apt-get it:
debian:~# apt-get install --yes luckybackup
On Fedora and CentOS Linux, install luckyBackup via the yum package manager:
[root@centos ~]# yum -y install luckybackup
luckyBackup is also ported to OpenSuSE, Slackware, Gentoo, Mandriva and ArchLinux. In 2009 luckyBackup won the prize of the SourceForge Community Choice Awards for "best new project".
luckyBackup copies over only the changes you've made to the source directory and nothing more.
You will be surprised when your huge source is backed up in seconds (after the first backup).
Whatever changes you make to the source, including adding, moving, deleting or modifying files / directories, will have the same effect on the destination.
Owner, group, time stamps, links and permissions of files are preserved (unless stated otherwise).
luckyBackup creates multiple backup "snapshots". Each snapshot is an image of the source data that refers to a specific date-time.
Easy rollback to any of the snapshots is possible. Besides that, luckyBackup supports sync (just like rsync) of any directories, keeping the files that were most recently modified on both of them.
Useful if you modify files on more than one PC (using a flash drive) and don't want to bother remembering which one you used last. luckyBackup is also capable of excluding certain files or directories from backups – exclude any file, folder or pattern from the backup transfer.
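Under the hood most of the above is classic rsync behaviour; a rough command-line equivalent of a simple luckyBackup task might look like the line below (the paths and the exclude pattern are just an example, and the exact options luckyBackup passes depend on your task settings):

debian:~# rsync -avh --delete --exclude '*.tmp' /home/user/Documents/ /mnt/backup/Documents/

Here -a preserves owners, groups, permissions, timestamps and links, --delete mirrors deletions from source to destination, and --exclude skips the given pattern – the same things luckyBackup exposes as GUI checkboxes.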
After each operation a logfile is created in your home folder. You can have a look at it any time you want.
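In my case the logs land under ~/.luckyBackup/logs/ (the directory and the log file name below are assumptions from my setup – look around your home folder if they differ), so a quick terminal review of the last run looks like:

debian:~# ls -lt ~/.luckyBackup/logs/ | head
debian:~# less ~/.luckyBackup/logs/your-profile-name.log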
luckyBackup can also run from the command line if you wish not to use the GUI, but you first have to create (through the GUI) the profile that is going to be executed.
Type "luckybackup --help" at a terminal to see usage and supported options.
There is also Tray Notification – visual feedback in the tray area informs you about what is going on.
I needed a handy way to recover some old data from an expired domain that contained a website with some really important texts.
The domain had expired a year before and was not renewed because its holder was not aware his website was gone. In the meantime somebody registered the domain as a way to generate ad profit from it, since the website was receiving about 500 to 1000 visitors per day.
Now I had the task of recovering this website, permanently lost from the internet. I was not able to retrieve any of the old domain's content via Google cache, Yahoo cache, Bing, etc.
It appears most of the search engines store a cached version of a crawled website for only 3–4 months. I also found a search engine, Gigablast, which was claimed to store crawled website data for 1 year, but unfortunately Gigablast did not contain any version of the website I was looking for. Luckily (thank God), after a bit of head-banging I found a website that helped me retrieve at least some parts of the old lost website.
The website which helped me is called the Wayback Machine.
The Wayback Machine, guys, keeps snapshots of most of the domain names on the internet going back a couple of years; here is how the Wayback Machine website describes its own services:
The Internet Archive's Wayback Machine puts the history of the World Wide Web at your fingertips.
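By the way, apart from the web interface, the Internet Archive also exposes a simple availability API that returns the closest stored snapshot of a URL as JSON – handy for recovery jobs like mine. A quick sketch (example.com and the timestamp are placeholders for the real domain and date you are after):

debian:~# curl "http://archive.org/wayback/available?url=example.com&timestamp=20090601"

The JSON reply contains the URL of the closest archived snapshot, which you can then fetch with curl or wget.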
Another handy feature the Wayback Machine provides is checking out how certain websites looked a couple of years ago. Let's say you want to go back in the past and see how Yahoo's website looked 2 years ago.
Just go to web.archive.org, type in yahoo, select a 2-year-old website snapshot and enjoy 😉
It's really funny how ridiculous many websites looked just a few years ago 😉