Posts Tagged ‘Files’

How to do Diff (compare files) over SSH between local and remote servers on Linux

Monday, December 16th, 2019


In system administration we often need to compare files located on different servers. Copying the files from Server A to Server B is easy to do, but time consuming, as you have to use some console FTP / SFTP client, scp or sftp, to copy the files from server A to server B and then use the diff command on one of the two systems.

Thankfully there are other ways to do this, using a simple one-liner of diff + ssh or rsync + ssh, and even, for the vim lovers, vimdiff.
In this short article I'll give a few examples of quick ways to compare files between 2 servers via the SSH protocol.

 

1.  Compare files for differences on 2 Linux servers via SSH protocol

 

Assuming you're logged in on the first server, where a certain config is located, and you want to compare it with the one on a remote server reachable via ssh:

 

 diff local-file <(ssh myServer 'cat remote-file')

 

If you're on a server and you want to compare file configurations between 2 remote servers, both accessible over ssh, generally you need something like:

 

diff <(ssh myServer1 'cat /etc/ssh/sshd_config') <(ssh myServer2 'cat /etc/ssh/sshd_config')

 

To compare the file listings of two directories with diff and print only the entries unique to the second one:
 

diff <(/usr/bin/ssh user1@192.168.122.1 'ls /opt/lib/') <(/usr/bin/ssh user2@192.168.122.1 'ls /tmp/') | grep -i ">" | sed 's/> //g'

 

2. Interactively check 2 or more config files and show differences in a vim text editor style

 

The vimdiff package is not installed by default across all Linux distributions, especially on paranoid, security-tightened Linux environments, but on most servers it should either be there or be installable with apt / yum or whatever the package manager is. You only need vimdiff installed on one of the N servers whose configs you want to check.

Here is how to compare an existing file (here the OpenSSH daemon config) across 3 Linux servers in vimdiff:

vimdiff /path/to/file scp://remotehost//path/to/file scp://remotehost2//path/to/file


[Screenshot: vimdiff comparing the sshd_config file across 3 servers]

Note here that the double slash // syntax is mandatory; without it vimdiff will fail to fetch the remote files. Also be aware that the files you want to check should be present in the same directory locations on each of the servers, otherwise you will end up with weird errors.

vimdiff is the Mercedes of comparison, especially for vim UNIX addicts, and thanks to its nice coloring it makes reading the differences between server files very easy.

3. File comparison with diff or vimdiff via SSHFS mount

mkdir remote_path
sshfs user@hostname:/dir/ remote_path
diff -r local_path/file remote_path/file
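
When you're done comparing, release the SSHFS mount again with fusermount (part of the fuse package that sshfs depends on):

fusermount -u remote_path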

4. Comparing local and remote server file listings with diff

Most servers don't have sshfs installed by default, and on servers following PCI high security standards there are other means to compare files on two or more hosts in a minimalistic way. Here is the idea:
 

diff <(ssh remote-host-server find /var/www -printf '"%8s %P\n"') \
     <(find /var/www -printf '%8s %P\n')

5. Comparing file contents in local and remote server directories with rsync

The best UNIX tool to compare multiple files and directories across local and remote servers, or a mixture of both, is our lovely rsync
together with SSH; rsync comes with the --dry-run (-n) option to test what rsync would do without changing anything.

To compare files over the SSH protocol with rsync between the local and a remote server:

rsync -rvnc root@10.10.10.50:/var/www/html/phpcode /var/www/html/phpcode


To compare 2 remote hosts:

rsync -rvnc root@187.50.200.73:/var/www/html/phpcode/ root@192.168.5.50:/var/www/html/phpcode 


To compare more hosts, even a mixture of local and remote servers, do:

rsync -rvnc root@187.50.200.73:/var/www/html/phpcode/  \
root@192.168.5.50:/var/www/html/phpcode  \
root@192.168.5.70:/var/www/html/phpcode \
./var/www/html/phpcode

The rsync options given are:

r = recursive,
v = verbose,
n = dry-run,
c = checksum (compare files by checksum rather than by mod-time and size)
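
If such dry-run comparisons need to be repeated against many hosts, a small shell loop saves typing; here is a minimal sketch (the host IPs and the path are the example values from above):

#!/bin/bash
# dry-run rsync checksum comparison of a local directory
# against the same path on several remote hosts
HOSTS="10.10.10.50 192.168.5.50 192.168.5.70"
DIR=/var/www/html/phpcode

for host in $HOSTS; do
    echo "=== differences against $host ==="
    rsync -rvnc "root@${host}:${DIR}/" "${DIR}/"
done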

Add gzip compression to optimize web server served files in Apache, Nginx and LiteSpeed

Wednesday, November 15th, 2017


What is GZIP compression and why do you need it?


  • What is gzip? – On Linux / Unix gzip is used to compress files so they take up less space when they're transferred from server to server via the network, in order to speed up file transfer.
  • Usually gzipped files are named filename.gz
  • Why is GZIP compression important to be enabled on servers? Because it reduces the size of the file transferred (served) by the webserver to the client browser.
  • The effect of this is a faster transfer of the file and increased overall web user performance.


[Illustration: how gzip works with NGINX]

Most webservers / websites online currently use gzipping of some sort; those who still don't use it have websites which are up to 40% slower than those of competitor websites.

How to enable GZIP Compression on Apache Webserver

The easiest way for most people out there who run their websites on shared hosting is to add the following Apache directives to the dynamically loadable .htaccess file:
 

<IfModule mod_gzip.c>
mod_gzip_on Yes
mod_gzip_dechunk Yes
mod_gzip_item_include file \.(html?|txt|css|js|php|pl)$
mod_gzip_item_include handler ^cgi-script$
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/x-javascript.*
mod_gzip_item_exclude mime ^image/.*
mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</IfModule>

 

You can put a number of other useful things in .htaccess; the file should already exist on most webhostings with a cPanel or Kloxo kind of administration management interface.

Once the code is included in .htaccess you can flush the site cache.
To test whether the just-added HTTP gzip compression works for the webserver you can use The Online HTTP Compression test.
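
A quick alternative check from the command line is curl; if compression works, the response headers will include Content-Encoding: gzip (replace the URL with your own site):

curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://your-site.example.com/ | grep -i content-encoding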

If for some reason after adding this code you don't reap the benefits of gzipped content served by the webserver, you can alternatively try adding to .htaccess:

 

<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
</IfModule>

 


How to Enable GZIP HTTP file compression on NGINX Webserver?

Open the NGINX configuration file and add the following directives (they usually belong inside the http {} block):

 

gzip on;
gzip_comp_level 2;
gzip_http_version 1.0;
gzip_proxied any;
gzip_min_length 1100;
gzip_buffers 16 8k;
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

 

# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6]\.(?!.*SV1)";

# Add a vary header for downstream proxies to avoid sending cached gzipped files to IE6
gzip_vary on;

Enable HTTP file Compression on LiteSpeed webserver

In the configuration under the TUNING section check whether "enable compression" is enabled; if it is not, choose "Edit"
and turn it on.

[Screenshot: enabling gzip compression / compressible types in the LiteSpeed admin console]

What are the speed benefits of using HTTP gzip compression?

By using HTTP gzip compression you can save your network and clients about 50 to 70% of the original file size in transferred data.
This means less time for loading pages and fetched files and a decrease in used bandwidth.

[Diagram: the effect of gzip compression]

Very handy tools to test whether HTTP compression is enabled, as well as how well your website is optimized for speed, are Google PageSpeed Insights
as well as GTMetrix.com.

Where are Apache log files on my server – Apache log file locations on Debian / Ubuntu / CentOS / Fedora and FreeBSD ?

Tuesday, November 7th, 2017


Where are Apache log files on my server?

1. Finding the Linux / FreeBSD operating system distribution and version

Before finding the location of the Apache log files, it is useful to check what Linux operating system version the remote / local machine is running.

Hence, the first thing to do when you log in to a remote Linux server is to check what kind of GNU / Linux you're dealing with:

cat /etc/issue
cat /etc/issue.net


On most GNU / Linux distributions this should give you enough information about the exact Linux distribution and version the remote server is running.

You will get outputs like

# cat /etc/issue
SUSE LINUX Enterprise Server 10.2 Kernel \r (\m), \l

or

# cat /etc/issue
Debian GNU/Linux 8 \n \l

If the remote Linux is Fedora, look for the fedora-release file:

cat /etc/fedora-release
Fedora release 7 (Moonshine)

The proposed freedesktop.org standard, which came with the introduction of systemd across all Linux distributions, is:

/etc/os-release

 

# cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 8 (jessie)"
NAME="Debian GNU/Linux"
VERSION_ID="8"
VERSION="8 (jessie)"
ID=debian
HOME_URL="http://www.debian.org/"
SUPPORT_URL="http://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
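
On systemd-based distros hostnamectl also reports the operating system, and if the LSB tools are installed, so does lsb_release:

# lsb_release -a
# hostnamectl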


Once we know what kind of Linux distribution we're dealing with, we can proceed with looking up the standard location of the Apache log files:

2. Apache log file location for Fedora / CentOS / RHEL and other RPM based distributions

RHEL / Red Hat / CentOS / Fedora Linux Apache access log file location:
 

/var/log/httpd/access_log


3. Apache log file location for Debian / Ubuntu and other deb based Linux distributions

Debian / Ubuntu Linux Apache access log file location

/var/log/apache2/access.log


4. Apache log file location for FreeBSD

FreeBSD Apache access log file location:

/var/log/httpd-access.log


5. Finding custom Apache access log locations
 

If for some reason the system administrator on the remote server has changed the default path of the distribution, you can find the custom configured log files through:

a) On Debian / Ubuntu / deb distros:

debian:~# grep CustomLog /etc/apache2/apache2.conf


b) On CentOS / RHEL / Fedora Linux RPM based ones:

[root@centos:  ~]# grep CustomLog /etc/httpd/conf/httpd.conf


c) On FreeBSD OS

 

freebsd# grep CustomLog /usr/local/etc/apache24/httpd.conf
 # a CustomLog directive (see below).
    #CustomLog "/var/log/httpd-access.log" common
    CustomLog "/var/log/httpd-access.log" combined

How to use zip command to archive directory and files in GNU / Linux

Monday, November 6th, 2017


How to zip a directory or files with the ZIP command in Linux or any other Unix-like OS?

Why would you want to ZIP files in Linux if you already have the gzip and bzip2 compression algorithms? Well, for historical reasons: .ZIP is widely supported across virtually all major operating systems like Unix, Linux, VMS, MSDOS, OS/2, Windows NT, Minix, Atari and Macintosh, FreeBSD, OpenBSD, NetBSD, Amiga and Acorn RISC, and many other operating systems.

Given that the zip command line tool is available across most GNU / Linux distributions and WinZIP is available across almost all versions of Windows, the reason you might need to create a .zip archive might be simply to transfer files from your Linux / FreeBSD desktop system to a friend running M$ Windows.

So below is how to archive recursively files inside a directory using zip command:
 

 $ zip -r myvacationpics.zip /home/your-directory/your-files-pictures-text/

 


Or you can write it shorter, omitting the .zip, as by default the zip command creates .zip files:

 

$ zip -r whatever-zip-file-name /home/your-directory/your-files-pictures-text/

 


The -r tells zip to recurse into directories (i.e. archive all files and directories inside your-files-pictures-text/).

If you need to recursively archive only files with a certain extension such as .txt inside the current directory:

 

$ zip -R my-zip-archive.zip '*.txt'


The above command would archive any .txt found inside your current directory. If the zip command is for example issued from /home/hipo, all files with a .txt extension found in /home/hipo/directory1, /home/hipo/directory2, /home/hipo/directory2/directory3/directory4 and all contained subdirs will be added to the archive.
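
To double-check what actually ended up inside the archive before sending it out, list its contents with unzip:

$ unzip -l myvacationpics.zip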

For the Linux desktop users that are lazy and want to zip files without much typing, take a look at the PeaZip for Linux 7Z / ZIP GUI interface tool.

 

How to use find command to find files created on a specific date / Find files with a specific size on GNU / Linux

Monday, October 16th, 2017

How to use find command to find files created on a specific date on GNU / Linux?

 

The easiest and most readable way (though not the most efficient one, especially on big hard disks with a lot of files) to do it is via:

 

find ./ -type f -ls |grep 'Oct 12'

 


Example: To find all files modified on the 12th of October, 2017:

find . -type f -newermt 2017-10-12 ! -newermt 2017-10-13

To find all files accessed on the 29th of September, 2015:

$ find . -type f -newerat 2015-09-29 ! -newerat 2015-09-30

Or, files which had their permission changed on the same day:

$ find . -type f -newerct 2015-09-29 ! -newerct 2015-09-30

Note that 'c' refers to the inode change time (ctime), which also gets updated on metadata changes; if you never change the file's permissions or ownership it would normally correspond to the creation date, though.

 

Another more cryptic, but perhaps more efficient, way to find any file modified on October 12th, 2017 would be the below command:

 

find . -type f -mtime $(( ( $(date +%s) - $(date -d '2017-10-12' +%s) ) / 60 / 60 / 24 - 1 ))

 

 

 

You could also look for files modified between certain dates by creating two reference files with touch:

touch -t 0810010000 /tmp/f-example1
touch -t 0810011000 /tmp/f-example2

This will find all files modified between the dates & times of the 2 files in /tmp:

 

find / -newer /tmp/f-example1 -and -not -newer /tmp/f-example2

 


How to Find Files with a certain size on GNU / Linux?

 

Let's say you got cracked and someone uploaded a PHP shell file of 50296 bytes; that's a real scenario that just happened to me:

root@pcfreak:/var/www/blog/wp-admin/js# ls -b green.php 
green.php
root@pcfreak:/var/www/blog/wp-admin/js# ls -al green.php 
-rw-r--r-- 1 www-data www-data 50296 окт 12 02:27 green.php

root@pcfreak:/home/hipo# find /var/www/ -type f -size 50296c -exec ls {} \;
/var/www/blog/wp-content/themes/default/green.php
/var/www/blog/wp-content/w3tc/pgcache/blog/tag/endless-loop/_index.html
/var/www/blog/wp-content/w3tc/pgcache/blog/tag/common/_index.html
/var/www/blog/wp-content/w3tc/pgcache/blog/tag/apacheroot/_index.html
/var/www/blog/wp-content/w3tc-bak/pgcache/blog/tag/endless-loop/_index.html
/var/www/blog/wp-content/w3tc-bak/pgcache/blog/tag/common/_index.html
/var/www/blog/wp-content/w3tc-bak/pgcache/blog/tag/apacheroot/_index.html
/var/www/pcfreakbiz/wp-includes/css/media-views.css
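
As a side note, -size also accepts + and - prefixes, which is handy when you only know the approximate size of the file you are hunting for:

# files bigger than 49 KB but smaller than 50 KB under /var/www
find /var/www/ -type f -size +49k -size -50k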
 

 

Convert PDF .pdf to Plain Text .txt files on GNU / Linux and FreeBSD / pdftotext

Friday, November 16th, 2012


If you need to convert Adobe PDF to plain text on Linux or FreeBSD, you will have to take a look at poppler-utils (PDF Utilities).

For those who wonder why you would need a .PDF as .TXT at all, I can think of at least 4 good reasons.
 

PDF to text conversion on Linux and other UNIX-es is possible through a set of tools called poppler-utils.

poppler-utils is installable on most Linux distributions; on Debian / Ubuntu based Linux-es it is installable with the usual:

noah:~# apt-get install --yes poppler-utils
....

On Fedora it is available and installable from default repositories with yum

[root@fedora~]# yum -y install poppler-utils 

On Mandriva Linux:
[root@mandriva~] # urpmi poppler
....

On FreeBSD (and possibly other BSDs) you can install via ports or install it from binary with:

freebsd# pkg_add -vr poppler-utils
....

Here is a list of the poppler-utils contents from the .deb Debian package; on other distros and BSDs the bundled /usr/bin tools are the same.
noah:~ # dpkg -L poppler-utils|grep -i /usr/bin/
/usr/bin/pdftohtml
/usr/bin/pdfinfo
/usr/bin/pdfimages
/usr/bin/pdftops
/usr/bin/pdftoabw
/usr/bin/pdftoppm
/usr/bin/pdffonts
/usr/bin/pdftotext

1. Converting  .pdf to .txt 

Converting whole PDF document to TXT is done with:

$ pdftotext PeopleWare-Productive_Projects.pdf PeopleWare-Productive_Projects.txt
 
2. Extracting only selected pages from PDF to a text file

Dumping only specific pages from a PDF file to .TXT is done through the -f and -l arguments (First and Last page numbers):

$ pdftotext -f 3 -l 10 PeopleWare-Productive_Projects.pdf PeopleWare-Productive_Projects.txt

3. Converting a password-protected PDF to TXT

  $ pdftotext -opw 'Password' Password-protected-file.pdf Unprotected-file-dump.txt

The -opw argument stands for 'Owner Password'. As suggested by the man page, -opw will bypass all PDF security restrictions. PDFs can carry both an owner (permissions) password and a user password.

To dump a file protected with a user password, supply it with -upw:

$ pdftotext -upw 'Password' Password-protected-file.pdf Unprotected-file-dump.txt

 
4. Converting .pdf to .txt and setting the end of line type

Depending on the operating system on which the TEXT file will be read further, you can set the type of end of line. For those who don't know, here are the end of line codes of the 3 major OSes (UNIX, Windows and Mac):

DOS & Windows: \r\n 0D0A (hex), 13,10 (decimal)
Unix & Mac OS X: \n, 0A, 10
Macintosh (OS 9): \r, 0D, 13

$ pdftotext -eol unix PeopleWare-Productive_Projects.pdf PeopleWare-Productive_Projects.txt

The -eol accepts (mac, unix or dos) as options

A bit off topic, but a very useful thing is to then listen to the converted .txt files using festival.

5. Reading .PDF in Linux Text Console and Terminals

$ pdftotext PDF_file_to_Read.pdf -
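
The trailing dash tells pdftotext to write the text to standard output, so you can pipe it straight into a pager:

$ pdftotext PDF_file_to_Read.pdf - | less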

Btw, it is interesting to mention that Midnight Commander's viewer component (mcview), which supports opening .pdf files in the console, uses pdftotext for extracting PDFs and visualizing them as plain text in exactly the same way.

Well that's it, happy conversion.

Save ( Extract ) only images from PDF files on GNU / Linux in console and in GNOME nautilus

Wednesday, November 14th, 2012


Some time ago, I blogged how it is possible to dump individual PDF pages into JPG / PNG etc. pics.

Today, interestingly, I learned it is possible not only to dump single or whole PDF document pages into pictures, but also to selectively dump only the pictures contained within a PDF file into JPEGs.

Dumping only the images contained in a PDF into external JPEG files is doable on GNU / Linux with pdfimages.

1. Extracting pictures from PDF in text console / terminal

pdfimages is part of the poppler-utils deb package; if for some reason you don't have pdfimages on your system, install poppler-utils with:

apt-get install --yes poppler-utils

To extract images of a certain PDF from terminal / console command line it is as simple as:

pdfimages -j pdf-file-name.pdf prefix-of-output-file

pdfimages will extract all pictures, but bear in mind that with some PDF versions it might incorrectly dump some text pages too, thinking they are pictures. Also, with some PDFs which contain scanned very old paper documents (as pictures), trying to force pdfimages to dump them will just provide you with all pages of the PDF as JPGs. Option -j instructs dumping images from the PDF in JPEG picture format, while the second argument is the prefix under which pictures are saved in files like: prefix-of-output-file-000.jpg, prefix-of-output-file-001.jpg, prefix..-file-002.jpg etc.

2. Adding GNOME nautilus capability to extract images from PDF files

Enabling image extraction in nautilus is possible with one non-default nautilus plugin: nautilus-scripts-manager.

nautilus-scripts-manager is very nice, but I'm sure many Linux users don't know it yet. It makes it possible for any custom shell script that does an operation to be visible in nautilus via one extra menu, Scripts. As it is not normally needed, on most Linux distributions it is not installed by default, so you have to install it:

noah:~# apt-get install --yes nautilus-scripts-manager

Below is a Screenshot from my nautilus Scripts menu (my locale is in Bulgarian), so Scripts word is in Cyrillic "Скриптове" 🙂

[Screenshot: nautilus Scripts menu, which lets users add custom shell scripts to run in the GNOME desktop, e.g. to extract pictures from a PDF on Linux]

After nautilus-scripts-manager is installed, to use it you will have to create ~/.gnome2/nautilus-scripts in your user home directory, i.e.:

$ mkdir ~/.gnome2/nautilus-scripts

Any script placed inside can then be invoked via the newly appeared nautilus "Scripts" menu. Thus, to use extract_images_from_pdfs.sh from the GUI, place it there.

Download the following extract_images_from_pdfs.sh shell script

$ cd ~/.gnome2/nautilus-scripts
$ wget -q https://www.pc-freak.net/files/extract_images_from_pdfs.sh
$ chmod +x extract_images_from_pdfs.sh

If you prefer to copy paste script content:

$ cat ~/.gnome2/nautilus-scripts/extract_images_from_pdfs.sh
#!/bin/bash
# Extracts image files from PDF files
# For more information see www.boekhoff.info
## Added check for $1 existence and $1_images dir existence check by hip0
# https://www.pc-freak.net/blog/
if [ -n "$1" ]; then
    # create the output directory named after the PDF, if it does not exist yet
    if [ ! -d "./$1_images" ]; then
        mkdir -p "./$1_images"
    fi
    # dump all images contained in the selected PDF as JPEGs
    pdfimages -j "$1" "./$1_images/PDFimage"
    gdialog --title "Report" --msgbox "Images were successfully extracted!"
    exit 0
fi

 

Well that's all. Once you select a PDF and right-click on it, selecting Scripts -> extract_images_from_pdfs.sh, a new directory named after the selected PDF's filename prefix with an _images suffix will appear. For example, if pictures are extracted from a PDF named filename.PDF, in the same directory where the file is present you will get a new filename_images folder with all pictures dumped from the PDF.

I learned about pdfimages' existence from Sven Boekhoff's blog, which btw has plenty of other interesting stuff.

 

Well that's it, hope this helps someone. Comments are welcome 🙂

How to check for infected files in clamav log files

Tuesday, October 30th, 2012


I've just run clamav with low priority to check the whole drive of a server for infected files, PHP shells and other unwanted script kiddie tools. This was part of my check-up of whether the server was compromised, after yesterday's unexpected cracker break-in on one of our company servers.

# nice -n 19 clamscan -r /* -l /var/log/clamav_scan.log

This exact server has about 100 Gigabytes of data, all contained on one hard disk partition, thus the check-up of all files took a few hours. clamscan is relatively slow compared to DrWeb or NOD32, but since I was not in a hurry, plus we can't afford to spend extra money on an AV just for one scan, I left it scanning in a separate screen session.

The clamscan execution put some extra load on the server (which btw is used mainly for processing a multitude of SQL queries and provides some HTTP access to a few websites via an Apache server). After the scan was completed I ended up with an enormous clamav log file, listing all scanned files.

I checked the file content in vim, but reviewing 119MB of log line by line is an unthinkable task, e.g.:

debian:~# du -hsc /var/log/clamav_scan.log
119M /var/log/clamav_scan.log
119M total

A quick review of clamav_scan.log by tailing it displays:

# tail -n 10 /var/log/clamav_scan.log
----------- SCAN SUMMARY -----------
Known viruses: 1270572
Engine version: 0.97.3
Scanned directories: 18927
Scanned files: 221445
Infected files: 44
Total errors: 287
Data scanned: 12457.43 MB
Data read: 97007.10 MB (ratio 0.13:1)
Time: 1842.362 sec (30 m 42 s) 

Thus I needed a way not to read everything screen by screen, but to show only the Infected Files found by clamav.

I didn't know how this is done, so I did a quick search in Google and found the question of how to grep only infected files from clamav.log answered in the Clamav-Users mailing list (read the whole thread here).

The thread suggests using:

[root@mail clamav]# cat clamd.log | grep -i "found"

Since cat-ing the log is worthless, it is much better to only do grep "found" clamd.log, or as in my case where the file is clamav_scan.log:

# grep -i 'found' /var/log/clamav_scan.log

/usr/share/clamav-testfiles/clam.bz2.zip: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.d64.zip: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.ppt: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.tnef: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-aspack.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.rtf: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.7z: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam_IScab_ext.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.odc.cpio: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.newc.cpio: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.pdf: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-wwpack.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.ole.doc: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.cab: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-mew.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-petite.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.sis: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-fsg.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam_cache_emax.tgz: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.bz2: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam_ISmsi_int.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.szdd: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.chm: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.arj: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam_IScab_int.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.ea05.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.tar.gz: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.html: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.binhex: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.impl.zip: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-upack.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.bin-be.cpio: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.mail: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.mbox.uu: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.zip: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-nsis.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam_ISmsi_ext.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-yc.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.bin-le.cpio: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-upx.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam-pespin.exe: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.exe.mbox.base64: ClamAV-Test-File FOUND
/usr/share/clamav-testfiles/clam.ea06.exe: ClamAV-Test-File FOUND
 

Surprisingly, all the "Infected" files turned out to be regular clamav test files (virus, spyware, badware test files; clamav just uses these to check that its database definitions work okay). Thus the supposed Infected files: 44 turned out to be just false positives.

Actually this grepping, and the logging of all scanned files even when they're not infected, is completely useless. It would have been much better to instead run clamscan with the command options:

debian:~# clamscan -r /* --infected

I hope people reading this article won't repeat my "mistake".
In the meantime, after this incident, it is maybe a good idea to schedule a clamscan of the whole file system every 2 weeks or every month, to make sure someone hasn't uploaded some malicious PHPShell script, exploit or other unwanted stuff.
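
One possible way to automate that is a root crontab entry; a minimal sketch (the schedule and log path here are example values):

# run on the 1st and 15th at 03:00, low priority, log only infected files
0 3 1,15 * * nice -n 19 clamscan -r -i / -l /var/log/clamav_scan.log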

How to Delete Windows XP temporary files from command line / Batch script to Delete Windows temp files on every system restart

Thursday, August 23rd, 2012

In case you need to DELete the Windows temporary files directories to save some free space on an old PC or a group of PCs in a row, you might prefer to use these CLI command lines:

DEL /F /S /Q %TEMP%
DEL /f /q /s "%SYSTEMDRIVE%\Documents and Settings\LocalService\Cookies\*.*"
DEL /f /q /s "%SYSTEMDRIVE%%HOMEPATH%\Cookies\*.*"
DEL /f /q /s "%SYSTEMDRIVE%\Documents and Settings\LocalService\Local Settings\Temp\*.*"
DEL /f /q /s "%SYSTEMDRIVE%\Documents and Settings\NetworkService\Local Settings\Temp\*.*"
DEL /f /q /s "%SYSTEMDRIVE%\Documents and Settings\Default User\Local Settings\Temp*.*"
DEL /f /q /s "%SYSTEMDRIVE%%HOMEPATH%\Local Settings\Temp\*.*"
DEL /f /q /s "%WINDIR%\Temp\*.*"
DEL /f /q /s "%TEMP%\*.*"
DEL /f /q /s "%SYSTEMDRIVE%\Documents and Settings\LocalService\Local Settings\Temporary Internet Files\*.*"
DEL /f /q /s "%SYSTEMDRIVE%%HOMEPATH%\Local Settings\Temporary Internet Files\*.*"
RD /q /s %TEMP%
RD /q /s %WINDIR%\Temp
RD /q /s "%SYSTEMDRIVE%\Documents and Settings\Default User\Local Settings\Temp"
RD /q /s "%SYSTEMDRIVE%\Documents and Settings\LocalService\Local Settings\Temp"
RD /q /s "%SYSTEMDRIVE%\Documents and Settings\Default User\Local Settings\Temp"
RD /q /s "%SYSTEMDRIVE%%HOMEPATH%\Local Settings\Temp"

Another helpful thing for MS Windows users is cleaning up the Windows temporary files on every system restart (reboot); doing so is possible by setting the short batch script below to execute on every system boot:

@ECHO OFF
IF NOT %temp% == %tmp% GOTO both
GOTO single
:both
DEL %temp%\*.* /F /S /Q
DEL %tmp%\*.* /F /S /Q
CLS
ECHO Deleted all files in the TEMP folder: %temp%
ECHO Deleted all files in the TMP folder: %tmp%
GOTO end
:single
DEL %temp%\*.* /F /S /Q
CLS
ECHO Deleted all files in the TEMP folder: %temp%
:end

You can download the script clean_windows_temp_files_on_win_start.com here. The script is a great tool for Windows administrators of Win Domain Controllers or University / educational M$ Windows based networks, where PC security is at high risk. Setting the script to run even on “non-critical” home PCs is a great idea, as it can save you a lot of troubles with SpyWare, Malware, Viruses and other Windows targeted “Bad-Wares” 🙂
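
One way to have the script run on each boot is to register it under the Run registry key; a minimal sketch, assuming the script was saved as C:\scripts\clean_temp.bat (the path and value name here are example values):

REM register the cleanup script to run at every logon (path is hypothetical)
REG ADD "HKLM\Software\Microsoft\Windows\CurrentVersion\Run" /v CleanTempFiles /t REG_SZ /d "C:\scripts\clean_temp.bat" /f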

Cheers 🙂

How to list Files in a directory and generate web URLs with PHP

Tuesday, April 10th, 2012

I needed a short PHP script that reads all my .html files in a directory and then generates HTML a href links pointing to each of the .html files stored in the directory.

Here is the short code I came up with:

<?php
$directory_to_open = "my-dir/";
$max_files = 100;
$i = 0;
$thelist = '';
if ($handle = opendir($directory_to_open)) {
    while (false !== ($file = readdir($handle)) && $i <= $max_files) {
        $i = $i + 1;
        if ($file != "." && $file != "..") {
            // build an a href link per file, labeled with the file name minus the .html extension
            $thelist .= '| <a href="' . $directory_to_open . $file . '">' . str_replace(".html", "", $file) . '</a> |';
        }
    }
    closedir($handle);
    echo $thelist;
}
?>

In my case the directories with HTML files were planned to contain less than 100 files per directory, so in order to show links to only the first 100 files in the directory, I used $max_files=100 and a check of the value in the while loop. As you can see in the while condition above, once $max_files is reached the loop exits.

Because by default the file names returned are in the format file_name.html and I wanted to show only the file name without the .html extension, I used str_replace() to get rid of the extension string.