Wednesday, 25 December 2013

Quickest way to set up KEY-BASED AUTHENTICATION: LINUX

For example, we will set up key-based authentication from server 192.168.x.y to 192.168.x.z
( we will use this auth for the root user )

1). Create SSH keys with ssh-keygen on – 192.168.x.y

[root@192.168.x.y~] # ssh-keygen -t rsa

2). Create .ssh Directory on – 192.168.x.z

# ssh root@192.168.x.z mkdir -p .ssh
The authenticity of host '192.168.x.z (192.168.x.z)' can't be established.
RSA key fingerprint is 3x:x7:a4:e5:af:89:c5:dx:b1:3c:9d:xx:66:47:03:xx.
Are you sure you want to continue connecting (yes/no)?  "type yes"

3). Upload Generated Public Keys to – 192.168.x.z

# cat .ssh/id_rsa.pub | ssh root@192.168.x.z 'cat >> .ssh/authorized_keys'

4). Set Permissions on – 192.168.x.z

# ssh root@192.168.x.z "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"

5). Now you can login 192.168.x.z without password:

[root@192.168.x.y~] # ssh root@192.168.x.z
Last login: xxxxxxxxxxxxxxxxxxxxxx from 'last login ip here'
[root@192.168.x.z ~]#
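
On systems where OpenSSH ships the ssh-copy-id helper, steps 2 to 4 above can be collapsed into a single command (a minimal alternative, assuming ssh-copy-id is installed on 192.168.x.y):

# ssh-copy-id root@192.168.x.z

It appends the public key to .ssh/authorized_keys on the remote host, creating the directory if needed.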

Done !!!

Backup and Restore of LINUX System Disk using "dd" command:

creating disk1:

# dd if=/dev/zero of=disk1 bs=1024000 count=2048

# mkdir d1

# mkfs.ext3 disk1

# mount disk1 d1    (this will end with an error, so try:)

# mount -o loop disk1 d1

creating disk2:

# dd if=/dev/zero of=disk2 bs=1024000 count=2048

# mkfs.ext3 disk2

# mkdir d2

# mount -o loop disk2 d2

# df -h

(The output should now list both loop-mounted file systems, d1 and d2.)

We can also use the conv=notrunc,noerror options with the "dd" command:

- notrunc means do not truncate the output file: if the output file already exists, just replace the specified bytes and leave the rest of the output file alone.

- noerror means keep going if there is an error; dd normally terminates on any I/O error.
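
For instance, when imaging an unreliable disk these options are commonly combined with sync (a sketch; /dev/sdX and the output path are placeholders):

# dd if=/dev/sdX of=/root/rescue.img bs=64k conv=noerror,sync

conv=sync pads each short or failed read with zeros, so the data that does arrive stays at the correct offsets in the image.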

To confirm the loop devices in use:

# losetup -a
/dev/loop0: [fd00]:142387 (/root/disk1)
/dev/loop1: [fd00]:142390 (/root/disk2)

Copy some files into d1, for example:

# wget http://ipv4.download.thinkbroadband.com/512MB.zip

The listing then shows:

# ll /root/d1
total 541048
-rw-r--r--   1 root root  16589672 Dec 23 22:14 1GB.zip
-rw-r--r--   1 root root 536870912 May 30  2008 512MB.zip
drwxr-xr-x. 69 root root      4096 Dec 23 22:06 etc
drwx------   2 root root     16384 Dec 23 21:58 lost+found

Making image for /root/disk1:

The following command creates an image of /root/disk1. (Note the error in this run: /root itself filled up, so the image came out short of disk1's full 2 GB.)

[root@ranjith ~]# dd if=/root/disk1 of=~/d1backup.img
dd: writing to `/root/d1backup.img': No space left on device
3562841+0 records in
3562840+0 records out
1824174080 bytes (1.8 GB) copied, 47.1943 s, 38.7 MB/s

# pwd
/root/d2
# ls
lost+found

Restore the image on another partition:

# dd if=d1backup.img of=/root/disk2
3562840+0 records in
3562840+0 records out
1824174080 bytes (1.8 GB) copied, 46.6134 s, 39.1 MB/s

The above command will restore the image (d1backup.img) of /root/disk1 to /root/disk2.

# ll /root/d2
total 541048
-rw-r--r--   1 root root  16589672 Dec 23 22:14 1GB.zip
-rw-r--r--   1 root root 536870912 May 30  2008 512MB.zip
drwxr-xr-x. 69 root root      4096 Dec 23 22:06 etc
drwx------   2 root root     16384 Dec 23 22:05 lost+found
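
To verify that an image matches its source, checksums can be compared (a quick sanity check; the hashes only match if both files are the same size):

# md5sum /root/disk1 /root/d1backup.img

In the run above the image came out short because /root ran out of space, so comparing the mounted contents, as shown, is the practical check here.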

SHELL SCRIPT to generate random PASSWORD and CAPTCHA

1). create a file named "password.sh"
 
#!/bin/bash
while :
do
    clear
    cat<<EOF
                ===================
                PASSWORD GENERATOR:
                -------------------
            Enter the (P)assword length
            Enter the (C)aptcha length
                      (Q)uit
                -------------------
EOF
    read -n1 -s
    case "$REPLY" in
    "E")
    echo -e -n "\n\t: "
    read b
    a=$(tr -dc "A-Za-z0-9~!@#$%^&*-_" < /dev/urandom | head -c$b)
    echo -e "\n\n\t\t$a"
    ;;
    "C")
    echo -e -n "\n\t: "
    read b
    echo -e "\n\n\t\t" `/usr/bin/shuf -i 1-$b -z`
    ;;
    "Q")  exit 0                    ;;
    "q")  echo "case sensitive!!"   ;;
    "c")  echo "case sensitive!!"   ;;
    "e")  echo "case sensitive!!"   ;;
    esac
    sleep 2
done

2). copy & paste the above code into the file: password.sh

3). chmod 755  password.sh

4). ./password.sh
output:

(screenshots omitted: the menu, a sample generated PASSWORD, and a sample CAPTCHA)

Press Q (not "q") to exit.

=========================
Simple script for only PASSWORD:
=========================

#!/bin/bash
b=$1
# prompt for the length unless exactly one argument was given
if [ $# -ne 1 ]
then
    echo "Enter the password length:"
    read b
fi
a=$(tr -dc "A-Za-z0-9~!@#$%^&*-_" < /dev/urandom | head -c "$b")
echo "$a"
=========================

Saturday, 23 November 2013

mount: could not find any free loop device

Facing the warning "mount: could not find any free loop device" while increasing /tmp?

Here is the solution, using losetup:

# mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp
mount: could not find any free loop device

# losetup -a
/dev/loop0: [0802]:72617509 (/usr/tmpDSK)
/dev/loop1: [0802]:72617509 (/usr/tmpDSK)
/dev/loop2: [0802]:72618319 (/usr/tmpDSK)
/dev/loop3: [0802]:72618319 (/usr/tmpDSK)
/dev/loop4: [0802]:72618322 (/usr/tmpDSK)
/dev/loop5: [0802]:72618322 (/usr/tmpDSK)
/dev/loop6: [0802]:72618323 (/usr/tmpDSK)
/dev/loop7: [0802]:72618324 (/usr/tmpDSK)


# losetup -d /dev/loop[0-7]

# losetup -a

# mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp


Done !!!

Thursday, 31 October 2013

How to find the last argument passed to a Shell Script:-

To find the last argument passed to a Shell Script:

$1 - first argument.

$* / $@ -  all arguments.

$# - number of arguments.

Here is a script to find the last argument passed,

# cat arguments.sh
#!/bin/bash
if [ $# -eq 0 ]
then
echo "No Arguments supplied"
else
echo $* > .ags
sed -e 's/ /\n/g' .ags | tac | head -n1 > .ga
echo "Last Argument is: `cat .ga`"
fi

Output:

# ./arguments.sh
No Arguments supplied

# ./arguments.sh testing for the last argument value
Last Argument is: value
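
A shorter version is possible with bash's indirect parameter expansion; a minimal sketch of the same behaviour:

#!/bin/bash
# ${!#} expands the positional parameter whose index is $#, i.e. the last one
if [ $# -eq 0 ]
then
echo "No Arguments supplied"
else
echo "Last Argument is: ${!#}"
fi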

Saturday, 12 October 2013

Running commands periodically without cron - is it possible?

Can Linux run periodic commands without cron?

Yes. Running commands periodically without cron is possible with a "while" loop.

As a command:

# while true; do <your command here> ; sleep 100; done &

Example: # while true; do echo "Hello World" ; sleep 100; done &

Do not forget the trailing "&", as it puts your loop in the background.

Same way to call a script,

Create a file named: while_check.sh

# cat while_check.sh
#!/bin/bash
while true; do /bin/sh script.sh ; sleep 100; done &

# cat script.sh
echo "Hello World"

# ./while_check.sh
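
If you just want to re-run a command at an interval interactively, the watch utility (from the procps package, where available) gives a similar effect in the foreground:

# watch -n 100 /bin/sh script.sh

(watch reruns the command every 100 seconds and displays its output full-screen; press Ctrl+C to stop.)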

Is it useful??

Random Manual(command's) Pages while login SSH:

  • To see the man page of a random Linux command on every SSH login, kindly add the line below to your .bashrc file:

/usr/bin/man $(ls /bin | shuf | head -1)

  • That's it. Useful, right?

find difference between two files using Shell Script - BASH

Use the script below to quickly find the differences between two files.

1). copy & paste the below script into simplediff.sh

2). chmod 755 simplediff.sh

=====
#!/bin/bash
echo -e "Enter the full path for FILE1:"
read f
echo -e "Enter the full path for FILE2:"
read g
if [ ! -f "$f" ] || [ ! -f "$g" ]
then
echo "FILE1 or FILE2 MISSING"
else
echo -e "Enter ( 1 ) to konw Different contents in FILE1\nEnter ( 2 ) to know the Different contents in FILE2"
read h
case "$h" in
1) clear
echo -e "File1:"
echo -e "(First line shows the file's Last Modify time)\n"
/usr/bin/diff -u "$f" "$g" | grep '^-'
if [ $? == 1 ]
then
clear
echo -e "\033[33;32m"
echo -e "\n\nNo Different contents Found !\n\n"
echo -e "\033[33;0m"
fi
;;
2) clear
echo -e "File2:"
echo -e "(First line shows the file's Last Modify time)\n"
/usr/bin/diff -u "$f" "$g" | grep '^+'
if [ $? == 1 ]
then
clear
echo -e "\033[33;32m"
echo -e "\n\nNo Different contents Found !\n\n"
echo -e "\033[33;0m"
fi
;;
*) clear
echo -e "\033[33;31m"
echo -e "\nWrong Option selected !!!\n"
echo -e "\033[33;0m"
;;
esac
fi

=====



(screenshots omitted)

- If there is no difference between the mentioned files, you will get "No Different contents Found !".

- If a file is missing or a wrong path was entered, you will get "FILE1 or FILE2 MISSING".

- When a wrong choice is made, you will get "Wrong Option selected !!!".

- Otherwise you will get the differing lines as the result.


Saturday, 28 September 2013

SELinux: Tiny Tip

SELinux Modes:

Enforcing - SELinux security policy is enforced. If this is set, SELinux is enabled and will try to enforce the SELinux policies strictly.

Permissive - SELinux prints warnings instead of enforcing. This setting just gives a warning when any SELinux policy setting is breached.
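
The current mode can be inspected and switched at runtime with the standard SELinux utilities (root is required to switch; the change lasts until reboot):

# getenforce
Enforcing

# setenforce 0    (switch to Permissive)
# setenforce 1    (switch back to Enforcing)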

Normal model (when SELinux is disabled):

In the regular permissions model, processes run as users, and the files and other resources on the system are labeled with permissions that control which users have what access to which files.

SELinux:

SELinux adds a parallel set of permissions, in which each process runs with an SELinux security context, and files and other resources on the system are also labeled with a security context. The difference from normal permissions is that a configurable SELinux policy controls which process contexts can access which file contexts. Red Hat provides a default policy which most people use.
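
Contexts are visible with the -Z flag that SELinux adds to the common tools, for example:

# ls -Z /var/www/html      (file contexts)
# ps -eZ | grep httpd      (process contexts)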

Another difference with SELinux is that, to have access to a file, you must have both regular access and SELinux access. So, even if your process is running as the superuser, root, it may be denied access to a file or resource based on the SELinux security contexts of the process and of the file or resource!


This allows us to limit the scope of security compromises on the system, even for the root account, by ensuring that processes are confined by the SELinux policy and their security context into only being able to do things that they should normally be authorized to do.

Here is an example of a normal system, without SELinux turned on, that is running the Apache HTTPD server:


The web server is available to remote access over the internet. That means that malicious people can try to break into the system through a security bug in the web server. If they succeed, they will have control of a process running as user apache and group apache. Anything readable by that user can now be accessed by that attacker. This includes files and directories that the web server normally has no business working with. A further local-only security bug in one of those may enable the attacker to gain superuser access.

How can SELinux change this?

This is the same system, with SELinux turned on. By default, the SELinux policy denies all access, unless rules are included in the policy which permit certain process contexts to access certain file and resource contexts. (The reference policy provided by Red Hat has a carefully tuned set of rules for production systems.)

cheers !

Saturday, 31 August 2013

unexpectedly shrunk window (repaired) in dmesg log - TCP Peer

Don't Panic,


This normally occurs when a client decides to reduce its TCP window size, without the server expecting it. This can be the case when fragmentation is an issue, or when the client is using an embedded device with very little NIC buffer memory. This is a completely normal behaviour, and you’re likely to see quite a few such packets in your log. The messages are informational only, and are used to debug networking issues.

I’d be worried if you saw hundreds of thousands of these packets, since there are attacks that involve packet fragmentation and small window sizes, but otherwise it’s just the normal sort of noise you should expect to see on any internet-facing network. In fact, the “repaired” part of your message is showing that your network driver fixed the issue, which is usually done by concatenating the payloads of two fragmented packets together. Shouldn’t be an issue at all.

… … … … … … … … … … … … … … … … … … … … … … … … … … … … … … … 

Tuesday, 27 August 2013

Solved: ERROR: ld.so: object '/lib/libdevmapper-event.so.1.20.0' from /etc/ld.so.preload cannot be preloaded: ignored.

Error:

ERROR: ld.so: object '/lib/libdevmapper-event.so.1.20.0' from /etc/ld.so.preload cannot be preloaded: ignored.

Fix:

Use your favorite editor (here I use vi) to edit the file /etc/ld.so.preload and comment out the line /lib/libdevmapper-event.so.1.20.0, so that it looks like this:

# cat /etc/ld.so.preload
#/lib/libdevmapper-event.so.1.20.0

For: ERROR: ld.so: object '/lib/libsafe.so.2' from /etc/ld.so.preload cannot be preloaded: ignored

Fix:

comment out the line: /lib/libsafe.so.2 in /etc/ld.so.preload

Thanks.

Saturday, 24 August 2013

VMware ESX vs ESXi

What is VMware ESX ?

ESX (Elastic Sky X) is VMware's enterprise server virtualization platform. In ESX, VMkernel is the virtualization kernel, which is managed by a console operating system, also called the Service Console. The Service Console is Linux-based, and its main purpose is to provide a management interface for the host; many management agents and other third-party software agents are installed on it to provide functionality like hardware management and monitoring of the ESX hypervisor.

What is VMware ESXi ?

ESXi (Elastic Sky X Integrated) is also VMware's enterprise server virtualization platform. In ESXi, the Service Console is removed; all the VMware agents and third-party agents, such as management and monitoring agents, run directly on the VMkernel. ESXi is an ultra-thin architecture which is highly reliable, and its small code base makes it more secure, with less code to patch. ESXi uses the Direct Console User Interface (DCUI) instead of a Service Console for management of the ESXi server. An ESXi installation also completes much more quickly than an ESX installation.


Wednesday, 21 August 2013

possible SYN flooding on port xxxx. Sending cookies.

This could be a form of DoS attack on the box; it usually means the TCP backlog queue has reached its maximum size.

1). To Ascertain the current maximum size:

# cat /proc/sys/net/ipv4/tcp_max_syn_backlog
1024

Adjust the size; 4096 is recommended unless the box has a minute amount of memory by modern standards (<1 GB).

# echo "4096" >/proc/sys/net/ipv4/tcp_max_syn_backlog

2). To enable fast recycling of TIME-WAIT sockets, add the following to /etc/sysctl.conf, then run 'sysctl -p':

net.ipv4.tcp_tw_recycle = 1

Check dmesg to see if the problem persists.


Monday, 19 August 2013

No running copy - squid: ERROR

# squid -k reconfigure
squid: ERROR: No running copy

In /var/log/messages,

"Squid Parent: child process 1147 exited due to signal 6"

In syslog:

"Failed to verify one of the swap directories, Check cache.log#012#011for details.
Run 'squid -z' to create swap directories#012#011if needed, or if running Squid for the first time."

The syslog warning says to create the swap directories before running Squid:

# squid -z
(-z : Create swap directories)
# service squid start

Thursday, 8 August 2013

Dsniff - Network Monitoring:

Dsniff:

It is a suite of tools for network auditing and penetration testing. We can use this tool for passively monitoring a network for interesting data (passwords, e-mail, files, etc.).

Installation:

# wget http://www.monkey.org/~dugsong/dsniff/beta/dsniff-2.4b1.tar.gz

# tar zxf dsniff-2.4b1.tar.gz

# wget http://www.enzotech.net/files/dsniff-2.4.fixed.FC.patch

# patch -p0 < dsniff-2.4.fixed.FC.patch

# cd dsniff-2.4

# ./configure && make && make install
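
Once built, a quick example of passive monitoring (eth0 is a placeholder; substitute your interface):

# dsniff -i eth0

dsniff then prints any cleartext credentials it recognizes on that interface.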


Thanks!

Sunday, 28 July 2013

Permanent(301) VS Temporary(302) redirect

301 vs 302 redirect:

301:

  • Status 301 means that the resource (page) is moved permanently to a new location. The client/browser should not attempt to request the original location but use the new location from now on. It's like a Change of Address form from the Postal Service. All traffic intended for URL A is permanently routed to URL B, and all link popularity and existing SEO value for URL A should also be transferred to URL B.

302:

  • Status 302 means that the resource is temporarily located somewhere else, and the client/browser should continue requesting the original URL. There are very few instances where this type of redirect should be used, but unfortunately it is the easiest to implement, so many webmasters unfamiliar with search engine mechanics use the wrong type of redirect. An example of each follows.
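
Both can be issued from an Apache .htaccess file with mod_alias (paths and hostname below are placeholders):

Redirect permanent /old-page.html http://www.example.com/new-page.html
Redirect temp /summer-sale.html http://www.example.com/sale.html

(permanent issues a 301, temp a 302.)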
Is it okay?

Tuesday, 16 July 2013

Linux- Kickstart Based Installation

KickStart provides a way for users to automate a Red Hat Enterprise Linux installation.

Here are simple steps:


1. Install Apache.

[root@ranjith ~]# rpm -qa |grep -i httpd

[root@ranjith ~]# yum -y install httpd

[root@ranjith ~]# lsof -i tcp:80
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
httpd   10787   root    4u  IPv6 223062      0t0  TCP *:http (LISTEN)
httpd   10789 apache    4u  IPv6 223062      0t0  TCP *:http (LISTEN)
httpd   10790 apache    4u  IPv6 223062      0t0  TCP *:http (LISTEN)


2. Create an install root where we are going to install the contents of the CD-ROM. We can use a directory under the default document root, /var/www/html.


[root@ranjith ~]# cd /var/www/html/

[root@ranjith html]# mkdir centos5

[root@ranjith html]# cd centos5/

[root@ranjith centos5]# pwd
/var/www/html/centos5


3. Download the CentOS ISO file from: centos.org

Here I downloaded: CentOS-6.4-x86_64

4. Mount the iso you downloaded to a mount point.

[root@ranjith mnt]# mount -o loop CentOS-6.4-x86_64-bin-DVD1.iso /mnt

[root@ranjith mnt]# du -sch /mnt
4.1G    /mnt
4.1G    total



5. Copy the contents of the iso into the directory you created.

[root@ranjith mnt]# cp -var /mnt/* /var/www/html/centos5/

6. Create a directory to house your kickstart configurations under Apache’s default document root of /var/www/html


[root@ranjith html]# mkdir kstart

[root@ranjith html]# cd kstart/

[root@ranjith kstart]# pwd
/var/www/html/kstart


7. Create the kickstart configuration file, kstart.cfg, under the kstart directory. In the kstart.cfg file, kindly place the below code.


# text-based installation
text
install
# replace YOUR-SERVER-IP with this web server's IP
url --url http://YOUR-SERVER-IP/centos5
lang en_US.UTF-8
keyboard us
langsupport --default=en_US.UTF-8 en_US.UTF-8
network --device eth0 --bootproto dhcp
# replace the hash below with your own encrypted root password
rootpw --iscrypted $1$/ABCabcABC/
firewall --disabled
selinux --disabled
authconfig --enableshadow --enablemd5
timezone America/New_York
bootloader --location=mbr --append="console=xvc0"

# basic root, boot, swap partitioning here

zerombr yes
clearpart --all
part /boot --asprimary --fstype="ext3" --size=100 --bytes-per-inode=4096
part swap --asprimary --fstype="swap" --recommended --bytes-per-inode=4096
part / --asprimary --fstype="ext3" --grow --size=1 --bytes-per-inode=4096
reboot

# Utilities here

%packages --nobase
authconfig
crontabs
kbd
kudzu
man
ntp
openssh-clients
openssh-server
passwd
pciutils
rootfiles
rpm
system-config-securitylevel-tui
traceroute
yum
yum-updatesd
vim-minimal
vixie-cron
which
wget
unzip
sudo
%post
(
chkconfig --level 3 ip6tables off
chkconfig --level 3 kudzu off
chkconfig --level 3 netfs off
chkconfig --level 3 yum-updatesd off
#
useradd -p 'webmaster' someotheruser
)  2>&1 | tee /root/post-install.log
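
Before using the file, it can be sanity-checked with ksvalidator (an optional step, assuming the pykickstart package is installed):

# ksvalidator /var/www/html/kstart/kstart.cfg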


8. Use the install DVD ISO you downloaded, or grab a copy of the ~10MB boot.iso located in centos5/images/boot.iso. I chose the boot.iso since it is small and easily distributable.

9. Start the web server.


[root@ranjith html]# /etc/init.d/httpd start
Starting httpd:                                            [  OK  ]
10. Boot the system with boot.iso or install DVD. On the boot prompt enter:

linux ks=http://your_kickstart_ip/kstart/kstart.cfg

cheers!!!

Thursday, 13 June 2013

How to trace inode usage?

Here is the command to trace inode usage:

# echo "Detailed Inode usage for: $(pwd)" ; for d in `find -maxdepth 1 -type d |cut -d\/ -f2 |grep -xv . |sort`; do c=$(find $d |wc -l) ; printf "$c\t\t- $d\n" ; done ; printf "Total: \t\t$(find $(pwd) | wc -l)\n"

Sunday, 2 June 2013

Linux - DDoS Deflate To Block DDoS Attack

(D)DoS Deflate is a shell script developed by Zaf, originally for use on MediaLayer servers to assist in combating denial-of-service attacks. It is a lightweight bash shell script designed to assist in the process of blocking a denial-of-service attack, and it is very effective for this purpose. It uses the command below to create a list of IP addresses connected to the server, along with their total number of connections. It is one of the simplest and easiest-to-install solutions at the software level.

# netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -n

IP addresses with over a pre-configured number of connections are automatically blocked in the server's firewall, which can be direct iptables or Advanced Policy Firewall (APF). (We highly recommend that you use APF on your server in general, but deflate will work without it.)

Notable Features:

It is possible to white-list IP addresses, via /usr/local/ddos/ignore.ip.list.

Simple configuration file: /usr/local/ddos/ddos.conf

IP addresses are automatically unblocked after a preconfigured time limit (default: 600 seconds)

The script can run at a chosen frequency via the configuration file (default: 1 minute)

You can receive email alerts when IP addresses are blocked.

Installation:

# wget http://www.inetbase.com/scripts/ddos/install.sh
# chmod 0700 install.sh
# ./install.sh


Uninstallation:

# wget http://www.inetbase.com/scripts/ddos/uninstall.ddos
# chmod 0700 uninstall.ddos
# ./uninstall.ddos


If you start receiving mails like "Banned the following ip addresses on xxx xxx time xxx with xxx connections", a fix is here.

It requires replacing the netstat command in the ddos.sh file (located in the /usr/local/ddos directory if you installed in the default fashion).

In the original script line 117 reads…

Code:

# netstat -ntu | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -nr > $BAD_IP_LIST

this should be rewritten to read as follows…

Code:

# netstat -ntu | grep ':' | awk '{print $5}' | sed 's/::ffff://' | cut -f1 -d ':' | sort | uniq -c | sort -nr > $BAD_IP_LIST

How To Check The Number Of Connected IPs:-

# sh /usr/local/ddos/ddos.sh

How To Edit Configuration File:-

# vi /usr/local/ddos/ddos.conf

How To Restart DDos Deflate:-

# sh /usr/local/ddos/ddos.sh -c

Cheers!!!

Saturday, 1 June 2013

KERNEL PARAMETER CONFIGURATION:

# PREVENT YOUR SYSTEM FROM ANSWERING ICMP ECHO REQUESTS

echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

# DROP ICMP ECHO-REQUEST MESSAGES SENT TO BROADCAST OR MULTICAST ADDRESSES

echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

# DON'T ACCEPT ICMP REDIRECT MESSAGES

echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects

# DON'T SEND ICMP REDIRECT MESSAGES

echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

# DROP SOURCE ROUTED PACKETS

echo 0 > /proc/sys/net/ipv4/conf/all/accept_source_route

# ENABLE TCP SYN COOKIE PROTECTION FROM SYN FLOODS

echo 1 > /proc/sys/net/ipv4/tcp_syncookies

# ENABLE SOURCE ADDRESS SPOOFING PROTECTION

echo 1 > /proc/sys/net/ipv4/conf/all/rp_filter

# LOG PACKETS WITH IMPOSSIBLE ADDRESSES (DUE TO WRONG ROUTES) ON YOUR NETWORK

echo 1 > /proc/sys/net/ipv4/conf/all/log_martians

# DISABLE IPV4 FORWARDING

echo 0 > /proc/sys/net/ipv4/ip_forward
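
These echo commands take effect immediately but do not survive a reboot; the persistent equivalents go in /etc/sysctl.conf (apply with 'sysctl -p'):

net.ipv4.icmp_echo_ignore_all = 1
net.ipv4.icmp_echo_ignore_broadcasts = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.log_martians = 1
net.ipv4.ip_forward = 0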

Sunday, 19 May 2013

Apache MaxClients Calculation

MaxClients:

The MaxClients directive sets the limit on the number of simultaneous requests that will be served. Any connection attempts over the MaxClients limit will normally be queued, up to a number based on the ListenBacklog directive. Once a child process is freed at the end of a different request, the connection will then be serviced.

For non-threaded servers (i.e., prefork), MaxClients translates into the maximum number of child processes that will be launched to serve requests. The default value is 256; to increase it, you must also raise ServerLimit.

calculating MaxClients value:

#!/bin/bash
tome=$(free -m | grep -i mem | awk '{print $2}')
htps=$(ps -aylC httpd | grep "httpd" | awk '{print $8}' | sort -n | tail -n 1)
mysme=$(ps aux | grep 'mysql' | awk '{print $6}' |sort -n |tail -n 1)
rafa=1024
nmysme=$(expr $mysme / $rafa)
nhtps=$(expr $htps / $rafa)
echo -e "\nTotal Memory = $tome"
echo -e "Largest httpd Process = $nhtps"
echo -e "Mysql Memory = $nmysme"
maxc=`expr $tome - $nmysme`
maxcl=`expr $maxc / $nhtps`
echo -e "\nSo, The MaxClients = $maxcl"
echo -e "(we can use nearest round of value from $maxcl)"


For example, copy the above code in a file named "maxcalc.sh" then

# chmod 755 maxcalc.sh
# ./maxcalc.sh


The output will be like:
Total Memory = 15905
Largest httpd Process = 16
Mysql Memory = 1934

So, The MaxClients = 873

(we can use nearest round of value from 873)
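
The arithmetic behind the script: MaxClients = (total RAM - MySQL RAM) / size of the largest httpd process. For the sample output above, (15905 - 1934) / 16 = 873.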


cheers !!!

Friday, 17 May 2013

Apache: Prefork MPM vs Worker MPM

Difference between Prefork and Worker MPM modules.

Apache ships more than one Multi-Processing Module: prefork and worker are the common choices on Linux, while mpm_winnt is a Multi-Processing Module optimized for Windows NT and mpm_netware implements an exclusively threaded web server optimized for Novell NetWare.

Prefork MPM:

The prefork MPM handles requests just like Apache 1.3. As the name suggests, it pre-forks the necessary child processes while starting Apache. It is suitable for websites which need to avoid threading for compatibility with non-thread-safe libraries. It is also known as the best MPM for isolating each request.

Working:

A single control process is responsible for launching child processes which listen for connections and serve them when they arrive. Apache always tries to maintain several spare or idle server processes, which stand ready to serve incoming requests. In this way, clients do not need to wait for a new child process to be forked before their requests can be served. We can adjust the spare-process counts through the Apache conf. A normal server handling up to 256 simultaneous connections can use the default prefork settings.

Prefork is the default module shipped by Apache.

# StartServers: number of server processes to start
# MinSpareServers: minimum number of server processes which are kept spare
# MaxSpareServers: maximum number of server processes which are kept spare
# MaxClients: maximum number of server processes allowed to start
# MaxRequestsPerChild directive sets the limit on the number of requests that an individual child server process will handle. After MaxRequestsPerChild requests, the child process will die. If MaxRequestsPerChild is 0, then the process will never expire

Worker MPM:

A worker mpm is an Multi-Processing Module (MPM) which implements a hybrid multi-process multi-threaded server. By using threads to serve requests, it is able to serve a large number of requests with fewer system resources than a process-based server.

The most important directives used to control this MPM are ThreadsPerChild, which controls the number of threads deployed by each child process and MaxClients, which controls the maximum total number of threads that may be launched.

Strength : memory usage and performance are better than with prefork

Weakness : worker will not work properly with non-thread-safe languages and modules, such as PHP under mod_php

Working:

A single control process (the parent) is responsible for launching child processes. Each child process creates a fixed number of server threads as specified in the ThreadsPerChild directive, as well as a listener thread which listens for connections and passes them to a server thread for processing when they arrive.

Apache always tries to maintain a group of spare or idle server threads, which stand ready to serve incoming requests. In this way, clients do not need to wait for new threads or processes to be created before their requests can be served. The number of processes that will initially be launched is set by the StartServers directive. During operation, Apache assesses the total number of idle threads in all processes, and forks or kills processes to keep this number within the boundaries specified by MinSpareThreads and MaxSpareThreads. Since this process is very self-regulating, it is rarely necessary to modify these directives from their default values. The maximum number of clients that may be served simultaneously (i.e., the maximum total number of threads in all processes) is determined by the MaxClients directive. The maximum number of active child processes is determined by the MaxClients directive divided by the ThreadsPerChild directive.

Event MPM(This MPM is experimental, so it may or may not work as expected):

The event Multi-Processing Module (MPM) is designed to allow more requests to be served simultaneously by passing off some processing work to supporting threads, freeing up the main threads to work on new requests. It is based on the worker MPM, which implements a hybrid multi-process multi-threaded server. Run-time configuration directives are identical to those provided by worker.

To use the event MPM, add --with-mpm=event to the configure script's arguments when building the httpd.

Worker MPM:
StartServers 35
MaxClients 256
MinSpareThreads 30
MaxSpareThreads 305
ThreadsPerChild 255
MaxRequestsPerChild 0

Prefork MPM:

StartServers 20
MinSpareServers 50
MaxSpareServers 100
MaxClients 200
MaxRequestsPerChild 20000

Thursday, 16 May 2013

MySQL storage on RamFS or TmpFS partition

Mount tmpfs to a folder:

# mkdir /var/ramfs

# mount -t ramfs -o size=1G ramfs /var/ramfs/


Here we mounted ramfs to /var/ramfs. I am using ramfs as opposed to tmpfs mainly because:

    ramfs grows dynamically (tmpfs doesn't)

    ramfs doesn't use swap (while tmpfs does)


The RAM-backed file system is mounted, so now I need to populate it with MySQL files for processing.
To do that I will need to stop MySQL, copy its database files over to ramfs, adjust AppArmor and MySQL settings, and start the MySQL server again. Here is the chain of commands to do that:

Copying files:

# /etc/init.d/mysql stop

# cp -R /var/lib/mysql /var/ramfs/

# chown -R mysql:mysql /var/ramfs/mysql


Tweaking MySQL config:

# cp /etc/mysql/my.cnf /etc/mysql/original-my.cnf

# vi /etc/mysql/my.cnf


Find the line with the 'datadir' definition (it will look something like datadir = /var/lib/mysql) and change it to

datadir = /var/ramfs/mysql

Looks like we’re done with settings, let’s see if it will work:

# /etc/init.d/mysql start

If the mysql daemon starts (double-check /var/log/mysql.err for any errors) and you can connect to it, most likely we are now running fully off of a RAM device. To double-check it, run this from the mysql client:

mysql> show variables where Variable_name = 'datadir' \G
*************************** 1. row ***************************
Variable_name: datadir
Value: /var/ramfs/mysql/
1 row in set (0.00 sec)


The real reason you don't see anyone wanting to run MyISAM from a tmpfs is that MySQL has the MEMORY engine for that purpose. If you want to keep a database in memory, that's the correct way to do it. Someone actually wanting an in-memory database would start by converting the handful of TEXT and BLOB fields in the schema over to VARCHARs, since the MEMORY engine doesn't support either.
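
For illustration, a table pinned to RAM with the MEMORY engine (table and column names here are made up for the example):

mysql> CREATE TABLE session_cache (id INT NOT NULL PRIMARY KEY, payload VARCHAR(255)) ENGINE=MEMORY;

Its contents live only in RAM and are lost on restart, much like the ramfs approach above.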

Disadvantages of Ramfs and Tmpfs:

Since both ramfs and tmpfs write to system RAM, the data is lost once the system reboots or crashes. So you should write a process to flush the data from ramfs/tmpfs to disk at periodic intervals. You can also write a process to flush the data to disk while the system is shutting down, but this will not help you in the event of a system crash.

Tuesday, 7 May 2013

Protecting Web Servers from Distributed Denial of Service Attacks(DDoS):

Possible SYN flooding on port 80. Sending cookies:

We frequently faced an outage of web services. On investigating, I found something creeping up in the logs. Something which read:

    kernel: possible SYN flooding on port 80. Sending cookies.

It looked like a Denial of service attack. It was evident that I needed to beef up security!

Avoiding a DDOS attack on a web server:

iptables comes with a module (limit) with which a DDoS attack can be tackled. Depending on the type of web service running on the server, I decided a limit of 15 HTTP SYN packets per second would be enough.

First, we had a look at the existing rules

    # iptables -L -v

This shows you the rules and the default policy set in the existing chains - INPUT, FORWARD and OUTPUT.

Then we followed these quick steps -

1. Create a new chain and name it, say, DDOS_SYNFLOOD,

    # iptables -N DDOS_SYNFLOOD


2. Add a limit of 15 packets per second with a max burst of about 20, by using the limit module.

    # iptables -A DDOS_SYNFLOOD -m limit --limit 15/second --limit-burst 20 -j ACCEPT

Note: Other units - /minute ,  /hour , and /day

3. And of course, we will need to drop packets which exceed the above limitation

    # iptables -A DDOS_SYNFLOOD -j DROP

4. Now all that was left was to "jump" to this new chain for incoming tcp syn packets on port 80.

    # iptables -A INPUT -p tcp --syn --dport http -j DDOS_SYNFLOOD

And to look at what was set up -

    # iptables -L -v

    Chain INPUT (policy ACCEPT 95 packets, 4988 bytes)
     pkts bytes target     prot opt in     out     source               destination
        0     0 DDOS_SYNFLOOD  tcp  --  any    any     anywhere             anywhere            tcp dpt:http flags:FIN,SYN,RST,ACK/SYN

    ......
    ......
    ......
    ......

    Chain DDOS_SYNFLOOD (1 references)
     pkts bytes target     prot opt in     out     source               destination
        0     0 ACCEPT     all  --  any    any     anywhere             anywhere            limit: avg 15/sec burst 20
        0     0 DROP       all  --  any    any     anywhere             anywhere

And since then, We have had a few peaceful nights.

We should remember, iptables works sequentially and jumps to the target of the first match. Hence, you will need to ensure that there are no conflicting rules ahead of this one to avoid an undesired result.
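
Also note that rules added this way live only in the running kernel and vanish on reboot. On Red Hat-style systems they can be persisted with the init script's save target:

# service iptables save

(equivalently: iptables-save > /etc/sysconfig/iptables)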

I also added,

# echo 1 > /proc/sys/net/ipv4/tcp_syncookies

More info on limit module:

   limit
       This  module  matches at a limited rate using a token bucket filter.  A
       rule using this extension  will  match  until  this  limit  is  reached
       (unless  the `!' flag is used).  It can be used in combination with the
       LOG target to give limited logging, for example.

       [!] --limit rate[/second|/minute|/hour|/day]
              Maximum average matching rate: specified as a  number,  with  an
              optional  `/second',  `/minute',  `/hour', or `/day' suffix; the
              default is 3/hour.

       --limit-burst number
              Maximum initial number of packets to  match:  this  number  gets
              recharged  by  one  every  time the limit specified above is not
              reached, up to this number; the default is 5.

Relish !!!

Thursday, 2 May 2013

Recalling command history - Bash Shell

 Recalling command history:

!! - Last command and all arguments

!-3 - Third-to-last command and all arguments

!^ - First argument of last command

!:3 - Third argument of last command

!$ - Last argument of last command

!* - All arguments of the last command

!30 - Expands to the 30th command in history

!find - Last command beginning with 'find'

!?find - Last command containing 'find'

^name^type - Last command with first instance of 'name' replaced with 'type'

!:gs/name/type - Last command with all instances of 'name' replaced with 'type'

!<event>:p - Print the command without executing it.


Command to remove all whitespace:

echo -e "Here is the command to trim \n \n White space" | /usr/bin/tr -d '[:space:]'

Saturday, 27 April 2013

InnoDB vs MyISAM

InnoDB and MyISAM:

  • InnoDB is newer while MyISAM is older.

  • InnoDB is more complex while MyISAM is simpler.

  • InnoDB is more strict in data integrity while MyISAM is loose.

  • InnoDB implements row-level lock for inserting and updating while MyISAM implements table-level lock.

  • InnoDB has transactions while MyISAM does not.

  • InnoDB has foreign keys and relationship constraints while MyISAM does not.

  • InnoDB has better crash recovery while MyISAM is poor at recovering data integrity at system crashes.

  • MyISAM has a full-text search index while InnoDB does not.

In light of these differences, InnoDB and MyISAM have their unique advantages and disadvantages against each other. They each are more suitable in some scenarios than the other.

Advantages of InnoDB:

  •     InnoDB should be used where data integrity is a priority because it inherently takes care of it by the help of relationship constraints and transactions.
  •     Faster in write-intensive (inserts, updates) tables because it utilizes row-level locking and only holds up changes to the same row that's being inserted or updated.

Disadvantages of InnoDB:

  •     Because InnoDB has to take care of the different relationships between tables, database administrators and schema creators have to take more time in designing the data models, which are more complex than those of MyISAM.
  •     Consumes more system resources such as RAM. As a matter of fact, many recommend that the InnoDB engine be turned off if there's no substantial need for it after installation of MySQL.
  •     No full-text indexing.

Advantages of MyISAM:

  •     Simpler to design and create, thus better for beginners. No worries about the foreign relationships between tables.
  •     Faster than InnoDB on the whole as a result of the simpler structure thus much less costs of server resources.
  •     Full-text indexing.
  •     Especially good for read-intensive (select) tables.

Disadvantages of MyISAM:

  •     No data integrity (e.g. relationship constraints) checks, which then become a responsibility and overhead of the database administrators and application developers.
  •     Doesn't support transactions, which is essential in critical data applications such as banking.
  •     Slower than InnoDB for tables that are frequently being inserted to or updated, because the entire table is locked for any insert or update.

The comparison is pretty straightforward. InnoDB is more suitable for data critical situations that require frequent inserts and updates. MyISAM, on the other hand, performs better with applications that don’t quite depend on the data integrity and mostly just select and display the data.

Friday, 12 April 2013

Comment specific lines in VI editor

To Comment specific lines in VI Editor:

syntax:

:x,y s/^/#/g

x,y -> starting and ending line numbers.

^ -> matches the beginning of the line

# -> the comment character being inserted (the usual way to comment in shell-style files)

Example:

:450,500 s/^/#/g

(or)


:.,+10 s/^/#/g

. is current line

+10 is ten lines from the current.

Tuesday, 9 April 2013

Difference Between VIRUS, TROJAN and ROOT-KIT

VIRUS:

A virus normally runs in "stealth mode", hiding itself by infecting executables and system files. It still typically runs as an application, which is why anti-virus software can detect and remove it.

TROJAN:

A trojan, which is an advanced virus, is meant to hide in a more sophisticated fashion.

ROOT-KIT:

A root-kit, on the other hand, subverts part of the operating system to hide itself and gain the maximum control possible. Due to this, it is capable of monitoring as well as performing all activities on a system. It can act as a vehicle for other root-kits and viruses as well.

Root-Kits turn a computer into a remotely controllable victim, often also making it a spam-bot to send out unsolicited commercial email.

Monitor Remote Linux Host using Nagios:

Configuration steps on the Nagios monitoring server to monitor remote host:

Download NRPE Add-on:

Download nrpe-2.12.tar.gz from Nagios.org and move to /home/downloads:

Install check_nrpe on the nagios monitoring server:

# tar xvfz nrpe-2.12.tar.gz
# cd nrpe-2.12
# ./configure
# make all
# make install-plugin
./configure will give a configuration summary as shown below:

*** Configuration summary for nrpe date ***:

General Options:
————————
NRPE port: 5666
NRPE user: nagios
NRPE group: nagios
Nagios user: nagios
Nagios group: nagios

Note: I got the “checking for SSL headers… configure: error: Cannot find ssl headers” error message while performing ./configure. Install openssl-devel as shown below and run the ./configure again to fix the problem.

# rpm -ivh openssl-devel-0.9.7a-43.16.i386.rpm krb5-devel-1.3.4-47.i386.rpm zlib-devel-1.2.1.2-1.2.i386.rpm e2fsprogs-devel-1.35-12.5.

Verify whether the nagios monitoring server can talk to the remote host:
# /usr/local/nagios/libexec/check_nrpe -H 192.168.128.158
NRPE v2.12

Note: 192.168.128.158 is the IP address of the remotehost where NRPE and the nagios plugins were installed as explained above.

Create host and service definition for remotehost:

Create a new configuration file /usr/local/nagios/etc/objects/remotehost.cfg to define the host and service definition for this particular remotehost. It is good to take the localhost.cfg and copy it as remotehost.cfg and start modifying it according to your needs.

host definition sample:

define host{
use linux-server
host_name remotehost
alias Remote Host
address 192.168.128.158
contact_groups admins
}

Service definition sample:

define service{
use generic-service
host_name remotehost
service_description Root Partition
contact_groups admins
check_command check_nrpe!check_disk
}

Note: In all the above examples, replace remotehost with the corresponding hostname of your remote host.

Don't forget to include
 cfg_file=/usr/local/nagios/etc/objects/remotehost.cfg
 in /usr/local/nagios/etc/nagios.cfg

Restart the nagios service:

Restart nagios as shown below and log in to the nagios web interface (http://nagios-server/nagios/) to verify the status of the remotehost Linux server that was added to nagios for monitoring.

# service nagios reload

Troubleshooting:

On Red Hat, the ./configure command was hanging for me with the message: "checking for redhat spopen problem…". Add --enable-redhat-pthread-workaround to the ./configure command as a work-around for this problem.

You may also need to modify commands.cfg to add the check_nrpe command definition, which is not there by default. Add the following in /usr/local/nagios/etc/objects/commands.cfg on the Nagios server (-t 30 raises the plugin timeout to 30 seconds and is optional):

# 'check_nrpe' command definition
define command{
command_name check_nrpe
command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -t 30 -c $ARG1$
}

Sunday, 7 April 2013

chkrootkit installation.

chkrootkit installation steps:

# cd /usr/local/src/

- Download chkrootkit:
# wget http://www.spenneberg.org/chkrootkit-mirror/files/chkrootkit.tar.gz

- Unpack the chkrootkit you just downloaded.
# tar -xvzf chkrootkit.tar.gz

- Change to new directory
# cd chkrootkit-*
(select the version )

- Compile chkrootkit
# make sense

- Run chkrootkit
# ./chkrootkit

How to setup a daily scan report?

- Load crontab
# crontab -e

- Add this line to the top:
==========================================================================
0 1 * * * (cd /usr/local/src/chkrootkit*; ./chkrootkit 2>&1 | mail -s "chkrootkit output" email@domain.com)
==========================================================================

Nagios: CRITICAL - Socket timeout after 10 seconds


Socket timeout after 10 seconds:

As with any other monitoring system, Nagios can produce false alarms. Usually this happens when Nagios fails to get a reply from the host being monitored within some pre-defined timeout. In order to mark a service as down, Nagios performs three checks, and if all of them fail then the service is marked down and the administrator will get an alert about its critical status. At the same time, even if only one of those checks fails, Nagios may notify the administrator about it, depending on configuration.

If you face occasional false alarms but the service is actually online, it makes sense to increase the timeout value from the default 10 seconds to, let's say, 20 seconds.

FIX:

Open one of Nagios' configs where check commands are defined (usually it's the /etc/nagios/commands.cfg file) and find the block named check_nrpe there; add "-t 20" to the end of its command_line so it will look like below:

define command {
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -t 20
}

And restart Nagios.

Besides check_nrpe there are also other commands like check_http, check_smtp and others: all of them support the -t option, so just modify them like check_nrpe depending on your Nagios timeout conditions.

Nagios Server Monitoring Tool - LINUX


Quick installation steps:

Nagios is an enterprise-class open source computer/network monitoring software with on-going enhancements from its vibrant community made up of worldwide supporters.

User account and group ID:

[root@ranjith ~]# useradd -m nagios
[root@ranjith ~]# passwd nagios
[root@ranjith ~]# groupadd nagcmd
[root@ranjith ~]# usermod -a -G nagcmd nagios
[root@ranjith ~]# usermod -a -G nagcmd apache

Download the latest Nagios Core and Nagios Plugins source files, or just use wget to download them as below,

Nagios Core:

[root@ranjith ~]# wget http://prdownloads.sourceforge.net/sourceforge/nagios/nagios-3.2.3.tar.gz

compile and install the Nagios core:

[root@ranjith ~]# tar -zxvf nagios-3.2.3.tar.gz
[root@ranjith ~]# cd nagios-3.2.3

FYI: From the next execution onwards, you may use the tee command to duplicate output to a file for examination, which could be useful to trace errors triggered when compiling source code.

[root@ranjith ~]# ./configure --with-command-group=nagcmd



Compile the Nagios source code (piping to tee command is optional)

[root@ranjith ~]# make all | tee make_all.nagios_core.log

Install the compiled binaries of Nagios Core:

[root@ranjith ~]# make install
[root@ranjith ~]# make install-init
[root@ranjith ~]# make install-config
[root@ranjith ~]# make install-commandmode

Edit the contacts.cfg file to update the email address of nagiosadmin for receiving alerts:

[root@ranjith ~]# vi /usr/local/nagios/etc/objects/contacts.cfg

replace "nagios@localhost" with your personal email address.

Install Nagios web config file to Apache conf.d directory:

[root@ranjith ~]# make install-webconf



Create a user account for logging into the Nagios web interface:

[root@ranjith ~]# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

Restart Apache (httpd) to make the new settings take effect:

[root@ranjith ~]# service httpd restart

Add Nagios to System services and configure it to start up automatically when boots into runlevel 3 and 5:

[root@ranjith ~]# chkconfig --add nagios
[root@ranjith ~]# chkconfig --level 35 nagios on

Now, we need to install the standard Nagios plugins, which are used to monitor various computer/network statuses.

Nagios Plugins:

[root@ranjith ~]# wget http://prdownloads.sourceforge.net/sourceforge/nagiosplug/nagios-plugins-1.4.15.tar.gz

let’s unpack:

[root@ranjith ~]# tar -zxvf nagios-plugins-1.4.15.tar.gz
[root@ranjith ~]# cd nagios-plugins-1.4.15

Run the configure script, explicitly setting ownership to the nagios user and group respectively:

[root@ranjith ~]# ./configure --with-nagios-user=nagios --with-nagios-group=nagios

Ready to compile and install Nagios Plugins binary files:

[root@ranjith ~]# make
[root@ranjith ~]# make install

Verify the sample Nagios configuration files (the files used to define how and what services or hosts to monitor by Nagios Core via the various plugins):

Through the file /usr/local/nagios/etc/nagios.cfg you can modify useful options such as LOG ROTATION METHOD, DEBUG LEVEL, AUTO-RESCHEDULING OPTION, SLEEP TIME, TIMEOUT VALUES, FLAP DETECTION THRESHOLDS and TIMEZONE OFFSET.

[root@ranjith ~]# /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

Total Warnings: 0
Total Errors:   0



Start up Nagios:

[root@ranjith ~]# service nagios start



http://Your-server-IP/nagios ( kindly stop iptables )



To reset admin panel password:

[root@ranjith ~]# htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin

If you face a blank page in the browser after successfully reaching the Nagios home page,

kindly check the error log for:

script not found or unable to stat: somefile.cgi

FIX:

[root@ranjith html]# chcon -R -t httpd_sys_content_t /usr/local/nagios




Enjoy!!!

Saturday, 30 March 2013

Cherokee Web Server on Linux ( Cross-platform Web server ) :

Cherokee is one of the fastest and most flexible web servers available. Cherokee is able to gracefully handle many concurrent connections while maintaining a low memory footprint. It supports a large variety of technologies, features, load-balancing capabilities, and platforms, and provides an administration interface to configure your server and sites.

Installation:

RPM:

[root@ranjith ~]# rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm

YUM:

[root@ranjith ~]# yum install cherokee

Start Cherokee and add it to the default runlevel (to start at boot time).

[root@ranjith ~]# /etc/init.d/cherokee start
[root@ranjith ~]# chkconfig cherokee on


Configuration

Unlike Apache, Cherokee itself provides a web interface for administering the server. To start the admin interface, run the following command in the shell.

[root@ranjith ~]# cherokee-admin -b

The output of this command will show the login user, one-time password, and URL, similar to the following:

Login:
User: admin
One-time Password: password

Web Interface:
URL: http://localhost:9090/


Cherokee Web Server 1.0.6 (Aug 6 2010): Listening on port ALL:9090, TLS
disabled, IPv6 enabled, using epoll, 4096 fds system limit, max. 2041
connections, caching I/O, single thread

Cherokee's admin will now be listening on port 9090 of your server; point a web browser at http://your-servers-ip:9090.

Enjoy!!!

Thursday, 28 March 2013

Bash Test Operators

STRINGS:

-------------------------------------
syntax:

if [ "$str1" operator "$str2" ]
then
   command
fi


-------------------------------------

=     is equal to
==     is equal to     if (( $1 == $2 )) [Note: Used within double parentheses]

The == comparison operator behaves differently within a double-brackets test than within single brackets.
[[ $a == z* ]]     True if $a starts with "z" (pattern matching).
[[ $a == "z*" ]]     True if $a is equal to z* (literal matching).

[ $a == z* ]     File globbing and word splitting take place.
[ "$a" == "z*" ]     True if $a is equal to z* (literal matching).

!=     is not equal to
<     is less than, in ASCII alphabetical order
>     is greater than, in ASCII alphabetical order
-n     string is not "null" (has non-zero length)
-z     string is "null", that is, has zero length


INTEGERS: 
--------------------------------------------
syntax:

if [ "$string1" operator "$string2" ]
then
   command
fi


---------------------------------------------


-eq     is equal to     if [ $1 -eq 200 ]
-ne     is not equal to if [ $1 -ne 1 ]
-gt     is greater than if [ $1 -gt 15 ]
-ge     is greater than or equal to if [ $1 -ge 10 ]
-lt     is less than     if [ $1 -lt 5 ]
-le     is less than or equal to     if [ $1 -le 0 ]
<     is less than (within double parentheses)
<=     is less than or equal to (within double parentheses)
>     is greater than (within double parentheses)
>=     is greater than or equal to (within double parentheses)


FILES/DIRECTORIES:

-----------------------------------
syntax:

if [ -operator "$filename" ]
then
   command
fi


-------------------------------------

-e     file exists
-f     file is a regular file (not a directory or device file)
-s      file is not zero size (lowercase ‘s’)
-S     file is a socket
-d     file is a directory
-b     file is a block device (floppy, cdrom, etc.)
-c     file is a character device (keyboard, modem, sound card, etc.)
-p     file is a pipe
-h     file is a symbolic link
-L     file is a symbolic link
-t     file (descriptor) is associated with a terminal device
-r     file has read permission (for the user running the test)
-w     file has write permission (for the user running the test)
-x     file has execute permission (for the user running the test)
-g     set-group-id (sgid) flag set on file or directory
-u     set-user-id (suid) flag set on file
-k     sticky bit set
-O     you are owner of file
-G     group-id of file same as yours
-N     file modified since it was last read
f1 -nt f2     file f1 is newer than f2
f1 -ot f2     file f1 is older than f2
f1 -ef f2     files f1 and f2 are hard links to the same file
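
A few of these operators in action (a self-contained sketch; the file path is just an example):

#!/bin/bash
f="/etc/passwd"

# file tests: regular file and readable
if [ -f "$f" ] && [ -r "$f" ]
then
   echo "$f is a readable regular file"
fi

# integer test
n=5
if [ "$n" -gt 3 ]
then
   echo "$n is greater than 3"
fi

# string test
s=""
if [ -z "$s" ]
then
   echo "s is empty"
fi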

Monday, 25 March 2013

Zombie Processes in Linux

What is a zombie?

Zombie is a process state that occurs when a child process dies before the parent process has collected its exit status. In this case the structural information of the process is still in the process table. Since this process is not alive, it cannot react to signals. The zombie state can finish when the parent dies; all resources of the zombie process are then cleared by the kernel.

Causes of Zombie Processes:

When a subprocess exits, its parent is supposed to use the "wait" system call and collect the process's exit information. The subprocess exists as a zombie process until this happens, which is usually immediately. However, if the parent process isn't programmed properly or has a bug and never calls "wait," the zombie process remains, eternally waiting for its information to be collected by its parent.

Killing Zombie Processes:

Zombie processes persist until their parent process ends, at which point they are adopted by the "init" system process and shortly cleaned up. However, there's no way to get rid of a zombie process without ending its parent process. If you have a lot of zombie processes, close and restart the parent process or service. Init adopts and cleans up the orphaned zombie processes. If you can't close the parent process, don't worry, zombies won't affect the performance of your computer unless a very large amount are present. However, bear in mind that, if a process is creating a lot of zombies, it has a programming bug or error in its code and isn't working correctly.

Viewing Zombie Processes:

Execute the "top" command in a Terminal window. The top command shows the number of zombie processes at the upper-right side of its output, in the "Tasks:" row.

You can also list running processes by executing the "ps aux" command. Zombie processes have a "Z" listed in their STAT column in the output of the "ps aux" command.
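
To pull out just the zombies together with their parent PIDs (useful for finding which parent to restart), a small one-liner:

# ps -eo pid,ppid,state,cmd | awk '$3 == "Z"'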

Risks of Zombie Processes:

While zombie processes aren't a problem in and of themselves and take up very few resources, there is one concern. Linux systems have a maximum number of processes, and thus of process ID numbers. If a computer has enough zombie processes, the maximum is reached and new processes can't be launched.

The maximum number of processes can be listed by typing "cat /proc/sys/kernel/pid_max" in a Terminal window and is usually 32768. Thus, zombie processes are usually not a concern.

However, if the parent process creating zombie processes is server software that isn't written properly, a large number of zombies could be created under load. Or, zombies could gradually accumulate over long periods of time until the maximum process limit is reached.