Archive for the ‘Linux’ Category

Basic Git commands

Posted: 4p in Linux

Here is a list of some basic Git commands to get you going with Git.

Tell Git who you are
Configure the author name and email address to be used with your commits. Note that Git strips some characters (for example trailing periods) from user.name.

git config --global user.name "Renjith Raju"

git config --global user.email <email>
Create a new local repository
git init
Check out a repository
Create a working copy of a local repository:
git clone /path/to/repository
For a remote server, use:
git clone username@host:/path/to/repository
Add files
Add one or more files to staging (index):
git add <filename>

git add *
Commit changes to head (but not yet to the remote repository):
git commit -m "Commit message"
Commit any files you’ve added with git add, and also commit any files you’ve changed since then:
git commit -a
Send changes to the master branch of your remote repository:
git push origin master
Status
List the files you’ve changed and those you still need to add or commit:
git status
Connect to a remote repository
If you haven’t connected your local repository to a remote server, add the server to be able to push to it:
git remote add origin <server>
List all currently configured remote repositories:
git remote -v
Create a new branch and switch to it:
git checkout -b <branchname>
Switch from one branch to another:
git checkout <branchname>
List all the branches in your repo, and also tell you what branch you’re currently in:
git branch
Delete the feature branch:
git branch -d <branchname>
Push the branch to your remote repository, so others can use it:
git push origin <branchname>
Push all branches to your remote repository:
git push --all origin
Delete a branch on your remote repository:
git push origin :<branchname>
Update from the remote repository
Fetch and merge changes on the remote server to your working directory:
git pull
To merge a different branch into your active branch:
git merge <branchname>
View all the merge conflicts:
git diff

View the conflicts against the base file:
git diff --base <filename>

Preview changes, before merging:
git diff <sourcebranch> <targetbranch>
After you have manually resolved any conflicts, you mark the changed file:
git add <filename>
You can use tagging to mark a significant changeset, such as a release:
git tag 1.0.0 <commitID>
commitID is the leading portion of the changeset ID (up to 10 characters), and must be unique. Get the ID using:
git log
Push all tags to remote repository:
git push --tags origin
Undo local changes
If you mess up, you can replace the changes in your working tree with the last content in head:

Changes already added to the index, as well as new files, will be kept.
git checkout -- <filename>
Instead, to drop all your local changes and commits, fetch the latest history from the server and point your local master branch at it:
git fetch origin

git reset --hard origin/master
Search the working directory for foo():
git grep "foo()"
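To see how these commands fit together, here is a minimal end-to-end sketch run in a throwaway directory (the file name, commit messages, and email address are illustrative only):

```shell
set -e
repo=$(mktemp -d)                           # throwaway working directory
cd "$repo"
git init -q
default=$(git symbolic-ref --short HEAD)    # default branch name (usually master)
git config user.name "Renjith Raju"         # local config for this repo only
git config user.email "user@example.com"    # hypothetical address
echo "hello" > readme.txt
git add readme.txt
git commit -q -m "Initial commit"
git checkout -q -b feature                  # create a branch and switch to it
echo "more" >> readme.txt
git commit -q -am "Update readme"
git checkout -q "$default"                  # switch back
git merge -q feature                        # merge the feature branch
git branch -d feature                       # delete the merged branch
```

Pushing (`git push origin master`) is omitted here because the sketch has no remote configured.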


The /bin/is_script_stuck script

In some rare cases, a long-running process may hang indefinitely and be difficult for system administrators to detect. The /bin/is_script_stuck script checks how long a script’s current PID has run, and can notify a WHM user or kill the process.

For example, if you experience problems with hung backup processes, you could use this script in a cron job to monitor backup processes.

Run the /bin/is_script_stuck script

To run the /bin/is_script_stuck script on the command line, use the following format:

/bin/is_script_stuck [options]

You can use the following options with this script:

--script The absolute path to the script that you wish to check.

Note: This option is required, unless you instead use the --help option.

--time The amount of time that the specified script can run before the /bin/is_script_stuck script determines that it is stuck.

You can append one of the following units of measure:

  • d — Days.
  • h — Hours.
  • m — Minutes.
  • s — Seconds.

If you do not append a unit of measure, the script treats this value as a number of seconds. For example, specify --time=60 for 60 seconds, or --time=4d for four days.

Note: This option is required, unless you instead use the --help option.

--notify The WHM username to which you wish to send a notification of the script’s results. For example: --notify=root

--kill Use this option if you want the script to stop (kill) the specified script if it runs longer than the specified time.

--help Print help information for the /bin/is_script_stuck script.
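For example, a cron entry could run the check every hour against a hung backup process. This is only a sketch: the backup script path and the schedule are illustrative assumptions, not part of the original documentation.

```shell
# /etc/cron.d entry (illustrative): every hour, notify root and kill the
# backup script if it has been running for more than four hours.
0 * * * * root /bin/is_script_stuck --script=/usr/local/cpanel/bin/backup --time=4h --notify=root --kill
```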

Reserved blocks in linux

Reserved blocks are disk blocks reserved by the kernel for processes owned by privileged users, to prevent the operating system from crashing due to unavailability of storage space for critical processes. The default reserved-block percentage is 5% of the total size of the file system, and it can be increased or decreased as required.

Example: Suppose the root file system is 40GB and usage reaches 100%. Non-privileged users’ processes (for example, those running as ‘nobody’) would no longer be able to log in or write, but the privileged user (root) still has the 5% reserved blocks available and can use that space to troubleshoot the disk space issue.

Using the tune2fs command we can check this information:

root@:~# tune2fs -l /dev/md9 | grep Reserved

tune2fs 1.42.5 (29-Jul-2012)

Reserved block count:     0

Reserved blocks uid:      0 (user root)

Reserved blocks gid:      0 (group root)


root@~# dumpe2fs /dev/md9

The uid and gid confirm the Unix userid and Unix groupid of the user who will be allowed to tap into the reserved space.

Default block size of file system

root@linux:~# tune2fs -l /dev/sda2 | grep Block

Block count:              241966592

Block size:               4096

Blocks per group:         32768

To increase or decrease the reserved-blocks percentage, use tune2fs -m. For example, to set it to 0%:

# tune2fs -m 0 /dev/md9
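As a quick sanity check of what the 5% default means, the shell arithmetic below uses the block count and block size from the tune2fs output shown above:

```shell
# How much space does a 5% reserve hold on a file system with
# 241966592 blocks of 4096 bytes (values from the tune2fs output above)?
blocks=241966592
block_size=4096
percent=5
reserved_blocks=$(( blocks * percent / 100 ))
reserved_gib=$(( reserved_blocks * block_size / 1024 / 1024 / 1024 ))
echo "$reserved_blocks blocks reserved (~$reserved_gib GiB)"
```

On a ~923 GiB file system, the default reserve therefore sets aside roughly 46 GiB for root.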

Understanding the superblock and how to recover it on a corrupted partition.

What is a superblock?

The superblock is the metadata of the file system; it stores the following information:

  • Blocks
  • Number of free blocks
  • Inodes per block group
  • Blocks per block group
  • Number of times the file system was mounted since last fsck.
  • Mount time
  • UUID of the file system
  • Write time
  • File System State
  • The file system type
  • The operating system in which the file system was formatted

Linux maintains multiple copies of the superblock for redundancy, to avoid data loss. Using a backup superblock we can recover the file system without losing data. The output below shows how to list the superblock locations on a partition:

root@linux:~# dumpe2fs  /dev/sda2 | grep superblock

  • dumpe2fs 1.42.9 (4-Feb-2014)
  •   Primary superblock at 0, Group descriptors at 1-58
  •   Backup superblock at 32768, Group descriptors at 32769-32826
  •   Backup superblock at 98304, Group descriptors at 98305-98362
  •   Backup superblock at 163840, Group descriptors at 163841-163898
  •   Backup superblock at 229376, Group descriptors at 229377-229434
  •   Backup superblock at 294912, Group descriptors at 294913-294970
  •   Backup superblock at 819200, Group descriptors at 819201-819258
  •   Backup superblock at 884736, Group descriptors at 884737-884794
  •   Backup superblock at 1605632, Group descriptors at 1605633-1605690
  •   Backup superblock at 2654208, Group descriptors at 2654209-2654266

The command below will recover the corrupted partition using a backup superblock:

# fsck -b 32768 /dev/sda2

After that, mount the partition and create a file to verify that it is writable.

What is Virtual Memory

Virtual memory is an old concept. Before computers had cache, they had virtual memory. For a long time, virtual memory only appeared on mainframes. Personal computers in the 1980s did not use virtual memory. In fact, many good ideas that were in common use in UNIX operating systems (pre-emptive multitasking and virtual memory) didn’t appear in personal computer operating systems until the mid-1990s.

Initially, virtual memory meant the idea of using disk to extend RAM. Programs wouldn’t have to care whether the memory was “real” memory (i.e., RAM) or disk. The operating system and hardware would figure that out.

Later on, virtual memory was used as a means of memory protection. Every program uses a range of addresses called the address space.

Operating systems developers assume that no user program can be trusted. User programs will try to destroy themselves, other user programs, and the operating system itself. That seems like a negative view, but it’s how operating systems are designed. Programs don’t have to be deliberately malicious; they can be accidentally malicious (for example, by writing through a pointer that points to garbage memory).

Virtual memory can help there too. It can help prevent programs from interfering with other programs.

Occasionally, you want programs to cooperate, and share memory. Virtual memory can also help in that respect.


How to Find Out if a Drive is SSD or HDD

Method 1:

On recent kernels, SSDs are automatically detected. This is how you verify:

Replace sda with your hard drive path.

$ cat /sys/block/sda/queue/rotational


You will see a number as output:

1 means you have an HDD
0 means you have an SSD
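The same check can be wrapped into a small script that reports every block device the kernel knows about (the helper function name is my own; the /sys path is the one used above):

```shell
# classify: map the rotational flag value to a drive type
classify() {
  case "$1" in
    0) echo "SSD" ;;
    1) echo "HDD" ;;
    *) echo "unknown" ;;
  esac
}

# Report every block device listed under /sys/block
for flag in /sys/block/*/queue/rotational; do
  [ -e "$flag" ] || continue               # skip if the glob matched nothing
  dev=${flag#/sys/block/}                  # strip the leading path...
  dev=${dev%/queue/rotational}             # ...and the trailing path
  echo "$dev: $(classify "$(cat "$flag")")"
done
```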

Method 2:

If you do not have the smartctl tool, you have to install it, like this:

$ sudo apt-get install smartmontools

Find out if you have a SSD or a normal HDD:

$ sudo smartctl -a /dev/sda

If you get the following output, it means you have an SSD; otherwise, it is an HDD:

Rotation Rate: Solid State Device

How to install or uninstall Nginx Admin Plugin for cPanel/WHM servers.

Nginx Admin Install:

cd /usr/local/src
tar xf nginxadmin.tar
cd publicnginx
./nginxinstaller install

Nginx Admin Uninstall:

cd /usr/local/src
tar xf nginxadmin.tar
cd publicnginx
./nginxinstaller uninstall

netstat states


Here is a basic description of the netstat socket states:

ESTABLISHED The socket has an established connection.
SYN_SENT The socket is actively attempting to establish a connection.
SYN_RECV A connection request has been received from the network.
FIN_WAIT1 The socket is closed, and the connection is shutting down.
FIN_WAIT2 The connection is closed, and the socket is waiting for a shutdown from the remote end.
TIME_WAIT The socket is waiting after close to handle packets still in the network.
CLOSE The socket is not being used.
CLOSE_WAIT The remote end has shut down, waiting for the socket to close.
LAST_ACK The remote end has shut down, and the socket is closed. Waiting for acknowledgement.
LISTEN The socket is listening for incoming connections. Such sockets are not included in the output unless you specify the --listening (-l) or --all (-a) option.
CLOSING Both sockets are shut down but we still don’t have all our data sent.
UNKNOWN The state of the socket is unknown.
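To see these states on a live system, the small awk pipeline below tallies the State column of `netstat -ant` output (the function name is my own; the usage example assumes net-tools’ netstat is installed):

```shell
# count_states: tally the State column (field 6) of `netstat -ant` output,
# skipping the two header lines, most common state first.
count_states() {
  awk 'NR > 2 { counts[$6]++ } END { for (s in counts) print counts[s], s }' | sort -rn
}

# Typical usage on a live system:
#   netstat -ant | count_states
```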

Boot Process


The Boot Process

  1. The Bootstrap Process – First Stage (BIOS)

The PC boot process starts on power-up. The processor begins executing code contained in the Basic Input and Output System (BIOS). The BIOS is a program stored in Read Only Memory (ROM) and is the lowest-level interface between the computer and its peripherals.

The BIOS then runs the Power On Self Test, or POST, routine to find certain hardware and to test that the hardware is working at a basic level. It compares the hardware settings in the CMOS (Complementary Metal Oxide Semiconductor) to what is physically on the system, and then initializes the hardware devices.

Once the POST is completed, the hardware jumps to a specific, predefined location in RAM. The instructions located here are relatively simple and basically tell the hardware to go look for a boot device. Depending on how your CMOS is configured, the hardware first checks your floppy and then your hard disk.

When a boot device is found (let’s assume that it’s a hard disk), the hardware is told to go to the 0th (first) sector (cylinder 0, head 0, sector 0), then load and execute the instructions there. This is the master boot record, or MBR.
The BIOS first loads the MBR, which is only 512 bytes in size, into memory; the MBR points to the boot loader (LILO, the Linux boot loader, or GRUB).
Once the BIOS finds and loads the boot loader program into memory, it yields control of the boot process to it.

The Boot Loader – Stage 2

LILO or GRUB allows the root user to set up the boot process as menu-driven or command-line, and permits the user to choose from among several boot options.

It also allows for a default boot option after a configurable time-out, and current versions are designed to allow booting from broken Level 1 (mirrored) RAID arrays.

It has the ability to create a highly configurable, “GUI-fied” boot menu, or a simple, text-only, command-line prompt.
Depending on the kernel boot option chosen or set as default, LILO or GRUB will load that kernel.

Kernel Loading – Stage 3

The boot loader will load the kernel and initial root file system image into memory and then start the kernel, passing in the memory address of the image. At the end of its boot sequence, the kernel tries to determine the format of the image from its first few blocks of data:

  • In the initrd scheme, the image may be a file system image (optionally compressed), which is made available in a special block device (/dev/ram) that is then mounted as the initial root file system.[3] The driver for that file system must be compiled statically into the kernel. Many distributions originally used compressed ext2 file system images. Others (including Debian 3.1) used cramfs in order to boot on memory-limited systems, since the cramfs image can be mounted in-place without requiring extra space for decompression.
    Once the initial root file system is up, the kernel executes /linuxrc as its first process. When it exits, the kernel assumes that the real root file system has been mounted and executes /sbin/init to begin the normal user-space boot process.[3]

  • In the initramfs scheme (available from Linux 2.6.13 onwards), the image may be a cpio archive (optionally compressed). The archive is unpacked by the kernel into a special instance of a tmpfs that becomes the initial root file system. This scheme has the advantage of not requiring an intermediate file system or block drivers to be compiled into the kernel.[4] Some systems use the dracut package to create the initramfs.[5]
    On an initramfs, the kernel executes /init as its first process. /init is not expected to exit.

4. Final Stage – Init

The first thing the kernel does after completing the boot process is to execute the init program.
The /sbin/init program (also called init) coordinates the rest of the boot process and configures the environment for the user. Init is the root/parent of all processes executing on Linux, and becomes process number 1.

When the init command starts, it becomes the parent or grandparent of all of the processes that start up automatically on a Red Hat Linux system.
Based on the appropriate run-level in the /etc/inittab file , scripts are executed to start various processes to run the system and make it functional.

5. The Init Program

As seen in the previous section, the kernel starts a program called init, or /sbin/init.
The init process is the last step in the boot procedure and is identified by process id 1.
The init command then runs the /etc/inittab script.

The first thing init runs out of the inittab is the script /etc/rc.d/rc.sysinit , which sets the environment path, starts swap, checks the file systems, and takes care of everything the system needs to have done at system initialization.

Next, init looks through /etc/inittab for the line with initdefault in the third field. The initdefault entry tells the system what run-level to enter initially.

id:5:initdefault: (5 is the default runlevel)
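For illustration, a hypothetical /etc/inittab excerpt putting these pieces together (entry names and script paths follow the Red Hat conventions mentioned above):

```
# Format: id:runlevels:action:process

# Default runlevel
id:5:initdefault:

# Run once at boot: system initialization
si::sysinit:/etc/rc.d/rc.sysinit

# Run the scripts for the chosen runlevel
l3:3:wait:/etc/rc.d/rc 3
l5:5:wait:/etc/rc.d/rc 5
```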

What is Puppet



Puppet is software used for system automation and management. It manages your servers: you describe machine configurations in an easy-to-read declarative language, and Puppet will bring your systems into the desired state and keep them there. Before talking more about Puppet, I want to refresh your thoughts about automation. The product is owned by Puppet Labs Inc., the leader in IT automation.
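As a taste of that declarative language, here is a minimal, hypothetical manifest; the package and service names are illustrative, not from any particular deployment:

```puppet
# Ensure the ntp package is installed and its service is running.
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],   # start the service only after the package exists
}
```

Note that the manifest states the desired end state, not the commands to get there; Puppet works out what (if anything) needs to change on each run.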

What is Automation ?

System automation is the use of automatic configurations, scripts, or other processes to perform daily tasks automatically.

Why Automation?

  • Speed: It helps us complete tasks in less time.
  • Consistency: It avoids the human errors that can occur during repetition.
  • Ease: It frees us from hazards and the boredom of repetition.

What to Automate?

Since a server infrastructure has a certain complexity and holds valuable data, it would be unwise to choose the wrong thing to automate. So we have to consider a few things before starting with automation.

Choose the right thing to automate:
  • Frequency: How often we have to perform the task. If the task is very rare, the effort to automate it will be a waste.
  • Variability: How similar the tasks are. The more similar they are, the easier they are to automate.

Don’t learn two things at a time

If we try to automate a technology or process in which we are not well grounded, it will be very difficult to isolate the errors when things go wrong. That means we can’t identify the exact issue: whether it is with the process we are automating or with the Puppet configuration.

Platform Support

Puppet agents run on many operating systems, but the puppet master should run on Linux. Windows machines can’t act as puppet master servers. Before installing any Windows agent nodes, be sure that you have a *nix puppet master installed and configured.

Comparison between Puppet and Chef:

Definition
  Puppet: an open source configuration management tool written in Ruby. It is a product of Puppet Labs.
  Chef: also a configuration management tool, written in Ruby and Erlang. It is a product of Opscode.

Supported Platforms
  Puppet: officially supported on a broader range of operating systems.
  Chef: officially supported on a narrower range of operating systems.

Community
  Puppet: larger user base.
  Chef: comparatively smaller user base.

Pricing
  Puppet: has a free open source version. Puppet Enterprise is free for the first 10 nodes, then $99 per node (per year) after that.
  Chef: also has a free open source version. Private Chef ranges from $120 per month for 20 servers to $600 per month for 100 servers.

API Integration
  Puppet: seems to have no extended API.
  Chef: has an extended API.

Type of application
  Puppet: a user application.
  Chef: also a user application, but it can also become a part of the application.

Configuring the configuration server
  Puppet: comparatively difficult.
  Chef: comparatively easy.

Code execution
  Puppet: on both the puppet master and the puppet client.
  Chef: on the node/client.

Ordered execution
  Puppet: some support.
  Chef: better support.

Company
  Puppet: Puppet Labs.
  Chef: Opscode.

Notable customers
  Puppet: Twitter and Nokia.
  Chef: Facebook and Splunk.

Friendliness
  Puppet: more sysadmin-friendly.
  Chef: more programmer-friendly.

Language
  Puppet: mainly Puppet’s custom JSON-like language, although a Ruby option is available beginning in version 2.6.
  Chef: a subset of Ruby.