
Cheat sheet for Hardware RAID health check - MegaRAID, Adaptec, 3ware RAID and HP RAID.

Hi Folks,

In another post, "raid type", I've mentioned how to find whether a server is running software or hardware RAID.

Here, we can go ahead and check the health of the hardware RAID array on the server.

The first thing we need to detect is the controller on which the RAID is configured. You can easily get it from the other post I mentioned earlier, but I'm providing it here again for your convenience.

1. Log in to the server as the root user.
2. Execute the following command.

/sbin/lspci -vv | grep -i raid

3. This will show the RAID controller running on the server. The different types of controllers I've worked with include the following (a quick way to check which management CLI is already installed is sketched after the list):

a. MegaRAID, which uses MegaCli.
b. Adaptec, which uses arcconf.
c. 3ware RAID, which uses tw_cli.
d. HP RAID, which uses hpacucli.
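
The snippet below is only a rough sketch: it checks which of the four CLI tools named above is present in the PATH. Keep in mind that MegaCli in particular often lives under /opt/MegaRAID rather than in the PATH, so the lspci check above remains the reliable way to identify the controller.

----------------------------------------------------------
# Rough check for which RAID management CLI is already installed
for tool in MegaCli MegaCli64 arcconf tw_cli hpacucli; do
    command -v "$tool" >/dev/null 2>&1 && echo "$tool found at $(command -v "$tool")"
done
----------------------------------------------------------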

How to check the MegaRAID array health status?

The MegaRAID utility is usually installed under /opt/MegaRAID, and you can execute the following command to verify the health of the array.

/opt/MegaRAID/MegaCli…
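
The command above is cut off, so here is a hedged sketch of the MegaCli invocations commonly used for a health check. The binary name (MegaCli or MegaCli64) and the exact path under /opt/MegaRAID are assumptions that depend on the installed package.

----------------------------------------------------------
# Logical drive state -- a healthy array reports "State: Optimal"
/opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -Lall -aALL

# Physical drive details -- check the "Firmware state" of each disk
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aALL
----------------------------------------------------------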

Logical volume vmxxxx_img is used by another device - Error on LVM removal

Hi Folks,

I've faced an error while trying to remove an LVM logical volume from a server.

The exact LVM error is "Logical volume vmxxxx_img is used by another device" when executing the lvremove command.

Feel free to follow the steps below to remove the LVM volume from the server.

Note: Please remember to replace <id> with your VMID in the below section.

----------------------------------------------------------
dmsetup ls                          # list the device-mapper devices
dmsetup info -c xen-vm<id>_img      # confirm which mapping is holding the LV open
dmsetup remove xen-vm<id>_img       # remove the stale device-mapper entry
lvremove -f /dev/xen/vm<id>_img     # the logical volume can now be removed
----------------------------------------------------------
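
Once the stale mapping is gone, a quick sanity check with the standard LVM tools confirms the volume was removed (the volume group name xen is taken from the path above):

----------------------------------------------------------
lvs xen | grep vm<id>_img    # should print nothing after a successful lvremove
----------------------------------------------------------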

File system corrupted on the OpenVZ container.

It happens rarely, but when it does, it is hard to handle. OpenVZ VMs are usually stable in nature, but I've encountered situations where an OpenVZ VM was showing a file system error and refusing to start.

You can follow the steps below to recover the VM.


1. vzctl stop <ctid>
2. ploop mount /vz/private/<ctid>/root.hdd/DiskDescriptor.xml
3. Run df -h to find the ploop device for the VM, for example /dev/ploop12345 (a quick filter is sketched after these steps).
4. fdisk -l /dev/ploop12345
5. e2fsck /dev/ploop12345p1
6. ploop umount -d /dev/ploop12345
7. vzctl start <ctid>
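
If you'd rather not scan the full df output in step 3, a simple filter works, assuming the standard ploop device naming:

----------------------------------------------------------
df -h | grep ploop    # shows the mounted ploop device and its mount point
----------------------------------------------------------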

Change IO Scheduler for SSD and HDD server with Software RAID

Hi Guys,

It has been a while...

I faced an IO problem on one of the RAID 1 Linux servers, and the IO scheduler was the default cfq. The scheduler is set per drive, so it doesn't matter whether the drive is part of a software RAID or not.

If you are using SSD drives, there are no mechanical parts in the drive, so you should change the IO scheduler to noop.

If the SSD shows up as the hda device, the command below will do it.

----------------------------------------------------------
echo noop > /sys/block/hda/queue/scheduler
----------------------------------------------------------

For spinning HDD drives, the recommended scheduler is `deadline` if you run into poor IO performance.

----------------------------------------------------------
echo deadline > /sys/block/hdb/queue/scheduler
----------------------------------------------------------
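
To confirm which scheduler is now active on a device, read the same sysfs file back; the scheduler currently in use is shown in square brackets.

----------------------------------------------------------
cat /sys/block/hda/queue/scheduler    # active scheduler appears in brackets
cat /sys/block/hdb/queue/scheduler
----------------------------------------------------------

Keep in mind that a scheduler set with echo like this does not persist across reboots.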

In the case of software RAID, just make sure that the drive whose scheduler you are changing is actually part of the RAID array.

You can see the drives inside the software RAID…
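
The post is cut off here. As a hedged sketch, the usual way to see which drives belong to a Linux software RAID (mdadm) array is:

----------------------------------------------------------
cat /proc/mdstat              # lists md arrays and their member drives
mdadm --detail /dev/md0       # detailed view; md0 is an assumed array name
----------------------------------------------------------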

Check the number of connections using nf_conntrack or using tcpdump

tcpdump -tnn -ieth0 -c 20000 | awk -F ">" '{print $1}' | awk -F " " '{print $2}' | awk -F "." '{print $1"."$2"."$3"."$4}' | sort | uniq -c | sort -nr | awk ' $1 > 100 '
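
In plain terms, this captures 20,000 packets on eth0, extracts the source IP from each packet, counts the packets per source, and prints only the sources that sent more than 100 packets, which is a quick way to spot hosts hammering the server.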

Note: This assumes the network interface in use on the server is eth0.

cat /proc/net/nf_conntrack > contoutput && cat contoutput | awk '{ print $7,$8 }' | sort | uniq -c | sort -rn | head
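
This takes a snapshot of the conntrack table and counts the most frequent address pairs (for TCP entries, fields 7 and 8 of /proc/net/nf_conntrack are the src= and dst= columns; for other protocols the positions can shift). If you only need the total number of tracked connections, the kernel exposes the counter directly, assuming the nf_conntrack module is loaded:

----------------------------------------------------------
cat /proc/sys/net/netfilter/nf_conntrack_count    # connections currently tracked
cat /proc/sys/net/netfilter/nf_conntrack_max      # table limit
----------------------------------------------------------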