For a 1G NIC, the maximum theoretical speed is about 128 MB/s (1 Gbit/s divided by 8 bits per byte).
In the real world you get much less once you add protocol overhead from tools such as scp, wget and other transfer protocols.
If you are currently using mode 0 (balance-rr), there is a chance of packet reordering, which can slow the transfer down.
There may be a slight improvement if you use mode 4 (802.3ad/LACP).
With mode 4 the distribution of incoming traffic is controlled by the switch, so there are no packet-reordering issues or packet drops.
Even if you use mode 4, you cannot achieve 2G speed for a single connection. You only enjoy the 2G bandwidth across multiple connections.
There is a common assumption that configuring bonding with multiple network cards doubles the bandwidth of a single transfer. That is not at all true.
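As a rough illustration, a mode 4 bond on a RHEL-style system might be configured along the following lines; the interface names, addresses and option values are placeholders, and the switch ports must be configured as an LACP port-channel for this to work. /etc/sysconfig/network-scripts/ifcfg-bond0 would contain something like:

DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.1.10
NETMASK=255.255.255.0
BONDING_OPTS="mode=802.3ad miimon=100"

and each slave interface (ifcfg-eth0, ifcfg-eth1) something like:

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes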
The content of the file is a series of lines. Each line starting with G: represents one glock, and the lines that follow it, indented by a single space, each represent an item of information relating to the glock immediately above them in the file.
Lines in the debugfs file starting with H: (holders) represent lock requests that have either been granted or are waiting to be granted.
The flags field (f:) on the holders line shows which: the 'W' flag refers to a waiting request, the 'H' flag refers to a granted request.
The glocks which have large numbers of waiting requests are likely to be those which are experiencing particular contention.
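One quick way to spot them is to count W-flagged holders per glock straight from the debugfs file. This is a rough sketch: clustername:fsname is a placeholder for your filesystem's name, and debugfs may need to be mounted first.

# mount -t debugfs none /sys/kernel/debug
# awk '/^G:/ {g=$0} /^ H:/ && /f:[A-Za-z]*W/ {n[g]++} END {for (x in n) print n[x], x}' /sys/kernel/debug/gfs2/clustername:fsname/glocks | sort -rn | head

This prints the glocks with the most waiting holders first.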
Having identified a glock which is causing a problem, the next step is to find out which inode it relates to. The glock number (n: on the G: line) indicates this. It is of the form type/number and if type is 2, then the glock is an inode glock and the number is an inode number. To track down the inode, you can then run find -inum number where number is the inode number converted from the hex format in the glocks file into decimal.
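For example, assuming a glock entry showed n:2/271d0 (a made-up number) and the filesystem is mounted at /mnt/gfs2:

# printf '%d\n' 0x271d0
160208
# find /mnt/gfs2 -inum 160208

The printf call converts the hex inode number to decimal, and find then locates the corresponding file on the mounted filesystem.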
If the glock that was identified was of a different type, then it is most likely to be of type 3: (resource group). If you see significant numbers of processes waiting for other types of glock under normal loads, then please report this to Red Hat support.
If you do see a number of waiting requests queued on a resource group lock, there may be a number of reasons for this. One is that there are a large number of nodes compared to the number of resource groups in the filesystem. Another is that the filesystem may be very nearly full (requiring, on average, longer searches for free blocks). The situation in both cases can be improved by adding more storage and using the gfs2_grow command to expand the filesystem.
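For example, if the filesystem sits on an LVM logical volume and is mounted at /mnt/gfs2 (both names are placeholders), the expansion would look roughly like this; gfs2_grow must be run against a mounted filesystem and only needs to be run on one node:

# lvextend -L +50G /dev/myvg/gfs2lv
# gfs2_grow /mnt/gfs2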
As a reminder of the notation used above: the 'W' flag on a holder indicates that it is waiting for a glock, UN is the unlocked state, and SH is a shared lock.
The error is generated because defunct processes keep accumulating in memory, and the system is then unable to allocate the required amount of memory for a newly created process. Basically, the general structure of process creation is as follows.
This is the normal life of a program.
fork()/INIT -> exec() -> RUN -> exit() -> ZOMBIE -> done
INIT (fork())
The program is started by a parent process, an action called fork(). The fork makes a copy (the child) of the calling process (the parent).
exec()
The child then issues an exec() system call, which replaces the new process with the intended executable file.
SRUN/URUN (system/user run state)
The new child program now runs. The parent is either waiting (in a SLEEP) for the child to finish, checking for the child's completion, or the system notifies the parent on exit of the child process.
exit()
The child exits and returns its resources (memory) to the system.
ZOMBIE
At this point the child has terminated and is a ZOMBIE. THIS IS NORMAL!! It will stay in this state until the parent process acknowledges receipt of the appropriate signal or terminates.
If the parent process has died unexpectedly, or something else has prevented acknowledgment from the parent, then process ID #1 (init) takes over as the child's parent and will normally clean it up.
So...
A zombie does not tie up memory but it still has a slot in the process table. I/O devices can get locked out.
You can't kill a ZOMBIE process because......IT'S ALREADY DEAD!!!
So finally, check for these defunct processes on the server. You can do this with the ps -dfa command and then deal with them manually; since a zombie itself cannot be killed, this usually means killing or restarting its parent process so the zombie gets reaped. If that works, you can simply log back in to the shell; if not, you will need to reboot the server in order to clear the process table.
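For example, to list any zombie (defunct) processes together with their parent PIDs using standard ps options:

# ps -eo pid,ppid,stat,comm | awk '$3 ~ /Z/'

and then inspect the parent of a given zombie (replace 1234 with the PPID reported above):

# ps -p 1234 -o pid,comm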
That means packets are being dropped at the NIC level. These drops are most likely caused by exhaustion of the RX ring buffer, so increase the size of the Ethernet device's ring buffer.
First inspect the output of "ethtool -g eth0". If the "Pre-set maximums" are higher than what is listed under the current hardware settings, it is recommended to increase this number. As an example:
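(The values below are only illustrative; the actual maximums depend on the NIC and driver.)

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

Here the current RX ring (256) is well below the pre-set maximum (4096), so it can be raised with:

# ethtool -G eth0 rx 4096

Note that this change takes effect immediately but is not persistent across reboots, so it needs to be added to the interface configuration or a startup script.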
Provision a new logical unit on iSCSI or fibre channel storage. Use virsh to trigger a scan for it, and confirm that it appears correctly.
To discover logical units on a particular HBA, create a pool for that HBA using: virsh pool-create hbapool.xml
where hbapool.xml contains:
<pool type='scsi'>
  <name>host6</name>
  <source>
    <adapter name='host6'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
  </target>
</pool>
Confirm that all the appropriate logical units are visible as volumes with: virsh vol-list host6
After creating the pool, add a new logical unit on a target that's visible on that host and refresh the pool with: virsh pool-refresh host6
and confirm that the new storage is visible. Note that the refresh code only scans for new LUs on existing targets and does not issue a LIP to discover new targets as that would be disruptive to I/O.
The above logs indicate that either the fence daemon was not running, the node was not a full member of the cluster, or the node had joined the cluster in an unstable state. If any node has not joined the fence domain, the shared file systems can hang.
To confirm the above run the following commands:
# group_tool dump fence | grep members
and
# cman_tool services
type             level name     id       state
fence            0     default  00000000 JOIN_START_WAIT
[1]
dlm              1     clvmd    00000000 none
If the state is "JOIN_START_WAIT" the above description of the problem is correct.
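Since one of the possibilities above is that the fence daemon is not running at all, it can also help to check for it directly. This is a rough sketch assuming a RHEL 5 style cman/fenced setup:

# ps -ef | grep [f]enced

If fenced is not running, restarting the cluster services on that node (for example with "service cman restart") normally starts it and lets the node join the fence domain.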
1. Physically attach the hard disk device to the host. Configure the host if the drive is not accessible by default.
2. Configure the device with multipath and persistence on the host if required.
3. Use the virsh attach-disk command, as shown below. Replace myguest with your guest's name, /dev/hdb1 with the device to add, and hdc with the location for the device on the guest. hdc must be an unused device name.
# virsh attach-disk myguest /dev/hdb1 hdc --driver tap --mode readonly
4. The guest now has a new hard disk device called /dev/hdb on Linux. This device may require formatting.
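Inside the guest you can then confirm the new disk and prepare it for use. This is a rough sketch; the device name is an assumption, so check the fdisk output for the disk that actually appeared:

# fdisk -l
# fdisk /dev/hdb
# mkfs.ext3 /dev/hdb1

fdisk -l lists the block devices the guest can see, fdisk creates a partition on the new disk, and mkfs.ext3 puts a filesystem on it.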