That means packets are being dropped at the NIC level. These drops are most likely caused by exhaustion of the RX ring buffer, so increase the size of the Ethernet device's ring buffer.
First inspect the output of "ethtool -g eth0". If the "Pre-set maximums" are larger than the values listed under the current hardware settings, it is recommended to increase the ring size. As an example:
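(The figures below are representative only; the actual limits depend on the NIC and driver.)

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:             4096
RX Mini:        0
RX Jumbo:       0
TX:             4096
Current hardware settings:
RX:             256
RX Mini:        0
RX Jumbo:       0
TX:             256

Here the current RX ring (256) is well below the pre-set maximum (4096), so it can be raised with:

# ethtool -G eth0 rx 4096

Note that on most systems this setting does not persist across reboots unless it is also added to the interface or network configuration.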
Provision a new logical unit on iSCSI or fibre channel storage. Use virsh to trigger a scan for it, and confirm that it appears correctly.
To discover logical units on a particular HBA, create a pool for that HBA using: virsh pool-create hbapool.xml
where hbapool.xml contains:
<pool type='scsi'>
  <name>host6</name>
  <source>
    <adapter name='host6'/>
  </source>
  <target>
    <path>/dev/disk/by-id</path>
  </target>
</pool>
Confirm that all the appropriate logical units are visible as volumes with: virsh vol-list host6
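Representative output (volume names and device paths vary by system; the entries below are illustrative only):

# virsh vol-list host6
Name                 Path
-----------------------------------------
unit:0:0:0           /dev/disk/by-id/...
unit:0:0:1           /dev/disk/by-id/...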
After creating the pool, add a new logical unit on a target that's visible on that host and refresh the pool with: virsh pool-refresh host6
and confirm that the new storage is visible. Note that the refresh code only scans for new LUs on existing targets and does not issue a LIP (Loop Initialization Primitive) to discover new targets, as that would be disruptive to I/O.
The above logs indicate that either the fence daemon was not running, the node was not a full member of the cluster, or the node had joined the cluster in an unstable state. If any node fails to join the fence domain, shared file systems will hang.
To confirm the above run the following commands:
# group_tool dump fence | grep members
and
# cman_tool services
type             level name     id       state
fence            0     default  00000000 JOIN_START_WAIT
[1]
dlm              1     clvmd    00000000 none
If the state is "JOIN_START_WAIT", the above description of the problem is correct.
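To further confirm the case where the fence daemon was not running, a simple process check can also be used (a generic check, not specific to any particular cluster release):

# ps -C fenced -o pid,args

If no fenced process is listed, the daemon was not running when the node attempted to join the fence domain.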