Each step below provides instructions and links to the appropriate documents. The steps are ordered in the sequence best suited to isolate the issue, identify the proper resolution, and minimize data loss.
Note: After completing each step, determine whether the performance issue still exists. Work through each troubleshooting step in order, and do not skip a step.
This article includes four main sections:
- CPU constraints
- Memory overcommitment
- Storage latency
- Network latency
CPU constraints
To determine whether the poor performance is due to a CPU constraint:
- Use the esxtop command to determine if the ESXi/ESX server is being overloaded. For more information about esxtop, see the Resource Management Guide for your version of ESXi/ESX.
- Examine the load average on the first line of the command output. A load average of 1.00 means that the ESXi/ESX host's physical CPUs are fully utilized, a load average of 0.5 means that they are half utilized, and a load average of 2.00 means that the system as a whole is overloaded.
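The load-average interpretation above can be restated as a quick headroom calculation (a minimal sketch; the thresholds simply reproduce the meaning of the normalized esxtop load average described in this article):

```python
def cpu_headroom(load_avg):
    """Interpret the normalized esxtop load average.

    1.00 means the physical CPUs are exactly fully utilized; values
    below 1.00 indicate spare capacity, and values above it indicate
    CPU demand the host cannot satisfy (overload).
    """
    if load_avg <= 1.0:
        # Fraction of physical CPU capacity still free, e.g. 0.5 -> 50% free.
        return ("ok", 1.0 - load_avg)
    # Fraction of demand that cannot be scheduled, e.g. 2.0 -> 50% unmet.
    return ("overloaded", 1.0 - 1.0 / load_avg)

print(cpu_headroom(0.5))   # ('ok', 0.5)
print(cpu_headroom(2.0))   # ('overloaded', 0.5)
```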
- Examine the %READY field for the percentage of time that the virtual machine was ready but could not be scheduled to run on a physical CPU. Under normal operating conditions, this value should remain under 5%. If the ready time values are high on the virtual machines that experience poor performance, check for CPU limiting.
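The 5% ready-time rule of thumb above can be sketched as a simple filter over per-VM %RDY readings (the VM names and values here are hypothetical sample data; the field name and threshold come from this article):

```python
# %RDY readings per virtual machine, as read from esxtop (sample values).
ready_pct = {"vm-web01": 1.2, "vm-db01": 14.8, "vm-app01": 4.9}

# Under normal conditions %RDY should stay under 5%; VMs at or above
# that are candidates for the CPU-limit check described above.
suspects = {vm: rdy for vm, rdy in ready_pct.items() if rdy >= 5.0}
print(suspects)  # {'vm-db01': 14.8}
```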
If the load average is too high and the ready time is not caused by CPU limiting, adjust the CPU load on the host. To adjust the CPU load, either:
- Increase the number of physical CPUs on the host
OR
- Decrease the number of virtual CPUs allocated to the virtual machines running on the host
- If you are using ESX 3.5, determine whether IRQ sharing is an issue. For more information, see ESX has performance issues due to IRQ sharing (1003710).
Memory overcommitment
To determine whether the poor performance is due to memory overcommitment:
- Use the esxtop command to determine whether the ESXi/ESX server's memory is overcommitted. For more information about esxtop, see the Resource Management Guide for your version of ESXi/ESX.
- Examine the MEM overcommit avg value on the first line of the command output. This value reflects the ratio of the requested memory to the available memory, minus 1.
Examples:
- If the virtual machines require 4 GB of RAM and the host has 4 GB of RAM, there is a 1:1 ratio. After subtracting 1 (from 1/1), the MEM overcommit avg field reads 0. There is no overcommitment and no extra RAM is required.
- If the virtual machines require 6 GB of RAM and the host has 4 GB of RAM, there is a 1.5:1 ratio. After subtracting 1 (from 1.5/1), the MEM overcommit avg field reads 0.5. The RAM is overcommitted by 50%, meaning that 50% more than the available RAM is required.
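The MEM overcommit avg arithmetic in the examples above can be reproduced directly (requested memory divided by available memory, minus 1):

```python
def mem_overcommit_avg(requested_gb, available_gb):
    """Ratio of requested to available memory, minus 1 (as reported by esxtop)."""
    return requested_gb / available_gb - 1

print(mem_overcommit_avg(4, 4))  # 0.0 -> no overcommitment
print(mem_overcommit_avg(6, 4))  # 0.5 -> RAM overcommitted by 50%
```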
If the memory is being overcommitted, adjust the memory load on the host. To adjust the memory load, either:
- Increase the amount of physical RAM on the host
OR
- Decrease the amount of RAM allocated to the virtual machines
- Determine whether the virtual machines are ballooning and/or swapping.
To detect any ballooning or swapping:
- Run esxtop.
- Type m for memory.
- Type f for fields.
- Select the letter J for Memory Ballooning Statistics (MCTL).
- Look at the MCTLSZ value. MCTLSZ (MB) displays the amount of guest physical memory reclaimed by the balloon driver.
- Type f for fields.
- Select the letter for Memory Swap Statistics (SWAP STATS).
- Look at the SWCUR value. SWCUR (MB) displays the current swap usage.
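The two counters above can be combined into a single check: any nonzero MCTLSZ or SWCUR value means the host has had to reclaim guest memory (a sketch over hypothetical readings):

```python
def memory_pressure(mctlsz_mb, swcur_mb):
    """Flag ballooning (MCTLSZ > 0) and swapping (SWCUR > 0) for one VM."""
    issues = []
    if mctlsz_mb > 0:
        issues.append("ballooning")
    if swcur_mb > 0:
        issues.append("swapping")
    return issues or ["none"]

print(memory_pressure(256, 0))   # ['ballooning']
print(memory_pressure(0, 128))   # ['swapping']
print(memory_pressure(0, 0))    # ['none']
```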
To resolve this issue, ensure that the ballooning and/or swapping is not caused by the memory limit being set incorrectly. If the memory limit is set incorrectly, reset it.
Storage latency
To determine whether the poor performance is due to storage latency:
- Determine whether the problem is with the local storage. Migrate the virtual machines to a different storage location.
- Reduce the number of Virtual Machines per LUN.
- Look for log entries in the Windows guests that look like this:
The device, \Device\ScsiPort0, did not respond within the timeout period.
- Using esxtop, look for a high DAVG latency time. For more information, see Using esxtop to identify storage performance issues (1008205).
- Determine the maximum I/O throughput you can get with the Iometer tool. For more information, see Testing virtual machine storage I/O performance for VMware ESXi and ESX (1006821) and Best practices for performing the storage performance tests within a virtualized environment (2019131).
- Compare the Iometer results for a virtual machine to the results for a physical machine attached to the same storage.
- Check for SCSI reservation conflicts. For more information, see Analyzing SCSI Reservation conflicts on VMware Infrastructure 3.x and vSphere 4.x (1005009).
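The DAVG check above can be turned into a quick triage rule. Note that the 10 ms and 25 ms thresholds below are a commonly cited rule of thumb for spinning-disk arrays, not values taken from this article; adjust them for your storage hardware:

```python
def davg_assessment(davg_ms):
    """Rough triage of esxtop DAVG (device latency) readings.

    Thresholds are an assumed rule of thumb, not authoritative values.
    """
    if davg_ms < 10:
        return "healthy"
    if davg_ms < 25:
        return "watch"
    return "investigate storage"

print(davg_assessment(4))    # healthy
print(davg_assessment(40))   # investigate storage
```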
- If you are using iSCSI storage and jumbo frames, ensure that everything is properly configured.
- If you are using iSCSI storage and multipathing with the iSCSI software initiator, ensure that everything is properly configured. For more information, see the iSCSI SAN Configuration Guide.
If you identify a storage-related issue:
- Ensure that your hardware array and your HBA cards are certified for ESX/ESXi. For more information, see the VMware Hardware Compatibility List.
- Ensure that the BIOS of your physical server is up to date. For more information, see Checking your firmware and BIOS levels to ensure compatibility with ESX/ESXi (1037257).
- Ensure that the firmware of your HBA is up to date. For more information, see Slow performance caused by out of date firmware on a RAID controller or HBA (1006696).
- Ensure that ESX recognizes the correct mode and path policy for your storage array type (SATP) and path selection policy (PSP). For more information, see Verifying correct storage settings on ESX 4.x, ESXi 4.x and ESXi 5.0 (1020100).
Network latency
Network performance can be highly affected by CPU performance. Rule out a CPU performance issue before investigating network latency.
To determine whether the poor performance is due to network latency:
- Test the maximum bandwidth from the virtual machine with the Iperf tool. This tool is available from https://github.com/esnet/iperf
Note: VMware does not endorse or recommend any particular third-party utility.
- While using Iperf, change the TCP window size to 64 K, as performance also depends on this value. To change the TCP window size:
- On the server side, enter this command:
iperf -s
- On the client side, enter this command:
iperf.exe -c sqlsed -P 1 -i 1 -p 5001 -w 64K -f m -t 10 900M
- Run Iperf with a machine outside the ESXi/ESX host. Compare the results with what you would expect given your physical environment.
- Run Iperf with another machine outside the ESXi/ESX host on the same VLAN on the same physical switch. If the performance is good, and the issue can only be reproduced with a machine at another geographical location, then the issue is related to your network environment.
- Run Iperf between two virtual machines on the same ESX server/portgroup/vswitch. If the result is good, you can exclude a CPU, memory, or storage issue.
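The TCP window size used in the Iperf commands above matters because of the bandwidth-delay product: a single TCP stream cannot keep more unacknowledged data in flight than the window allows, so throughput is capped at roughly window size divided by round-trip time. A quick calculation with illustrative numbers:

```python
def max_throughput_mbps(window_kb, rtt_ms):
    """Upper bound on single-stream TCP throughput: window / round-trip time."""
    window_bits = window_kb * 1024 * 8
    return window_bits / (rtt_ms / 1000) / 1_000_000

# A 64 K window over a 5 ms round trip caps out around 105 Mbit/s, so on
# faster links a larger window (-w) or more parallel streams (-P) is needed.
print(round(max_throughput_mbps(64, 5)))  # 105
```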
If you identify a bottleneck on the network:
- Work through the steps in Troubleshooting network performance issues (1004087).
- If you are using iSCSI storage and jumbo frames, ensure that everything is properly configured.
- If you are using Network I/O Control, ensure that the shares and limits are properly configured for your traffic. For more information, see Network I/O Resource Management in vSphere 4.1 with vDS (1022585).
- Ensure that traffic shaping is correctly configured. For more information, see Traffic Shaping Policy in the ESXi/ESX Configuration Guide.