The proactive vSAN performance test was introduced as a new feature in vSAN 6.1 and is accessible through the vSphere Web Client. It is a good way to gauge the performance capabilities of your vSAN cluster.
To run the Proactive vSAN storage performance test, click the Virtual SAN cluster you wish to test, then navigate to Monitor > Virtual SAN > Proactive Tests. Configure the desired duration of the test and select the workload profile you wish to run. Details of the available workload profiles are in the table below.
Prior to running the proactive vSAN storage performance test, it is important to understand how the test operates and its limitations.
How it operates
- It starts by creating a vSAN namespace directory for each host in the vSAN cluster. The purpose of this namespace is to store the test virtual disk (VMDK) objects.
- VMDK objects are created inside each host's namespace directory until there are enough to perform the test. The number of test VMDK objects is determined by the chosen workload test.
- IOBlazer (the embedded performance tool) is supplied with testing parameters based on the selected workload test.
- The testing begins.
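The setup flow above can be sketched in a few lines of Python. This is purely illustrative: the function and namespace names are hypothetical, and the real logic is internal to vSAN.

```python
# Illustrative sketch of the proactive test setup flow described above.
# All names (prepare_storage_test, the namespace naming scheme, host names)
# are hypothetical; the actual implementation is internal to vSAN.

def prepare_storage_test(hosts, vmdks_per_host, dataset_per_vmdk_mb):
    """Model the per-host layout of test VMDK objects the test creates."""
    layout = {}
    for host in hosts:
        # Step 1: one namespace directory per host to hold test objects.
        namespace = f"vsan-perf-test-{host}"
        # Step 2: create VMDK objects until the workload test's count is met.
        layout[namespace] = [
            {"name": f"test-{i}.vmdk", "size_mb": dataset_per_vmdk_mb}
            for i in range(vmdks_per_host)
        ]
    return layout

# Example: the "Basic sanity test" profile uses 10 VMDKs of ~102 MB per host.
layout = prepare_storage_test(["esxi-01", "esxi-02", "esxi-03"], 10, 102)
print(len(layout))                             # 3 namespaces, one per host
print(len(layout["vsan-perf-test-esxi-01"]))   # 10 VMDK objects per host
```

Steps 3 and 4 (passing parameters to IOBlazer and starting the run) are then driven by the selected workload test.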
Limitations
- Ensure you have enough free space to run the test prior to running it. The table below will help you calculate how much capacity is required.
- Ensure that the vSAN health check test returns green check marks for all tests.
- If you want to rerun the tests, select the cluster level object in the vSphere Web Client, navigate to Monitor > Virtual SAN > Health and click Retest.
- Ensure the proactive multicast performance test passes.
- To run the test, select the cluster level object in the vSphere Web Client, navigate to Monitor > Virtual SAN > Proactive Tests, select the Multicast performance test option and click the green start button.
Note: It is very important to validate that you are not hitting any of the limitations noted above prior to running the proactive vSAN performance test. The error message "Failed to run storage load test" means the test failed to run; verify that none of the documented limitations apply before filing a support request with VMware.
This table defines the parameters that are passed for each available testing method:
| Workload Test Name | Description | Dataset Per Host | # of VMDKs per Host | Read IO % | # of Outstanding IOs | IO Size | Dataset Size (per VMDK) | Random IO? |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Low Stress Test | Low stress test with minimal latency | 200 MB | 1 | 100 | 1 | 4 KB | 200 MB | yes |
| Basic sanity test, focus on Flash cache layer | Simulates a realistic workload with a 70/30 split using a small 1 GB dataset | 1 GB | 10 | 70 | 2 | 4 KB | 102 MB | yes |
| Stress test | Designed to put a lot of stress on all storage layers; high latency is expected | 1 TB | 20 | 50 | 4 | 8 KB | 51 GB | yes |
| Performance Characterization - 100% Read, optimal RC usage | Stress test of the read cache's ability to handle read IO | 10 GB | 10 | 100 | 2 | 4 KB | 1 GB | yes |
| Performance Characterization - 100% Write, optimal WB usage | Stress test of the write buffer layer's ability to handle write IO | 5 GB | 10 | 0 | 2 | 4 KB | 512 MB | yes |
| Performance Characterization - 100% Read, optimal RC usage after warm-up | Stress tests the read cache layer after it has warmed up | 10 GB | 10 | 100 | 2 | 4 KB | 1 GB | yes |
| Performance Characterization - 70/30 Read/Write, realistic, optimal flash cache usage | Simulates a realistic workload with a 70/30 split using a 30 GB dataset | 30 GB | 10 | 70 | 2 | 4 KB | 3 GB | yes |
| Performance Characterization - 70/30 Read/Write, high I/O size, optimal flash cache usage | Simulates a realistic workload with a 70/30 split, focusing on performance with large IO sizes | 30 GB | 10 | 70 | 2 | 64 KB | 3 GB | yes |
| Performance Characterization - 100% Read, Low RC hit rate / All-Flash demo | Stress test for all-flash vSAN clusters; not intended for hybrid configurations | 1 TB | 10 | 100 | 2 | 4 KB | 102 GB | yes |
| Performance Characterization - 100% Streaming Reads | Simulates a fully sequential read workload | 1 TB | 10 | 100 | 1 | 512 KB | 102 GB | no |
| Performance Characterization - 100% Streaming Writes | Simulates a fully sequential write workload | 1 TB | 10 | 0 | 1 | 512 KB | 102 GB | no |
To help you understand the capacity required to execute these tests, use the following formula. Every cluster is different, and the capacity required depends on the number of vSAN nodes in the cluster and the selected workload test.
To estimate approximately how much capacity will be consumed:
# of hosts in the cluster * dataset per host (test specific) * 2 (if using FTT=1) = consumed space
In general, the multiplier is (FTT + 1), since each additional failure to tolerate adds another replica.
Example using a 3-node cluster with the Stress test profile:
3 hosts * 1 TB * 2 (FTT=1) = 6 TB of free space needed in the cluster to run the test
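The formula above can be wrapped in a small helper. This is a minimal sketch; the function name is made up, and it assumes the RAID-1 (mirroring) capacity multiplier of FTT + 1.

```python
def required_capacity_gb(num_hosts, dataset_per_host_gb, ftt=1):
    """Approximate free capacity needed for the proactive storage test:
    hosts * dataset per host * (FTT + 1), assuming RAID-1 mirroring."""
    return num_hosts * dataset_per_host_gb * (ftt + 1)

# Example from the text: 3-host cluster, Stress test (1 TB per host), FTT=1.
print(required_capacity_gb(3, 1024))  # 6144 GB, i.e. ~6 TB
```

Substitute the "Dataset Per Host" value from the table for the workload test you plan to run.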
Note: The vSAN Multicast performance test indicates whether the cluster is running in multicast mode or unicast mode.