Enabling vCPU HotAdd creates fake NUMA nodes on Windows

Article ID: 340275


Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:

When selecting "Enable CPU Hot Add" for any Virtual Machine, vNUMA is disabled. When listing NUMA information within a Windows guest with a variety of tools, you might see a second NUMA node (or more). This node is displayed:

  • without any associated CPUs or memory for VMs with 64 or fewer vCPUs (not listed in, for example, Task Manager)
  • with associated CPUs, but without any memory, for VMs with more than 64 vCPUs
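
One way to confirm what the guest sees is to query the Win32 NUMA APIs directly (Task Manager, coreinfo and similar tools report the same information). The following C sketch is illustrative only and not part of this article; it prints each NUMA node's processor mask and available memory. On an affected virtual machine, the extra node shows an empty processor mask and/or zero bytes of memory.

/* numa_nodes.c - minimal sketch: enumerate NUMA nodes as seen by Windows. */
#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <stdio.h>

int main(void)
{
    ULONG highest = 0;
    if (!GetNumaHighestNodeNumber(&highest)) {
        fprintf(stderr, "GetNumaHighestNodeNumber failed: %lu\n", GetLastError());
        return 1;
    }

    for (USHORT node = 0; node <= (USHORT)highest; node++) {
        GROUP_AFFINITY affinity = {0};
        ULONGLONG availableBytes = 0;

        /* Processor mask for the node (group-aware, handles >64 logical CPUs). */
        if (GetNumaNodeProcessorMaskEx(node, &affinity))
            printf("Node %u: group %u, processor mask 0x%llx\n",
                   node, affinity.Group, (unsigned long long)affinity.Mask);
        else
            printf("Node %u: no processor mask reported\n", node);

        /* Free memory on the node; a "fake" node typically reports 0 bytes. */
        if (GetNumaAvailableMemoryNodeEx(node, &availableBytes))
            printf("         available memory: %llu bytes\n",
                   (unsigned long long)availableBytes);
    }
    return 0;
}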


Environment

VMware vSphere ESXi 7.0.0
VMware vSphere ESXi 6.7
VMware vSphere ESXi 6.5

Cause

Enabling vCPU Hot Add configures the virtual hardware for the maximum number of vCPUs it supports and could potentially hot plug. This maximum depends on the selected guest OS, the ESXi version, the virtual machine compatibility (hardware version), and whether the virtual machine was powered on with more or fewer than 128 vCPUs.

For most guest OS and virtual machine compatibility selections, and for virtual machines initially configured with fewer than 128 vCPUs, vCPUs can be hot added up to a total of 128 vCPUs.

For some versions of Windows, when the Processor Group maximum of 64 is exceeded, even by resources that are not yet live, Windows creates additional "fake" nodes to accommodate those potential vCPUs. This behavior has changed in recent versions of Windows; refer to Microsoft's NUMA Support documentation for details.
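
The relationship between live and potential processors can also be observed from inside the guest with the Win32 processor-group APIs. The following C sketch is illustrative only (not from this article); it prints the number of processor groups and the active versus maximum logical-processor count per group, where the maximum counts include processors that could still be hot added.

/* processor_groups.c - minimal sketch: show Windows processor groups.
 * Each group holds at most 64 logical processors; processors that could be
 * hot added later count toward the maximum sizing of the groups.
 */
#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <stdio.h>

int main(void)
{
    WORD activeGroups = GetActiveProcessorGroupCount();
    WORD maxGroups    = GetMaximumProcessorGroupCount();

    printf("Processor groups: %u active, %u maximum\n", activeGroups, maxGroups);

    for (WORD g = 0; g < maxGroups; g++)
        printf("  group %u: %lu active / %lu maximum logical processors\n",
               g, GetActiveProcessorCount(g), GetMaximumProcessorCount(g));

    printf("Total: %lu active / %lu maximum logical processors\n",
           GetActiveProcessorCount(ALL_PROCESSOR_GROUPS),
           GetMaximumProcessorCount(ALL_PROCESSOR_GROUPS));
    return 0;
}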

Resolution

As per Microsoft, this is expected behavior before Windows 10 Build 20348.

Workaround:

You can limit the maximum number of vCPUs that can be hot added to a value equal to or below the Windows Processor Group maximum of 64 by configuring the following virtual machine advanced configuration parameter and value:

cpuid.maxVCPUs = "integer value up to 64"
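
For example, to cap the hot-add maximum at the Processor Group limit itself, the parameter could be set as follows (64 is just one valid choice; any integer up to 64 works):

cpuid.maxVCPUs = "64"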


Additional Information

Enable CPU Hot Add

vNUMA is disabled if vCPU hotplug is enabled

 

 

The preceding links were correct as of June 22, 2021. If you find a link is broken, please provide feedback and a VMware employee will update the article.


Impact/Risks:
Especially for virtual machines with more than 64 vCPUs, applications might make sub-optimal placement and scheduling decisions, because the memory of a wide virtual machine is interleaved across the underlying NUMA nodes.