Virtual Machine on VMFS-6 datastore fails vMotion or HA failover with type 10c00003 lock message

Article ID: 314367

Updated On:

Products

VMware vSphere ESXi

Issue/Introduction

Symptoms:
Attempting to migrate a virtual machine fails with a type 10c00003 lock message.
In the /var/run/log/vmkernel.log file, you see entries similar to:

YYYY-MM-DDTHH:MM:SS.MILZ cpu22:*******)DLX: 4416: vol 'Name of Datastore', lock at *********: [Req mode 1] Checking liveness:
YYYY-MM-DDTHH:MM:SS.MILZ cpu22:*******)[type 10c00003 offset ********* v *******, hb offset *******
gen 31, mode 1, owner ********-********-****-************ mtime 9199655
num 0 gblnum 0 gblgen 0 gblbrk 0]

And

YYYY-MM-DDTHH:MM:SS.MILZ cpu22:*******)Fil3: 10041: Retry 10 for caller Fil3RemoveCoreVMFS6 (status 'Lock rank violation detected')

Note: The preceding log excerpts are only examples. Dates, times, and environment-specific values vary depending on your environment.
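To confirm the symptom, you can search the vmkernel log for the lock signature shown above. The sketch below is a minimal, self-contained example: the sample line and its field values are placeholders modeled on the excerpt, not real log output.

```shell
# Placeholder line modeled on the log excerpt above; real values differ.
sample='2023-01-01T00:00:00.000Z cpu22:1000001)[type 10c00003 offset 100 v 1, hb offset 200'

# On a live ESXi host, the equivalent search against the path named in
# this article would be:
#   grep -E 'type 10c00003|Lock rank violation' /var/run/log/vmkernel.log

# Count matching lines in the sample (prints 1 here).
printf '%s\n' "$sample" | grep -c 'type 10c00003'
```

Matches for either pattern in the live log indicate the condition described in this article.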

Environment

VMware vSphere 7.0.x

Cause

This type 10c00003 lock message is caused by the affinity manager algorithm, which selects Sub-block resources starting from Resource Cluster (RC) 0 and continues sequentially. The issue occurs when multiple ESXi hosts connected to the same VMFS-6 volume in a vSphere cluster keep allocating resources from the same RCs, increasing the affinity counts on those RCs.

Sub-blocks in VMFS-6 are 64 KB in size and are used both as data blocks and as pointer blocks. Files smaller than 64 KB store their data directly in a Sub-block, while files larger than 320 MB need Sub-block-backed pointer blocks for indirect addressing to cover the file address space. Any file smaller than 64 KB or larger than 320 MB can therefore consume Sub-blocks, which may cause the type 10c00003 lock.
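The sizing rule above can be sketched as a small shell check. This is only an illustration of the thresholds named in this article (64 KB and 320 MB); the function name and sample sizes are hypothetical. On a live host, `vmkfstools -Ph -v 10 /vmfs/volumes/<datastore>` reports the datastore's Sub-block size and counts.

```shell
# Sketch of the Sub-block rule: files < 64 KB store data in a Sub-block;
# files > 320 MB need Sub-block-backed pointer blocks for indirect addressing.
uses_subblocks() {
  size_kb=$1
  if [ "$size_kb" -lt 64 ] || [ "$size_kb" -gt $((320 * 1024)) ]; then
    echo yes
  else
    echo no
  fi
}

uses_subblocks 32       # small file, data held in a Sub-block -> yes
uses_subblocks 1024     # mid-size file, regular file blocks   -> no
uses_subblocks 400000   # > 320 MB, needs pointer blocks       -> yes
```

Because both very small and very large files draw from the same Sub-block RCs, mixing many such files on one volume concentrates allocations on the same RCs.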

Resolution

This issue is fixed in ESXi 7.0 Update 2c (7.0.2 P03).
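To verify whether a host already runs the fixed release, compare its version against 7.0.2. The sketch below uses a placeholder version string; on an ESXi host you would obtain the real value with `esxcli system version get` or `vmware -vl`.

```shell
# Placeholder values; substitute the host's reported version.
current="7.0.1"
fixed="7.0.2"   # ESXi 7.0 U2c corresponds to 7.0.2 P03

# If the smallest of the two versions (version sort) is the fixed one,
# the host is at or above the fixed release.
if [ "$(printf '%s\n%s\n' "$fixed" "$current" | sort -V | head -n1)" = "$fixed" ]; then
  echo "fix applied"
else
  echo "upgrade to ESXi 7.0 U2c (7.0.2 P03) or later"
fi
```

Note that `sort -V` (version sort) is a GNU extension; the ESXi busybox `sort` may not support it, so run the comparison on a management workstation if needed.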

Workaround:
There is no direct workaround for this issue. The likelihood of hitting it can be reduced, though not eliminated, by creating multiple VMFS-6 volumes and placing fewer virtual machines on each volume. This decreases the chance that files from different virtual machines are allocated resources from the same RCs.