Virtual Volumes datastore is inaccessible after moving to another vCenter Server or refreshing CA certificate
Article ID: 312742

Updated On:

Products

VMware vSphere ESXi

Issue/Introduction


This article provides information to resolve the certificate issue for Virtual Volumes (vVols) after the host is moved to another vCenter Server or the CA certificate changes.

Symptoms:

After moving a host to another vCenter Server or after refreshing CA Certificate, you experience these symptoms:
  • Virtual Volumes (vVol) datastores are not accessible.
  • The command esxcli storage vvol vasaprovider list displays the VP status as syncError.
  • In the vvold.log file, you see entries similar to:
2019-04-03T11:34:59.243Z warning vvold[4AC6B70] [Originator@6876 sub=Default] VasaSession::GetEndPoint: failed to get endpoint, err=SSL Exception: Verification parameters:
--> PeerThumbprint: 4A:78:E1:82:CC:08:27:27:70:EB:BD:61:E2:1F:0D:BC:E4:85:48:F8
--> ExpectedThumbprint:
--> ExpectedPeerName: <IP>
--> The remote host certificate has these problems:
-->
--> * unable to get local issuer certificate, using default
2019-04-03T11:34:59.243Z info vvold[47B1B70] [Originator@6876 sub=Default] VasaSession::Initialize url is empty
2019-04-03T11:34:59.243Z warning vvold[47B1B70] [Originator@6876 sub=Default] VasaSession::DoSetContext: Empty VP URL for VP (xVP)!
2019-04-03T11:34:59.243Z info vvold[47B1B70] [Originator@6876 sub=Default] Initialize: Failed to establish connection https://<IP>:8443/vasa/version.xml
2019-04-03T11:34:59.243Z error vvold[47B1B70] [Originator@6876 sub=Default] Initialize: Unable to init session to VP xVP state: 0
2019-04-03T11:34:59.552Z info vvold[4770B70] [Originator@6876 sub=Default] VVolUnbindManager::UnbindIdleVVols called
2019-04-03T11:34:59.553Z info vvold[4770B70] [Originator@6876 sub=Default] VVolUnbindManager::UnbindIdleVVols done for 0 VVols
2019-04-03T11:35:00.009Z info vvold[5ACBB70] [Originator@6876 sub=Default] Came to SI::GetVvolContainer: container 5fd41e3b-b676-4ad6-8a41-65f10bd73602
2019-04-03T11:35:00.009Z info vvold[5ACBB70] [Originator@6876 sub=Default] SI:GetVvolContainer successful for Datastore, id=, maxVVol=0 MB
  • Running the esxcli storage vvol storagecontainer list command returns output similar to:
Datastore
   StorageContainer Name: Datastore
   UUID: vvol:5fd41e3bb6764ad6-8a4165f10bd73602
   Array: com.vmware.vim:020009763e06-1000000
   Size(MB): 0
   Free(MB): 0
   Accessible: true
   Default Policy:
  • Running the esxcli storage vvol vasaprovider list command returns output similar to:
xVP
   VP Name: xVP
   URL: https://<IP>:8443/vasa/version.xml
   Status: syncError
   Arrays:
         Array Id: com.vmware.vim:020009763e06-1000000
         Is Active: true
         Priority: 0

Note: The preceding log excerpts are only examples. Date, time, and environmental variables may vary depending on your environment.
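A quick way to spot the failing provider from the symptoms above is to filter the esxcli output for its Status field. The sample text below mirrors the article's example output; on the host you would pipe from esxcli storage vvol vasaprovider list instead.

```shell
# Extract the Status field from vasaprovider list output.
vp_status() {
  grep 'Status:' | awk '{print $2}'
}

# Demonstration against a sample excerpt of the article's output:
printf 'xVP\n   VP Name: xVP\n   Status: syncError\n' | vp_status
# prints: syncError
```

A value of syncError here confirms the symptom described above; a healthy provider reports a different status.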

Environment

VMware vSphere ESXi 6.7
VMware vSphere ESXi 6.5

Cause


This issue occurs because the vVol ssl_reset does not occur automatically when a VMCA-signed certificate is pushed to the host.

Resolution


To work around this issue, reset the vVold SSL certificate:
  1. Migrate the virtual machines off the host and place the host in maintenance mode.
  2. Log in to the ESXi host through SSH as root.
  3. Run the command: /etc/init.d/vvold ssl_reset && /etc/init.d/vvold restart
  4. Run the command: tail -f /var/log/vvold.log
  5. Look for Empty VP URL for VP messages. If you still see Empty VP URL for VP messages, the SSL certificate must be regenerated on the ESXi host.
  6. For regenerating self-signed certificates, refer to the VMware Product Documentation topic Replace the Default Certificate and Key from the ESXi Shell.
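The log check in steps 4 and 5 can be sketched as a small shell function. The path /var/log/vvold.log and the warning text follow the article; the sample log file written here is only a stand-in for the real log on the host.

```shell
# Return "regenerate-certificate" if the vvold log still shows the
# "Empty VP URL for VP" warning after the ssl_reset, else "ok".
check_vvold_log() {
  # $1 = path to vvold.log (normally /var/log/vvold.log on the ESXi host)
  if grep -q "Empty VP URL for VP" "$1"; then
    echo "regenerate-certificate"
  else
    echo "ok"
  fi
}

# Demonstration against a sample excerpt of the log:
printf '%s\n' 'warning vvold[47B1B70] VasaSession::DoSetContext: Empty VP URL for VP (xVP)!' > /tmp/vvold-sample.log
check_vvold_log /tmp/vvold-sample.log
# prints: regenerate-certificate
```

If the warning is gone after the reset, no further action is needed; otherwise continue with the certificate regeneration procedure below.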
To regenerate the self-signed certificate:
  1. Log in to the ESXi Shell, either directly from the DCUI or from an SSH client, as a user with administrator privileges.
  2. Navigate to /etc/vmware/ssl.
  3. Rename the existing certificates:
mv rui.crt orig.rui.crt
mv rui.key orig.rui.key
  4. Run the command /sbin/generate-certificates to generate new certificates.
  5. Confirm that the host successfully generated new certificates by running ls -l and comparing the time stamps of the new certificate files with orig.rui.crt and orig.rui.key.
  6. In the vSphere Client, right-click the ESXi host, select Certificates, and click Renew Certificate.
  7. Run ls -l to ensure the date changed on the castore.pem file.
  8. Reboot the ESXi host.
  9. Once the host is up, run tail -f /var/log/vvold.log.
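The time-stamp comparison in step 5 can be sketched as follows. The file names follow the article; the temporary directory used in the demonstration stands in for /etc/vmware/ssl on the host.

```shell
# Succeeds if the first file has a newer modification time than the second,
# i.e. the regenerated certificate is newer than the backed-up original.
newer_than() {
  [ "$1" -nt "$2" ]
}

# Demonstration in a temporary directory standing in for /etc/vmware/ssl:
dir=$(mktemp -d)
touch "$dir/orig.rui.crt"
sleep 1
touch "$dir/rui.crt"   # stands in for the file /sbin/generate-certificates writes
if newer_than "$dir/rui.crt" "$dir/orig.rui.crt"; then
  echo "new certificate generated"
fi
# prints: new certificate generated
```

On the host itself, a plain ls -l in /etc/vmware/ssl makes the same comparison by eye.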
If you see errors, update the vCenter Server TRUSTED_ROOTS store.
  1. Disconnect and reconnect the ESXi host to the vCenter Server to resolve a mismatched SSL thumbprint between vCenter Server and the ESXi host.
  2. Run tail -f /var/log/vvold.log to verify that the error is no longer seen.
  3. The expected output is similar to the following:
2019-04-03T11:35:54.169Z info vvold[8355B70] [Originator@6876 sub=default] SI:GetVvolContainer successful for DataStoreName, id= maxVVol=0 MB ...

Note: The preceding log excerpts are only examples. Date, time, and environmental variables may vary depending on your environment.