Deploying vCenter High Availability with network addresses in separate subnets - vSphere 6.7 & 7.0

Article ID: 344909

Products

VMware vCenter Server

Issue/Introduction

This article provides steps to deploy vCenter High Availability (VCHA) in environments where the Primary and Secondary vCenter Server nodes are in separate subnets.

Terminology

  • Management Network: The main interface used to connect to the vCenter server (eth0)
  • VCHA Network: The private network configured strictly for VCHA replication traffic (eth1)
  • Management vCenter Server: When deploying the vCenter Server Appliance (VC1) to an ESXi host managed by another vCenter Server (VC2), the latter (VC2) is considered the “Management vCenter Server”.

Requirements

  • TCP ports 22, 5432, and 8182 must be open and remain uninterrupted between all nodes.
  • VCHA network latency between each node cannot exceed 10ms.
  • VCHA network throughput must be 1Gbps or higher.
  • vCenter HA has been tested and certified to perform on physically adjacent ESXi hosts and is not designed as a disaster recovery solution for locations connected over long distances. The replication traffic is too sensitive to disruption in these configurations and commonly becomes a point of failure, even after a successful deployment. For these reasons, the VCHA network does not support being configured over a WAN topology.
Note: The use of routers within the VCHA Network LAN segment is discouraged for multi-subnet configurations.
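
The port and latency requirements can be spot-checked from the Bash shell of one node against a peer node's VCHA IP address once the nodes are reachable. The commands below are a minimal sketch; the address 192.168.20.12 is a placeholder and must be replaced with the actual peer VCHA IP.

curl -v telnet://192.168.20.12:22
curl -v telnet://192.168.20.12:5432
curl -v telnet://192.168.20.12:8182
ping -c 10 192.168.20.12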

For more information, see FAQ: vCenter High Availability (2148003).
 



Environment

VMware vCenter Server 6.7.x
VMware vCenter Server 7.0.x

Resolution

Step 1: Deploy the Primary vCenter Server that will be the Active VCHA node

  • When deploying a new environment, VMware recommends deploying the vCenter Server Appliance to an ESXi host of the same version or later.
  • VMware recommends deploying the VCHA nodes to a DRS-enabled cluster containing at least three ESXi hosts.

Step 2: Choose a network deployment strategy.

The steps to configure VCHA with differing networks vary based on which interfaces will have non-default configurations. The three options are:

  • Multiple management IPs and a single VCHA network
  • Multiple management IPs and multiple VCHA networks
  • A single management IP and multiple VCHA networks

The relevant steps for each of these deployment strategies are explained in the following sections.
 

VCHA Deployment with multiple management IPs and a single VCHA network

  1. Configure the ESXi hosts that will be running the nodes for management and VCHA network traffic with these conditions:
- The ESXi hosts must have at least 2 networks (VM portgroups) attached either to a Standard or a Distributed Switch.
- The management and VCHA networks must be on different subnets.

- The vCenter HA network must be within a single physical datacenter LAN.
- A private network exists between the ESXi hosts with 1Gbps throughput dedicated for vCenter HA network traffic.
  2. Ensure the following are reserved:
- Two static IPs for the vCenter Server Management network (Active and Passive).
    Note: DNS will need to be configured to use the passive node's management IP address during a failover.

- Three non-routed static IPs on the same network for the vCenter HA network (Active, Passive, Witness).
  3. Add a second Network Adapter to the vCenter Server Appliance and attach it to the vCenter HA network port group. Ensure that the adapter type is the same as the default Network Adapter the vCenter Server Appliance deployed with (the appliance deploys with a VMXNET3 adapter).
  4. Configure the IP address for the second Network Adapter using the address for the vCenter HA network. For more information on configuring a second adapter, see How to manually add a second NIC to the vCenter Server Appliance 6.5 for VCHA (2147155).
Notes:
- Under IPv6 settings, ensure that Obtain IPv6 settings automatically is not enabled.
- Using dual IP stacks (IPv4 and IPv6) is not supported yet (as of 7.0 U2).
  5. Configure forward and reverse DNS lookup for the secondary management network IP address.
  6. Log in to the Active node with the vSphere Client.
  7. Select the vCenter Server object in the inventory and select the Configure tab.
  8. Select vCenter HA under settings.
  9. Click on the Set Up vCenter HA button to start the setup wizard.
    • If the vCenter server is self-managed, the Resource settings page is displayed. Proceed to step 13.
    • If your vCenter server is managed by another vCenter server in the same SSO domain, proceed to step 13.
    • If your vCenter server is managed by another vCenter server in a different SSO domain, input the location and credential details of that management vCenter server.
  10. Click Management vCenter Server credentials. Specify the Management vCenter Server FQDN or IP address, Single Sign-On username and password and click Next.
  11. If you do not have the Single Sign-On administrator credentials, select the second bullet and click Next.
  12. You may see a Certificate warning displayed. Review the SHA1 thumbprint and select Yes to continue.
  13. In the Resource settings section, first select the vCenter HA network for the active node from the drop-down menu.
  14. Select the checkbox to automatically create clones for Passive and Witness nodes.
  15. For the Passive node, click Edit.
  • Specify a unique name and target location.
  • Select the destination compute resource for the operation.
  • Select the datastore in which to store the configuration and disk files.
  • Select virtual machine Management (NIC 0) and vCenter HA (NIC 1) networks.
  • If there are issues with your selections, errors or compatibility warnings are displayed.
  16. Review your selections and click Finish.
  17. For the Witness node, click Edit.
  • Specify a unique name and target location.
  • Select the destination compute resource for the operation.
  • Select the datastore in which to store the configuration and disk files.
  • Select vCenter HA (NIC 1) network.
  • If there are issues with your selections, errors or compatibility warnings are displayed.
  18. Review your selections and click Finish.
  19. Click Next.
  20. In the IP settings section, select the IP version from the drop-down menu.
  21. Enter the IPv4 address (NIC 1) and Subnet mask or prefix length information for the Active, Passive and Witness nodes.
  22. Under the Passive Node section, click Edit Management Network Settings and enter the network settings for the secondary subnet that will be used.
  23. Click Finish.

 

VCHA Deployment with multiple management IPs and multiple VCHA networks

  1. Configure the ESXi hosts that will run the vCenter HA nodes for vCenter Management traffic and vCenter HA traffic with these conditions:
- The ESXi hosts must have at least 2 networks (VM portgroups) attached either to a Standard or a Distributed Switch.
- The management and VCHA networks must be on different subnets.
- The vCenter HA network must be within a single physical datacenter LAN.
- A private network exists between the ESXi hosts with 1Gbps throughput dedicated for vCenter HA network traffic.
  2. Ensure the following are reserved:
- Two static IPs for the vCenter Server Management network (Active and Passive).
    Note: DNS will need to be configured to use the passive node's management IP address during a failover.
- Three static IPs that are routable to each other for the vCenter HA network (Active, Passive, Witness).
  3. Add a second Network Adapter to the vCenter Server Appliance and attach it to the vCenter HA network port group. Ensure that the adapter type is the same as the default Network Adapter the vCenter Server Appliance deployed with (the appliance deploys with a VMXNET3 adapter).
  4. Configure the IP address for the second Network Adapter using the address for the vCenter HA network. For more information on configuring a second adapter, see How to manually add a second NIC to the vCenter Server Appliance 6.5 for VCHA (2147155).
Notes:
- Under IPv6 settings, ensure that Obtain IPv6 settings automatically is not enabled.
- Using dual IP stacks (IPv4 and IPv6) is not supported yet (as of 7.0 U2).
  5. Configure forward and reverse DNS lookup for the secondary management network IP address.
  6. Use an SSH client to connect to the vCenter Server as root.
  7. Use a text editor (such as vi) to manually add routes for the Passive and Witness VCHA networks on eth1 in /etc/systemd/network/10-eth1.network.
Example:
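The configuration below is a minimal sketch using placeholder addresses; substitute your own. It assumes the Active node's VCHA (eth1) address is 192.168.10.11/24 with a local gateway of 192.168.10.1, and that the Passive and Witness VCHA subnets are 192.168.20.0/24 and 192.168.30.0/24 respectively.

[Match]
Name=eth1

[Network]
Address=192.168.10.11/24

[Route]
Gateway=192.168.10.1
Destination=192.168.20.0/24

[Route]
Gateway=192.168.10.1
Destination=192.168.30.0/24
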
  8. Restart the networkd service:
systemctl restart systemd-networkd
  9. Log in to the Active node with the vSphere Client.
  10. Select the vCenter Server object in the inventory and select the Configure tab.
  11. Select vCenter HA under settings.
  12. Click on the Set Up vCenter HA button to start the setup wizard.
    • If the vCenter server is self-managed, the Resource settings page is displayed. Proceed to step 16.
    • If your vCenter server is managed by another vCenter server in the same SSO domain, proceed to step 16.
    • If your vCenter server is managed by another vCenter server in a different SSO domain, input the location and credential details of that management vCenter server.
  13. Click Management vCenter Server credentials. Specify the Management vCenter Server FQDN or IP address, Single Sign-On username and password and click Next.
  14. If you do not have the Single Sign-On administrator credentials, select the second bullet and click Next.
  15. You may see a Certificate warning displayed. Review the SHA1 thumbprint and select Yes to continue.
  16. In the Resource settings section, first select the vCenter HA network for the active node from the drop-down menu.
  17. Deselect the checkbox to automatically create clones for Passive and Witness nodes.
  18. Click Next.
  19. In the IP settings section, select the IP version from the drop-down menu.
  20. Enter the VCHA network information for the Active, Passive and Witness nodes.
  21. Under the Passive Node section, click Edit Management Network Settings and enter the network settings for the secondary management IP that will be used.
  22. Click Finish. The VCHA configuration should report that the other nodes are not reachable. This is expected until the end of the workflow.
  23. Right-click the vCenter virtual machine and select Clone > Clone to Virtual Machine.
  24. Select a name, location, and datastore for the Passive node. Do not choose to customize or power on the virtual machine in clone or vApp options.
  25. Once the clone process is done, edit the new Passive node VM's settings and place the network interfaces on the proper VM port groups.
  26. Power on the Passive node.

Note: The management interface will automatically be brought down at boot to prevent an IP conflict.

  27. Open a direct console session to the Passive node and press ALT+F1 to log in as root.
  28. Configure the eth1 interface with the new VCHA network IP address and routes to the Active and Witness nodes by updating the /etc/systemd/network/10-eth1.network file.
Example:
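A minimal sketch using the same placeholder subnets as the earlier example; substitute your own addresses. It assumes the Passive node's VCHA (eth1) address is 192.168.20.12/24 with a local gateway of 192.168.20.1, and that the Active and Witness VCHA subnets are 192.168.10.0/24 and 192.168.30.0/24.

[Match]
Name=eth1

[Network]
Address=192.168.20.12/24

[Route]
Gateway=192.168.20.1
Destination=192.168.10.0/24

[Route]
Gateway=192.168.20.1
Destination=192.168.30.0/24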


  29. Configure the eth0 interface with the secondary management IP address by updating the /etc/systemd/network/10-eth0.network file.
Example:
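A minimal sketch with placeholder values; substitute the secondary management IP address, gateway, and DNS server reserved for the Passive node's subnet.

[Match]
Name=eth0

[Network]
Address=10.0.20.50/24
Gateway=10.0.20.1
DNS=10.0.20.10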


  30. Restart the networkd service:
systemctl restart systemd-networkd
  31. Right-click the vCenter virtual machine and select Clone > Clone to Virtual Machine.
  32. Select a name, location, and datastore for the Witness node. Do not choose to customize or power on the virtual machine in clone or vApp options.
  33. Once the clone process is done, edit the new Witness node VM's settings and place the network interfaces on the proper port groups.
    • Optional: You may also elect to lower the vCPU count to 1 and the memory to a minimum of 1GB at this point. A memory reservation of 1GB is also recommended to match what a basic deployment would automatically assign the Witness node. This will save resources in the cluster and will not affect the performance of the vCenter Server.
  34. Power on the Witness node.
  35. Open a direct console session to the Witness node and press ALT+F1 to log in as root.
  36. Configure the VCHA network eth1 interface with the new IP address and routes to the Active and Passive nodes by updating the /etc/systemd/network/10-eth1.network file.
Example:
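A minimal sketch using the same placeholder subnets; substitute your own addresses. It assumes the Witness node's VCHA (eth1) address is 192.168.30.13/24 with a local gateway of 192.168.30.1, and that the Active and Passive VCHA subnets are 192.168.10.0/24 and 192.168.20.0/24.

[Match]
Name=eth1

[Network]
Address=192.168.30.13/24

[Route]
Gateway=192.168.30.1
Destination=192.168.10.0/24

[Route]
Gateway=192.168.30.1
Destination=192.168.20.0/24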


  37. The eth0 interface is not changed, as it will be kept offline on the Witness node.
  38. Restart the networkd service:
systemctl restart systemd-networkd
  39. Test network connectivity over the VCHA network by pinging each VCHA network IP from each node. If necessary, you can reboot the primary node at this point without disrupting the VCHA configuration process.
  40. Reboot the Passive and Witness nodes. The VCHA configuration page should mark both online after a few minutes once the services have started.

 

VCHA Deployment with a single management IP and multiple VCHA networks

  1. Configure the ESXi hosts that will run the vCenter HA nodes for vCenter Management traffic and vCenter HA traffic with these conditions:
- The ESXi hosts must have at least 2 networks (VM portgroups) attached either to a Standard or a Distributed Switch.
- The management and VCHA networks must be on different subnets.
- The vCenter HA network must be within a single physical datacenter LAN.
- A private network exists between the ESXi hosts with 1Gbps throughput dedicated for vCenter HA network traffic.
- There are no DNS configuration steps needed when a single management IP is used.
  2. Ensure the following are reserved:
- Three static IPs that are routable to each other for the vCenter HA network (Active, Passive, Witness).
  3. Add a second Network Adapter to the vCenter Server Appliance and attach it to the vCenter HA network port group. Ensure that the adapter type is the same as the default Network Adapter the vCenter Server Appliance deployed with (the appliance deploys with a VMXNET3 adapter).
  4. Configure the IP address for the second Network Adapter using the address for the vCenter HA network. For more information on configuring a second adapter, see How to manually add a second NIC to the vCenter Server Appliance 6.5 for VCHA (2147155).
Notes:
- Under IPv6 settings, ensure that Obtain IPv6 settings automatically is not enabled.
- Using dual IP stacks (IPv4 and IPv6) is not supported yet (as of 7.0 U2).
  5. Use an SSH client to connect to the vCenter Server as root.
  6. Use a text editor (such as vi) to manually add routes for the Passive and Witness VCHA networks on eth1 in /etc/systemd/network/10-eth1.network.
Example:
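As in the previous section, the configuration below is a minimal sketch using placeholder addresses; substitute your own. It assumes the Active node's VCHA (eth1) address is 192.168.10.11/24 with a local gateway of 192.168.10.1, and that the Passive and Witness VCHA subnets are 192.168.20.0/24 and 192.168.30.0/24.

[Match]
Name=eth1

[Network]
Address=192.168.10.11/24

[Route]
Gateway=192.168.10.1
Destination=192.168.20.0/24

[Route]
Gateway=192.168.10.1
Destination=192.168.30.0/24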


  7. Restart the networkd service:
systemctl restart systemd-networkd
  8. Log in to the Active node with the vSphere Client.
  9. Select the vCenter Server object in the inventory and select the Configure tab.
  10. Select vCenter HA under settings.
  11. Click on the Set Up vCenter HA button to start the setup wizard.
    • If the vCenter server is self-managed, the Resource settings page is displayed. Proceed to step 15.
    • If your vCenter server is managed by another vCenter server in the same SSO domain, proceed to step 15.
    • If your vCenter server is managed by another vCenter server in a different SSO domain, input the location and credential details of that management vCenter server.
  12. Click Management vCenter Server credentials. Specify the Management vCenter Server FQDN or IP address, Single Sign-On username and password and click Next.
  13. If you do not have the Single Sign-On administrator credentials, select the second bullet and click Next.
  14. You may see a Certificate warning displayed. Review the SHA1 thumbprint and select Yes to continue.
  15. In the Resource settings section, first select the vCenter HA network for the active node from the drop-down menu.
  16. Deselect the checkbox to automatically create clones for Passive and Witness nodes.
  17. Click Next.
  18. In the IP settings section, select the IP version from the drop-down menu.
  19. Enter the VCHA network information for the Active, Passive and Witness nodes.
  20. Click Finish. The VCHA configuration should report that the other nodes are not reachable. This is expected until the end of the workflow.
  21. Right-click the vCenter virtual machine and select Clone > Clone to Virtual Machine.
  22. Select a name, location, and datastore for the Passive node. Do not choose to customize or power on the virtual machine in clone or vApp options.
  23. Once the clone process is done, edit the new Passive node VM's settings and place the network interfaces on the proper VM port groups.
  24. Power on the Passive node.
Note: The management interface will automatically be brought down at boot to prevent an IP conflict.
  25. Open a direct console session to the Passive node and press ALT+F1 to log in as root.
  26. Configure the eth1 interface with the new VCHA network IP address and routes to the Active and Witness nodes by updating the /etc/systemd/network/10-eth1.network file.
Example:
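A minimal sketch using the same placeholder subnets; substitute your own addresses. It assumes the Passive node's VCHA (eth1) address is 192.168.20.12/24 with a local gateway of 192.168.20.1, and that the Active and Witness VCHA subnets are 192.168.10.0/24 and 192.168.30.0/24.

[Match]
Name=eth1

[Network]
Address=192.168.20.12/24

[Route]
Gateway=192.168.20.1
Destination=192.168.10.0/24

[Route]
Gateway=192.168.20.1
Destination=192.168.30.0/24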


  27. Restart the networkd service:
systemctl restart systemd-networkd
  28. Right-click the vCenter virtual machine and select Clone > Clone to Virtual Machine.
  29. Select a name, location, and datastore for the Witness node. Do not choose to customize or power on the virtual machine in clone or vApp options.
  30. Once the clone process is done, edit the new Witness node VM's settings and place the network interfaces on the proper port groups.
    • Optional: You may also elect to lower the vCPU count to 1 and the memory to a minimum of 1GB at this point. A memory reservation of 1GB is also recommended to match what a basic deployment would automatically assign the Witness node. This will save resources in the cluster and will not affect the performance of the vCenter Server.
  31. Power on the Witness node.
  32. Open a direct console session to the Witness node and press ALT+F1 to log in as root.
  33. Configure the VCHA network eth1 interface with the new IP address and routes to the Active and Passive nodes by updating the /etc/systemd/network/10-eth1.network file.
Example:
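A minimal sketch using the same placeholder subnets; substitute your own addresses. It assumes the Witness node's VCHA (eth1) address is 192.168.30.13/24 with a local gateway of 192.168.30.1, and that the Active and Passive VCHA subnets are 192.168.10.0/24 and 192.168.20.0/24.

[Match]
Name=eth1

[Network]
Address=192.168.30.13/24

[Route]
Gateway=192.168.30.1
Destination=192.168.10.0/24

[Route]
Gateway=192.168.30.1
Destination=192.168.20.0/24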

  34. Restart the networkd service:
systemctl restart systemd-networkd
  35. Test network connectivity over the VCHA network by pinging each VCHA network IP from each node. If necessary, you can reboot the primary node at this point without disrupting the VCHA configuration process.
  36. Reboot the Passive and Witness nodes. The VCHA configuration page should mark both online after a few minutes once the services have started.