Design Update of NSX-T Data Center for VMware Cloud Foundation 3.11


Article ID: 319748


Updated On:

Products

VMware Cloud Foundation

Issue/Introduction

Purpose 

 

This article outlines the NSX-T design changes that you must implement when updating from VMware Cloud Foundation 3.10 to VMware Cloud Foundation 3.11.

NSX-T workload domains with multiple availability zones in VMware Cloud Foundation 3.10 use NSX-T Data Center 2.5.1. NSX-T Data Center 2.5.x uses a three-N-VDS edge node architecture, with edge nodes pinned to Availability Zone 1 and Availability Zone 2. In NSX-T Data Center 3.x, this design changed to a single-N-VDS architecture. The edge node fabric design also changed: uplink networks are stretched between availability zones instead of pinned to the individual zones with different uplinks.

This documentation focuses on addressing two design changes related to NSX-T Data Center in VMware Cloud Foundation 3.11.

  • Moving from a three-N-VDS to a single-N-VDS edge node architecture to improve network throughput and NSX scalability.
  • Moving from four edge nodes pinned across two availability zones to two edge nodes that fail over between availability zones by using vSphere HA, with the uplink networks stretched between Availability Zone 1 and Availability Zone 2.

Follow this procedure before updating the components to the versions described in the bill of materials for VMware Cloud Foundation 3.11.

VMware Software Versions in the Update 

Product Name         Version in VMware Cloud Foundation 3.10    Version in VMware Cloud Foundation 3.11
NSX-T Data Center    2.5.1 (build 15314288)                     3.0.3.1 (build 19067109)



    Environment

    VMware Cloud Foundation 3.11

    Resolution

    Create the NSX-T Segments for Uplink and Overlay Traffic

    Change the Teaming Policy in the Uplink Profile 

    Modify the standby uplinks in the existing host uplink profile.

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the main navigation bar, click System.
    3. In the navigation pane, select Fabric > Profiles.
    4. Change the teaming policy in the host uplink profile.
      1. On the Profiles page, click the Uplink Profiles tab.
      2. Select the host-overlay-profile profile and click Edit.
      3. In the Edit uplink profile dialog box, under Teamings, add standby uplinks for Uplink01 and Uplink02 and click Save.
        Name        Teaming Policy    Active Uplinks    Standby Uplinks
        Uplink01    Failover Order    uplink-1          uplink-2
        Uplink02    Failover Order    uplink-2          uplink-1
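    The edited teaming configuration corresponds roughly to the following API payload sketch. This is a minimal illustration, not a definitive implementation: the payload shape follows the NSX-T uplink host switch profile API, and the profile and uplink names are taken from the tables in this procedure.

```python
# Sketch of the named teaming entries added to the host uplink profile.
# Field names follow the NSX-T UplinkHostSwitchProfile API; the display
# name and uplink names are placeholders from this procedure, not values
# queried from a live system.

def named_teaming(name, active, standby):
    """Build one failover-order teaming entry with a standby uplink."""
    return {
        "name": name,
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": active, "uplink_type": "PNIC"}],
        "standby_list": [{"uplink_name": standby, "uplink_type": "PNIC"}],
    }

profile_update = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "host-overlay-profile",
    # Each uplink is active for its own teaming and standby for the other,
    # mirroring the Uplink01/Uplink02 rows of the table above.
    "named_teamings": [
        named_teaming("Uplink01", "uplink-1", "uplink-2"),
        named_teaming("Uplink02", "uplink-2", "uplink-1"),
    ],
}
```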

    Create an Overlay Uplink Profile 

    Create an overlay profile that is required to configure the uplinks of the new NSX-T Edge nodes for the migration to a single N-VDS architecture.

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>.
    2. On the main navigation bar, click System.
    3. In the navigation pane, select Fabric > Profiles.
    4. To define policies between NSX-T Edge nodes and top of rack switches, create uplink profiles.
      1. On the Profiles page, click the UplinkProfiles tab and click Add.
      2. On the New uplink profile page, enter the following values and click Add.
        Name                   Teaming – Teaming Policy    Teaming – Active Uplinks    Transport VLAN    MTU
        new-overlay-profile    Load Balance Source         uplink-1,uplink-2           vlan_id           9000
      • The Load Balance Source option represents load balancing that is based on the source port ID.

    5. In the new-overlay-profile, create two teaming policies for ECMP uplinks. 
      1. On the Uplink profiles tab, select the new-overlay-profile profile and click Edit. 
      2. In the Edit uplink profile dialog box, under Teamings, click the Add button, add the following teaming policies, and click Save.
        Name        Teaming Policy    Active Uplinks
        uplink01    Failover Order    uplink1
        uplink02    Failover Order    uplink2
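    The complete overlay profile from the two tables above can be sketched as one payload. This is a hedged sketch using NSX-T uplink profile API field names; the transport VLAN ID is an assumption that you would replace with your environment's value.

```python
# Sketch of the new edge overlay uplink profile (NSX-T uplink profile
# API shape). VLAN_ID is a placeholder for your overlay transport VLAN.

VLAN_ID = 1614  # assumption: replace with the vlan_id for your environment

new_overlay_profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "new-overlay-profile",
    "mtu": 9000,
    "transport_vlan": VLAN_ID,
    # Default teaming: load balance on source port ID across both uplinks.
    "teaming": {
        "policy": "LOADBALANCE_SRCID",
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
    # Named teamings pin each ECMP uplink to a single interface, as in the
    # second table (uplink names follow that table's spelling).
    "named_teamings": [
        {"name": "uplink01", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink1", "uplink_type": "PNIC"}]},
        {"name": "uplink02", "policy": "FAILOVER_ORDER",
         "active_list": [{"uplink_name": "uplink2", "uplink_type": "PNIC"}]},
    ],
}
```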

    Create the Transport Zones for Uplink Traffic 

    Create transport zones for uplink traffic. 

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the main navigation bar, click System.
    3. Navigate to Fabric > Transport zones and click Add.
    4. On the New transport zone page, configure the settings for the first transport zone and click Add.
      Setting         Value
      Name            edge-uplink-tz
      N-VDS Name      Enter the N-VDS name
      N-VDS Mode      Standard
      Traffic Type    VLAN
    5. Include the new teaming policies in the transport zone.
      1. Navigate to Fabric > Transport zones, select edge-uplink-tz and click Edit.
      2. In the Edit Transport Zone dialog box, add uplink01 and uplink02 to Uplink teaming policy names, and click Save.

    Create Uplink Segments 

    Uplink segments are required to connect to the new uplink transport zone. 

    Segment Name       Uplink & Type    Transport Zone    VLAN
    nvds01-uplink01    None             edge-uplink-tz    0-4094
    nvds01-uplink02    None             edge-uplink-tz    0-4094
    1. In a web browser, log in to the NSX Manager cluster at  https://<vip_fqdn>/.
    2. On the main navigation bar, click Networking.
    3. In the navigation pane, select Segments.
    4. On the Segments tab, click Add segment.
    5. Enter the following values for the segment and click Save.
      Setting           Value
      Name              nvds01-uplink01
      Transport Zone    edge-uplink-tz
      VLAN              0-4094
    6. Repeat this procedure to create the  nvds01-uplink02 segment.
    7. On the main navigation bar, click Advanced networking and security.
    8. Under Networking, click Switching.
    9. Change the teaming policy for the uplink segments.
      1. Select nvds01-uplink01 and click Edit.
      2. In the Edit dialog box, select uplink01 from the Uplink teaming policy name drop-down menu and click Save.
      3. Repeat this step to change the uplink teaming policy for the nvds01-uplink02 segment.
        Segment            Uplink Teaming Policy
        nvds01-uplink01    uplink01
        nvds01-uplink02    uplink02
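    The two uplink segments can be sketched with the NSX-T Policy segment model. This is an illustration under stated assumptions: the transport zone path is a placeholder, and the VLAN range 0-4094 trunks all VLANs so the edge uplink interfaces can tag their own VLANs.

```python
# Sketch of the uplink trunk segments (NSX-T Policy segment API shape).
# The transport zone path below is a placeholder, not a real object ID.

TZ_PATH = ("/infra/sites/default/enforcement-points/default/"
           "transport-zones/<edge-uplink-tz-id>")  # placeholder

def uplink_segment(name, teaming_policy):
    """Build one VLAN trunk segment bound to a named teaming policy."""
    return {
        "display_name": name,
        "transport_zone_path": TZ_PATH,
        "vlan_ids": ["0-4094"],  # trunk all VLANs for the edge uplinks
        "advanced_config": {"uplink_teaming_policy_name": teaming_policy},
    }

segments = [
    uplink_segment("nvds01-uplink01", "uplink01"),
    uplink_segment("nvds01-uplink02", "uplink02"),
]
```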

    Deploy and Configure an NSX-T Edge Cluster 

    Deploy a new NSX-T Edge cluster with new edge nodes that use the single N-VDS architecture. Then, migrate the networking components to the new edge nodes and delete the legacy edge nodes.

    Deploy the NSX-T Edge Appliances 

    Setting                  Value for en01            Value for en02
    Network 0                management port group     management port group
    Network 1                nvds01-uplink01           nvds01-uplink01
    Network 2                nvds01-uplink02           nvds01-uplink02
    Network 3                Unused                    Unused
    Management IP address    172.16.41.21              172.16.41.22
    Default gateway          172.16.41.253             172.16.41.253
    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the main navigation bar, click  System.
    3. In the navigation pane, select Fabric > Nodes.
    4. Click Edge Transport Nodes > Add Edge VM.
    5. On the Name and Host Name/FQDN page, configure the settings and click Next.
      Setting           Value
      Name              Enter the name for edge node 01
      Host Name/FQDN    Enter the FQDN for edge node 01
      Form Factor       Medium
    6. On the Credentials page, configure the settings and click Next.
      Setting                                         Value
      CLI "admin" User Password / Confirm Password    nsx_edge_admin_password
      System Root User Password / Confirm Password    nsx_edge_root_password
      CLI "audit" User Password / Confirm Password    nsx_edge_audit_password
      Allow SSH Login                                 Yes
      Allow Root SSH Login                            Yes
    7. On the Configure Deployment page, configure the settings and click Next.
      Setting            Value
      Compute Manager    Select the compute manager
      Cluster            Select the cluster
      Datastore          Select the datastore
    8. On the Configure Node Settings page, configure the settings and click Next.
      Setting                 Value
      IP Assignment           Static
      Management IP           Enter the management IP
      Default Gateway         Enter the default gateway
      Management Interface    Enter the management port group
      Search Domain Names     Enter the DNS search names
      DNS Servers             Enter the DNS servers
      NTP Servers             Enter the NTP servers
    9. On the Configure NSX page, configure the settings and click Finish.
      Setting             Value
      Transport Zone      edge-uplink-tz, overlay-tz
      Edge Switch Name    sfo01-w-nvds01
      Uplink Profile      new-overlay-profile
      IP Assignment       Use Static IP List
      Static IP List      Enter the static IPs
      Gateway             Enter the gateway
      Subnet Mask         Enter the subnet mask
      DPDK Fastpath Interfaces
      • uplink1 > nvds01-uplink01
      • uplink2 > nvds01-uplink02
    10. Repeat this procedure to deploy the edge_node2 NSX-T Edge appliance. 

    Create Anti-Affinity Rule for Edge Nodes 

    Create an anti-affinity rule to ensure that the edge nodes run on different ESXi hosts. If an ESXi host is unavailable, the edge nodes on the other hosts continue to provide support for the NSX management and control planes. 

    1. In a web browser, log in to the VI workload domain vCenter Server at https://<vcenter_server_fqdn>/ui.
    2. Select Menu > Hosts and Clusters.
    3. In the inventory, expand vCenter Server > Datacenter.
    4. Select the cluster and click the Configure tab.
    5. Select VM/Host Rules and select the existing edge anti-affinity rule.
    6. Add the new edge nodes to the existing affinity rule. 
      Option     Description
      Name       Select the existing edge anti-affinity rule
      Members    Click Add, select the two new edge nodes, and click OK
    7. Click OK, and then click OK again in the VM/Host rule dialog box.

    Move the NSX-T Edge Nodes to a Dedicated Resource Pool  

    After the edge nodes are deployed, manually move them to the dedicated edge resource pool. This ensures the correct allocation of resources during times of contention.

    1. In a web browser, log in to the VI workload domain vCenter Server at https://<vcenter_server_fqdn>/ui.
    2. Select Menu > Hosts and Clusters.
    3. In the inventory, expand vCenter Server > Datacenter > Cluster.
    4. Drag-and-drop the edge node VMs to the available edge node resource pool. 

    Create an NSX-T Edge Cluster Profile 

    To define a common configuration for NSX-T Edge nodes, you create an edge cluster profile. 

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the main navigation bar, click System.
    3. In the navigation pane, select Fabric > Profiles.
    4. On the Edge cluster profiles tab, click Add.
    5. On the New Edge cluster profile page, configure the settings and click Add.
      Setting                                Value
      Name                                   Enter the name of the edge cluster profile
      BFD Probe                              1000
      BFD Allowed Hops                       255
      BFD Declare Dead Multiple              3
      Standby Relocation Threshold (mins)    30
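    The profile values above can be sketched as an API payload. This is a hedged sketch: field names follow the NSX-T edge high availability profile API, and the display name is an assumption.

```python
# Sketch of the edge cluster (high availability) profile with the BFD
# and standby relocation values from the table above.

edge_cluster_profile = {
    "resource_type": "EdgeHighAvailabilityProfile",
    "display_name": "edge-cluster-profile",  # assumption: your profile name
    "bfd_probe_interval": 1000,              # milliseconds
    "bfd_allowed_hops": 255,
    "bfd_declare_dead_multiple": 3,
    "standby_relocation_config": {
        "standby_relocation_threshold": 30,  # minutes
    },
}
```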

    Create an NSX-T Edge Cluster 

    Adding multiple NSX-T Edge nodes to a cluster increases the availability of networking services. An NSX-T Edge cluster is necessary to support the Tier-0 and Tier-1 gateways in the workload domain. 

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the main navigation bar, click System.
    3. In the navigation pane, select Fabric > Nodes.
    4. On the Edge Clusters tab, click Add.
    5. In the Add Edge Cluster dialog box, configure the following settings.
      Setting                 Value
      Name                    Enter the name for the edge cluster
      Edge Cluster Profile    Select the name of the edge cluster profile
    6. From the Member Type drop-down menu, select Edge Node.
    7. Move the new edge nodes, Edge_Node01 and Edge_Node02, to the Selected list.
    8. Click OK and click Add.

    Create and Configure a New Tier-0 Gateway 

    The Tier-0 gateway in the NSX-T Edge cluster provides a gateway service between the logical and physical network. The NSX-T Edge cluster can back multiple Tier-0 gateways. 

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. Create the Tier-0 gateway.
      1. On the main navigation bar, click Networking.
      2. Select Tier-0 Gateways and click Add Tier-0 Gateway.
      3. Enter the following values and click Save.
        Setting                   Value
        Tier-0 Gateway Name       tier0-01
        High Availability Mode    Active-Active
        Edge Cluster              Select the newly created edge cluster
    3. Confirm that you want to continue configuring the Tier-0 gateway.
    4. Configure route redistribution.
      1. Expand Route Re-Distribution and click Set.
      2. Select all sources and click Apply.
      3. On the Add Tier-0 gateway page, in the Route re-distribution section, click Save.
    5. Add the uplink interfaces to the NSX-T Edge nodes.
      1. Expand Interfaces and click Set.
      2. In the Set Interfaces dialog box, click Add Interface and enter the settings of the uplink interface. 
        Name             Type        IP Address/Mask    Connected To (Segment)    Edge Node     MTU
        en01-Uplink01    External    172.16.47.2/24     segment of uplink01       edge_node1    9000
        en01-Uplink02    External    172.16.48.2/24     segment of uplink02       edge_node1    9000
        en02-Uplink01    External    172.16.47.3/24     segment of uplink01       edge_node2    9000
        en02-Uplink02    External    172.16.48.3/24     segment of uplink02       edge_node2    9000
      3. Click Save.
      4. Repeat this step for the other interfaces and click Close.
      5. On the Add Tier-0 gateway page, in the Interfaces section, click Save.
    6. Configure BGP.
      1. Expand BGP, configure the settings, and click Save.

        Setting             Value
        Local AS            bgp_asn
        BGP                 On
        Graceful Restart    Disabled
        Inter SR iBGP       On
        ECMP                On
        Multipath Relax     On
      2. Click Set for BGP Neighbors.
      3. In the Set BGP neighbors dialog box, click Add BGP neighbor, configure the settings for the first neighbor, and click Save.
        IP Address          BFD         Remote AS/Source Addresses    Hold Down Time    Keep Alive Time    Password        Out Filter    In Filter
        ip_bgp_neighbor1    Disabled    bgp_asn                       12                4                  bgp_password    -             -
        ip_bgp_neighbor2    Disabled    bgp_asn                       12                4                  bgp_password    -             -
        Note: Enable BFD only if the network supports and is configured for BFD.
      4. Repeat this step for the other neighbor, click Save, and click Close.
      5. On the Add Tier-0 gateway page, in the BGP section, click Close editing.
    7. Generate a BGP summary for the Tier-0 gateway.
      1. On the main navigation bar, click Advanced networking & security.
      2. In the navigation pane, click Routers and select tier0-01.
      3. From the Actions drop-down menu, select Generate BGP summary.
      4. Verify that each transport node has established connections.
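    The Tier-0 gateway and BGP settings above map to NSX-T Policy API payloads roughly as follows. This is a hedged sketch: field names follow the NSX-T 3.0 Policy API, and the AS numbers and neighbor addresses are placeholders for bgp_asn and ip_bgp_neighborN, not values from this document.

```python
# Sketch of the Tier-0 gateway and its BGP configuration (NSX-T Policy
# API shapes). All concrete values below are placeholders.

tier0 = {
    "display_name": "tier0-01",
    "ha_mode": "ACTIVE_ACTIVE",
}

bgp_config = {
    "enabled": True,
    "local_as_num": "65003",                         # placeholder for bgp_asn
    "graceful_restart_config": {"mode": "DISABLE"},  # graceful restart off
    "inter_sr_ibgp": True,
    "ecmp": True,
    "multipath_relax": True,
}

def bgp_neighbor(address):
    """One neighbor row from the table: BFD off, 12 s hold, 4 s keepalive."""
    return {
        "neighbor_address": address,     # placeholder for ip_bgp_neighborN
        "remote_as_num": "65001",        # placeholder for the peer ASN
        "bfd": {"enabled": False},       # enable only if the fabric runs BFD
        "hold_down_time": 12,
        "keep_alive_time": 4,
    }

neighbors = [bgp_neighbor("172.16.47.1"), bgp_neighbor("172.16.48.1")]
```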

    NSX-T Data Center Configuration for Availability Zone 2 

    Configure IP Prefixes in the New Tier-0 Gateway for Availability Zone 2  

    You configure default and any IP prefixes on the Tier-0 gateway to permit route advertisement by any network and by the 0.0.0.0/0 network. These IP prefixes are used in route maps to prepend a path to one or more autonomous systems (AS-path prepend) for BGP neighbors and to configure local preference on the learned default route for BGP neighbors in Availability Zone 2.

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the main navigation bar, click Networking.
    3. In the navigation pane, click Tier-0 gateways.
    4. Select the gateway and from the ellipsis menu, click Edit.
    5. Create the Any IP prefix list.
      1. Expand the Routing section and in the IP prefix list section, click Set.
      2. In the Set IP prefix list dialog box, click Add IP prefix list.
      3. Enter Any as the prefix name and under Prefixes, click Set.
      4. In the Set prefixes dialog box, click Add Prefix and configure the following settings.
        Setting    Value
        Network    any
        Action     Permit
    6. Click Add and then click Apply.
    7. Repeat step 5 to create the Default Route IP prefix list with the following configuration.
      Setting    Value
      Name       Default Route
      Network    0.0.0.0/0
      Action     Permit
    8. On the Set IP prefix list dialog box, click Close.
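    The two prefix lists can be sketched in the NSX-T Policy API shape as follows. This is an illustration, not a definitive payload; in particular, how "any" is represented here (a prefix entry without a network) is an assumption to verify against your API version.

```python
# Sketch of the Any and Default Route IP prefix lists (NSX-T Policy API
# shape). A prefix entry without a "network" key is assumed to match any
# network, mirroring the "any" value typed in the UI.

prefix_lists = [
    {
        "id": "Any",
        "display_name": "Any",
        "prefixes": [{"action": "PERMIT"}],  # no "network" => any network
    },
    {
        "id": "Default-Route",
        "display_name": "Default Route",
        "prefixes": [{"network": "0.0.0.0/0", "action": "PERMIT"}],
    },
]
```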

    Configure Route Maps in the New Tier-0 Gateway for Availability Zone 2 

    To define which routes are redistributed in the workload domain, you configure route maps in the New Tier-0 Gateway. 

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the NSX Manager main navigation bar, click Networking.
    3. In the navigation pane, click Tier-0 gateways.
    4. Select the gateway, and from the ellipsis menu, click Edit.
    5. Create a route map for traffic incoming to Availability Zone 2.
      1. Expand the Routing section and in the Route maps section, click Set.
      2. In the Set route maps dialog box, click Add route map.
      3. Enter a name for the route map.
      4. In the Match criteria column, click Set.
      5. On the Set match criteria dialog box, click Add match criteria and configure the following settings.
        Setting             Value for Default Route    Value for Any
        Type                IP Prefix                  IP Prefix
        Members             Default Route              Any
        Local Preference    80                         90
        Action              Permit                     Permit
      6. Click Add and then click Apply.
      7. In the Set route maps dialog box, click Save.
    6. Repeat step 5 to create a route map for outgoing traffic from Availability Zone 2 with the following configuration.

      Setting             Value
      Route map name      rm-out-az2
      Type                IP Prefix
      Members             Any
      AS Path Prepend     bgp_asn
      Local Preference    100
      Action              Permit
    7. In the Set route maps dialog box, click Close.
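    The two route maps can be sketched in the NSX-T Policy API shape. This is a hedged sketch: the prefix-list paths and the AS number are placeholders. The intent of the values is that routes learned in Availability Zone 2 get a lower local preference (80 for the default route, 90 for everything else) than the default of 100 in Availability Zone 1, and routes advertised from Availability Zone 2 carry a prepended AS path, so both directions prefer Availability Zone 1 while it is healthy.

```python
# Sketch of the rm-in-az2 and rm-out-az2 route maps (NSX-T Policy API
# shape). Paths and the AS number are placeholders.

PL_DEFAULT = "/infra/tier-0s/tier0-01/prefix-lists/Default-Route"  # placeholder
PL_ANY = "/infra/tier-0s/tier0-01/prefix-lists/Any"                # placeholder

rm_in_az2 = {
    "display_name": "rm-in-az2",
    "entries": [
        # Learned default route: local preference 80 (least preferred).
        {"action": "PERMIT",
         "prefix_list_matches": [PL_DEFAULT],
         "set": {"local_preference": 80}},
        # All other learned routes: local preference 90.
        {"action": "PERMIT",
         "prefix_list_matches": [PL_ANY],
         "set": {"local_preference": 90}},
    ],
}

rm_out_az2 = {
    "display_name": "rm-out-az2",
    "entries": [
        # Advertised routes: prepend the local ASN so peers prefer AZ1.
        {"action": "PERMIT",
         "prefix_list_matches": [PL_ANY],
         "set": {"as_path_prepend": "65003",  # placeholder for bgp_asn
                 "local_preference": 100}},
    ],
}
```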

    Configure BGP in the Tier-0 Gateway for Availability Zone 2  

    To enable failover from Availability Zone 1 to Availability Zone 2, you configure BGP neighbors on the stretched Tier-0 gateway in the management or workload domain. You add route filters to set local preference on incoming traffic and to prepend the AS path on outgoing traffic.

    You configure two BGP neighbors with route filters for the uplink interfaces in availability zone 2. 

    BGP Neighbors for Availability Zone 2 

    Setting            BGP Neighbor 1       BGP Neighbor 2
    IP Address         ip_bgp_neighbor1     ip_bgp_neighbor2
    BFD                Disabled             Disabled
    Remote AS          asn_bgp_neighbor1    asn_bgp_neighbor2
    Hold down time     12                   12
    Keep alive time    4                    4
    Password           bgp_password         bgp_password

    Route Filters for BGP Neighbors for Availability Zone 2 

    Setting              BGP Neighbor 1    BGP Neighbor 2
    IP Address Family    IPV4              IPV4
    Enabled              Enabled           Enabled
    Out Filter           rm-out-az2        rm-out-az2
    In Filter            rm-in-az2         rm-in-az2
    Maximum Routes       -                 -
    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the NSX Manager main navigation bar, click Networking.
    3. In the navigation pane, click Tier-0 gateways.
    4. Select the new gateway and from the ellipsis menu, click Edit.
    5. Add the BGP neighbors for Availability Zone 2.
      1. Expand BGP and in the BGP Neighbors section, click the neighbors count link.
      2. In the Set BGP neighbors dialog box, click Add BGP neighbor and configure the following settings.
        Setting            Value
        IP address         ip_bgp_neighbor1
        BFD                Disabled
        Remote AS          asn_bgp_neighbor1
        Hold down time     12
        Keep alive time    4
        Password           bgp_password

        Note: Enable BFD only if the network supports and is configured for BFD.
      3. In the Route filter section, click Set.
      4. In the Set route filter dialog box, click Add route filter and configure the following settings.
        Setting              Value
        IP Address Family    IPV4
        Enabled              Enabled
        Out Filter           rm-out-az2
        In Filter            rm-in-az2
        Maximum Routes       -
      5. Click Add and then click Apply.
    6. Repeat step 5 to configure BGP neighbor ip_bgp_neighbor2 and the corresponding route filter.
    7. On the Tier-0 gateway page, click Close editing.
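    One Availability Zone 2 neighbor with its route filter can be sketched in the NSX-T Policy API shape as follows. Addresses, AS numbers, and route-map paths are placeholders for the table values above.

```python
# Sketch of an AZ2 BGP neighbor with route filtering (NSX-T Policy API
# shape). All concrete values are placeholders.

RM_IN = "/infra/tier-0s/tier0-01/route-maps/rm-in-az2"    # placeholder path
RM_OUT = "/infra/tier-0s/tier0-01/route-maps/rm-out-az2"  # placeholder path

az2_neighbor = {
    "neighbor_address": "172.16.49.1",  # placeholder for ip_bgp_neighbor1
    "remote_as_num": "65002",           # placeholder for asn_bgp_neighbor1
    "bfd": {"enabled": False},          # enable only if the fabric runs BFD
    "hold_down_time": 12,
    "keep_alive_time": 4,
    # The route filter applies the AZ2 route maps to the IPv4 family.
    "route_filtering": [{
        "address_family": "IPV4",
        "enabled": True,
        "in_route_filters": [RM_IN],
        "out_route_filters": [RM_OUT],
    }],
}
```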

    Migrate the Existing T1-Gateway to the New Edge Cluster  

    The Tier-1 gateway must be migrated to the new Tier-0 gateway and new NSX-T Edge cluster. 

    1. In a web browser, log in to the NSX Manager cluster at https://<vip_fqdn>/.
    2. On the NSX Manager main navigation bar, click Networking.
    3. In the navigation pane, select Tier-1 gateways.
    4. On the Tier-1 Gateways page, click the vertical ellipsis menu for the tier1_gateway and click Edit.
    5. Update the values for the new Tier-0 gateway and edge cluster.
      Setting                  Value
      Tier-1 Gateway Name      tier1_gateway
      Linked Tier-0 Gateway    new_tier0_gateway
      Edge Cluster             new_edge_cluster
    6. Click Save and click Close Editing.

    All the segments in the workload domain are automatically connected to the new Tier-0 gateway.
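    The relink step above can be sketched as a Tier-1 update in the NSX-T Policy API shape. This is a hedged sketch: the gateway names and the edge cluster path are placeholders, and the edge cluster assignment shown on the locale services object is an assumption to verify against your API version.

```python
# Sketch of relinking the Tier-1 gateway to the new Tier-0 gateway and
# new edge cluster (NSX-T Policy API shapes). Paths are placeholders.

tier1_update = {
    "display_name": "tier1_gateway",
    # Point the Tier-1 at the new Tier-0 gateway.
    "tier0_path": "/infra/tier-0s/tier0-01",
}

# The edge cluster is assumed to be set on the Tier-1 locale services.
tier1_locale_service = {
    "edge_cluster_path": ("/infra/sites/default/enforcement-points/default/"
                          "edge-clusters/<new-edge-cluster-id>"),  # placeholder
}
```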

    Remove the Legacy Edge Cluster and Nodes

    To remove the legacy edge infrastructure, delete the legacy Tier-0 gateway, remove the edge nodes from the edge cluster, and then delete the edge cluster and the edge nodes. You can delete edge nodes only after you remove them from the edge cluster.

    In a Web browser, log in to the NSX Manager cluster.  

    1. Remove the legacy Tier-0 gateway.
      1. On the NSX Manager main navigation bar, click Networking.
      2. In the navigation pane, select Tier-0 gateways.
      3. On the Tier-0 Gateways page, click the vertical ellipsis menu for the tier0_gateway and click Delete.
      4. In the Delete Tier-0 Gateway dialog box, confirm the deletion.
    2. Remove the edge nodes from the cluster.
      1. On the main navigation bar, click System.
      2. In the navigation pane, select Fabric > Nodes.
      3. On the Edge Clusters tab, select the legacy Edge_cluster.
      4. Under Edit Edge Clusters, move the edge nodes for Availability Zone 1 and Availability Zone 2 from Selected to Available.
      5. Click Save.
    3. Remove the edge cluster.
      1. On the main navigation bar, click System.
      2. In the navigation pane, select Fabric > Nodes.
      3. On the Edge Clusters tab, select the legacy edge cluster and click Delete.
      4. In the Delete Edge Cluster dialog box, confirm the delete operation.
    4. Delete the edge nodes.
      1. On the main navigation bar, click System.
      2. In the navigation pane, select Fabric > Nodes.
      3. On the Edge Transport Nodes tab, select Edge_01 for Availability Zone 1 and click Delete.
      4. In the Delete Transport Nodes dialog box, confirm the deletion.
      5. Repeat this step for Edge_02 for Availability Zone 1 and for Edge_01 and Edge_02 of Availability Zone 2.


    Additional Information

    Impact/Risks:

    This update does not impact the SDDC design and implementation, ensures interoperability, and introduces bug fixes.  

    Prerequisites 
    Before you upgrade the virtual infrastructure layer of the SDDC, verify that your existing VMware Cloud Foundation environment meets the following general prerequisites.

    • Verify that your environment implementation follows exactly the software bill of materials for VMware Cloud Foundation 3.10.
    • In the VMware Cloud Foundation Upgrade Guide, see Upgrade Prerequisites.