Configure VMware Cloud Director Extension for VMware Tanzu Mission Control with self-signed certificates



Article ID: 325495


Updated On:

Products

VMware Cloud Director

Issue/Introduction

This document will guide you through the steps required to successfully deploy VMware Cloud Director Extension for VMware Tanzu Mission Control when self-signed certificates are used for Harbor or the Tanzu Mission Control deployment.

Symptoms:
  • You are deploying VMware Cloud Director Extension for VMware Tanzu Mission Control with a Harbor repository that uses self-signed certificates.
  • You are configuring cert-manager to provide self-signed certificates to the Tanzu Mission Control deployment.
  • You are attaching a VMware Cloud Director Container Service Extension cluster to a Tanzu Mission Control deployment which uses self-signed certificates.
  • You are using a self-signed certificate for VMware Cloud Director.


Environment

VMware Cloud Director 10.x

Cause

The extension requires a local Harbor repository to host images for the Tanzu Mission Control service and attached clusters. Any cluster being used to host the Tanzu Mission Control (TMC) services or attached to it will pull images from this Harbor repository. The cluster nodes and kapp-controller in each cluster must trust the self-signed certificate used for Harbor so that images may be accessed securely.

Tenant operations (e.g., attaching clusters or using the tmc CLI) will connect to VMware Cloud Director (VCD) and Tanzu Mission Control services that are protected with a self-signed certificate. The machine used to run these operations must trust both self-signed certificates in order for the operations to succeed.

Resolution

Note: This process does not apply if you are using a publicly-signed certificate. Using a publicly-signed certificate will simplify operations for the provider and the tenant. If you are a VMware Cloud Director tenant, check with your provider to determine if you need to apply any changes.

VMware Cloud Director Extension for VMware Tanzu Mission Control uses certificates to encrypt traffic to the Harbor repository and Tanzu Mission Control services.  Self-signed certificates may be used when a publicly-signed certificate is not available.

Use a single self-signed certificate authority (CA) for Harbor and Tanzu Mission Control. CA certificates are generally valid for a longer period of time. They also allow you to trust all endpoints with a single step rather than repeating the process for each service. Configuring the cluster and kapp-controller to trust the CA will allow you to rotate Harbor certificates without having to repeat this process until the CA expires.

Provider Operations

Some extra steps are required before installing VMware Cloud Director Extension for VMware Tanzu Mission Control with self-signed certificates.
  • Generate a certificate authority (CA) to be used for all self-signed certificates.
  • Update the Container Service Extension server configuration with the CA so future clusters are properly configured.
  • Update existing clusters which will host Tanzu Mission Control or attach to Tanzu Mission Control to trust the CA.
  • Create a ClusterIssuer which will use the CA to generate certificates for the Tanzu Mission Control services.
  • Install the VMware Cloud Director Extension for VMware Tanzu Mission Control.


1. Generate a self-signed certificate authority (CA)

This command outputs a rootCA.key and rootCA.crt file, which may be used to generate certificates or to configure a ClusterIssuer resource for cert-manager. The command is provided as a reference; you may prefer a different command or configuration to generate your CA.
$ openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 \
-keyout $HOME/rootCA.key -out $HOME/rootCA.crt \
-nodes -extensions v3_ca \
-subj "/C=US/ST=CA/L=City Name/O=CompanyName/OU=OrgName/CN=Tanzu Mission Control Issuing CA"

On some platforms, you may see an error like Error Loading extension section v3_ca. This happens when the system default configuration does not support generating certificate authorities. The issue may be resolved by creating a custom configuration before running openssl.

$ cp /etc/ssl/openssl.cnf .

$ cat <<EOF >> openssl.cnf
[ v3_ca ]
basicConstraints = critical,CA:TRUE
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer:always
EOF

$ openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 \
-keyout $HOME/rootCA.key -out $HOME/rootCA.crt \
-nodes -extensions v3_ca -config openssl.cnf \
-subj "/C=US/ST=CA/L=City Name/O=CompanyName/OU=OrgName/CN=Tanzu Mission Control Issuing CA"
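Before distributing the CA, it can be worth a quick sanity check that certificates issued from it will validate. The following sketch is illustrative only: it creates a throwaway CA the same way, issues a placeholder server certificate (harbor.example.com is not a real endpoint), and verifies the chain. It does not touch the rootCA files generated above.

```shell
# Illustration only: create a scratch CA, issue a leaf certificate from it,
# and confirm the leaf validates against the CA alone.
workdir=$(mktemp -d) && cd "$workdir"
openssl req -x509 -sha256 -days 1825 -newkey rsa:2048 \
  -keyout rootCA.key -out rootCA.crt -nodes \
  -addext "basicConstraints=critical,CA:TRUE" \
  -subj "/CN=Tanzu Mission Control Issuing CA"
# Issue a placeholder server certificate signed by the scratch CA.
openssl req -newkey rsa:2048 -nodes -keyout harbor.key -out harbor.csr \
  -subj "/CN=harbor.example.com"
openssl x509 -req -in harbor.csr -days 365 -sha256 \
  -CA rootCA.crt -CAkey rootCA.key -CAcreateserial -out harbor.crt
# Validate the chain; prints "harbor.crt: OK" on success.
openssl verify -CAfile rootCA.crt harbor.crt
```

Because clusters trust the CA rather than the individual service certificates, a server certificate like this one can later be reissued without updating cluster trust.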


2. Update the CSE configuration

Starting with CSE 4.1, it is possible to configure the certificates which are trusted by all new clusters. Update the system configuration with the PEM-formatted contents of the CA. In our example, this can be found in rootCA.crt.

Follow the steps at Configure the VMware Cloud Director Container Service Extension Server Settings to update the configuration.

Restart the CSE vApp after submitting the changes.

Note: These certificates will be trusted by all new clusters created through the UI. The tenant will not have the ability to remove them. Any tenants creating clusters with the API are responsible for including the certificates to be trusted by that cluster.


3. Update the configuration of existing clusters

Any cluster created before the CSE configuration was updated must be updated to trust the same set of certificates. This applies to the cluster which hosts Tanzu Mission Control and any tenant cluster that will be attached to Tanzu Mission Control.

3.1 Update the cluster node configuration

The attached insert_cse_cluster_node_certificates.sh script is available to update the cluster definition. When executed, it will make two changes to the cluster:
  1. Disable the Auto Repair on Errors setting to prevent an accidental deletion of the cluster. This is related to a Known Issue where clusters may be deleted after already becoming available.
  2. Update the Cluster API (CAPI) YAML definition in the RDE with the certificate to be trusted.

Note: Updating the cluster definitions will execute a rolling-recreation of all nodes in the cluster. All pods will be restarted at least once as they are rescheduled from old worker nodes to new ones. This is a natural behavior for Kubernetes clusters. Applications with multiple nodes will see less interruption than applications with a single node.

The script may be run from any machine with network access to the VCD API. Be sure to have these utilities installed and in the user's PATH before running the script.
  • /bin/bash - Refer to your OS for installation instructions.
  • curl - Refer to your OS for installation instructions.
  • diff - Refer to your OS for installation instructions.
  • jq - https://jqlang.github.io/jq/
  • ytt - https://carvel.dev/ytt/
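A quick way to confirm the prerequisites are in place before running the script (the tool list mirrors the one above):

```shell
# Print any required utility that is missing from the current PATH.
for tool in bash curl diff jq ytt; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```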


Configure credentials

The script uses environment variables to authenticate with the VCD API. The API token will be used if available, otherwise the password will be used.

Note: The script will only work with System users. It can be updated to work with a tenant user but we recommend that providers run this script on behalf of their users.
export VCD_URL=https://vcd.example.com
export VCD_USERNAME=system_local_user
export VCD_PASSWORD=
export VCD_API_TOKEN=

Test the changes

Use the script to preview the planned updates for a cluster by running it with the --dry-run option. This will show the differences between the current cluster definition and the updated definition.
Note: You may see some changes to other sections of the YAML definition as the format of the document is normalized by ytt.
$ ./insert_cse_cluster_node_certificates.sh myorg tmc01 rootCA.crt rootCA2.crt --dry-run

Apply the changes

When ready, remove the --dry-run option to apply changes to the cluster definition. You can monitor tasks in the VCD UI or with kubectl.

Note: Using kubectl to monitor the progress will require network access to the Kubernetes API. Progress may be monitored in the VCD UI if that is not available.
$ kubectl get nodes
NAME                                              STATUS   ROLES           AGE   VERSION
tmc01-control-plane-node-pool-5sdmz               Ready    control-plane   42h   v1.24.10+vmware.1
tmc01-control-plane-node-pool-dcjr9               Ready    control-plane   43h   v1.24.10+vmware.1
tmc01-control-plane-node-pool-gr4sm               Ready    control-plane   43h   v1.24.10+vmware.1
tmc01-worker-node-pool-1-64cddcc746xwnznb-68gtm   Ready    <none>          42h   v1.24.10+vmware.1
tmc01-worker-node-pool-1-64cddcc746xwnznb-6pd7k   Ready    <none>          43h   v1.24.10+vmware.1
tmc01-worker-node-pool-1-64cddcc746xwnznb-7dtvq   Ready    <none>          42h   v1.24.10+vmware.1
$ kubectl get kubeadmcontrolplanes,machinedeployments -A
NAMESPACE   NAME                                                                              CLUSTER   INITIALIZED   API SERVER AVAILABLE   REPLICAS   READY   UPDATED   UNAVAILABLE   AGE   VERSION
tmc01-ns    kubeadmcontrolplane.controlplane.cluster.x-k8s.io/tmc01-control-plane-node-pool   tmc01     true          true                   3          3       3         0             43h   v1.24.10+vmware.1

NAMESPACE   NAME                                                          CLUSTER   REPLICAS   READY   UPDATED   UNAVAILABLE   PHASE     AGE   VERSION
tmc01-ns    machinedeployment.cluster.x-k8s.io/tmc01-worker-node-pool-1   tmc01     3          3       3         0             Running   43h   v1.24.10+vmware.1


3.2 Update kapp-controller

The kapp-controller is responsible for lifecycle management (LCM) of Tanzu packages on the cluster. Clusters running Tanzu Mission Control Self-Managed (TMC-SM) or being managed by it will use Tanzu packages from the Harbor repository. Follow this process to ensure the packages can be retrieved.

Note: These commands use kubectl to modify the kapp-controller configuration. This requires network access to the Kubernetes API. The tenant may need to execute these commands if the cluster is connected to a private network and the provider does not have network access to it.

Note: An error like cat: rootCA.crt: No such file or directory means that the specified file could not be found. Check the path to the file and attempt to run the script again.

Created with TKG <= 1.6.1

For clusters that were created before TKG2, the kapp-controller process runs in the tkg-system namespace. Use these commands to update the configuration with the contents from the PEM-formatted rootCA.crt file. Repeat the process for all certificates to be trusted.
kubectl patch -n tkg-system configmap/kapp-controller-config \
-p "$( \
  CURRENT_CACERTS=$(kubectl get -n tkg-system configmap/kapp-controller-config -o=jsonpath="{.data.caCerts}") \
    NEW_CACERTS=$(echo "$CURRENT_CACERTS" && cat rootCA.crt) \
  jq -c -n '{"data": {"caCerts": $ENV.NEW_CACERTS}}' \
)"
  
kubectl -n tkg-system rollout restart deploy/kapp-controller

Created with TKG >= 2.1.1

Starting with TKG2, the kapp-controller process runs in the kapp-controller namespace. Use these commands to update the configuration with the contents from the PEM-formatted rootCA.crt file. Repeat the process for all certificates to be trusted.
kubectl patch -n kapp-controller secret/kapp-controller-config \
-p "$( \
  CURRENT_CACERTS=$(kubectl get -n kapp-controller secret/kapp-controller-config -o=jsonpath="{.data.caCerts}" | base64 -d) \
    NEW_CACERTS=$(echo "$CURRENT_CACERTS" && cat rootCA.crt) \
  jq -c -n '{"stringData": {"caCerts": $ENV.NEW_CACERTS}}' \
)"

kubectl -n kapp-controller rollout restart deploy/kapp-controller
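After either variant, you can confirm the CA was appended by counting the certificates now present in the trust bundle. This sketch assumes the TKG 2 layout; for clusters created with TKG <= 1.6.1, use configmap/kapp-controller-config in the tkg-system namespace and drop the base64 decode.

```shell
# Count the certificates in the kapp-controller trust bundle; the count
# should have increased by one for each CA that was appended.
kubectl get -n kapp-controller secret/kapp-controller-config \
  -o jsonpath="{.data.caCerts}" | base64 -d | grep -c "BEGIN CERTIFICATE"
```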


4. Configure a ClusterIssuer resource

These steps configure a ClusterIssuer resource using the CA created earlier. Provide the ClusterIssuer name selfsigned-ca-clusterissuer during installation. Update the names if you used a different command to generate the CA or want a different name for the ClusterIssuer.
$ export KUBECONFIG=$PWD/kubeconfig-harbor.txt

$ kubectl create secret tls -n cert-manager selfsigned-ca-pair \
--cert=$HOME/rootCA.crt --key=$HOME/rootCA.key

$ cat <<EOF | kubectl apply -f -
{
  "apiVersion": "cert-manager.io/v1",
  "kind": "ClusterIssuer",
  "metadata": {
    "name": "selfsigned-ca-clusterissuer"
  },
  "spec": {
    "ca": {
      "secretName": "selfsigned-ca-pair"
    }
  }
}
EOF
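For reference, the same ClusterIssuer in the YAML form more commonly seen in cert-manager documentation (functionally identical to the JSON manifest above):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-ca-clusterissuer
spec:
  ca:
    secretName: selfsigned-ca-pair
```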


5. Install the VMware Cloud Director Extension for VMware Tanzu Mission Control

Note: The Command Line Interface must be used to install the extension when Harbor uses a self-signed certificate.

You can now proceed with installation using the Command Line Interface (CLI). Take note of these changes to the normal CLI installation process.
  • The CA must be installed to the machine used to run the installation commands. Refer to the appropriate process for your OS as the steps will be different for each.
For Debian / Ubuntu
$ sudo cp $HOME/rootCA.crt /usr/local/share/ca-certificates/
$ sudo update-ca-certificates
For Red Hat / CentOS
$ sudo cp $HOME/rootCA.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
For Photon
$ sudo tdnf install -y openssl-c_rehash
$ sudo cp $HOME/rootCA.crt /etc/ssl/certs/rootCA.pem
$ sudo rehash_ca_certificates.sh
  • Include these options when running the linux.run create instance command.
--input-cert-provider=cluster-issuer
--input-cert-cluster-issuer-name=selfsigned-ca-clusterissuer
--input-harbor-ca-bundle-file=$HOME/rootCA.crt



Tenant Operations


1. Update the configuration of existing clusters

The configuration of an existing cluster will need to be updated if the cluster was created before the CSE configuration was updated with the CA certificate. The process for tenant clusters is the same as the provider process described above.


2. Trust the Tanzu Mission Control certificate

Note: This process may need to be repeated for both the VMware Cloud Director and Tanzu Mission Control certificates if both are self-signed.

The self-signed certificate or CA must be trusted by the tenant machine before attempting to attach a cluster or use the tmc CLI. Refer to the appropriate process for your OS as the steps will be different for each.

For Debian / Ubuntu
$ sudo cp $HOME/rootCA.crt /usr/local/share/ca-certificates/
$ sudo update-ca-certificates
For Red Hat / CentOS
$ sudo cp $HOME/rootCA.crt /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust
For Photon
$ sudo tdnf install -y openssl-c_rehash
$ sudo cp $HOME/rootCA.crt /etc/ssl/certs/rootCA.pem
$ sudo rehash_ca_certificates.sh
Alternatively, you can use the SSL_CERT_DIR environment variable to define a user-managed location for trusted certificates.
$ mkdir $HOME/ssl-certs
$ cp rootCA.crt $HOME/ssl-certs
$ cp VCD.pem $HOME/ssl-certs
$ export SSL_CERT_DIR=$HOME/ssl-certs
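To confirm the directory contents are usable, each file can be parsed with openssl; tools that honor SSL_CERT_DIR read the PEM files placed there. A small check (the file names above, rootCA.crt and VCD.pem, are examples):

```shell
# Print the subject and expiry of each certificate in the user-managed
# trust directory; a parse error here means the file is not valid PEM.
mkdir -p "$HOME/ssl-certs"
for f in "$HOME"/ssl-certs/*.crt "$HOME"/ssl-certs/*.pem; do
  [ -e "$f" ] || continue
  echo "== $f"
  openssl x509 -in "$f" -noout -subject -enddate
done
```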
You can now proceed with cluster attachment and use of the tmc CLI.

Additional Information



Impact/Risks:
Failure to properly configure clusters or machines with the appropriate certificates may result in one or more of these errors.
  • Pods unable to start because the images cannot be pulled from Harbor.
  • Clusters unable to attach to Tanzu Mission Control (TMC) because the attachment manifest cannot be loaded.
  • Errors when using the tmc CLI because the VCD or TMC certificates are not trusted.


Attachments

insert_cse_cluster_node_certificates.sh