vRealize Automation 8.1 deployment fails with "client-secrets" does not exist

Article ID: 312230


Products

VMware Aria Suite

Issue/Introduction

Symptoms:
This issue occurs when VMware Identity Manager (vIDM) nodes are configured behind a VMware NSX load balancer.
  • vRealize Automation 8.1 deployment or patching fails with:
    0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
    100 620 100 92 100 528 368 2112 --:--:-- --:--:-- --:--:-- 2489
    {"refLink":"/csp/gateway/slc/api/definitions/external/<UUID>"}Error: getting deployed release "client-secrets": release: "client-secrets" not found
    sleeping for 8 before upstalling client-secrets
    running: helm upgrade --install --timeout=1800 --wait --namespace prelude --set-string
  • The /services-logs/csp-clients-fixture/csp-clients-fixture.log file contains entries similar to the following:
    "500: Failed to register client with id XXX"
Note: XXX refers to a service OAuth2 client ID and is often a different client ID on each run.
  • The identity-service.log file contains entries similar to the following:
    "reactor.netty.http.client.PrematureCloseException: Connection prematurely closed BEFORE response"


Environment

VMware vRealize Automation 8.1.x

Cause

By default, NSX-v closes inactive connections after one second. This causes the load balancer to close the Identity service pod's HTTP connection to vIDM while the Identity service still treats the connection as open and alive, so subsequent requests are sent over a connection that has already been closed.

Resolution

Note: If the Workaround section below was implemented previously, remove that workaround when applying the fix below.

Before proceeding, ensure sufficient resources are assigned to the vIDM nodes, as insufficient resources on vIDM nodes can produce similar symptoms. For clustered setups, the minimum recommended resources for vIDM nodes are 4 CPUs and 16 GB RAM.

To resolve this issue, apply the following settings to the Workspace ONE / vIDM NSX load balancer configuration:
  1. Add a new application rule to the NSX configuration (see Add an Application Rule).
  2. In the rule, set the HTTP keep-alive timeout to 300 seconds:
    timeout http-keep-alive 300s
Note: See Load Balancer HTTP Connection Mode. You can set a value lower than 300 seconds if other applications using the NSX load balancer require it. The Identity service timeout is 15 seconds and the vIDM timeout is 20 seconds; a value under 15 seconds will not resolve the issue, and a value between 15 and 20 seconds may prove unstable. If such values are necessary, you may need to apply the workaround described below to completely disable the identity connection pool, or fine-tune the Identity service keep-alive time.
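If you prefer to create the application rule from the command line rather than the NSX UI, the following is a minimal sketch using curl against the NSX-v REST API. The NSX Manager address, Edge ID, and rule name are placeholders, and the API path is an assumption based on the NSX-v Edge load balancer API; after the rule is created it must still be attached to the vIDM virtual server.
    # <nsx-manager>, <edge-id>, and the rule name below are placeholders
    curl -k -u admin -X POST \
      -H "Content-Type: application/xml" \
      -d '<applicationRule><name>vidm-keepalive-300s</name><script>timeout http-keep-alive 300s</script></applicationRule>' \
      https://<nsx-manager>/api/4.0/edges/<edge-id>/loadbalancer/config/applicationrules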

Workaround:

NOTE: The workaround below will not persist through upgrades or patches. If you wish to upgrade or patch vRealize Automation 8.x after implementing this workaround, either follow this article for further updates to the Resolution section or raise a support request to confirm the latest fix information.

To increase vIDM resources, see the vRealize Suite Lifecycle Manager (vRSLCM) documentation.

  1. Open the deployment configuration file using your preferred text editor:
    vi /opt/charts/identity-service/templates/deployment.yaml
  2. Search for the line:
    -Dlogging.level.com.zaxxer.hikari.HikariConfig=DEBUG

Below that entry, add the following additional line:

-Didentity.rest.max_idle_time=PT0S

Note: In PT0S, the character between T and S is a zero (0), not the letter O. When editing the file, use spaces, not tabs. The file is sensitive to text alignment, so ensure the lines are aligned correctly.
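As an optional check, the following command prints the HikariConfig line and the line immediately after it, so the new option and its placement can be confirmed before continuing (the path is the same file edited above):
    grep -A 1 "HikariConfig=DEBUG" /opt/charts/identity-service/templates/deployment.yaml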

  3. Confirm the file is configured as follows:
    -Dlogging.level.com.zaxxer.hikari.HikariConfig=DEBUG
    -Didentity.rest.max_idle_time=PT0S
  4. Perform this change on each vRealize Automation node (see the sketch after this list).
  5. Re-deploy vRealize Automation by running the deploy.sh script manually, or by triggering a retry in the vRealize Suite Lifecycle Manager UI:
    /opt/scripts/deploy.sh
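The following is a hedged convenience sketch for copying the edited chart template to the remaining cluster nodes before re-deploying, instead of repeating the edit by hand; the hostnames are placeholders and root SSH access between the nodes is assumed.
    # Replace the placeholder hostnames with your own vRealize Automation nodes
    for node in <vra-node-2> <vra-node-3>; do
      scp /opt/charts/identity-service/templates/deployment.yaml root@"$node":/opt/charts/identity-service/templates/deployment.yaml
    done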


Additional Information

vRealize Automation 8.1 Load Balancing Guide
Deploying the VMware Identity Manager Machine Behind a Load Balancer