Graceful Shutdown and Power On of a VMware Identity Manager PostgreSQL cluster

Article ID: 322726

Products

VMware Aria Suite

Issue/Introduction

How to shut down, power on, or reboot a VMware Identity Manager deployed through VMware Aria Suite Lifecycle (formerly vRealize Suite Lifecycle Manager).

Note: This article applies to VMware Identity Manager deployed through Aria Suite Lifecycle, for product versions Aria Suite Lifecycle 8.x and VMware Identity Manager 3.x.

Environment

VMware vRealize Suite Lifecycle Manager 8.x
VMware Aria Suite Lifecycle 8.x
VMware Identity Manager 3.3.x
VMware Identity Manager 3.0

Resolution

When using Aria Suite Lifecycle 8.1 or later, use the product UI. See Day 2 Operations for Global Environment in vRealize Suite Lifecycle Manager for details.

Workaround:
For versions earlier than 8.1, please take the following steps:

Single node VMware Identity Manager

Graceful Shut down of all services in VMware Identity Manager:    

  1. Navigate to the top-right corner of the VMware Identity Manager admin console.
  2. Ensure that the appliance health status of the VMware Identity Manager is Green.
URL: https://<VMware Identity Manager Hostname>/SAAS/admin/app/page#!/systemDiagnostic
Note: Where <VMware Identity Manager Hostname> is the complete FQDN of the VMware Identity Manager appliance.
  3. SSH to the VMware Identity Manager node as root.
  4. Run the following commands to stop the Horizon and Elasticsearch/OpenSearch services:
service horizon-workspace stop

vIDM versions 3.x - 3.3.6
service elasticsearch stop

vIDM versions 3.3.7 and above
/etc/init.d/opensearch stop

Note: After each command, verify the status of the service by running: service <service-name> status
  5. Power off the VMware Identity Manager node, if required.
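
Before powering off, it can help to confirm that both services report as stopped; a minimal sketch, using the status commands listed above for the relevant vIDM version:

# Confirm the services are stopped before powering off the node.
service horizon-workspace status
service elasticsearch status        # vIDM 3.x - 3.3.6
/etc/init.d/opensearch status       # vIDM 3.3.7 and above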

Graceful power on of all services in VMware Identity Manager:

  1. If the VMware Identity Manager node(s) are powered off, power on all the node(s) in vCenter.
  2. After the VMware Identity Manager node is powered on, the services start up automatically. However, if step #1 does not apply (the node was not powered off), follow the steps below to gracefully bring up the VMware Identity Manager services.
  3. SSH to the VMware Identity Manager node as root.
  4. Run the following commands to start the Horizon and Elasticsearch/OpenSearch services:

vIDM versions 3.x - 3.3.6

service horizon-workspace start
service elasticsearch start
Note: Use service <service-name> status to check the status of the service.

vIDM versions 3.3.7 and above

service horizon-workspace start
/etc/init.d/opensearch start
/etc/init.d/opensearch status

Clustered VMware Identity Manager

Graceful Shut down of all services in VMware Identity Manager:  

  1. Navigate to the top-right corner of the VMware Identity Manager admin console.
  2. Ensure that the appliance health status of the VMware Identity Manager is Green.
URL: https://<VMware Identity Manager Hostname>/SAAS/admin/app/page#!/systemDiagnostic
Note: Where <VMware Identity Manager Hostname> is the complete FQDN of the VMware Identity Manager appliance.
  3. To get the current cluster health status, run the following API:
API:
curl -H "Authorization: Basic token" -k https://AriaSuiteLifecyclehostname/lcm/authzn/api/vidmcluserhealth
API Help:
AriaSuiteLifecyclehostname: The hostname / IP of the Aria Suite Lifecycle appliance managing the vIDM cluster.
token: Run the following command to get the Base64-encoded value of username:password, where the username is admin@local and the password is the admin@local user's password.
echo -n 'admin@local:password' | base64
Note: In VCF mode, replace admin@local with vcfadmin@local and its respective password.
Note: The API triggers a request to recalculate the cluster health; once it completes, a notification with the current overall status is generated in Aria Suite Lifecycle. Ensure that the health notification for the postgres cluster in Aria Suite Lifecycle is green. If it is not, see Troubleshooting VMware Identity Manager postgres cluster deployed through vRealize Suite Lifecycle Manager to remediate cluster health; this is necessary to ensure that postgres is restored to a healthy state after it is gracefully brought back up.
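
For example, the token can be generated and passed to the API in a single shell session. This is only a sketch: 'password' and 'ariasuitelifecycle.example.com' are placeholders for the actual admin@local password and the Aria Suite Lifecycle hostname.

# Build the Basic authorization token (Base64 of username:password).
TOKEN=$(echo -n 'admin@local:password' | base64)
# Call the cluster health API on the Aria Suite Lifecycle appliance.
curl -H "Authorization: Basic ${TOKEN}" -k https://ariasuitelifecycle.example.com/lcm/authzn/api/vidmcluserhealth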
  4. To find the pgpool master, run the following command on any of the VMware Identity Manager nodes:
    su - root -c "echo -e 'password'|/usr/local/bin/pcp_watchdog_info -p 9898 -h localhost -U pgpool"
Command parameters:
-h : The host against which the command is run, here 'localhost'.
-p : The port on which pgpool accepts connections, which is 9898.
-U : The pgpool health check and replication delay check user, which is pgpool.

Expected result:
3 YES <Host1>:9999 Linux <Host1> <Host1>
      <Host1>:9999 Linux <Host1> <Host1> 9999 9000 4 MASTER
      <Host2>:9999 Linux <Host2> <Host2> 9999 9000 7 STANDBY
      <Host3>:9999 Linux <Host3> <Host3> 9999 9000 7 STANDBY


Note: In the above expected result, the node marked as MASTER is the pgpool master and the other nodes are pgpool standby nodes. If the pgpool master node cannot be found, see Troubleshooting VMware Identity Manager postgres cluster deployed through vRealize Suite Lifecycle Manager to remediate the VMware Identity Manager cluster.
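
If preferred, the MASTER entry can be filtered directly from the same command; a convenience sketch only, with 'password' as a placeholder for the pgpool user's password:

# Print only the watchdog entry for the pgpool MASTER node.
su - root -c "echo -e 'password'|/usr/local/bin/pcp_watchdog_info -p 9898 -h localhost -U pgpool" | grep MASTER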
  5. On the pgpool master found in step #4, run the following command to find the postgres primary node:
su - postgres -c "echo -e 'password'|psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\""

Command parameters:
-h : The host against which the command is run, here 'localhost'.
-p : The port on which pgpool accepts connections, here 9999.
-U : The pgpool user, which is pgpool.
-c : The command to run, which is 'show pool_nodes'.

Expected result:
 node_id | hostname | port | status | lb_weight | role    | select_cnt | load_balance_node | replication_delay | last_status_change
---------+----------+------+--------+-----------+---------+------------+-------------------+-------------------+---------------------
 0       | Host1    | 5432 | up     | 0.333333  | primary | 0          | false             | 0                 | 2019-10-14 06:05:42
 1       | Host2    | 5432 | up     | 0.333333  | standby | 0          | false             | 0                 | 2019-10-14 06:05:42
 2       | Host3    | 5432 | up     | 0.333333  | standby | 0          | true              | 0                 | 2019-10-14 06:05:42
(3 rows)
Note: In the above response, the node marked as primary in the 'role' column is the primary postgres node and the other nodes are standby postgres nodes. In an ideal scenario, there must be at least one postgres primary node. If none of the nodes is marked primary in the 'role' column, see Troubleshooting VMware Identity Manager postgres cluster deployed through vRealize Suite Lifecycle Manager to remediate the VMware Identity Manager cluster, then repeat this step to find the postgres primary node.
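
Similarly, the primary row can be filtered from the same output; again a convenience sketch, with 'password' as a placeholder:

# Print only the pool_nodes row whose role is 'primary'.
su - postgres -c "echo -e 'password'|psql -h localhost -p 9999 -U pgpool postgres -c \"show pool_nodes\"" | grep primary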
  6. Ensure that the pgpool master and the delegate IP are added to the /etc/hosts file, for example as shown below.
    The entries need to be outside of the VAMI_BEGIN and VAMI_END block.
    This needs to be done on all the hosts in the cluster.
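
For example, the additional entries might look like the following; the IP addresses and hostnames are placeholders for your environment:

# /etc/hosts (on every node in the cluster) - placeholder example entries,
# added outside the VAMI_BEGIN / VAMI_END block.
10.0.0.11   vidm-node1.example.com   vidm-node1     # current pgpool master
10.0.0.20   vidm-db.example.com      vidm-db        # delegate IP (database IP)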
  7. To bring down the Postgres cluster gracefully (a consolidated summary of the stop order follows this list):
    1. Stop the Horizon service on the two standby postgres nodes using the command:
service horizon-workspace stop
    2. Stop the Horizon service on the primary postgres node using the command:
service horizon-workspace stop
    3. Stop the Elasticsearch/OpenSearch service on the two standby postgres nodes using the command for your version:

vIDM versions 3.x - 3.3.6
service elasticsearch stop

vIDM versions 3.3.7 and above
/etc/init.d/opensearch stop

    4. Stop the Elasticsearch/OpenSearch service on the primary postgres node using the command for your version:

vIDM versions 3.x - 3.3.6
service elasticsearch stop

vIDM versions 3.3.7 and above
/etc/init.d/opensearch stop

    5. Stop the Pgpool service on the two standby postgres nodes using the command:
service pgService stop
    6. Stop the Pgpool service on the postgres master node using the command:
service pgService stop
    7. Stop vPostgres on the two standby postgres nodes using the command:
service vpostgres stop
    8. Stop vPostgres on the primary postgres node using the command:
service vpostgres stop
Note: After each command, verify that the service is stopped by running:
service <service-name> status
  8. Once all the services in step #7 are stopped, the VMware Identity Manager services on all nodes have been gracefully shut down. All nodes can now be powered off in vCenter, if required.
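
As a consolidated summary of the stop order in step #7: for each of the services below, run the stop command on the two standby postgres nodes first and on the primary/master postgres node last, verifying each stop with service <service-name> status. This is only a recap of the sub-steps above; the OpenSearch line applies only to vIDM 3.3.7 and above.

# Stop order (for each service: standby nodes first, then the primary/master node):
service horizon-workspace stop
service elasticsearch stop        # vIDM 3.x - 3.3.6
/etc/init.d/opensearch stop       # vIDM 3.3.7 and above
service pgService stop
service vpostgres stop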

Graceful power on of all services in VMware Identity Manager:   

  1. If all VMware Identity Manager node(s) are powered off, power on all the node(s) in vCenter.
  2. After the VMware Identity Manager nodes are powered on, the services come up automatically. However, if step #1 does not apply (the nodes were not powered off), SSH to each VMware Identity Manager node as root and gracefully bring up the VMware Identity Manager services as follows (a consolidated summary of the start order follows this list):
    1. Start the following services on the primary postgres node:
service vpostgres start
service pgService start
service horizon-workspace start

vIDM versions 3.x - 3.3.6
service elasticsearch start

vIDM versions 3.3.7 and above
/etc/init.d/opensearch start

Note: After each command, verify the status of the service by running:
service <service-name> status
    2. Start the following services on the two standby postgres nodes:
service vpostgres start
service pgService start
service horizon-workspace start

vIDM versions 3.x - 3.3.6
service elasticsearch start

vIDM versions 3.3.7 and above
/etc/init.d/opensearch start

Note: After each command, verify the status of the service by running:
service <service-name> status
  3. If the VMware Identity Manager node(s) were powered on as mentioned in step #1, the delegate IP (database IP) on the primary postgres node will be lost. To re-assign it, follow step #4 in KB 75080.
  4. Run the commands from steps #4 and #5 under the clustered 'Graceful Shut down of all services in VMware Identity Manager' section and make sure all the nodes are marked up. If any nodes are marked down, bring them up by performing the action mentioned in step #5 of KB 75080.
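
As a consolidated summary of the start order in step #2: start the services on the primary postgres node first and then on the two standby postgres nodes, verifying each start with service <service-name> status. This is only a recap of the sub-steps above; the OpenSearch line applies only to vIDM 3.3.7 and above.

# Start order per node (primary postgres node first, then the two standby nodes):
service vpostgres start
service pgService start
service horizon-workspace start
service elasticsearch start       # vIDM 3.x - 3.3.6
/etc/init.d/opensearch start      # vIDM 3.3.7 and above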


Additional Information

Please review these resources for additional details: