Test the reconfigured cluster to verify that it operates normally in each scenario.
Connect to prihana through Session Manager.
After logging in to the AWS Management Console, open the EC2 instance console.
Select the HANA-HDB-Primary instance, click Actions, and then click Connect.

Select Session Manager and click Connect.

On the prihana node, switch to the hdbadm user and crash (HDB kill) the HANA DB.
sudo su - hdbadm
HDB kill -9
exit

Switch to the root user and check the cluster status until the failover completes.
sudo su -
crm_mon -rfn1

Check the system replication attributes.
SAPHanaSR-showAttr
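After the kill, the cluster should promote sechana to primary. As a rough sketch of what to look for, the sync_state column in the SAPHanaSR-showAttr output can be checked from a script; the sample output below is fabricated for illustration, and on the cluster you would pipe the real command output instead of a file:

```shell
# Fabricated sample of SAPHanaSR-showAttr output (column layout simplified);
# on the cluster you would run SAPHanaSR-showAttr itself and grep its output.
cat > /tmp/showattr.txt <<'EOF'
Hosts    clone_state node_state roles                            site  sync_state
prihana  UNDEFINED   online     1:N:master1::worker:             SITE1 SFAIL
sechana  PROMOTED    online     4:P:master1:master:worker:master SITE2 PRIM
EOF

# After a successful takeover, the former secondary reports sync_state PRIM.
if grep -q 'sechana.*PRIM' /tmp/showattr.txt; then
  echo "takeover complete: sechana is primary"
fi
```

Until the takeover finishes, the old secondary still shows SOK or an intermediate state, so rerun the check until PRIM appears.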

The second scenario checks how the cluster behaves when the prihana instance is stopped.
Crash the OS by stopping the instance.
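If you prefer the CLI to the console, the instance can also be stopped with the AWS CLI. This is only a sketch: the instance ID below is a placeholder (look up the real HANA-HDB-Primary ID in the EC2 console), and the guard keeps the snippet inert on machines without a configured AWS CLI:

```shell
# Placeholder instance ID; replace with the real HANA-HDB-Primary instance ID.
INSTANCE_ID="i-0123456789abcdef0"

# Guarded so the sketch is a no-op without the AWS CLI and valid credentials.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  # Stopping the instance simulates an OS crash for this scenario.
  aws ec2 stop-instances --instance-ids "$INSTANCE_ID"
  aws ec2 wait instance-stopped --instance-ids "$INSTANCE_ID"
fi
```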

Connect to sechana through Session Manager.


Monitor the cluster status
sudo su -
crm_mon -rfn

Switch to the hdbadm user and check the current global.ini settings.
sudo su - hdbadm
cat /usr/sap/HDB/SYS/global/hdb/custom/config/global.ini

Start the prihana node. If prihana is healthy, fail back to prihana so that the QAS system can be used, and restore the global.ini settings of the PRD (HDB) database on sechana.
Start prihana instance.
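Like the earlier stop step, starting prihana can be done from the AWS CLI instead of the console. Again a sketch only: the instance ID is a placeholder for the real prihana instance ID, and the guard keeps the snippet inert without a configured CLI:

```shell
# Placeholder; replace with the real prihana (HANA-HDB-Primary) instance ID.
INSTANCE_ID="i-0123456789abcdef0"

# Guarded so the sketch is a no-op without the AWS CLI and valid credentials.
if command -v aws >/dev/null 2>&1 && aws sts get-caller-identity >/dev/null 2>&1; then
  aws ec2 start-instances --instance-ids "$INSTANCE_ID"
  # Wait until EC2 reports the instance running before checking the cluster.
  aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
fi
```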

Switch to the root user on the sechana node. After the prihana instance starts up normally, check that the HDB HANA resource on prihana is running as Slave.
sudo su -
crm_mon -rfn

SAPHanaSR-showAttr

Put the sechana node into standby so that the primary role fails back to prihana.
crm node standby sechana
crm_mon -rfn

Switch to the hdbadm user and edit global.ini to restore the settings for the secondary.
sudo su - hdbadm
vi /usr/sap/HDB/SYS/global/hdb/custom/config/global.ini
[system_replication]
...
preload_column_tables = false # add this line
[memorymanager]
global_allocation_limit = 24576
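These two values keep the secondary's PRD footprint small: global_allocation_limit is specified in MB (24576 MB = 24 GB), and preload_column_tables = false skips column-table preload on the replication target. A quick hedged check that the edit landed, run here against a fabricated copy of the file (on sechana you would point it at the real path):

```shell
# Fabricated copy of the relevant global.ini sections for illustration; on
# sechana the real file is /usr/sap/HDB/SYS/global/hdb/custom/config/global.ini
cat > /tmp/global.ini <<'EOF'
[system_replication]
preload_column_tables = false

[memorymanager]
global_allocation_limit = 24576
EOF

# Confirm both restored settings are present after the edit.
grep -q '^preload_column_tables = false' /tmp/global.ini
grep -q '^global_allocation_limit = 24576' /tmp/global.ini && echo "settings restored"
```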

Switch back to the root user and bring the sechana node online again.
sudo su -
crm node online sechana
crm_mon -rfn1

SAPHanaSR-showAttr

© 2020, Amazon Web Services, Inc. or its affiliates. All rights reserved.