Monday, February 12, 2018

Real Application Clusters

The master node in a RAC cluster can be identified in a few ways.

By querying gv$gcs_resource:

select * from gv$gcs_resource;

The node that takes the automatic backup of the OCR is the master node.

By reviewing the ocssd and crsd logs:

cat $GRID_HOME/log/host01/cssd/ocssd.log | grep 'master node' | tail -1
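The -showbackup output also reveals the master, since each automatic backup entry carries the name of the node that took it (run as root):

$GRID_HOME/bin/ocrconfig -showbackup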

The cluster name can be found with the command below.

How to Query the Cluster Name [ID 577300.1]

$GRID_HOME/bin/cemutlo -n
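Alternatively, olsnodes can print the cluster name as well (the -c option assumes 11gR2 or later):

$GRID_HOME/bin/olsnodes -c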

OCR

OCR maintains information about clusterware resources such as ASM, database instances, SCAN listeners, disk groups, VIPs, nodeapps, etc.
OCR can be managed with the ocrconfig, ocrcheck, and ocrdump utilities as the root user.
We can have up to 4 mirror copies of the OCR.
Oracle automatically backs up the OCR every 4 hours, at the end of every day, and at the end of every week; neither the backup frequency nor the number of OCR backups retained can be changed.
Oracle Clusterware retains the last three 4-hour backups plus the last daily and last weekly backup.
When a node is added or deleted, the OCR is updated accordingly. The OCR must reside on shared storage accessible to all nodes in the cluster.
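A quick integrity and inventory check (run as root; the dump file path is illustrative, and ocrdump writes to a file named OCRDUMPFILE in the current directory if none is given):

$GRID_HOME/bin/ocrcheck
$GRID_HOME/bin/ocrdump /tmp/ocr.dmp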

Note that the -local flag makes ocrconfig operate on the OLR (Oracle Local Registry) rather than the OCR:

ocrconfig -showbackup            lists the automatic OCR backups and their location.
ocrconfig -local -manualbackup   manually backs up the OLR on the local node.
ocrconfig -local -showbackup     lists the OLR backups available on the local node.
ls -ltr /u01/app/grid/cdata/nodename/*.olr   lists the OLR backups at the OS level.
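The cluster-wide OCR can be backed up on demand as well (run as root; by default the backup is written under $GRID_HOME/cdata/<clustername>/ on the master node):

$GRID_HOME/bin/ocrconfig -manualbackup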

Restore OLR from current backup

Stop the clusterware on the node.
Verify that ohasd.bin is not running.
Restore the backup:
ocrconfig -local -restore /u01/app/grid/cdata/nodename/olrbackup.olr
Start the clusterware.
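The same steps end to end, as a minimal sketch (run as root; the backup file name is illustrative):

$GRID_HOME/bin/crsctl stop crs -f        # stop the stack on this node
ps -ef | grep ohasd.bin | grep -v grep   # should return nothing
$GRID_HOME/bin/ocrconfig -local -restore /u01/app/grid/cdata/nodename/olrbackup.olr
$GRID_HOME/bin/crsctl start crs          # restart the stack
$GRID_HOME/bin/ocrcheck -local           # verify OLR integrity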

Relocate OCR to different ASM disk group

As the root user, check the current location of the OCR.
$GRID_HOME/bin/ocrcheck
Add the new location
$GRID_HOME/bin/ocrconfig -add +NEW_OCR
Check the current location of OCR
$GRID_HOME/bin/ocrcheck
Delete the old OCR location (+DATA in this example)
$GRID_HOME/bin/ocrconfig -delete +DATA
Check the current location of OCR
$GRID_HOME/bin/ocrcheck
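As an optional post-check, cluvfy can verify OCR integrity across all nodes (assuming cluvfy is available in the Grid home):

$GRID_HOME/bin/cluvfy comp ocr -n all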

Voting Disks

Voting disks maintain the node membership information.
Each node participating in the cluster writes its heartbeat to the voting disks every second.
If a node fails to vote its availability within the CSS misscount threshold (30 seconds by default), the node gets rebooted (evicted from the cluster).
Voting disk backups are taken with the dd command (in releases prior to 11gR2; from 11gR2 onward voting disk data is backed up automatically into the OCR and dd backups are no longer supported). Backing up the voting disk should be part of your backup routine/policy, and operations on the voting disk are performed as the root user.
Oracle recommends taking a backup of the voting disk after every node addition or deletion.
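A pre-11gR2-style dd backup sketch (the device and backup paths are illustrative; do not use this from 11gR2 onward):

dd if=/dev/raw/raw1 of=/backup/votedisk.bak bs=4k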

Relocate voting disks to a different disk group.

$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/crsctl replace votedisk +OCR
$GRID_HOME/bin/crsctl query css votedisk
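When voting files live on a shared file system rather than ASM, they are added and removed individually instead; the path and File Universal Id below are illustrative:

$GRID_HOME/bin/crsctl add css votedisk /shared/grid/vdisk3
$GRID_HOME/bin/crsctl delete css votedisk <FUID>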

OCR and voting files are vital for clusterware operation. During installation of the GI we have the option to choose only one disk group for the OCR and voting files; if that disk group goes down we lose both, and since recovering each requires a different approach, the downtime would be longer. Here I'm outlining the procedure to separate the OCR and voting files into different disk groups.

As a root user,

$GRID_HOME/bin/ocrcheck
$GRID_HOME/bin/crsctl query css votedisk
Create the new disk group once the new disks are available (see the disk group creation sketch after this list).
$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/crsctl replace votedisk +VD
$GRID_HOME/bin/crsctl query css votedisk
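A minimal disk group creation sketch, run in SQL*Plus against the ASM instance as SYSASM (the disk paths are illustrative; a normal-redundancy group needs at least three disks/failure groups to hold voting files, and compatible.asm must be 11.2 or higher):

create diskgroup VD normal redundancy
  disk '/dev/oracleasm/disks/VD1',
       '/dev/oracleasm/disks/VD2',
       '/dev/oracleasm/disks/VD3'
  attribute 'compatible.asm' = '11.2';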
