Thursday, February 22, 2018

Interview Questions


What are the different kernel parameters?

Why do we set kernel parameters?

What happens to the database instance if the database is completely using the configured shmmax memory?

What is meant by fixed size?

What are the different types of buffers allocated in memory at instance startup?

If a user or server reaches the maximum number of configured processes, what would be the impact on the user or server?

How do you decide which block size to use in the database or in a particular tablespace?

What are hard parsing and soft parsing?

How do you control hard parsing?

What are literals and bind variables?

When do you use literals and when do you use bind variables?
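For reference, a minimal illustration of the difference (table and column names are hypothetical):

select * from employees where employee_id = 7839;     -- literal: each distinct value can force a new hard parse
select * from employees where employee_id = :empid;   -- bind variable: one shared cursor, soft parse on reuse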

How do you check for block corruption in the database?

What are the different protection modes in a Data Guard environment?

I have 3 standby databases with one primary database; how many protection modes can be configured?

Can different standby databases receive redo data synchronously and asynchronously? If yes, how can we have a single protection mode throughout a Data Guard setup having more than one standby database?

Which process gives the commit acknowledgement to the primary database?

How is data shipped from the primary database to the standby database?

What is the functionality of the LNS and RFS processes?

What is the functionality of the ARCH background process?

Which process applies the redo data on the standby database?

What parameters do we need to configure to set up a physical standby database?

What is the FAL_SERVER parameter?

If we unset the FAL_SERVER parameter, what would be the impact?

What are standby redo log files?

What happens if archive logs are deleted or corrupted on the primary before they are applied to the standby?

When we issue the switchover command on a physical standby database, to which point is the new primary database recovered/opened?

What issues have you faced while installing Oracle RAC?

What are the OCR and voting disks?

What are client-side load balancing and server-side load balancing?

How do you achieve client-side and server-side load balancing?

I have 2 queries executing on RAC instance ORCL1, one an INSERT and the other a SELECT, and suddenly the instance crashes. What happens to my sessions?

How do you configure failover and what are the different types of failover methods?

What are the steps you will take before applying patch?

Why do we unlock and lock the grid binaries before and after applying patches?

How do you recover from global inventory corruption in Oracle RAC?

What are the new features of the Oracle upgrade utility in Oracle 12c?

What are the new features in Oracle 12c?

What is your backup strategy?

How were backups configured in your environment?

What is an obsolete backup?

My database backup retention is 5 days, and the backup schedule is:

day 0 - Full backup
day 1 - Incremental backup
day 2 - Incremental backup
day 3 - Incremental backup
day 4 - Incremental backup
day 5 - Incremental backup
day 6 - Incremental backup
day 0 - Full backup
day 1 - Incremental backup
day 2 - Incremental backup
day 3 - Database is crashed.

Since the database's backup retention is 5 days, how many backups are available in the backup destination?
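For context, a minimal RMAN sketch of how such a retention policy is configured and enforced (the 5-day recovery window matches the scenario above):

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 5 DAYS;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;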

I have yesterday's full database backup; today a datafile was added and then accidentally dropped. Can we recover the datafile without any data loss?

I have yesterday's full database backup and the database crashed today. How does the database know the recovery length/time?

What are db file sequential read and db file scattered read?

How do you troubleshoot the slow performance of a query that was performing well some time back?

What is an index, why do we create indexes, and what are the different types of indexes?

What is a partition and what are the different types of partitions?





Saturday, February 17, 2018

Result: PRVF-4007 : User equivalence check failed for user "grid"

[grid@oel01 grid]$ ./runcluvfy.sh stage -post hwos -n oel01,oel02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "oel01"
  Destination Node                      Reachable?           
  ------------------------------------  ------------------------
  oel02                                 yes                   
  oel01                                 yes                   
Result: Node reachability check passed from node "oel01"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment               
  ------------------------------------  ------------------------
  oel02                                 failed               
  oel01                                 failed               
Result: PRVF-4007 : User equivalence check failed for user "grid"

ERROR:
User equivalence unavailable on all the specified nodes
Verification cannot proceed


Post-check for hardware and operating system setup was unsuccessful on all the nodes.
[grid@oel01 grid]$

The user equivalence check fails because passwordless SSH is not yet configured for the "grid" user between the nodes. The sshUserSetup.sh script shipped with the Grid Infrastructure software sets it up:

[grid@oel01 sshsetup]$ ./sshUserSetup.sh -user grid -hosts "oel01 oel02" -noPromptPassphrase
The output of this script is also logged into /tmp/sshUserSetup_2018-02-18-00-41-56.log
Hosts are oel01 oel02
user is grid
Platform:- Linux 
Checking if the remote hosts are reachable
PING oel01.oracle.com (192.168.52.1) 56(84) bytes of data.
64 bytes from oel01.oracle.com (192.168.52.1): icmp_seq=1 ttl=64 time=0.065 ms
64 bytes from oel01.oracle.com (192.168.52.1): icmp_seq=2 ttl=64 time=0.075 ms
64 bytes from oel01.oracle.com (192.168.52.1): icmp_seq=3 ttl=64 time=0.065 ms
64 bytes from oel01.oracle.com (192.168.52.1): icmp_seq=4 ttl=64 time=0.061 ms
64 bytes from oel01.oracle.com (192.168.52.1): icmp_seq=5 ttl=64 time=0.062 ms

--- oel01.oracle.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.061/0.065/0.075/0.010 ms
PING oel02.oracle.com (192.168.52.2) 56(84) bytes of data.
64 bytes from oel02.oracle.com (192.168.52.2): icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from oel02.oracle.com (192.168.52.2): icmp_seq=2 ttl=64 time=0.796 ms
64 bytes from oel02.oracle.com (192.168.52.2): icmp_seq=3 ttl=64 time=0.644 ms
64 bytes from oel02.oracle.com (192.168.52.2): icmp_seq=4 ttl=64 time=0.668 ms
64 bytes from oel02.oracle.com (192.168.52.2): icmp_seq=5 ttl=64 time=0.417 ms

--- oel02.oracle.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 0.417/0.791/1.432/0.343 ms
Remote host reachability check succeeded.
The following hosts are reachable: oel01 oel02.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
The script will setup SSH connectivity from the host oel01.oracle.com to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host oel01.oracle.com
and the remote hosts without being prompted for passwords or confirmations.

NOTE 1:
As part of the setup procedure, this script will use ssh and scp to copy
files between the local host and the remote hosts. Since the script does not
store passwords, you may be prompted for the passwords during the execution of
the script whenever ssh or scp is invoked.

NOTE 2:
AS PER SSH REQUIREMENTS, THIS SCRIPT WILL SECURE THE USER HOME DIRECTORY
AND THE .ssh DIRECTORY BY REVOKING GROUP AND WORLD WRITE PRIVILEDGES TO THESE
directories.

Do you want to continue and let the script make the above mentioned changes (yes/no)?
yes

The user chose yes
User chose to skip passphrase related questions.
Creating .ssh directory on local host, if not present already
Creating authorized_keys file on local host
Changing permissions on authorized_keys to 644 on local host
Creating known_hosts file on local host
Changing permissions on known_hosts to 644 on local host
Creating config file on local host
If a config file exists already at /home/grid/.ssh/config, it would be backed up to /home/grid/.ssh/config.backup.
Removing old private/public keys on local host
Running SSH keygen on local host with empty passphrase
Generating public/private rsa key pair.
Your identification has been saved in /home/grid/.ssh/id_rsa.
Your public key has been saved in /home/grid/.ssh/id_rsa.pub.
The key fingerprint is:
da:96:e7:db:d8:74:4b:74:de:07:c1:db:89:9b:82:9a grid@oel01.oracle.com
The key's randomart image is:
+--[ RSA 1024]----+
|                 |
|             .   |
|              o  |
|              .+.|
|        S    .+.o|
|       o ..  .o+.|
|      . +....oo +|
|       .oo =.o ..|
|       E  +.o .  |
+-----------------+
Creating .ssh directory and setting permissions on remote host oel01
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host oel01. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host oel01.
Warning: Permanently added 'oel01,192.168.52.1' (RSA) to the list of known hosts.
grid@oel01's password: 
Done with creating .ssh directory and setting permissions on remote host oel01.
Creating .ssh directory and setting permissions on remote host oel02
THE SCRIPT WOULD ALSO BE REVOKING WRITE PERMISSIONS FOR group AND others ON THE HOME DIRECTORY FOR grid. THIS IS AN SSH REQUIREMENT.
The script would create ~grid/.ssh/config file on remote host oel02. If a config file exists already at ~grid/.ssh/config, it would be backed up to ~grid/.ssh/config.backup.
The user may be prompted for a password here since the script would be running SSH on host oel02.
Warning: Permanently added 'oel02,192.168.52.2' (RSA) to the list of known hosts.
grid@oel02's password: 
Done with creating .ssh directory and setting permissions on remote host oel02.
Copying local host public key to the remote host oel01
The user may be prompted for a password or passphrase here since the script would be using SCP for host oel01.
grid@oel01's password: 
Done copying local host public key to the remote host oel01
Copying local host public key to the remote host oel02
The user may be prompted for a password or passphrase here since the script would be using SCP for host oel02.
grid@oel02's password: 
Done copying local host public key to the remote host oel02
cat: /home/grid/.ssh/known_hosts.tmp: No such file or directory
cat: /home/grid/.ssh/authorized_keys.tmp: No such file or directory
SSH setup is complete.

------------------------------------------------------------------------
Verifying SSH setup
===================
The script will now run the date command on the remote nodes using ssh
to verify if ssh is setup correctly. IF THE SETUP IS CORRECTLY SETUP,
THERE SHOULD BE NO OUTPUT OTHER THAN THE DATE AND SSH SHOULD NOT ASK FOR
PASSWORDS. If you see any output other than date or are prompted for the
password, ssh is not setup correctly and you will need to resolve the
issue and set up ssh again.
The possible causes for failure could be:
1. The server settings in /etc/ssh/sshd_config file do not allow ssh
for user grid.
2. The server may have disabled public key based authentication.
3. The client public key on the server may be outdated.
4. ~grid or ~grid/.ssh on the remote host may not be owned by grid.
5. User may not have passed -shared option for shared remote users or
may be passing the -shared option for non-shared remote users.
6. If there is output in addition to the date, but no password is asked,
it may be a security alert shown as part of company policy. Append the
additional text to the <OMS HOME>/sysman/prov/resources/ignoreMessages.txt file.
------------------------------------------------------------------------
--oel01:--
Running /usr/bin/ssh -x -l grid oel01 date to verify SSH connectivity has been setup from local host to oel01.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Sun Feb 18 00:42:50 IST 2018
------------------------------------------------------------------------
--oel02:--
Running /usr/bin/ssh -x -l grid oel02 date to verify SSH connectivity has been setup from local host to oel02.
IF YOU SEE ANY OTHER OUTPUT BESIDES THE OUTPUT OF THE DATE COMMAND OR IF YOU ARE PROMPTED FOR A PASSWORD HERE, IT MEANS SSH SETUP HAS NOT BEEN SUCCESSFUL. Please note that being prompted for a passphrase may be OK but being prompted for a password is ERROR.
Sun Feb 18 00:42:50 IST 2018
------------------------------------------------------------------------
SSH verification complete.
[grid@oel01 sshsetup]$ 

With passwordless SSH in place, re-run the cluster verification:

[grid@oel01 grid]$ ./runcluvfy.sh stage -post hwos -n oel01,oel02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "oel01"
  Destination Node                      Reachable?           
  ------------------------------------  ------------------------
  oel02                                 yes                   
  oel01                                 yes                   
Result: Node reachability check passed from node "oel01"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Comment               
  ------------------------------------  ------------------------
  oel02                                 passed               
  oel01                                 passed               
Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
  Node Name     Status                    Comment               
  ------------  ------------------------  ------------------------
  oel02         passed                                         
  oel01         passed                                         

Verification of the hosts config file successful


Interface information for node "oel02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU 
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.22.2    192.168.22.0    0.0.0.0         UNKNOWN         08:00:27:31:C4:F2 1500
 eth1   192.168.52.2    192.168.52.0    0.0.0.0         UNKNOWN         08:00:27:1C:9C:4C 1500
 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         UNKNOWN         52:54:00:DC:15:6A 1500


Interface information for node "oel01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU 
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   192.168.22.1    192.168.22.0    0.0.0.0         UNKNOWN         08:00:27:51:84:D3 1500
 eth1   192.168.52.1    192.168.52.0    0.0.0.0         UNKNOWN         08:00:27:F0:D7:99 1500
 virbr0 192.168.122.1   192.168.122.0   0.0.0.0         UNKNOWN         52:54:00:DC:15:6A 1500


Check: Node connectivity of subnet "192.168.22.0"
  Source                          Destination                     Connected?   
  ------------------------------  ------------------------------  ----------------
  oel02:eth0                      oel01:eth0                      yes           
Result: Node connectivity passed for subnet "192.168.22.0" with node(s) oel02,oel01


Check: TCP connectivity of subnet "192.168.22.0"
  Source                          Destination                     Connected?   
  ------------------------------  ------------------------------  ----------------
  oel01:192.168.22.1              oel02:192.168.22.2              passed       
Result: TCP connectivity check passed for subnet "192.168.22.0"


Check: Node connectivity of subnet "192.168.52.0"
  Source                          Destination                     Connected?   
  ------------------------------  ------------------------------  ----------------
  oel02:eth1                      oel01:eth1                      yes           
Result: Node connectivity passed for subnet "192.168.52.0" with node(s) oel02,oel01


Check: TCP connectivity of subnet "192.168.52.0"
  Source                          Destination                     Connected?   
  ------------------------------  ------------------------------  ----------------
  oel01:192.168.52.1              oel02:192.168.52.2              passed       
Result: TCP connectivity check passed for subnet "192.168.52.0"


Check: Node connectivity of subnet "192.168.122.0"
  Source                          Destination                     Connected?   
  ------------------------------  ------------------------------  ----------------
  oel02:virbr0                    oel01:virbr0                    yes           
Result: Node connectivity passed for subnet "192.168.122.0" with node(s) oel02,oel01


Check: TCP connectivity of subnet "192.168.122.0"
Result: TCP connectivity check failed for subnet "192.168.122.0"


Interfaces found on subnet "192.168.22.0" that are likely candidates for a private interconnect are:
oel02 eth0:192.168.22.2
oel01 eth0:192.168.22.1

Interfaces found on subnet "192.168.52.0" that are likely candidates for a private interconnect are:
oel02 eth1:192.168.52.2
oel01 eth1:192.168.52.1

Interfaces found on subnet "192.168.122.0" that are likely candidates for a private interconnect are:
oel02 virbr0:192.168.122.1
oel01 virbr0:192.168.122.1

WARNING:
Could not find a suitable set of interfaces for VIPs

Result: Node connectivity check passed


Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Post-check for hardware and operating system setup was successful.
[grid@oel01 grid]$


[grid@oel01 grid]$ ./runcluvfy.sh stage -pre crsinst -n oel01,oel02 -r 11gR2 \
> -osdba dba \
> -orainv oinstall \
> -fixup -fixupdir /u01/app/grid -verbose

cluvfy reports the failed prerequisites and generates a fixup script (the detailed check output is omitted here):

Please run the following script on each node as "root" user to execute the fixups:
'/tmp/CVU_11.2.0.1.0_grid/runfixup.sh'

Pre-check for cluster services setup was unsuccessful on all the nodes.
[grid@oel01 grid]$  

Monday, February 12, 2018

Deleting a node from the cluster

Stop the database instance and ASM on the node.
Remove the database and ASM.
Remove the database software.
Remove the cluster software.
Update the Oracle inventory.

The olsnodes command lists the nodes and other information for all the nodes in the cluster.

olsnodes lists all the nodes in the cluster.
olsnodes -n lists the nodes that are participating in the cluster with node numbers.
olsnodes -i lists the nodes that are participating in the cluster with VIPs.
olsnodes -s lists the node status.
olsnodes -a lists the node mode.
olsnodes -t lists the nodes that are pinned/unpinned.
olsnodes -l -p lists the IP of the private interconnect on the node where the command is executed.
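For illustration, a hypothetical run on a three-node cluster (node names, numbers, and states are made up):

[grid@node1 ~]$ olsnodes -n -s -t
node1   1       Active  Unpinned
node2   2       Active  Unpinned
node3   3       Active  Unpinned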

Current state

As the oracle user:

olsnodes -s -t
srvctl status database -d ORCL -verbose
select inst_id,instance_name,status,to_char(startup_time,'mm/dd/yyyy hh24:mi:ss') "STARTUP_TIME" from gv$instance order by inst_id;

1. Stop the instance

srvctl stop instance -d ORCL -i ORCL3
srvctl status database -d ORCL -verbose

2. Remove the instance

srvctl remove instance -d ORCL -i ORCL3

3. Disable Oracle clusterware on Node 3

As root,

cd $GRID_HOME/crs/install
./rootcrs.pl -deconfig -deinstall -force

4. Stop and remove the VIP for node 3

As root,

srvctl stop vip -i ORCL3-VIP -f
srvctl remove vip -i ORCL3-VIP -f

5. Delete the node. Run the command below as the root user from a surviving node (node 1).

As root,

cd $GRID_HOME/bin
./crsctl delete node -n node3

6. Update the inventory for grid home.

cd $GRID_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/grid/product/11.2.0/grid "CLUSTER_NODES={node1,node2}" CRS=TRUE

7. Remove the database home

cd $ORACLE_HOME/deinstall
./deinstall -local

8. Update the inventory for oracle home

cd $ORACLE_HOME/oui/bin
./runInstaller -updateNodeList ORACLE_HOME=/u01/app/oracle/product/11.2.0 "CLUSTER_NODES={node1,node2}"

9. Post-deletion cleanup

On the deleted node, as the root user, remove the leftover Oracle files and groups:

rm -rf /etc/oraInst.loc
rm -rf /etc/oracle
rm -rf /opt/ORCLfmap
rm -rf /etc/oratab
rm -rf /u01
groupdel dba
groupdel oinstall

Adding a node to the cluster

Before adding a node,

Install the operating system software.
Create/modify the required users and groups.
Configure kernel parameters (see the sketch after this list).
Configure the network.
Install the grid and Oracle binaries, etc.
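As a minimal sketch of the kernel parameter step, these are typical 11gR2 starting values for /etc/sysctl.conf; shmmax/shmall depend on the node's RAM, so treat the numbers as illustrative:

fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmax = 4294967295
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

Apply the new values with sysctl -p as root.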

Check the nodes in the cluster:

olsnodes -s -i -t

At this point we see only 2 nodes in the cluster.

Once the node is ready to add to the cluster,

As grid user,

1. Run the cluster verification utility to confirm the node can be added to the cluster.

cd $GRID_HOME/bin
./cluvfy stage -pre nodeadd -n RAC3 -fixup -fixupnoexec

2. Run addNode.sh to add the node

cd $GRID_HOME/oui/bin
./addNode.sh -silent "CLUSTER_NEW_NODES={RAC3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"


3. As the root user, execute the orainstRoot.sh and root.sh scripts on the new node.

4. olsnodes -s -i -t now shows the new node as part of the cluster.
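To confirm the addition, the post-nodeadd check can be run as the grid user (a sketch, assuming the same RAC3 node name as above):

cd $GRID_HOME/bin
./cluvfy stage -post nodeadd -n RAC3 -verbose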

Real Application Clusters

The master node in RAC can be identified in the following ways:

select * from gv$gcs_resource;

The node that takes the automatic backup of the OCR is the master node in RAC.

By reviewing the ocssd and crsd logs.

cat $GRID_HOME/log/host01/cssd/ocssd.log | grep 'master node' | tail -1

The cluster name can be found with the command below (see MOS note "How to Query the Cluster Name" [ID 577300.1]):

$GRID_HOME/bin/cemutlo -n

OCR

OCR maintains information about clusterware resources such as ASM, database instances, SCAN listeners, disk groups, VIPs, nodeapps, etc.
OCR can be managed with the ocrconfig, ocrcheck, and ocrdump utilities as the root user.
We can have up to 4 mirror copies of the OCR.
Oracle automatically backs up the OCR every 4 hours, at the end of every day, and at the end of every week; the backup frequency and the number of OCR backups retained cannot be changed.
Oracle Clusterware retains the last 3 four-hourly backups of the OCR plus the last daily and last weekly backups.
When a node is added or deleted, the information is updated in the OCR. The OCR must reside on storage shared by all nodes in the cluster.

ocrconfig -showbackup    lists the automatic OCR backup location.
ocrconfig -local -manualbackup    takes a manual backup of the OLR on the local node (the -local flag operates on the OLR, the node-local registry, not the OCR).
ocrconfig -local -showbackup    lists the OLR backups available on the local node.
ls -ltr /u01/app/grid/cdata/nodename/*.olr    lists the OLR backups at the OS level.
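For the OCR itself (no -local flag), a manual backup and an integrity check look like this as the root user (a sketch; backup file names are generated by the clusterware):

$GRID_HOME/bin/ocrconfig -manualbackup
$GRID_HOME/bin/ocrcheck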

Restore OLR from current backup

Stop the clusterware.
Verify ohasd.bin is not running.
Restore the backup:
ocrconfig -local -restore /u01/app/grid/cdata/nodename/olrbackup.olr
Start the clusterware.
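Put together, the sequence looks like this as the root user (a sketch; the backup file name is hypothetical):

$GRID_HOME/bin/crsctl stop crs
ps -ef | grep ohasd.bin        # confirm ohasd.bin is not running
$GRID_HOME/bin/ocrconfig -local -restore /u01/app/grid/cdata/nodename/olrbackup.olr
$GRID_HOME/bin/crsctl start crs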

Relocate OCR to different ASM disk group

Check the current location of OCR.
$GRID_HOME/bin/ocrcheck
Add the new location
$GRID_HOME/bin/ocrconfig -add +NEW_OCR
Check the current location of OCR
$GRID_HOME/bin/ocrcheck
Delete the old OCR location
$GRID_HOME/bin/ocrconfig -delete +DATA
Check the current location of OCR
$GRID_HOME/bin/ocrcheck

Voting Disks

Voting disks maintain node membership.
Each node participating in the cluster has to send its heartbeat to the voting disks every 5 seconds.
If a node fails to vote its availability for 30 seconds, the node gets rebooted (evicted from the cluster).
Voting disk backups are taken with the dd command (a sketch follows this list); backing up the voting disks should be part of your backup routine/policy, and operations on voting disks are performed as the root user.
Oracle recommends taking a backup of the voting disks after node addition or deletion.
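A minimal dd sketch (device and destination paths are hypothetical; on 11gR2 the voting disks usually live inside ASM, so check the documentation for your version before relying on dd):

# identify the voting disk paths first
$GRID_HOME/bin/crsctl query css votedisk

# block-level copy of one voting disk
dd if=/dev/raw/raw2 of=/backup/votedisk_raw2.bak bs=4k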

Relocate VOTING disks to different disk groups.

$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/crsctl replace votedisk +OCR
$GRID_HOME/bin/crsctl query css votedisk

OCR and voting files are vital for clusterware operation. During GI installation we have the option to choose only one disk group for the OCR and voting files; if that disk group goes down we lose both the OCR and the voting files, and since recovering each requires a different approach, the downtime would be greater. Here I'm outlining the procedure to separate the OCR and voting files into different disk groups.

As a root user,

$GRID_HOME/bin/ocrcheck
$GRID_HOME/bin/crsctl query css votedisk
Create a new disk group once the new disks are available.
$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/crsctl replace votedisk +VD
$GRID_HOME/bin/crsctl query css votedisk

Thursday, February 8, 2018

Sequential Read and Scattered Read

A db file sequential read is a single-block read and a db file scattered read is a multiblock read.

db file sequential read - a single-block read (e.g., an index fetch by ROWID)
db file scattered read  - a multiblock read (a full-table scan, OPQ, sorting)

The db file sequential read event signifies that the user process is reading buffers into the SGA (database buffer cache) and is waiting for the physical I/O to complete. The block is read into a contiguous memory area, and these single-block I/Os usually come from index access. Rarely, full table scan calls can get truncated to a single-block call due to extent boundaries, or because neighboring buffers are already present in the buffer cache.

Starting with Oracle 10gR2, Oracle recommends not setting the db_file_multiblock_read_count parameter, allowing Oracle to empirically determine its optimal setting.


The db file sequential read event has 3 parameters:

  1. file#
  2. first block#
  3. block count

In 10g, this wait event falls under the User I/O wait class.
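A quick way to see these parameters for sessions currently waiting (a sketch; for these two events p1, p2, and p3 map to file#, block#, and block count):

select sid, event, p1 as file#, p2 as block#, p3 as blocks
  from v$session_wait
 where event in ('db file sequential read', 'db file scattered read');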
Physical disk speed is an important factor in weighing these costs. Faster disk access speeds can reduce the cost of a full-table scan vs. single-block reads to a negligible level. Solid-state disks provide up to 100,000 I/Os per second, six times faster than traditional disk devices. In a solid-state disk environment, disk I/O is much faster and multiblock reads become far cheaper than with traditional disks.

e.g.

Top 5 Timed Events
                                                           % Total
Event                        Waits         Time (s)       Ela Time
--------------------------- ------------ -----------      --------
db file sequential read       2,598          7,146          48.54
db file scattered read       25,519          3,246          22.04
library cache load lock         673          1,363           9.26
CPU time                                     1,154           7.83
log file parallel write      19,157            837           5.68


In the example above, reads and a write constitute the majority of the total database time. In this case, we need to increase the database buffer cache (db_cache_size), tune the SQL, or invest in a faster (e.g. SSD) I/O subsystem.



Script to measure the relative disk I/O cost of full-scan (scattered) reads vs. index (sequential) reads:

col c1 heading 'Average Waits|forFull| Scan Read I/O'        format 9999.999
col c2 heading 'Average Waits|for Index|Read I/O'            format 9999.999
col c3 heading 'Percent of| I/O Waits|for Full Scans'        format 999.99
col c4 heading 'Percent of| I/O Waits|for Index Scans'       format 999.99
col c5 heading 'Starting|Value|for|optimizer|index|cost|adj' format 999
 
 
select
   a.average_wait                                  c1,
   b.average_wait                                  c2,
   a.total_waits /(a.total_waits + b.total_waits)*100  c3,
   b.total_waits /(a.total_waits + b.total_waits)*100  c4,
   (b.average_wait / a.average_wait)*100           c5
from
  v$system_event  a,
  v$system_event  b
where
   a.event = 'db file scattered read'
and
   b.event = 'db file sequential read';

Scattered reads and full-table scans

Full table scans are not necessarily a detriment to performance and can be the fastest way to access the table rows. The CBO's choice to perform an FTS depends on OPQ, db_block_size, clustering_factor, the estimated percentage of rows returned by the query, and other factors. Once the CBO chooses to perform an FTS, the speed of the scan (SOFTS) depends on internal and external factors:
  1. The number of CPUs on the system
  2. The setting for Oracle Parallel Query (parallel hints, alter table)
  3. Table partitioning
  4. The speed of the disk I/O subsystem
Given all the factors above, it may be impossible to find a single best setting for the optimizer_index_cost_adj parameter. In the real world, the decision to perform an FTS depends on the factors below:
  1. Free blocks available in database buffer cache.
  2. Free space in temp as if query having order by clause
  3. Current demands on CPU.
Hence, it follows that optimizer_index_cost_adj should change as the load on the server changes.
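To inspect and adjust the parameter (a sketch; the value 50 is purely illustrative and should be derived from measured wait ratios such as those produced by the script above):

show parameter optimizer_index_cost_adj

alter system set optimizer_index_cost_adj = 50 scope=both;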