Wednesday, December 30, 2015

Understanding AWR

Report Header:

                This is a self-explanatory section that provides information about the database name, database ID, instance name, instance number, startup time, release number, RAC/non-RAC configuration, hostname, platform, number of CPUs, cores, memory, and the snapshot interval.

Load Profile:
                This section describes the workload on the database during the snapshot interval.

DB Time: The amount of time spent executing user calls. It does not include background processes.
CPU Time: The amount of CPU time spent executing user calls. It does not include background processes; the time is reported in milliseconds.
Logical Reads: Consistent gets + db block gets = logical reads
Block Changes: The number of blocks modified during the snapshot interval
Physical Reads: The number of block requests that caused an I/O operation
Physical Writes: The number of physical writes
User Calls: The number of user calls generated
Parses: The total number of parses, including both soft and hard parses
Hard Parses: The number of hard parses
Soft Parses: Parses minus hard parses
Sorts: The number of sorts performed
Logons: The number of logons during the snapshot interval
Executes: The number of SQL statements executed during the snapshot interval
Transactions: The number of transactions per second
Compare these statistics with a baseline snapshot report to determine whether database activity is abnormal.
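To capture a point of comparison outside the automatic interval, a manual AWR snapshot can be taken and recent snapshots listed. A minimal sketch using the standard DBMS_WORKLOAD_REPOSITORY package (verify behaviour on your release and licensing):

```sql
-- Take a manual AWR snapshot (sketch; requires the Diagnostics Pack license).
exec dbms_workload_repository.create_snapshot;

-- List the most recent snapshots and their intervals.
select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
order  by snap_id desc;
```

The AWR report itself is then generated between two snap IDs, e.g. with @?/rdbms/admin/awrrpt.sql.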
Instance Efficiency Percentage:

                This section displays the collected information as ratios for several caches, such as the database buffer cache hit percentage and the library cache hit percentage; shared pool memory usage statistics are also available in this section. Instance efficiency ratios should be close to 100%.
Low values for Execute to Parse % and Parse CPU to Parse Elapsed % mean there could be a parsing problem. We may need to look at a shared pool sizing issue or a bind variable issue.
Redo NoWait %: Usually this statistic is 99% or greater.
In-memory Sort %: This relates to sort_area_size, hash_area_size and pga_aggregate_target; if any of these is too small to handle the database load, In-memory Sort % will go down.
Latch Hit %: Should be close to 100%.
Soft Parse %: With 98.20% soft parsing, about 1.80% (100 - soft parse %) of parses are hard parses. You might want to look at your bind variable usage.
% Non-Parse CPU: Most of our statements were already parsed, so we were not doing a lot of reparsing. Reparsing is CPU-intensive and should be avoided.
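The buffer cache hit ratio shown below can also be computed directly from v$sysstat using the classic formula (note this is cumulative since instance startup, not interval-based like AWR):

```sql
-- Buffer Hit % = 1 - physical reads / (db block gets + consistent gets), as a sketch.
select round(100 * (1 - phy.value / (cur.value + con.value)), 2) "Buffer Hit %"
from   v$sysstat cur, v$sysstat con, v$sysstat phy
where  cur.name = 'db block gets'
and    con.name = 'consistent gets'
and    phy.name = 'physical reads';
```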
Buffer Nowait %:              99.99    Redo NoWait %:      100.00
Buffer Hit %:                 93.06    In-memory Sort %:   100.00
Library Hit %:                98.67    Soft Parse %:        98.20
Execute to Parse %:            3.40    Latch Hit %:         99.98
Parse CPU to Parse Elapsd %:   0.01    % Non-Parse CPU:     96.21
Shared Pool Statistics:

                              Begin    End
Memory Usage %:               73.86    75.42
% SQL with executions>1:      92.61    93.44
% Memory for SQL w/exec>1:    94.33    94.98


Memory Usage % is the shared pool usage: here 73.86% of our shared pool is in use, and of that almost 94% is being reused.

If Memory Usage % is high (around 90%), the shared pool may be too small; if it is around 50%, the shared pool may be too large.
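Current free memory in the shared pool can be checked directly from v$sgastat, e.g.:

```sql
-- Free memory currently available in the shared pool (sketch).
select pool, round(bytes/1024/1024, 2) free_mb
from   v$sgastat
where  pool = 'shared pool'
and    name = 'free memory';
```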


Monday, December 28, 2015

crontab

Scheduling at OS can be performed in the following ways

1) Using crontab

* Crontab stands for cron table; cron is derived from the Greek word chronos, which means time.
* To schedule multiple system events at regular intervals we use crontab.
* Each user can have their own crontab.
* The location of the crontab files is /var/spool/cron/crontabs.
* The crontab files which control access are /etc/cron.d/cron.allow and /etc/cron.d/cron.deny.
* Cron jobs can be allowed or disallowed for individual users, as specified in the files cron.allow and cron.deny.
* A user must be listed in cron.allow to get access to crontab.
* If the cron.allow file does not exist but cron.deny does, a user must not be listed in cron.deny in order to use crontab.
* If neither of these files exists, only the superuser is allowed to use crontab.
* Blank lines and leading spaces and tabs are ignored.
* If an entry in the crontab starts with #, the line is treated as a comment.
* Comments are not allowed on the same line as cron commands.

2) Using at

* To schedule a single system event we use at.
* The location of at jobs is /var/spool/cron/atjobs.
* The at file which controls access is /etc/cron.d/at.deny.


Crontab entry format should be as follows

m h dom mon dow <command>

m - Minutes (0-59)

h - Hours (0-23)

dom - Date of Month (1-31)

mon - Month (1-12)

dow - Day of Week (0-6) ==> 0 means Sunday and 6 means Saturday


* Crontab entries can be listed with crontab -l

* To view your crontab on systems that support it: crontab -v

* The crontab can be edited with crontab -e

* Saving the edited crontab can be carried out as follows

Esc + :wq + Enter

We get the following message

crontab: Installing new crontab, and control is returned to the command line.

* To remove the crontab, use crontab -r, or crontab -i which prompts [Y|N] before removing it.

Examples of crontab

Database full backup at 09:10 AM every Sunday.

10 09 * * 0 /home/oracle/Backup-Full.sh

Database incremental backup daily at 08:30 PM, except on Sundays.

30 20 * * 1-6 /home/oracle/Backup-Incr.sh

Archivelog backup every 3 hours starting at 12:00 AM, except on Sundays.

00 00,03,06,09,12,15,18,21 * * 1-6 /home/oracle/Backup-Archive.sh
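As a quick sanity check before installing an entry, you can verify it has the expected five time fields plus a command. A minimal sketch (the entry and script path are the examples from above):

```shell
# Example crontab entry from above (Sunday 09:10 full backup).
ENTRY='10 09 * * 0 /home/oracle/Backup-Full.sh'

# A valid entry has 5 time fields followed by the command: 6 fields total here.
FIELDS=$(echo "$ENTRY" | awk '{print NF}')
echo "fields=$FIELDS"

# To append it to the current user's crontab non-interactively (commented out
# so this sketch does not modify a real crontab):
# (crontab -l 2>/dev/null; echo "$ENTRY") | crontab -
```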

Note: I have posted these examples without testing the commands; please verify them on a test system first. These notes are based on my understanding.


Thanks,

Wednesday, September 30, 2015

Important

Corruption at empty blocks

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=354717203845222&parent=DOCUMENT&sourceId=794505.1&id=336133.1&_afrWindowMode=0&_adf.ctrl-state=oraf5szyy_191

Table

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=354721681748824&parent=DOCUMENT&sourceId=794505.1&id=556733.1&_afrWindowMode=0&_adf.ctrl-state=oraf5szyy_240

LOB

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=354724139149126&parent=DOCUMENT&sourceId=794505.1&id=293515.1&_afrWindowMode=0&_adf.ctrl-state=oraf5szyy_281

Validate database through RMAN


We can use backup validate command to check the database for any physical / logical block corruptions.

Step 1:

1) Connect to RMAN

2) backup validate check logical database;

3) We can omit the backup keyword from Oracle 11g onward. The above command does not take a backup of the database; it only checks for logical/physical corruption in the database.

4) To make the command complete faster, configure the number of channels; this depends on the number of CPUs as well as the database edition.

RMAN> configure device type disk parallelism 4;
RMAN> backup validate check logical database;

OR

RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup validate check logical database;
}

5) If any datafiles are missing, add the SKIP INACCESSIBLE clause to the above command so it can succeed.

6) The CHECK LOGICAL option checks for both PHYSICAL and LOGICAL Block corruptions.

7) For a single or specific datafiles use

              validate check logical datafile 1;

8) Monitoring Validate command progress

select sid, serial#, context, sofar, totalwork,round(sofar/totalwork*100,2) "%_complete"
from v$session_longops
where opname like 'RMAN%'
  and opname not like '%aggregate%'
  and totalwork != 0
  and sofar <> totalwork;

9) In 11g, RMAN generates a trace file with details of the corruption.

10) From 11g, validation of a single datafile can be parallelized by using the SECTION SIZE clause. RMAN divides the file into sections and processes each section in parallel. The following example divides datafile 5 into 1 GB sections; when multiple channels are configured or allocated, each section is processed in parallel.

backup validate check logical datafile 5 SECTION SIZE 1024M;

11) From 11g onward, a range of blocks within a datafile can be checked using the BLOCK ... TO clause. The following command checks blocks 5 to 20 of datafile 1:

validate check logical datafile 1 BLOCK 5 TO 20;

Step 2: Identify the corrupt segments

$ sqlplus / as sysdba
set pagesize 2000
set linesize 280
SELECT e.owner, e.segment_type, e.segment_name, e.partition_name, c.file#
     , greatest(e.block_id, c.block#) corr_start_block#
     , least(e.block_id+e.blocks-1, c.block#+c.blocks-1) corr_end_block#
     , least(e.block_id+e.blocks-1, c.block#+c.blocks-1)
       - greatest(e.block_id, c.block#) + 1 blocks_corrupted
     , corruption_type description
  FROM dba_extents e, v$database_block_corruption c
 WHERE e.file_id = c.file#
   AND e.block_id <= c.block# + c.blocks - 1
   AND e.block_id + e.blocks - 1 >= c.block#
UNION
SELECT s.owner, s.segment_type, s.segment_name, s.partition_name, c.file#
     , header_block corr_start_block#
     , header_block corr_end_block#
     , 1 blocks_corrupted
     , corruption_type||' Segment Header' description
  FROM dba_segments s, v$database_block_corruption c
 WHERE s.header_file = c.file#
   AND s.header_block between c.block# and c.block# + c.blocks - 1
UNION
SELECT null owner, null segment_type, null segment_name, null partition_name, c.file#
     , greatest(f.block_id, c.block#) corr_start_block#
     , least(f.block_id+f.blocks-1, c.block#+c.blocks-1) corr_end_block#
     , least(f.block_id+f.blocks-1, c.block#+c.blocks-1)
       - greatest(f.block_id, c.block#) + 1 blocks_corrupted
     , 'Free Block' description
  FROM dba_free_space f, v$database_block_corruption c
 WHERE f.file_id = c.file#
   AND f.block_id <= c.block# + c.blocks - 1
   AND f.block_id + f.blocks - 1 >= c.block#
order by file#, corr_start_block#;


An output example is:

OWNER SEGMENT_TYPE       SEGMENT_NAME PARTITION_ FILE# CORR_START_BLOCK# CORR_END_BLOCK# BLOCKS_CORRUPTED DESCRIPTION
----- ------------------ ------------ ---------- ----- ----------------- --------------- ---------------- -------------
SCOTT TABLE              EMP                         6                10              10                1
SCOTT TABLE PARTITION    ORDER        ORDER_JAN      6                26              28                3
                                                     6                29              29                1 Free Block
SCOTT TABLE              BONUS                       6                34              34                1
                                                     6                35              35                1 Free Block
SCOTT TABLE              DEPT                        6                42              42                1 Segment Header
SCOTT TABLE              INVOICE                     6                50              50                1

Oracle Block Corruption - ORA-01578 ORA-01110 ORA-26040


The following errors were triggered in the alert log file.

ORA-01578: ORACLE data block corrupted (file # 11, block # 84)
ORA-01110: data file 5: '/oradata/users.dbf'
ORA-26040: Data block was loaded using the NOLOGGING option


Step 1:

If the error appears after an RMAN duplicate/restore, enable force logging at the source database and perform the duplicate/restore again with a new RMAN backup.

Enabling force logging:

alter database force logging;

Step 2:

If the error is produced at a physical standby database, the option is to restore the datafile from the primary, but only if the primary database's copy of the datafile is in good condition. To prevent this problem from being introduced, enable force logging at the primary.

Note: we can do exp/imp of the corrupted objects only if the corrupted object (block) is not of type index.

Identify the affected segment:

File numbers can be absolute (AFN) or relative (RFN). In most cases both are identical, but they can differ when the database has been migrated from Oracle 7 or when transportable/plugged tablespaces were used.

Getting absolute file number:

The absolute file number is provided by the ORA-01110 error line, which follows the ORA-01578 line; from the errors above, the AFN is 5. We can derive the AFN from the RFN using DBA_DATA_FILES.


We can use the following utilities to check for the corruption in the database.

1) RMAN

2) DBV

3) select * from v$database_block_corruption;


DBV reports the corrupted blocks in different ways. In general, it provides the relative data block address (RDBA) associated with the affected block.

e.g.

Page 34 is marked corrupt
Corrupt block relative dba: 0x02c00022 (file 11, block 34)
Bad check value found during dbv:
Data in bad block:
 type: 6 format: 2 rdba: 0x02c00022
 last change scn: 0x0771.4eebe71c seq: 0x2 flg: 0x04
 spare1: 0x0 spare2: 0x0 spare3: 0x0
 consistency value in tail: 0xe71c0602
 check value in block header: 0xd3ce
 computed block checksum: 0x2


In the above example, 0x02c00022 is the hexadecimal block address, 11 is the RFN, and 34 is the block number.

Another way DBV reports information about corrupted blocks is as follows:

DBV-200: Block, dba 46137379, already marked corrupted

Here we use the query below to find the RFN and block number:

select dbms_utility.data_block_address_file(&&rdba) RFN,
dbms_utility.data_block_address_block(&&rdba) BL
from dual;
Enter value for rdba: 46137379

RFN        BL
---------- ----------
11         35

Getting AFN from RFN:

select file_id AFN, relative_fno, tablespace_name
from dba_data_files
where relative_fno=&RFN;
Enter value for rfn: 11

AFN        RELATIVE_FNO TABLESPACE_NAME
---------- ------------ ------------------------------

5          11           USERS


RMAN reports information about the corrupted blocks in v$database_block_corruption.


Step 3:

Identifying the corrupted objects:

select *
from dba_extents
where file_id = &AFN
and &BL between block_id AND block_id + blocks - 1;

It lists the objects, and their types, residing in the corrupted blocks.
If the above query doesn't return any rows, the corruption may be at a segment header in a locally managed tablespace (LMT); in that case the query writes a corruption message to the alert log but does not fail. Run the following query instead:

select owner, segment_name, segment_type, partition_name
from   dba_segments
where  header_file = &AFN
  and  header_block = &BL;

If the corrupted block belongs to a free extent (i.e. a block not associated with any object), or if the corrupted block is in a temp file, the above queries return no rows. For temp files the segment type is temporary; if the block belongs to a free extent it should appear in DBA_FREE_SPACE:

select *
from  dba_free_space
where file_id = &AFN
  and &BL between block_id AND block_id + blocks - 1;

//** Identified object can be a free block (or) an index (or) a table (or) a LOB **//

If the identified block belongs to an index, drop and recreate the index to resolve the error.

In our case the object type was an index, so we dropped and recreated the indexes, which resolved the issue.
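A sketch of the fix for an index block (the index name and column here are hypothetical; capture the real DDL first so the index can be recreated exactly):

```sql
-- Capture the index DDL before dropping it (SCOTT.EMP_IDX is a hypothetical example).
select dbms_metadata.get_ddl('INDEX', 'EMP_IDX', 'SCOTT') from dual;

-- Drop and recreate rather than rebuild, so the corrupt index blocks are not read.
drop index scott.emp_idx;
create index scott.emp_idx on scott.emp (empno);
```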

Note:- Further document will be updated soon


If the identified block is a FREE block:

There is the option to wait until the block is reused, which automatically reformats it, or to force reformatting of the block using Doc ID 336133.1.

If the identified block holds a TABLE:

DBMS_REPAIR.SKIP_CORRUPT_BLOCKS can be used to skip the corrupt blocks in SQL statements.

Doc ID 556733.1 has a DBMS_REPAIR example.

Then decide whether to re-create the segment, either by moving the table (alter table &table_name move;)

OR

by saving the data (export, Create Table as Select, etc) and then truncate or drop/create.
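A minimal sketch of the DBMS_REPAIR plus CTAS salvage approach (the owner and table names are hypothetical; see Doc ID 556733.1 for the full, supported procedure):

```sql
-- Mark the table so queries skip corrupt blocks (sketch; SCOTT.EMP is hypothetical).
exec dbms_repair.skip_corrupt_blocks('SCOTT', 'EMP');

-- Salvage the readable rows, then truncate or drop/recreate the original table.
create table scott.emp_salvage as select * from scott.emp;
```

Rows that lived in the corrupt blocks are lost with this approach; it only preserves what is still readable.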

If the identified block holds a LOB:

Use Doc ID 293515.1.










Thursday, July 30, 2015

DBMS_SCHEDULER jobs are not running


Received a request that scheduler jobs had not started today. Googled the issue, verified all the possibilities, and found entries in the alert log regarding the status of the background process CJQ0, as follows:

Starting background process CJQ0
ALTER SYSTEM SET job_queue_processes=1000 SCOPE=BOTH;
CJQ0 started with pid=37, OS id=2668.
.
Stopping background process CJQ0.
.
ALTER SYSTEM SET job_queue_processes=0 SCOPE=BOTH;.
.
ALTER SYSTEM SET job_queue_processes=0 SCOPE=BOTH;

 2) ps -ef|grep cjq returned no rows

 3) Check for the CJQ process in v$process:

select spid, program, background from v$process;

We couldn't find a row for the CJQ process.

 Confirmed that CJQ0 was not running, which is why the jobs had not started. We then bounced the database, which normally starts all background processes, but no luck even after restarting. The CJQ0 background process is responsible for initiating scheduler jobs and spawns multiple child processes. On further investigation we found the scheduler had been disabled; enabling the scheduler starts the CJQ0 process. So it is good to perform the steps below to check the scheduler status before restarting the database.

Scheduler Status:

1) select OWNER,OBJECT_NAME,OBJECT_ID,CREATED,LAST_DDL_TIME,TIMESTAMP from dba_objects where object_name='SCHEDULER_DISABLED';
 OWNER      OBJECT_NAME                     OBJECT_ID CREATED   LAST_DDL_ TIMESTAMP
---------- ------------------------------ ---------- --------- --------- -------------------
SYS        SCHEDULER_DISABLED                ******* ********* ********* ******************
 If the above query returns a row, the scheduler has been disabled.

 2) ps -ef|grep $ORACLE_SID|grep -i cjq0

 //** Here no record will be returned **//

3) exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','FALSE');

 PL/SQL procedure successfully completed.

4) select * from dba_objects where object_name='SCHEDULER_DISABLED';

 no rows selected

5) !ps -ef|grep $ORACLE_SID|grep -i cjq0
oracle    1947     1  0 11:24 ?        00:00:00 ora_cjq0_VIPOLTP

6) select count(*) from dba_scheduler_running_jobs;

   COUNT(*)
----------
        13
Conclusion:
If scheduler jobs are not started

·         Check whether the CJQ0 process is running

·         Check the alert log for the string “Stopping background process CJQ0”

·         Perform the above steps to confirm whether the scheduler was disabled; if it was, enable it, after which CJQ0 will start and jobs will be initiated.

·         In our case the scheduler had been disabled; the number of job queue processes was changed, but the scheduler was not re-enabled.

·         A database restart starts all background processes, but if the scheduler is disabled the CJQ0 background process will not start even after startup.

·         The CJQ0 background process is responsible for initiating DBA_SCHEDULER_JOBS as well as DBA_JOBS; if CJQ0 is not running, these scheduled jobs will not start.

Thanks,
V V


Source : Internet

This is one of the most common Scheduler questions asked.
Here we list some of the common problems and their solutions.

1) job_queue_processes may be too low (this is the most common problem)
The value of job_queue_processes limits the total number of dbms_scheduler
and dbms_job jobs that can be running at a given time.
To check whether this is the case check the current value of
job_queue_processes with
SQL> select value from v$parameter where name='job_queue_processes';
Then check the number of running jobs
SQL> select count(*) from dba_scheduler_running_jobs;
SQL> select count(*) from dba_jobs_running;

If this is the problem you can increase the parameter using
SQL> alter system set job_queue_processes=1000;

2) max_job_slave_processes may be too low
If this parameter is not NULL then it limits how many dbms_scheduler jobs can
be running at a time. To check whether this is the problem, check the current
value using
SQL> select value from dba_scheduler_global_attribute
where attribute_name='MAX_JOB_SLAVE_PROCESSES';
Then check the number of running jobs
SQL> select count(*) from dba_scheduler_running_jobs;

If this is the problem you can increase the number or just NULL it out using
SQL> exec dbms_scheduler.set_scheduler_attribute('max_job_slave_processes',null)

3) sessions may be too low
This parameter limits the number of sessions at any time. Every Scheduler job
requires 2 sessions. To check whether this is the problem, check the current
value using
SQL> select value from v$parameter where name='sessions';
Then check the current number of sessions using
SQL> select count(*) from v$session ;

If the numbers are too close you can increase the maximum using
SQL> alter system set sessions=200 scope=spfile;
(sessions is a static parameter, so a restart is required)

4) Have you recently applied a timezone update patch or upgraded the database
to a version with newer timezone information ? If you skipped any steps when
updating the timezone information, jobs may not run. To check whether this
is the case try doing
SQL> select * from sys.scheduler$_job;
and
SQL> select * from sys.scheduler$_window;
and make sure they finish without errors.

If it throws a timezone warning, reapply the upgrade or
timezone patch making sure to follow all the steps.

5) Is the database running in restricted mode ?
If the database is running in restricted mode then no jobs will run (unless
you are using 11g and use the ALLOW_RUNS_IN_RESTRICTED_MODE attribute).
To check this use
SQL> select logins from v$instance ;

If logins is restricted you can disable the restricted mode using
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;

6) Is the job scheduled to run on an instance which is down ?

You can check this by seeing whether instance_id is set for the job (check the dba_scheduler_jobs view), and if so you should check whether that instance is up.

7) Is the job scheduled to run on a service which has not been started on any instances ?

You can check this by checking what job_class a job points to and then checking whether that class points to a service. If it does, make sure the service has been started on at least one running instance. You can start a service on an instance using dbms_service.start_service.

8) Is the Resource Manager in effect with a restrictive resource plan ?

If a restrictive resource plan is in effect, scheduler jobs may not have sufficient resources allocated so they may not run. You can check what resource plan is in effect by doing

SQL> select name from V$RSRC_PLAN ;

If no plan is in effect or the plan in effect is INTERNAL_PLAN then the resource manager is not in effect. If the resource manager is in effect you can disable it by doing

SQL>alter system set resource_manager_plan = '';


9) Has the Scheduler been disabled ? This is not a supported action
but it is possible that someone has done it anyway. To check this do
SQL> select value from dba_scheduler_global_attribute where attribute_name='SCHEDULER_DISABLED'

If this query returns TRUE then you can fix this using
SQL> exec dbms_scheduler.set_scheduler_attribute('scheduler_disabled','false');

Reasons why jobs may run late

1) The first thing to check is the timezone that the job is scheduled with
SQL> select owner, job_name, next_run_date from dba_scheduler_jobs ;

If the jobs are in the wrong timezone they may not run at the expected
time. If the next_run_date is using an absolute timezone offset (like
+08:00) instead of a named timezone (like US/PACIFIC) then the jobs may not
run as expected if daylight savings is in effect - they may run an hour
early or late.
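The scheduler's default timezone can be checked, and if needed set to a named region so daylight savings is handled; a sketch:

```sql
-- Check the scheduler's default timezone (sketch).
select value from dba_scheduler_global_attribute
where  attribute_name = 'DEFAULT_TIMEZONE';

-- Prefer a named region over a fixed offset (US/Pacific is an example):
exec dbms_scheduler.set_scheduler_attribute('default_timezone', 'US/Pacific');
```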

2) It may be that at the time the job was scheduled to run, one of the several
limits above may have been temporarily reached causing the job to be delayed.
Check if the limits above are high enough and if possible check them during
the time that the job is being delayed.

3) One possible reason that one of the above limits may be hit is that a
maintenance window may have come into effect. Maintenance windows are Oracle
Scheduler windows that belong to the window group named
MAINTENANCE_WINDOW_GROUP. During a scheduled maintenance window, several
maintenance tasks are run using jobs. This may cause one of the limits listed
above to be hit and user jobs to be delayed. See the admin guide for more info
about this (chapter 24).

To get a list of maintenance windows use
SQL> select * from dba_scheduler_wingroup_members;

To see when the windows run use
SQL> select * from dba_scheduler_windows;

To fix this you can either increase the limits or reschedule the maintenance
windows to run at more convenient times.

Diagnosing other Problems

If none of this works, here are some further steps you can take to try to
figure out what is going on.

1) Check whether there are any errors in the alert log. If the database is
having trouble allocating memory or has run out of disk space or any other
catastrophic errors have occurred, you should resolve those first. You can
find the location of the alert log by using
SQL> select value from v$parameter where name = 'background_dump_dest';
The alert log will be in this directory with a name starting with "alert".

2) Check whether a job coordinator trace file exists and, if it does, check whether it
contains any errors. If this exists, it will be located in the
'background_dump_dest' directory which you can find as above and will look
something like SID-cjq0_nnnn.trc . If there are any errors here they may
hint at why jobs are not running.

3) If either of the above indicates that the SYSAUX tablespace (where the scheduler stores its logging tables) is full, you can use the dbms_scheduler.purge_log procedure to clear out old log entries.

4) See if there is a window currently open. If there is, you can try closing it to see if that helps .
SQL> select * from DBA_SCHEDULER_GLOBAL_ATTRIBUTE where
attribute_name='CURRENT_OPEN_WINDOW';
SQL> exec DBMS_SCHEDULER.close_window ('WEEKNIGHT_WINDOW');

5) Try running a simple run-once job and see if it runs
SQL>begin
dbms_scheduler.create_job (
job_name => 'test_job',
job_type => 'plsql_block',
job_action => 'null;',
enabled => true);
end;
/
SQL> -- wait a while
SQL> select * from user_scheduler_job_run_details where job_name='TEST_JOB';

6) If a simple run-once job doesn't run, you can try restarting the scheduler as follows.

SQL> exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED', 'TRUE');
SQL> alter system set job_queue_processes=0;
SQL> exec dbms_ijob.set_enabled(FALSE);
SQL>
SQL> alter system flush shared_pool;
SQL> alter system flush shared_pool;
SQL>
SQL> exec dbms_ijob.set_enabled(TRUE);
SQL> alter system set job_queue_processes=99;
SQL> exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED', 'FALSE');






    Wednesday, July 29, 2015

    RMAN-03002 RMAN-05520

    Starting Duplicate Db at 07-JUL-15
    released channel: c1
    released channel: c2
    released channel: ch1
    released channel: ch2
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 07/07/2015 06:34:15
    RMAN-05520: database name mismatch, auxiliary instance has PROD, command specified TEST

    Recovery Manager complete.


    Here the issue is with db_name on the target server: db_name was set to PROD while the instance was started as TEST, and we were duplicating the target database to TEST. So we were duplicating to TEST, but db_name was set to PROD in the parameter file.

    I changed the db_name parameter in the parameter file to TEST, and the duplicate completed successfully.

    Steps:

    1) Check whether the DB was started with a pfile or an spfile

    2) If it is an spfile, take a backup of the spfile and create a pfile from it

    3) Shut down the database and change directory to $ORACLE_HOME/dbs

    4) Open the init parameter file and change the db_name parameter from PROD to TEST

    5) Start the database with pfile

    6) create the spfile from pfile

    7) start the instance in nomount state and run the duplicate script.
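The steps above can be sketched as follows (paths and the SID are examples):

```sql
-- Step 2: create a pfile from the spfile
-- (defaults to $ORACLE_HOME/dbs/init<SID>.ora).
create pfile from spfile;

-- Steps 3-4: shut down, then edit db_name=TEST in the pfile.
shutdown immediate

-- Steps 5-7: start with the corrected pfile, regenerate the spfile,
-- then restart in NOMOUNT and run the duplicate script.
startup pfile='/u01/app/oracle/dbs/initTEST.ora'
create spfile from pfile;
```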

    Oracle 12c Notes

          We cannot put a pluggable database into archivelog mode; it doesn't have independent log files that could be archived. Either the whole database is in archivelog mode or none of it. The archivelog destination is a common location for any number of pluggable databases as well as the container database, and archive logs are identified by their container ID.
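Because the destination is shared, the CON_ID column is what ties an archived log back to its container; a sketch run from the root container:

```sql
-- Archived logs are shared across containers; CON_ID identifies the owner (sketch).
select con_id, name, thread#, sequence#
from   v$archived_log
order  by sequence# desc;
```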

          Logging in to Pluggable/container databases

             Export ORACLE_SID
             sqlplus sys@CONNECTIDENTIFIER as sysdba, which prompts for the password. Provide it, and if the credentials are valid you will be taken to the SQL prompt.

          [oracle@xoomoda1 ~]$  sqlplus sys@DBPLUG as sysdba
      
          SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 28 13:47:45 2015

         Copyright (c) 1982, 2014, Oracle.  All rights reserved.

         Enter password:

        Connected to:
        Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
        With the Partitioning, Real Application Clusters, OLAP, Advanced Analytics
        and Real Application Testing options      

                                       SQL> show CON_ID

                                      CON_ID
                                      ------------------------------
                                      3

                                    SQL> sho CON_NAME

                                   CON_NAME
                                   ------------------------------
                                   DBPLUG

        If you want to connect to container database or any pluggable database.

                                        SQL> conn sys@DBCONT as sysdba
       Enter password:
       Connected.
       SQL> sho CON_ID
       
       CON_ID
        -----------------------------
         1
       SQL> sho CON_NAME

       CON_NAME
       -----------------------------
       CDB$ROOT


        To check on which database (container/pluggable) databases you have logged in please do as follow.
        
        SQL> sho CON_ID

       CON_ID
        ---------------------------
        1
        SQL> sho CON_NAME

        CON_NAME
        -----------------------------
        CDB$ROOT

        From the con_name we can confirm it is a container database because it contains string ROOT.

        SQL> show CON_ID

        CON_ID
        ------------------------------
        3

        SQL> sho CON_NAME

        CON_NAME
        ------------------------------
        PROD

        Here we don’t know whether it is a container database or a pluggable database, so query v$pdbs, which lists all the pluggable databases under the root container.

        e.g:
         
        SQL> select CON_ID,NAME,OPEN_MODE,RESTRICTED from v$pdbs;

        CON_ID  NAME          OPEN_MODE     RES 
         ---------- ----------------         ------------------   --------
             2         PDB$SEED   READ ONLY       NO 
             3         PROD            READ WRITE      NO 

         So the above two databases are pluggable databases under the root container.
         Now we can confirm PROD is a pluggable database and that we are connected to the PROD pluggable database.


         We can use the alter session command to change the container: alter session set container=<container name>
                   
        e.g: I’m in PROD,
       
        SQL> show CON_ID

        CON_ID
        ------------------------------
        3

        SQL> sho CON_NAME

        CON_NAME
        ------------------------------
        PROD

        Now I would like to connect to DBCONT database which can be accomplished by

        alter session set container=DBCONT

        Under the ROOT container there will be one seed container and at least one pluggable database; the seed container database is always in READ ONLY mode.

         SQL> select CON_ID,NAME,OPEN_MODE,RESTRICTED from v$pdbs;

        CON_ID  NAME          OPEN_MODE     RES 
         ---------- ----------------         ------------------   --------
             2         PDB$SEED   READ ONLY       NO 


          The alert log location is $ORACLE_BASE/diag/rdbms/<dbname>/<sid>/trace. The reason I point this out is that if we issue "show parameter background" or "show parameter dump" it points to $ORACLE_HOME/log, not $ORACLE_BASE/diag/rdbms/<dbname>/<sid>/trace. So in Oracle 12c it is better to use the path rather than the show parameter command to find the alert log file; $ORACLE_HOME/log does not contain the alert log. The alert log file is common to the container database and any number of pluggable databases.
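In 12c, V$DIAG_INFO gives the ADR paths directly, which avoids the misleading show parameter output:

```sql
-- Reliable way to locate the trace and alert directories in 12c (sketch).
select name, value
from   v$diag_info
where  name in ('Diag Trace', 'Diag Alert');
```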



      
           Thanks,
           v v

    Sunday, June 28, 2015

    Intro

    Hello Everybody,

    I'm starting my first blog to share the knowledge I have learned and experienced in the kingdom of the Oracle DBA. As of now I have fairly little experience handling Oracle databases, but I dream of becoming a professional Oracle DBA and hope to reach great heights in the coming future.

    This is my personal blog. Please test the cases posted here; do not try them on production directly.


    Please advise me, and correct me if I'm wrong anywhere.



    Thanks,

    Venkatesh V