Thursday 29 March 2012

Maintenance Of Archivelogs On Standby Databases

From Oracle 10g onwards, RMAN can automatically maintain archivelogs that have already been applied on a standby database, via a dedicated archivelog deletion policy: ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY.

This policy can be set on both the primary and the standby database. To enable automatic maintenance of the archivelogs on the standby, perform the steps below on the standby database.
In a Data Guard configuration, archivelogs in the FRA are purged automatically once the conditions below are met. If they also need to be purged at the primary site, set the same policy on the primary database.


Step 1:
Prior to 11g, if mandatory archivelog destinations are not used, the databases (primary and standby) must be restarted after setting the following hidden parameter:
SQL> alter system set "_log_deletion_policy"='ALL' scope=spfile;
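To confirm the hidden parameter after the restart, one possible check as SYS is the sketch below (the x$ views are undocumented and version dependent):

SQL> select a.ksppinm name, b.ksppstvl value
       from x$ksppi a, x$ksppsv b
      where a.indx = b.indx
        and a.ksppinm = '_log_deletion_policy';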


Step 2 :
Enable the Flash Recovery Area on the standby database so that all archivelogs are received into it.

Example :
To enable the Flash Recovery Area and direct the archivelogs into it, set the parameters below:
SQL> alter system set DB_RECOVERY_FILE_DEST='/disk1/FRA';
SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=80G;
SQL> alter system set  LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';
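A quick way to verify the settings from SQL*Plus:

SQL> show parameter db_recovery_file_dest
SQL> select space_limit, space_used from v$recovery_file_dest;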


Step 3 :
Set the RMAN archivelog deletion policy to APPLIED ON STANDBY.

Example :
RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
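The configured policy can be confirmed from RMAN:

RMAN> SHOW ARCHIVELOG DELETION POLICY;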

Note : With the above settings, whenever there is space pressure in the Flash Recovery Area, archivelogs that have already been applied on the standby database are deleted automatically.
To review the retention policy and list obsolete files, run the following RMAN commands:
RMAN> SHOW RETENTION POLICY;
RMAN> REPORT OBSOLETE;
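FRA space usage can be monitored with the query below (in 10g the view is named v$flash_recovery_area_usage; later releases also provide it as v$recovery_area_usage):

SQL> select file_type, percent_space_used, percent_space_reclaimable
       from v$flash_recovery_area_usage;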


Step 4:
Observe the automatic deletion when there is space pressure in the FRA.
When an archivelog is automatically deleted from the FRA, you will see entries like these in the database's alert.log:
Sat Jun 18 01:00:32 2011
Deleted Oracle managed file /opt/app/oracle/FRA/ORA102/archivelog/2011_05_30/o1_mf_1_151_6y71q675_.arc
Deleted Oracle managed file /opt/app/oracle/FRA/ORA102/archivelog/2011_05_31/o1_mf_1_151_6y87pzg4_.arc

Oracle - Flush Execution Plan for Specific SQL

Sometimes you need to invalidate a statement, for example because the current execution plan is not the right one (as a consequence of bind variable peeking) and you want it to be re-optimized (hard parsed).

alter system flush shared_pool; flushes everything and can kill the performance of a production system for quite a while, because most new statements will have to be hard parsed.

Gathering statistics can invalidate the cursor, but that may have unexpected side effects.

Since 10.2.0.4 there is the possibility to flush a single statement with dbms_shared_pool.purge, providing the address and the hash value of the statement that you get from v$sql.

The syntax is:
dbms_shared_pool.purge('<address>,<hash value>','C');
where <address> and <hash value> come from V$SQLAREA and 'C' stands for cursor.
See ORACLE_HOME/rdbms/admin/dbmspool.sql for more information.

The procedure returns when it is done (if the cursor is currently in use - see USERS_EXECUTING in V$SQLAREA - it will wait).

Note that this procedure was officially made available in 11g (11.1).
A fix in 10.2.0.4 makes it usable there as well, but the fix must be enabled by setting event 5614566.

Here is a full example (in 10.2.0.4 to show how to set the event):

-- We get the address and hash value for the statement that we want to flush

SQL> select address,hash_value,users_executing,sql_text from v$sqlarea where sql_text='select 111111 from dual';

ADDRESS HASH_VALUE USERS_EXECUTING SQL_TEXT
---------------- ---------- --------------- ------------------------------------------------------------
0700000302D1F090 586804821 0 select 111111 from dual

1 row selected.

-- in 10.2.0.4 we need to set the event to activate the bug fix. Not needed in 11.1

SQL> alter session set events '5614566 trace name context forever';

Session altered.

-- we call dbms_shared_pool.purge:

SQL> exec sys.dbms_shared_pool.purge('0700000302D1F090,586804821','c');

PL/SQL procedure successfully completed.

-- the statement is not in shared pool anymore. Will be hard parsed at next execution.

SQL> select address,hash_value,sql_text from v$sqlarea where sql_text='select 111111 from dual';

no rows selected
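Since V$SQLAREA also exposes the SQL_ID, the cursor can alternatively be located by SQL_ID (the SQL_ID below is a hypothetical placeholder):

SQL> select address, hash_value from v$sqlarea where sql_id = 'g9y2a7b3c4d5e';
SQL> exec sys.dbms_shared_pool.purge('<address>,<hash value>','C');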

Wednesday 28 March 2012

RMAN-06571: Datafile 1 Does Not Have Recoverable Copy


Version : 10.2.0.4 and later

Errors:

Encountered during an incremental roll forward of a standby database.

RMAN> SWITCH DATABASE TO COPY;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of switch to copy command at 03/13/2012 07:51:46
RMAN-06571: datafile 1 does not have recoverable copy

Alertlog message:
----------------------
Tue Mar 13 07:46:35 2012
Setting recovery target incarnation to 2

PRIMARY:
---------------
RMAN> list incarnation;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
1       1       Primary 489996286        CURRENT 1          29-MAR-11


STANDBY:
--------------
RMAN> list incarnation;

using target database control file instead of recovery catalog

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- ------- ---------- ----------
1       1       STANDBY 489996286        PARENT  1          29-MAR-11
2       2       STANDBY 489996286        CURRENT 1912505686 10-SEP-11

Solution:
As shown above, the current incarnation differs between primary and standby. We need to reset the standby to the same incarnation as the primary's current incarnation; this resolves the issue.

rman target /
reset database to incarnation <incarnation number here>;
switch database to copy;
recover database noredo;
exit

In this case it would be:
rman target /
reset database to incarnation 1;
switch database to copy;
recover database noredo;
exit
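Afterwards, LIST INCARNATION on the standby should report the same current incarnation as the primary, and SWITCH DATABASE TO COPY will no longer raise RMAN-06571:

RMAN> list incarnation;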

Oracle TimesTen Overview and Datastore Refresh


Overview of TimesTen:


Oracle TimesTen In-Memory Database (TimesTen) is a full-featured, memory-optimized, relational database with persistence and recoverability. It provides applications with the instant responsiveness and very high throughput required by database-intensive applications. Deployed in the application tier, TimesTen operates on databases that fit entirely in physical memory (RAM). Applications access the TimesTen database using standard SQL interfaces. For customers with existing application data residing on the Oracle Database, TimesTen is deployed as an in-memory cache database with automatic data synchronization between TimesTen and the Oracle Database.

TimesTen Datastore Refresh:


Step 1: Take a backup (on the source).

/export/TimesTen/7.0.5_64/bin/ttbackup -dir /var/tmp -fname PROD_DATASTORE_081010_Bkp DSNNAME

Step 2: scp the dumps to the target.

Step 3: Destroy the DSN (on the target).

hostname% ./ttdestroy DSNNAME
hostname% ./ttstatus
TimesTen status report as of Mon Oct 11 2010

Daemon pid 10251 port 53388 instance tt1121
TimesTen server pid 10256 started on port 53389
------------------------------------------------------------------------
Accessible by group cheops
End of report

Step 4: Start restore

hostname% ./ttrestore -fname PROD_DATASTORE_081010_Bkp -dir /var/tmp DSNNAME
Restore started ...
ttRestore: SQL Warning: Connected to data store with warning
*** [TimesTen][TimesTen 11.2.1.6.1 ODBC Driver][TimesTen]TT6200: New value for permanent data partition size ignored since smaller than current value of 10240 megabytes -- file "db.c", lineno 10141, procedure "sbDbConnect"
*** ODBC Warning = 01000, TimesTen Warning = 6200   (the PermSize value has to be changed)

Restore complete

Sometimes we might encounter the above warning. It occurs because the PermSize differs between source and target; we have to change the PermSize in sys.odbc.ini.

Changing sys.odbc.ini file.

hostname% cd /var/TimesTen/tt11
hostname% cp -p sys.odbc.ini sys.odbc.ini_sep11_2010
hostname% vi sys.odbc.ini

Change PermSize to 10240.
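For reference, a minimal sys.odbc.ini entry might look like the sketch below (DSN name and paths are placeholders; exact attributes vary by TimesTen release):

[DSNNAME]
Driver=/opt/TimesTen/tt11/lib/libtten.so
DataStore=/data/TimesTen/DSNNAME/DSNNAME
DatabaseCharacterSet=AL32UTF8
PermSize=10240
TempSize=512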

We have to perform the steps below for the new value to take effect.



Step 5: Set the RAM policy to manual.

hostname% ./ttAdmin -rampolicy manual DSNNAME

Step 6: Unload the data store from RAM and load it again.

hostname% ./ttAdmin -ramunload DSNNAME
RAM Residence Policy            : manual

hostname% ./ttAdmin -ramload DSNNAME

hostname% ./ttisql DSNNAME

Copyright (c) 1996-2010, Oracle.  All rights reserved.
Type ? or "help" for help, type "exit" to quit ttIsql.


Step 7: Drop the replication schemes.
Command> repschemes;

Replication Scheme REPL.PROD:


  Store: PROD_DS on HOSTNAME
    Port: (auto)
    Log Fail Threshold: (none)
    Retry Timeout: 120 seconds
    Compress Traffic: Disabled

1 replication scheme found.
Command> drop replication REPL.PROD;
Command> repschemes;

0 replication schemes found.

Step 8: Delete the XLA bookmark.

Command> xladeletebookmark

XLA Bookmark: PROD_bookmark
  Read Log File:  1286099029
  Read Offset:    1705274
  Purge Log File: 1310
  Purge Offset:   55749560
  PID:            1868
  In Use:         No

1 bookmark found.
Command> xladeletebookmark PROD_bookmark;
Command> xladeletebookmark;

0 bookmarks found.
Command>
Command>



Step 9: Create the users requested by the application team.
Command> create user USERNAME identified by 'PASSWORD';




Step 10: Set the RAM policy back to always.

hostname% ./ttAdmin -rampolicy always DSNNAME
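A final ttstatus run should show the data store loaded, with the RAM residence policy back to Always (exact output wording varies by release):

hostname% ./ttstatus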



Recover Physical Standby Using Incremental Backup

Version:

Oracle Server - Enterprise Edition - Version: 10.2.0.1 to 11.1.0.7

Reason

Steps to perform a roll forward of a physical standby using an incremental backup.

Details

1) On the standby database, stop the managed recovery process (MRP):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;


2) On the STANDBY DATABASE, find the SCN which will be used for the incremental backup at the primary database:
SQL> SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
---------------------
3162298
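Note: some variants of this procedure also check the minimum datafile checkpoint SCN on the standby and start the incremental backup from the lower of the two values (a sketch):

SQL> SELECT MIN(CHECKPOINT_CHANGE#) FROM V$DATAFILE_HEADER;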


3) In RMAN, connect to the PRIMARY database and create an incremental backup from the SCN derived in the previous step:
RMAN> BACKUP INCREMENTAL FROM SCN 3162298 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FORSTANDBY';


4) Transfer all backup sets created on the primary system to the standby system:

scp /tmp/ForStandby_* standby:/tmp

5) In RMAN, connect to the STANDBY database and catalog the transferred backup pieces:
RMAN> CATALOG START WITH '/tmp/ForStandby';

using target database control file instead of recovery catalog
searching for all files that match the pattern /tmp/ForStandby

List of Files Unknown to the Database
=====================================
File Name: /tmp/ForStandby_2lkglss4_1_1
File Name: /tmp/ForStandby_2mkglst8_1_1

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /tmp/ForStandby_2lkglss4_1_1
File Name: /tmp/ForStandby_2mkglst8_1_1


6) Recover the STANDBY database with the cataloged incremental backup:
RMAN> RECOVER DATABASE NOREDO;

starting recover at 03-JUN-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=28 devtype=DISK
channel ORA_DISK_1: starting incremental datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: +DATA/mystd/datafile/system.297.688213333
destination for restore of datafile 00002: +DATA/mystd/datafile/undotbs1.268.688213335
destination for restore of datafile 00003: +DATA/mystd/datafile/sysaux.267.688213333
channel ORA_DISK_1: reading from backup piece /tmp/ForStandby_2lkglss4_1_1
channel ORA_DISK_1: restored backup piece 1
piece handle=/tmp/ForStandby_2lkglss4_1_1 tag=FORSTANDBY
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished recover at 03-JUN-09

7) In RMAN, connect to the PRIMARY database and create a standby control file backup:
RMAN> BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/tmp/ForStandbyCTRL.bck';
8) Copy the standby control file backup to the STANDBY system:
scp /tmp/ForStandbyCTRL.bck standby:/tmp

9) Shut down the STANDBY database and start it in NOMOUNT:
RMAN> SHUTDOWN;
RMAN> STARTUP NOMOUNT;

10) From RMAN, connect to STANDBY database and restore the standby control file:

RMAN> RESTORE STANDBY CONTROLFILE FROM '/tmp/ForStandbyCTRL.bck';

Starting restore at 03-JUN-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=36 devtype=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
output filename=+DATA/mystd/controlfile/current.257.688583989
Finished restore at 03-JUN-09

11) Shut down the STANDBY database and startup mount:
RMAN> SHUTDOWN;
RMAN> STARTUP MOUNT;


12) Since the control file was restored from the PRIMARY, the datafile locations recorded in the STANDBY control file are those of the PRIMARY database; cataloging the datafile copies on the STANDBY and switching to them performs the necessary renames.

Perform the step below on the STANDBY in case the datafile directory structure differs between primary and standby.
RMAN> CATALOG START WITH '+DATA/mystd/datafile/';

List of Files Unknown to the Database
=====================================
File Name: +data/mystd/DATAFILE/SYSTEM.309.685535773
File Name: +data/mystd/DATAFILE/SYSAUX.301.685535773
File Name: +data/mystd/DATAFILE/UNDOTBS1.302.685535775
File Name: +data/mystd/DATAFILE/SYSTEM.297.688213333
File Name: +data/mystd/DATAFILE/SYSAUX.267.688213333
File Name: +data/mystd/DATAFILE/UNDOTBS1.268.688213335

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +data/mystd/DATAFILE/SYSTEM.297.688213333
File Name: +data/mystd/DATAFILE/SYSAUX.267.688213333
File Name: +data/mystd/DATAFILE/UNDOTBS1.268.688213335


 

RMAN> SWITCH DATABASE TO COPY;

datafile 1 switched to datafile copy "+DATA/mystd/datafile/system.297.688213333"
datafile 2 switched to datafile copy "+DATA/mystd/datafile/undotbs1.268.688213335"
datafile 3 switched to datafile copy "+DATA/mystd/datafile/sysaux.267.688213333"


13) If the STANDBY database needs to be configured for FLASHBACK, use the steps below to re-enable it:


SQL> ALTER DATABASE FLASHBACK OFF; 
SQL> ALTER DATABASE FLASHBACK ON;
14) On the STANDBY database, start the MRP:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
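To confirm that the MRP is running and applying redo, query v$managed_standby on the standby:

SQL> SELECT PROCESS, STATUS, SEQUENCE# FROM V$MANAGED_STANDBY;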

Hidden Parameters for Datapump expdp/impdp

Oracle Undocumented Parameters


1. METRICS

You can use the parameter METRICS=Y to include additional logging information in the log file about the number of objects processed and the time it took to process them.
Note that the METRICS parameter does not create any additional trace files; instead, the extra details are logged in the logfile of the Export Data Pump or Import Data Pump job.
Here is an example:


 
[oracle@zurich Desktop]$ expdp \'/ as sysdba\' DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y
  
Export: Release 11.2.0.3.0 - Production on Tue Dec 20 10:46:08 2011
  
Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
  
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYSMAN_PUMP":  "/******** AS SYSDBA" DUMPFILE=data_pump_dir:sysman_%U.dat SCHEMAS=sysman_apm JOB_NAME=sysman_pump METRICS=Y 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 320 KB
Processing object type SCHEMA_EXPORT/USER
     Completed 1 USER objects in 0 seconds
...
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
     Completed 1 DEFAULT_ROLE objects in 2 seconds
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
     Completed 1 TABLESPACE_QUOTA objects in 8 seconds
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
     Completed 1 PROCACT_SCHEMA objects in 27 seconds
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
...
. . exported "SYSMAN_APM"."PATCH_RUN"                        0 KB       0 rows
Master table "SYS"."SYSMAN_PUMP" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYSMAN_PUMP is:
  /u01/app/oracle/admin/CLOUD/dpdump/sysman_01.dat
Job "SYS"."SYSMAN_PUMP" successfully completed at 10:49:19

2. ACCESS_METHOD

In some situations, the undocumented parameter ACCESS_METHOD can be used to enforce a specific method to unload or load the data.
If the parameter is not specified, Data Pump automatically chooses the best method to load or unload the data.
The parameter can only be specified when the Data Pump job is initially started; it cannot be specified when the job is restarted.
If the parameter is specified, the method of loading or unloading the data is enforced on all tables that need to be loaded or unloaded with the job.
Enforcing a specific method may result in slower performance of the overall Data Pump job.
Here is an example:

expdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
expdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE 
  
impdp \'/ as sysdba\'  ... ACCESS_METHOD=DIRECT_PATH  
impdp \'/ as sysdba\'  ... ACCESS_METHOD=EXTERNAL_TABLE
 

3. TRACE

Tracing can be enabled by specifying a 7-digit hexadecimal mask in the TRACE parameter of expdp or impdp. The first three digits enable tracing for a specific Data Pump component, while the last four digits are usually 0300. Leading zeros can be omitted, and the value specified for the TRACE parameter is not case sensitive. TRACE does not add anything to the output of Data Pump; it creates additional trace files.
Here is a summary of the Data Pump trace levels:

 
  10300 SHDW: To trace the Shadow process 
  20300 KUPV: To trace Fixed table
  40300 'div' To trace Process services
  80300 KUPM: To trace Master Control Process 
 100300 KUPF: To trace File Manager
 200300 KUPC: To trace Queue services
 400300 KUPW: To trace Worker process(es)         
 800300 KUPD: To trace Data Package
1000300 META: To trace Metadata Package
1FF0300 'all' To trace all components, full tracing

As shown above, level 1FF0300 can be specified to trace all Data Pump components. Here is an example:

 
impdp \'/ as sysdba\' NETWORK_LINK=edw_p SCHEMAS=CLOUD PARALLEL=8 JOB_NAME=net_pump EXCLUDE=statistics TRACE=1FF0300 
...
SHDW: *** GET_STATUS_VERSION call ***
SHDW:     Version = 9999
KUPP:10:58:22.050: Input trace/debug flags: 01FF0300 = 33489664
KUPP:10:58:22.050: Current trace/debug flags: 01FF0300 = 33489664
SHDW:10:58:22.050: Current user = SYS
SHDW:10:58:22.050: Current schema = SYS
SHDW:10:58:22.050: Current language = AMERICAN_AMERICA.AL32UTF8
SHDW:10:58:22.052: Current session address = 0000000077F7AFA0
SHDW:10:58:22.052: *** OPEN call ***
SHDW:10:58:22.052:     operation       = IMPORT
SHDW:10:58:22.052:     job_mode        = schema
SHDW:10:58:22.052:     remote_link     = edw_p
SHDW:10:58:22.052:     version         = 
SHDW:10:58:22.052:     compression     = 2
KUPV:10:58:22.052: Open request for job: SYS.NET_PUMP
KUPV:10:58:22.058: Master Table create statement: CREATE TABLE "SYS"."SYSMAN_PUMP" (process_order NUMBER, duplicate NUMBER, dump_fileid NUMBER, dump_position NUMBER, dump_length NUMBER, dump_orig_length NUMBER 
...

The 3 types of trace files can be found in $ADR_HOME/trace:
- Data Pump Master Control Process trace files have the format: SID_dm_process_id.trc
- Data Pump Worker Process trace files have the format: SID_dw_process_id.trc
- Data Pump Shadow Process trace files have the format: SID_ora_process_id.trc

4. KEEP_MASTER

If the undocumented Data Pump parameter KEEP_MASTER=Y is used, the master table is not deleted after the job completes. Dropping the master table does not lead to any data dictionary corruption, and if you keep it after the job completes, dropping it later will not cause any corruption either.
You might use this parameter if you create transportable tablespaces where the source and the destination are both ASM based.
Here is an example:

 
expdp \'/ as sysdba\' directory=DUMPDIR DUMPFILE=skat_full.dmp LOGFILE=skat_full.log FULL=Y KEEP_MASTER=Y
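With KEEP_MASTER=Y, the master table can afterwards be queried like any ordinary table, for example to review the processed objects. A sketch (the table lives in the schema of the user who ran the job and is named after the job; since no JOB_NAME was given above, the default name SYS_EXPORT_FULL_01 is assumed, and column names vary by release):

SQL> SELECT OBJECT_TYPE, OBJECT_NAME FROM SYS.SYS_EXPORT_FULL_01 WHERE ROWNUM <= 10;

Drop the table manually once it is no longer needed.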