Sunday, 18 August 2013

LEAST amount of recovery that has to be done before a restored database can be opened

Q?


What is the LEAST amount of recovery that has to be done before a database restored from a hot rman backup can be opened?

A:

ALL archivelogs from backup start to backup end must be applied before the database can be opened - this is the MINIMUM amount of recovery needed.

To determine which log was current at the time the backup completed, note the COMPLETION time of the rman database backup - take this from the backup log.

If the backup log is no longer available, query the RMAN metadata, but make sure you set NLS_DATE_FORMAT before invoking RMAN so that timestamps as well as dates are returned:

- for unix:     %export NLS_DATE_FORMAT='dd-mon-rr hh24:mi:ss'
                %rman target /

- for windows:  >set nls_date_format=dd-mon-rr:hh24:mi:ss
                >rman target /


To find your backup:

RMAN>LIST BACKUP OF DATABASE COMPLETED AFTER '<date>';
or
RMAN>LIST BACKUP OF DATABASE COMPLETED AFTER 'sysdate -n';


Set <date> to limit the output to the backups that you are interested in and note the completion time - for a multi-piece backup, note the completion time of the LAST backup piece created.

Let's say the backup completed at 04-Jun-11 13:13:05; query the controlfile:

SQL>ALTER SESSION SET NLS_DATE_FORMAT='DD-MON-RR HH24:MI:SS';
SQL> SELECT THREAD#, SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG WHERE '04-JUN-11 13:13:05' BETWEEN FIRST_TIME AND NEXT_TIME;


If the above query does not return any rows, it may be that the information has aged out of the controlfile - run the same query against v$log_history.

The sequence# returned by the above query is the log sequence that was current at the time the backup ended - let's say sequence 530, thread 1.  For minimum recovery use:

RMAN>RUN { SET UNTIL SEQUENCE 531 THREAD 1;
RESTORE DATABASE;
RECOVER DATABASE;
ALTER DATABASE OPEN RESETLOGS;
}


If this is a RAC implementation, use this SQL instead to query the controlfile:

SQL>SELECT THREAD#, SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE# FROM V$ARCHIVED_LOG WHERE '04-JUN-11 13:13:05' BETWEEN FIRST_TIME AND NEXT_TIME;


For minimum recovery use the log sequence and thread that has the lowest NEXT_CHANGE# returned by the above query.
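The selection logic above can be sketched in Python. This is a minimal illustration, assuming hypothetical V$ARCHIVED_LOG rows (not from a real system): among the logs that were current on each thread when the backup ended, recovery must reach the one with the lowest NEXT_CHANGE#, and SET UNTIL SEQUENCE takes that sequence plus one because UNTIL is exclusive.

```python
# Hypothetical V$ARCHIVED_LOG rows: (thread#, sequence#, first_change#, next_change#)
# for the logs that were current when the backup completed.
rows = [
    (1, 530, 981200, 981450),
    (2, 212, 981150, 981400),
]

# Minimum recovery point: the current log with the lowest NEXT_CHANGE#.
thread, seq, first_chg, next_chg = min(rows, key=lambda r: r[3])

# SET UNTIL SEQUENCE is exclusive, so add 1 to recover through this log.
until_seq = seq + 1
print(f"RUN {{ SET UNTIL SEQUENCE {until_seq} THREAD {thread}; ... }}")
```

With these sample rows, thread 2 sequence 212 has the lowest NEXT_CHANGE#, so the minimum recovery target would be UNTIL SEQUENCE 213 THREAD 2.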


Reference : How to determine minimum end point for recovery of an RMAN backup (Doc ID 1329415.1)

Wednesday, 27 February 2013

Slow Statistics purging (SYSAUX grows) - Delete Taking long

Recently I came across a problem where the delete statement below was taking more than 10 hours:

DELETE /*+ dynamic_sampling(4) */ FROM sys.wri$_optstat_histgrm_history WHERE savtime < :1 AND ROWNUM <= NVL (:2, ROWNUM)

It is actually purging old statistics from a SYSAUX table, and it points towards a bug:


As a workaround, before applying any patch, we used the steps below from:


Bug 10279045 - Slow Statistics purging (SYSAUX grows) [ID 10279045.8]


SQL> alter table sys.WRI$_OPTSTAT_HISTGRM_HISTORY move tablespace SYSAUX;
Table altered.

SQL> alter index sys.I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST rebuild nologging parallel 8;
Index altered.

SQL> alter index sys.I_WRI$_OPTSTAT_H_OBJ#_ICOL#_ST noparallel;
Index altered.
 
SQL> alter index sys.I_WRI$_OPTSTAT_H_ST rebuild nologging parallel 8;
Index altered.

SQL> alter index sys.I_WRI$_OPTSTAT_H_ST noparallel;
Index altered.

Tuesday, 19 February 2013

Datapump - Space Reduction

With the introduction of the expdp COMPRESSION parameter in 11g, we are all using it, and it greatly helps in reducing dump size.
But did you know it can even beat gzip? Here are the details:

The parameter has four options:
METADATA_ONLY - only the metadata is compressed; the data is left as it is (available in Oracle Database 10.2 as well).
DATA_ONLY - only the data is compressed; the metadata is left alone.
ALL - both the metadata and data are compressed.
NONE - this is the default; no compression is performed.
Here is how you compress the export of the table UNITS_FACT:
$ expdp global/global directory=dump_dir tables=units_fact dumpfile=units_fact_comp.dmp compression=all

For comparison purposes, export without compression:
$ expdp global/global directory=dump_dir tables=units_fact dumpfile=units_fact_uncomp.dmp

Now if you check the files created:
$ ls -l
-rw-r----- 1 oracle dba 2576384 Jul 6 22:39 units_fact_comp.dmp
-rw-r----- 1 oracle dba 15728640 Jul 6 22:36 units_fact_uncomp.dmp

The compression ratio is 100*(15728640-2576384)/15728640 or about 83.62%! That's fairly impressive; the uncompressed dumpfile is 15MB while the compressed one is about 2.5MB.
If you compress the dumpfile using gzip:
$ gzip units_fact_uncomp.dmp

-rw-r----- 1 oracle dba 3337043 Jul 6 22:36 units_fact_uncomp.dmp.gz

The gzipped file is about 3.2MB, roughly 30% larger than the Data Pump compressed file. So, in addition to the compression being potentially more efficient, the decompression really adds value: when importing the dumpfile, the import does not have to decompress the file first; it decompresses as it reads it, making the process really fast.
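The arithmetic above is easy to verify. A quick sketch using the byte counts taken straight from the ls output:

```python
# Sizes in bytes, from the ls -l output above.
uncompressed = 15728640    # units_fact_uncomp.dmp
dp_compressed = 2576384    # units_fact_comp.dmp (COMPRESSION=ALL)
gzipped = 3337043          # units_fact_uncomp.dmp.gz

# Data Pump compression savings as a percentage of the original size.
ratio = 100 * (uncompressed - dp_compressed) / uncompressed
print(f"Data Pump saves {ratio:.2f}% of the original size")

# How much bigger the gzip result is than the Data Pump result.
print(f"gzip file is {gzipped / dp_compressed:.2f}x the Data Pump file")
```

So for this particular table the Data Pump dump is about 83.6% smaller than the uncompressed one, and gzip lands about 1.3x larger than Data Pump's own compression.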

Monday, 18 February 2013

Colored SQL - 11g New Feature

Colored SQL


This new feature doesn't paint the SQL statement in color; rather, it sort of marks it as "important."

Suppose you are troubleshooting a performance issue and you suspect that a specific SQL statement is contributing to the problem. You want the SQL statement to be captured in every Automatic Workload Repository (AWR) snapshot. But AWR snapshots do not capture all SQL statements, just the top ones. How can you force a specific SQL to be captured, regardless of its inclusion in the top SQLs?

The procedure add_colored_sql() in the package dbms_workload_repository marks the SQL as "colored", or important enough to be captured in every AWR snapshot regardless of whether the SQL is in top SQLs. First, identify the SQL statement and then get its SQL_ID. To color it, use:

  
begin
  dbms_workload_repository.add_colored_sql(
     sql_id => 'ff15115dvgukr' 
  );
end;
/


To find out which SQLs have been colored, you can query the AWR table WRM$_COLORED_SQL:
SQL> SELECT * FROM wrm$_colored_sql;

DBID       SQL_ID        OWNER CREATE_TI
---------- ------------- ----- ---------
2965581158 ff15115dvgukr     1 05-APR-08


Thereafter the SQL statement with ID ff15115dvgukr will be captured in every snapshot, even if it's not in the top SQL statements. (Of course, the SQL must be present in the library cache to be captured in the AWR snapshots.)

But what if the SQL ceases to be that colorful—that is, not important enough to be captured? You can turn it off by issuing the obverse of the procedure.

begin
  dbms_workload_repository.remove_colored_sql(
    sql_id => 'ff15115dvgukr' 
  );
end;
/


This feature is extremely useful when you want to focus on a specific SQL in your tuning exercises.

Thursday, 8 November 2012

Oracle 11gR2 ASM Spfile is in Disk Group ( How ASM starts then?)

I recently came across an interesting confusion: in 11gR2 the ASM spfile is stored inside an ASM diskgroup, in the first default diskgroup created during the GI installation, and in order to start ASM we need the spfile.
So if ASM is down, how do we start it when its spfile lies inside an ASM diskgroup?

Here lies the answer to this question:

$ asmcmd spget
+DEV_DATA01/asm/asmparameterfile/registry.253.754437197


 
During Clusterware (GI) startup, the GPnP (Grid Plug and Play) profile provides the ASM discovery string:

$gpnptool getpval -asm_dis

ORCL:*

This string is used to scan the device headers for the one that contains the ASM spfile, i.e. kfdhdb.spfflg=1.

$ asmcmd lsdsk -G RDPDEV_DATA01
Path
ORCL:ASMSDB12
ORCL:ASMSDB2
ORCL:ASMSDB3
ORCL:ASMSDB5


 

Let's scan above disk devices :

$ kfed read ORCL:ASMSDB12 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000


$ kfed read ASMSDB2  | grep -E 'spf|ausize'

$  kfed read ORCL:ASMSDB3  | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000


$  kfed read ORCL:ASMSDB5  | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       19 ; 0x0f4: 0x00000013
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001   --> here is your spfile on this disk


Final Story:

To read the ASM spfile during ASM instance startup, it is not necessary to open the disk group: all the information needed to access it is stored in the device's header. By the way, the same technique is used to access the Clusterware voting files, which are also stored in an ASM disk group; in that case Clusterware does not need a running ASM instance to access the voting files:
grid@iudb007:~/ [+ASM5] kfed read ORCL:ASMSDB5 | grep vf
kfdhdb.vfstart:                     256 ; 0x0ec: 0x00000100 <- START offset of the voting file
kfdhdb.vfend:                       288 ; 0x0f0: 0x00000120 <- END offset of the voting file

Tuesday, 6 November 2012

OEM Agent : The agentTZRegion value not in Agreement

Due to the DST change some days back, we saw a couple of OEM agents fail to start with the errors below:

oracle@machine[TEST01]> ./emctl start agent
Starting agent ......... failed.
The agentTZRegion value in /orcl/app/oracle/agent11g/sysman/config/emd.properties is not in agreement with what agent thinks it should be.Please verify your environment to make sure that TZ setting has not changed since the last start of the agent.
If you modified the timezone setting in the environment, please stop the agent and exectute 'emctl resetTZ agent' and also execute the script mgmt_target.set_agent_tzrgn(<agent_name>, <new_tz_rgn>) to get the value propagated to repository.
Consult the log files in: /orcl/app/oracle/agent11g/sysman/log

The OS-level timezone had changed but the agent configuration had not. We fixed it manually using the method below:

1. Set the TZ environment variable to the correct value

oracle@machine[TEST01]> echo $TZ
Europe/Moscow
export TZ=Etc/GMT-4

2. Reset the agent time zone using emctl resetTZ agent

oracle@machine[TEST01]> ./emctl resetTZ agent

Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
Updating /orcl/app/oracle/agent11g/sysman/config/emd.properties...
Successfully updated /orcl/app/oracle/agent11g/sysman/config/emd.properties.

Log in as the EM repository user and run the script (OEM Grid):
exec mgmt_target.set_agent_tzrgn('machinename2:1830','Etc/GMT-4')
and commit the changes

This can be done for example by logging into sqlplus and doing
SQL> exec mgmt_target.set_agent_tzrgn('machinename:1830','Etc/GMT-4')
SQL> commit;

3. Agent started:

oracle@machine[TEST01]> ./emctl start agent
Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
Starting agent ..................... started.

Reference Metalink ID note:
DST Compliance for Grid Control OMS, Repository and Agents: Checks and Updates [ID 457792.1]

Monday, 5 November 2012

ORA-12801: error signaled in parallel query server P001 ORA-01722: invalid number

Today I came across the two errors above. They point to two separate issues:

1. Parallel query failing.
2. Invalid comparison operation.

Query Failing:

SELECT COUNT(*) FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < 20000;

ORA-12801: error signaled in parallel query server P001
ORA-01722: invalid number


When I checked the degree on this table it was set to 1, so no problem there, but the degree on the table's indexes was set to 4. I changed the degree to 1 to sort out the first issue, as below:

SQL> select owner,table_name,degree from dba_tables where table_name in ('SALES_MASTER_HISTORY');

OWNER                          TABLE_NAME                     DEGREE
------------------------------ ------------------------------ ----------
CAP                          SALES_MASTER_HISTORY                   1

SQL> select index_name,degree from dba_indexes where table_name in ('SALES_MASTER_HISTORY');

INDEX_NAME                     DEGREE
------------------------------ ----------------------------------------
IDX_TMH_NETTEDBOOKINGREF       4
IDX_VALUEDATE                  4
PK_TRADES_MASTER_HISTORY       4
IDX_TMH_CORESYSREF             4
IDX_TMH_TRUNC_BOOKEDDATE_IDX   4
IDX_TMH_BRANCH_CODE            4
IDX_TMH_CUSTOMER_NUMBER        4
IDX_TMH_BROKERCODE             4
IDX_BOOKEDDATE                 4
IDX_TMH_CONTRACTNUMBER         4
IDX_TMH_CUSTOMER               4
11 rows selected.


SQL> alter index CAP.IDX_BOOKEDDATE  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_BRANCH_CODE  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_BROKERCODE  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_CONTRACTNUMBER  parallel 1;
Index altered.
SQL>  alter index CAP.IDX_TMH_CUSTOMER_NUMBER  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_NETTEDBOOKINGREF  parallel 1;
Index altered.
SQL>  alter index CAP.IDX_VALUEDATE  parallel 1;
Index altered.
SQL> alter index CAP.PK_TRADES_MASTER_HISTORY  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_TRUNC_BOOKEDDATE_IDX  parallel 1;
Index altered.

SQL> select owner,index_name,degree from dba_indexes where table_name in ('SALES_MASTER_HISTORY') order by INDEX_NAME;
OWNER                          INDEX_NAME                     DEGREE
------------------------------ ------------------------------ ----------------------------------------
CAPFX                          IDX_BOOKEDDATE                 1
CAPFX                          IDX_TMH_BRANCH_CODE            1
CAPFX                          IDX_TMH_BROKERCODE             1
CAPFX                          IDX_TMH_CONTRACTNUMBER         1
CAPFX                          IDX_TMH_CORESYSREF             1
CAPFX                          IDX_TMH_CUSTOMER               1
CAPFX                          IDX_TMH_CUSTOMER_NUMBER        1
CAPFX                          IDX_TMH_NETTEDBOOKINGREF       1
CAPFX                          IDX_TMH_TRUNC_BOOKEDDATE_IDX   1
CAPFX                          IDX_VALUEDATE                  1
CAPFX                          PK_TRADES_MASTER_HISTORY       1

After that, when I executed the query, it failed with just:

SQL> select count(*)  FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < 20000;
select count(*)  FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < 20000
                                                  *
ERROR at line 1:
ORA-01722: invalid number

This is not really a database problem. TREASURYNUMBER is CHAR(8), and comparing it with a number causes an implicit datatype conversion (TO_NUMBER on the column), which fails on non-numeric values. This can be resolved by putting the 20000 value in quotes, as below:

SQL> SELECT COUNT(*) FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < '20000';
  COUNT(*)
----------
   109090909

So, a simple solution, but beneficial to know.
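The same pitfall is easy to reproduce outside Oracle. This Python sketch (the sample values are hypothetical CHAR(8)-style data, not from the real table) shows both behaviours: converting every value to a number fails as soon as one value is non-numeric, which is what Oracle's implicit TO_NUMBER does before raising ORA-01722, while quoting the literal switches to string comparison, which is lexicographic rather than numeric:

```python
values = ["00012345", "9000", "AB123456"]  # CHAR(8)-style data, one non-numeric

# Numeric comparison (what WHERE treasurynumber < 20000 forces):
# fails as soon as 'AB123456' is converted, just as Oracle raises ORA-01722.
try:
    matches = [v for v in values if int(v) < 20000]
except ValueError as e:
    print("conversion error:", e)

# String comparison (WHERE treasurynumber < '20000'): no error, but lexicographic,
# so '9000' is NOT less than '20000' as a string even though 9000 < 20000 as numbers.
print([v for v in values if v < "20000"])
```

Worth noting: the quoted-literal fix avoids the error but changes the comparison semantics; it gives the numerically expected answer only if the stored values are zero-padded to a fixed width so that string order matches numeric order.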