Thursday 8 November 2012

Oracle 11gR2 ASM Spfile is in a Disk Group (How does ASM start then?)

I recently ran into an interesting puzzle: in 11gR2 the ASM spfile is stored inside an ASM disk group (the first disk group created during the GI installation), yet ASM needs the spfile in order to start.
So if my ASM is down, how do I start it when my spfile lies in an ASM disk group?

Here lies the answer to this question:

$ asmcmd spget
+DEV_DATA01/asm/asmparameterfile/registry.253.754437197


 
During Clusterware (GI) startup, the GPnP (Grid Plug and Play) profile provides the ASM discovery string:

$ gpnptool getpval -asm_dis

ORCL:*

This string is used to scan the device headers to find the disk that contains the ASM spfile, i.e. the one with kfdhdb.spfflg=1.

$ asmcmd lsdsk -G DEV_DATA01
Path
ORCL:ASMSDB12
ORCL:ASMSDB2
ORCL:ASMSDB3
ORCL:ASMSDB5


 

Let's scan the disk devices above:

$ kfed read ORCL:ASMSDB12 | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000


$ kfed read ORCL:ASMSDB2 | grep -E 'spf|ausize'

$  kfed read ORCL:ASMSDB3  | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000


$  kfed read ORCL:ASMSDB5  | grep -E 'spf|ausize'
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.spfile:                       19 ; 0x0f4: 0x00000013
kfdhdb.spfflg:                        1 ; 0x0f8: 0x00000001   --> here is your spfile, on this disk
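
Instead of reading each disk by hand, a small loop can do the scan. This is just a convenience sketch: it assumes ASMLib is in use (hence the ORCL: prefix) and that the oracleasm tool is available to enumerate the disks:

for disk in $(oracleasm listdisks); do
    echo "== ORCL:${disk} =="                        # label each disk
    kfed read "ORCL:${disk}" | grep -E 'spf|ausize'  # kfdhdb.spfflg=1 marks the spfile disk
done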


Final Story:

To read the ASM spfile during the ASM instance startup, it is not necessary to open the disk group. All information necessary to access the data is stored in the device's header. By the way, the same technique is used to access the Clusterware voting files which are also stored in an ASM disk group. In this case, Clusterware does not need a running ASM instance to access the cluster voting files:
grid@iudb007:~/ [+ASM5] kfed read ORCL:ASMSDB5 | grep vf
kfdhdb.vfstart:                     256 ; 0x0ec: 0x00000100 <- START offset of the voting file
kfdhdb.vfend:                       288 ; 0x0f0: 0x00000120 <- END offset of the voting file

Tuesday 6 November 2012

OEM Agent : The agentTZRegion value not in Agreement

Due to DST changes a few days back, we saw a couple of OEM agents fail to start with the error below:

oracle@machine[TEST01]> ./emctl start agent
Starting agent ......... failed.
The agentTZRegion value in /orcl/app/oracle/agent11g/sysman/config/emd.properties is not in agreement with what agent thinks it should be.Please verify your environment to make sure that TZ setting has not changed since the last start of the agent.
If you modified the timezone setting in the environment, please stop the agent and exectute 'emctl resetTZ agent' and also execute the script mgmt_target.set_agent_tzrgn(<agent_name>, <new_tz_rgn>) to get the value propagated to repository.
Consult the log files in: /orcl/app/oracle/agent11g/sysman/log

The OS-level time zone had changed, but the change was not reflected in the Oracle agent configuration. We fixed it manually using the method below:

1. Set the TZ environment variable to the correct value

oracle@machine[TEST01]> echo $TZ
Europe/Moscow
export TZ=Etc/GMT-4
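
The export above only affects the current shell; to persist the setting across logins, you can also add it to the shell profile (a sketch, assuming a bash login shell):

# assumption: bash login shell; adjust the profile file for your shell
echo 'export TZ=Etc/GMT-4' >> ~/.bash_profile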

2. Reset the agent time zone using emctl resetTZ agent

oracle@machine[TEST01]> ./emctl resetTZ agent

Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
Updating /orcl/app/oracle/agent11g/sysman/config/emd.properties...
Successfully updated /orcl/app/oracle/agent11g/sysman/config/emd.properties.

Log in as the EM repository user (on the OEM Grid Control repository), run the script, and commit the change. This can be done, for example, by logging into SQL*Plus and executing:

SQL> exec mgmt_target.set_agent_tzrgn('machinename:1830','Etc/GMT-4')
SQL> commit;

3. Start the agent:

oracle@machine[TEST01]> ./emctl start agent
Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
Starting agent ..................... started.

Reference Metalink ID note:
DST Compliance for Grid Control OMS, Repository and Agents: Checks and Updates [ID 457792.1]

Monday 5 November 2012

ORA-12801: error signaled in parallel query server P001 ORA-01722: invalid number

Today I came across the two errors above. There were two distinct issues:

1. A failing parallel query.
2. An invalid comparison operation.

Query Failing:

SELECT COUNT(*) FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < 20000;

ORA-12801: error signaled in parallel query server P001
ORA-01722: invalid number


When I checked the degree on this table it was set to 1, so no problem there, but the degree on the table's indexes was set to 4. I set the index degree back to 1 to address the first issue, as shown below:

SQL> select owner,table_name,degree from dba_tables where table_name in ('SALES_MASTER_HISTORY');

OWNER                          TABLE_NAME                     DEGREE
------------------------------ ------------------------------ ----------
CAP                          SALES_MASTER_HISTORY                   1

SQL> select index_name,degree from dba_indexes where table_name in ('SALES_MASTER_HISTORY');

INDEX_NAME                     DEGREE
------------------------------ ----------------------------------------
IDX_TMH_NETTEDBOOKINGREF       4
IDX_VALUEDATE                  4
PK_TRADES_MASTER_HISTORY       4
IDX_TMH_CORESYSREF             4
IDX_TMH_TRUNC_BOOKEDDATE_IDX   4
IDX_TMH_BRANCH_CODE            4
IDX_TMH_CUSTOMER_NUMBER        4
IDX_TMH_BROKERCODE             4
IDX_BOOKEDDATE                 4
IDX_TMH_CONTRACTNUMBER         4
IDX_TMH_CUSTOMER               4
11 rows selected.


SQL> alter index CAP.IDX_BOOKEDDATE  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_BRANCH_CODE  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_BROKERCODE  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_CONTRACTNUMBER  parallel 1;
Index altered.
SQL>  alter index CAP.IDX_TMH_CUSTOMER_NUMBER  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_NETTEDBOOKINGREF  parallel 1;
Index altered.
SQL>  alter index CAP.IDX_VALUEDATE  parallel 1;
Index altered.
SQL> alter index CAP.PK_TRADES_MASTER_HISTORY  parallel 1;
Index altered.
SQL> alter index CAP.IDX_TMH_TRUNC_BOOKEDDATE_IDX  parallel 1;
Index altered.

SQL> select owner,index_name,degree from dba_indexes where table_name in ('SALES_MASTER_HISTORY') order by INDEX_NAME;
OWNER                          INDEX_NAME                     DEGREE
------------------------------ ------------------------------ ----------------------------------------
CAPFX                          IDX_BOOKEDDATE                 1
CAPFX                          IDX_TMH_BRANCH_CODE            1
CAPFX                          IDX_TMH_BROKERCODE             1
CAPFX                          IDX_TMH_CONTRACTNUMBER         1
CAPFX                          IDX_TMH_CORESYSREF             1
CAPFX                          IDX_TMH_CUSTOMER               1
CAPFX                          IDX_TMH_CUSTOMER_NUMBER        1
CAPFX                          IDX_TMH_NETTEDBOOKINGREF       1
CAPFX                          IDX_TMH_TRUNC_BOOKEDDATE_IDX   1
CAPFX                          IDX_VALUEDATE                  1
CAPFX                          PK_TRADES_MASTER_HISTORY       1

After that, when I executed the query, it failed with just:

SQL> select count(*)  FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < 20000;
select count(*)  FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < 20000
                                                  *
ERROR at line 1:
ORA-01722: invalid number

This is not really a database problem. TREASURYNUMBER is CHAR(8), and comparing it with a number forces an implicit datatype conversion (TO_NUMBER on the column), which fails for any row whose value is not a valid number. It can be resolved by putting the value 20000 in quotes, so the comparison is done as a string (note that string comparison is lexicographic, so the results can differ from a numeric comparison):

SQL> SELECT COUNT(*) FROM SALES_MASTER_HISTORY WHERE TREASURYNUMBER < '20000';
  COUNT(*)
----------
   109090909
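
For anyone curious why the unquoted comparison fails, here is a minimal reproduction (CHAR_TEST is a hypothetical table, not from the system above):

-- a CHAR column holding a value that is not a valid number
create table char_test (c char(8));
insert into char_test values ('ABC');

-- Oracle rewrites this as TO_NUMBER(c) < 20000, and TO_NUMBER('ABC')
-- raises ORA-01722: invalid number
select * from char_test where c < 20000;

-- quoting the literal keeps it a string comparison, so no error
select * from char_test where c < '20000';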

So, this is a simple solution, but a beneficial one to know.




Friday 24 August 2012

SQL Profile - Play with Execution plan

Although I don't have much experience working with outlines, these days I am working extensively with SQL Profiles. My favourite blogger, Kerry Osborne, has written a wonderful post on this topic, which gave me a very clear idea of what a SQL Profile is.

Outlines essentially lock the plan, whereas a SQL Profile adjusts the empirical values of the SQL plan, commonly known as the "fudge factors". This gives the optimizer the flexibility to look for alternate plans, but it also means the advantages of a SQL Profile can fade over time as the data changes.

The hints used in a SQL Profile are not well documented, so I wanted to give as much information as possible:

1. Setting the optimizer mode: ALL_ROWS

2. Disabling hints embedded in the SQL: IGNORE_OPTIM_EMBEDDED_HINTS

3. Setting OPTIMIZER_FEATURES_ENABLE to its default value, i.e. activating all available features: OPTIMIZER_FEATURES_ENABLE(default)

4. Adjusting the number of rows expected from a table:
OPT_ESTIMATE(@"SEL$1", TABLE, "L"@"SEL$1", SCALE_ROWS=10)
E.g. 10 times as many rows are expected to be returned.

5. Adjusting the number of rows expected from an index scan:
OPT_ESTIMATE(@"SEL$1", INDEX_SKIP_SCAN, "A"@"SEL$1", PK_TABLE_XXXX_ASSIGNMENT, SCALE_ROWS=.4)
E.g. only 0.4 times as many rows (2.5 times fewer) are expected from the index skip scan.

6. Adjusting table, index and column statistics using hints such as:
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL3", scale, length=12 distinct=2 nulls=0)
TABLE_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", scale, blocks=5 rows=2400)
OPTIMIZER_FEATURES_ENABLE(default)
Below are some scripts to find the hints in a SQL Profile and in AWR snapshots (scripts borrowed from Kerry's blog):

SQL> set echo on
SQL> @sql_profile_hints
set lines 155
col hint for a150
select attr_val hint
from dba_sql_profiles p, sqlprof$attr h
where p.signature = h.signature
and name like ('&profile_name')
order by attr#
 /
Enter value for profile_name: PROFILE_0qa98gcnnza7h

HINT
------------------------------------------------------------------------------------------------------------------------------------------------------
OPT_ESTIMATE(@"SEL$1", TABLE, "L"@"SEL$1", SCALE_ROWS=0.0536172171)
OPT_ESTIMATE(@"SEL$1", INDEX_SKIP_SCAN, "A"@"SEL$1", PK_TABLE_XXXX_ASSIGNMENT, SCALE_ROWS=4)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL1", scale, length=6 distinct=1234 nulls=0 min=1000000014 max=1026369632)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL2", scale, length=12 distinct=2 nulls=0)
COLUMN_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", "COL3", scale, length=12 distinct=2 nulls=0)
TABLE_STATS("APP_OWNER"."TABLE_XXXX_ASSIGNMENT", scale, blocks=5 rows=2400)
OPTIMIZER_FEATURES_ENABLE(default)

SQL> @sql_hints_awr
 select
 extractvalue(value(d), '/hint') as outline_hints
 from
 xmltable('/*/outline_data/hint'
 passing (
 select
 xmltype(other_xml) as xmlval
 from
 dba_hist_sql_plan
where
sql_id = '&sql_id'
and plan_hash_value = &plan_hash_value
and other_xml is not null
)
) d;
Enter value for sql_id: 0qa98gcnnza7h
Enter value for plan_hash_value: 568322376

OUTLINE_HINTS
-----------------------------------------------------------------------------------------------------------------------------------------------------------
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('10.2.0.3')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
FULL(@"SEL$1" "SKEW"@"SEL$1")

Scripts to create a SQL Profile (from Kerry Osborne's blog):

create_sql_profile.sql – uses cursor from the shared pool
create_sql_profile_awr.sql – uses AWR tables
sql_profile_hints.sql – shows the hints in a SQL Profile for 10g


Basic Commands:
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE
DBMS_SQLTUNE.ALTER_SQL_PROFILE
DBMS_SQLTUNE.DROP_SQL_PROFILE


set verify off
exec dbms_sqltune.accept_sql_profile(task_name => '&task_name', category => '&category');




set verify off
EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE (name => '&profile_name', attribute_name => 'STATUS', value => 'DISABLED');
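
To remove a profile entirely, there is DBMS_SQLTUNE.DROP_SQL_PROFILE; for example:

set verify off
EXEC DBMS_SQLTUNE.DROP_SQL_PROFILE (name => '&profile_name');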

Reference Links :

http://kerryosborne.oracle-guy.com/2009/04/oracle-sql-profiles/
 

Wednesday 25 July 2012

Table Fragmentation - Oracle


If a table is only subject to inserts, there will not be any fragmentation.
Fragmentation comes in when we update or delete data in the table.

The space freed up by non-insert DML operations is not immediately reused (and sometimes may never be reused). This leaves holes in the table, which results in table fragmentation.

To understand this more clearly, we need to be clear on how Oracle manages space for tables.
The "high water mark" (HWM) of a table defines the boundary between space that has ever been used and space that has never been used. While performing a full table scan, Oracle always reads up to the HWM; if there is a lot of free space below the HWM, that is read too, degrading the performance of the FTS.

Now let's see how to identify the HWM, the never-used space, and the free space (used, but freed by deletes/updates), and then decide whether the table is a candidate for reorganization.

SQL> create table test as select * from dba_tables; -- Create a table
Table created.

SQL> exec dbms_stats.gather_table_stats('SYS','TEST'); -- Gather stats on this table


SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
2 from user_tables where table_name='TEST'; -- The number of blocks used/free

Ever Used  Never Used  Total rows
---------  ----------  ----------
       49           6        1680

SQL> delete from test where owner='SYS'; -- I'm deleting almost half the rows.
764 rows deleted.

SQL> commit;
Commit complete.

SQL> exec dbms_stats.gather_table_stats('SYS','TEST'); -- Gather stats on this table again


SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
2 from user_tables where table_name='TEST'; -- No difference in blocks usage

Ever Used  Never Used  Total rows
---------  ----------  ----------
       49           6         916


Even though almost half the rows were deleted, the output above shows the HWM is still at 49 blocks, and any FTS will read all 49 blocks to search the data. If your application performs many full table scans on this table, you may consider reorganizing it.

Reasons to reorganize

a) Slower response time (from that table)
b) High number of chained (actually migrated) rows.
c) The table has grown many folds and the old space is not getting reused.
Note: index-based queries may not benefit from a reorg as much as queries that do full table scans.

How to reorganize?

Before Oracle 10g, there were mainly two ways to reorganize a table (an example of the move approach follows the list):

a) Export, drop, and import.
b) Alter table move (to another tablespace, or same tablespace).
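
For example, the move approach would look like the sketch below; as noted in the advantages section later, a move leaves the table's indexes in an UNUSABLE state, so each index must be rebuilt afterwards (the index name here is hypothetical):

SQL> alter table test move;
SQL> alter index test_idx rebuild;   -- repeat for every index on the table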

Oracle 10g provides us a new way of reorganizing the data.

Shrink command: this command is only applicable to tables in tablespaces with Automatic Segment Space Management (ASSM).
Before using this command, you should have row movement enabled:

SQL> alter table test enable row movement;
Table altered.

There are two ways of using this command.

1. Break it into two parts: in the first part rearrange the rows, and in the second part reset the HWM.

Part 1: Rearrange (All DML's can happen during this time)

SQL> alter table test shrink space compact;
Table altered.

Part 2: Reset the HWM (no DML can happen, but this is fairly quick; in fact it usually goes unnoticed)
SQL> alter table test shrink space;
Table altered.

2. Do it in one go:

SQL> alter table test shrink space; (Both the rearranging and the HWM reset happen in one statement)
Table altered.
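
To confirm the HWM really came down, you can re-gather statistics and check the block counts again; a sketch (actual numbers will vary):

SQL> exec dbms_stats.gather_table_stats('SYS','TEST');
SQL> select blocks "Ever Used", empty_blocks "Never Used"
  2  from user_tables where table_name='TEST';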

A few advantages over the conventional methods:

1. Unlike "alter table move ..", indexes are not left in an UNUSABLE state; the shrink command maintains the indexes as well.
2. It's an online operation, so you don't need downtime to do this reorg.
3. It doesn't require any extra space for the process to complete.

Conclusion
It's a 10g feature to shrink (reorganize) tables (almost) online, which can be used with Automatic Segment Space Management.

Wednesday 27 June 2012

EMD upload error: Upload was successful but collections currently disabled - disk full

This error occurs during OEM agent uploads because filesystem usage has breached the threshold the agent is configured with.

There are two solutions here:

1. Clear space on the filesystem; if that is not possible, perform the second step.
2. Increase the values of the parameters below in
<AGENT_HOME>/sysman/config/emd.properties.

Parameters before change:
UploadMaxDiskUsedPct=98
UploadMaxDiskUsedPctFloor=95

Parameters after change:
UploadMaxDiskUsedPct=99
UploadMaxDiskUsedPctFloor=99
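
To see which filesystem is breaching these thresholds, check the usage of the filesystem hosting the agent's upload directory (path assumed):

df -h <AGENT_HOME>/sysman/emd/upload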
After changing the parameters, re-secure the agent, upload, and clear its state:
./emctl secure agent <password>
./emctl upload agent
./emctl clearstate agent

The above steps fix the problem, and we can upload from the agent now:

./emctl upload agent
Oracle Enterprise Manager 11g Release 1 Grid Control 11.1.0.1.0
Copyright (c) 1996, 2010 Oracle Corporation.  All rights reserved.
---------------------------------------------------------------
EMD upload completed successfully



Reference Metalink Note : 317242.1

Tuesday 19 June 2012

Values populated in DBA_TAB_MODIFICATIONS

Goal :
The goal is to explain why the view DBA_TAB_MODIFICATIONS sometimes has no values
even when the parameter STATISTICS_LEVEL is set to TYPICAL and the specific schema has been analyzed successfully using the package DBMS_STATS.GATHER_SCHEMA_STATS.
In addition, all the tables in that schema show MONITORING=YES in the view DBA_TABLES.


Fix:
The updates to the *_TAB_MODIFICATIONS views are related to the volume of changes to a table.
Approximately 10% of the table's data needs to change; a single update of one row, for example, might not be enough to populate *_TAB_MODIFICATIONS.
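
For reference, in 11g this roughly-10% staleness threshold is exposed as the STALE_PERCENT statistics preference and can be changed per table; a sketch, using the demo table created below:

SQL> exec dbms_stats.set_table_prefs('SYS','TEST','STALE_PERCENT','5');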

Example :

SQL>  create table test ( num  varchar2(32));
Table created.

Let's insert 100 rows into table TEST:

SQL> begin
  2  for i in 1..100 loop
  3  insert into sys.test values (i);
  4  end loop;
  5  commit;
  6  end;
  7  /
PL/SQL procedure successfully completed.

SQL> select count(*) from test;
  COUNT(*)
----------
       100

Gather stats for this table:

SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS('SYS','TEST');
PL/SQL procedure successfully completed.

SQL> SELECT OWNER,TABLE_NAME,STALE_STATS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS FROM DBA_TAB_STATISTICS where OWNER ='SYS' AND TABLE_NAME='TEST';

OWNER  TABLE_NAME  STALE_STATS  NUM_ROWS  BLOCKS  EMPTY_BLOCKS
-----  ----------  -----------  --------  ------  ------------
SYS    TEST        NO                100       1             0

SQL>  select * from dba_tab_modifications where TABLE_OWNER='SYS' AND TABLE_NAME='TEST';

no rows selected

Now let's manually flush the monitoring information from the SGA.

Note: the procedure DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO flushes in-memory monitoring information for all tables in the dictionary. Corresponding entries in the *_TAB_MODIFICATIONS, *_TAB_STATISTICS and *_IND_STATISTICS views are updated immediately, without waiting for the Oracle database to flush them periodically (by default every 3 hours). This procedure is useful when you need up-to-date information in those views.

SQL> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.

SQL> SELECT OWNER,TABLE_NAME,STALE_STATS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS FROM DBA_TAB_STATISTICS where OWNER ='SYS' AND TABLE_NAME='TEST';

OWNER  TABLE_NAME  STALE_STATS  NUM_ROWS  BLOCKS  EMPTY_BLOCKS
-----  ----------  -----------  --------  ------  ------------
SYS    TEST        NO                100       1             0

Now let's insert 1000 more rows and check whether an entry appears in DBA_TAB_MODIFICATIONS:

SQL> begin
  2  for i in 1..1000 loop
  3  insert into sys.test values (i);
  4  end loop;
  5  commit;
  6  end;
  7  /
PL/SQL procedure successfully completed.

SQL> select count(*) from test;
  COUNT(*)
----------
      1100

SQL> SELECT OWNER,TABLE_NAME,STALE_STATS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS FROM DBA_TAB_STATISTICS where OWNER ='SYS' AND TABLE_NAME='TEST';

OWNER  TABLE_NAME  STALE_STATS  NUM_ROWS  BLOCKS  EMPTY_BLOCKS
-----  ----------  -----------  --------  ------  ------------
SYS    TEST        NO                100       1             0

SQL>  select * from dba_tab_modifications where TABLE_OWNER='SYS' AND TABLE_NAME='TEST';
no rows selected

SQL> SELECT OWNER,TABLE_NAME,STALE_STATS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS FROM DBA_TAB_STATISTICS where OWNER ='SYS' AND TABLE_NAME='TEST';

OWNER  TABLE_NAME  STALE_STATS  NUM_ROWS  BLOCKS  EMPTY_BLOCKS
-----  ----------  -----------  --------  ------  ------------
SYS    TEST        NO                100       1             0

SQL> exec dbms_stats.flush_database_monitoring_info;
PL/SQL procedure successfully completed.

SQL> SELECT OWNER,TABLE_NAME,STALE_STATS,NUM_ROWS,BLOCKS,EMPTY_BLOCKS FROM DBA_TAB_STATISTICS where OWNER ='SYS' AND TABLE_NAME='TEST';

OWNER  TABLE_NAME  STALE_STATS  NUM_ROWS  BLOCKS  EMPTY_BLOCKS
-----  ----------  -----------  --------  ------  ------------
SYS    TEST        YES               100       1             0

SQL> select * from dba_tab_modifications where TABLE_OWNER='SYS' AND TABLE_NAME='TEST';

TABLE_OWNER  TABLE_NAME  PARTITION_NAME  SUBPARTITION_NAME  INSERTS  UPDATES  DELETES  TIMESTAMP  TRU  DROP_SEGMENTS
-----------  ----------  --------------  -----------------  -------  -------  -------  ---------  ---  -------------
SYS          TEST                                               1000        0        0  19-JUN-12  NO              0

 

Wednesday 6 June 2012

Playing with Optimizer Mode - Execution Plan



Today I want to talk about optimizer modes in Oracle; in this post I will specifically cover the ALL_ROWS and FIRST_ROWS modes.

OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance. Possible values:
first_rows_n
The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).
first_rows
The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.
all_rows
The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement).
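
All of these can be set at the instance or session level; for example, to try the n-row variant in a session:

SQL> alter session set optimizer_mode = first_rows_10;
SQL> show parameter optimizer_mode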
Let's create a test table with an index on it, and gather statistics:

SQL> create table opt_test  as select * from dba_objects;
Table created.

SQL> create index opt_testindex  on opt_test(OBJECT_TYPE);
Index created.

SQL> exec dbms_stats.gather_table_stats('sys','opt_test');
PL/SQL procedure successfully completed.

SQL> select count(*) from opt_test;
  COUNT(*)
----------
     62952

SQL> set pagesize 200
SQL> set linesize 200
SQL> select object_type,count(*) from dba_objects group by object_type;

OBJECT_TYPE           COUNT(*)
------------------- ----------
CONSUMER GROUP               5
INDEX PARTITION            228
SEQUENCE                   314
QUEUE                       21
SCHEDULE                     1
TABLE PARTITION            557
RULE                         1
JAVA DATA                  301
PROCEDURE                  283
OPERATOR                    57
LOB PARTITION                1
WINDOW                       2
DATABASE LINK                1
LOB                        679
PACKAGE                    983
PACKAGE BODY               921
LIBRARY                    154
RULE SET                    11
PROGRAM                     12
TYPE BODY                  174
CONTEXT                      7
JAVA RESOURCE              773
XML SCHEMA                  24
TRIGGER                    435
JOB CLASS                    2
UNDEFINED                    6
DIRECTORY                   18
MATERIALIZED VIEW            5
TABLE                     4163
INDEX                     4524
SYNONYM                  24462
VIEW                      4977
FUNCTION                   342
WINDOW GROUP                 1
JAVA CLASS               16474
INDEXTYPE                   10
JAVA SOURCE                  5
CLUSTER                     10
TYPE                      1990
RESOURCE PLAN                3
EVALUATION CONTEXT           8
JOB                          8
42 rows selected.

The table has roughly 63k rows in total, and SYNONYM is by far the most common object type at almost 25k rows. So when we select the rows with object_type = 'SYNONYM', the optimizer should go for a full table scan rather than an index scan followed by table access.
But things are not as simple as they look: we can steer Oracle using different optimizer modes, as shown below:

SQL> set autotrace traceonly exp
SQL>  alter session set optimizer_mode=ALL_ROWS;
Session altered.
SQL> select * from opt_test where object_type='SYNONYM';
Execution Plan
----------------------------------------------------------
Plan hash value: 1427440082
------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          | 23810 |  2185K|   197   (4)| 00:00:03 |
|*  1 |  TABLE ACCESS FULL| OPT_TEST | 23810 |  2185K|   197   (4)| 00:00:03 |
------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("OBJECT_TYPE"='SYNONYM')
SQL> alter session set optimizer_mode=FIRST_ROWS;
Session altered.
SQL>  select * from opt_test where object_type='SYNONYM';
Execution Plan
----------------------------------------------------------
Plan hash value: 743984098
---------------------------------------------------------------------------------------------
| Id  | Operation                   | Name          | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |               | 23810 |  2185K|  1133   (1)| 00:00:14 |
|   1 |  TABLE ACCESS BY INDEX ROWID| OPT_TEST      | 23810 |  2185K|  1133   (1)| 00:00:14 |
|*  2 |   INDEX RANGE SCAN          | OPT_TESTINDEX | 23810 |       |    66   (2)| 00:00:01 |
---------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("OBJECT_TYPE"='SYNONYM')
SQL>

Since we asked for the first rows quickly, the optimizer compromises on the overall cost of the execution plan and chooses the plan that delivers the first rows fastest.

This does not mean FIRST_ROWS is a bad optimizer mode: in an OLTP environment, where users need the first rows on their screen quickly, this plan is genuinely good.


Conclusion
The cost-based optimizer gives you the flexibility to optimize for response time or for throughput, so choose the mode based on your business requirement.

 

Sunday 20 May 2012

Manually add a host target to Grid Control / OEM

Recently I went through a situation where I accidentally dropped the host target from an agent while moving the agent to another grid.

I found the following solution, and it worked for me:

Solution: create a temporary file /tmp/hosttarget with one line:

<Target TYPE="host" NAME="hostname.abc.com"/>

Where hostname.abc.com is the host name. Then add the target with:

emctl config agent addtarget -f /tmp/hosttarget
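
To verify the target was registered, you can list the agent's targets (standard agent command):

emctl config agent listtargets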

Similarly, we can do this for other target types as well.

Saturday 5 May 2012

Oracle Join Nested/Hash/Sort Merge- Performance Tuning

Normally we have three types of joins in Oracle:

1. Nested Loop Join
2. Hash Join
3. Sort Merge Join

I will discuss the characteristics of these three joins today:

1. Nested Loop Joins

Nested loop joins are useful when small subsets of data are being joined and if the join condition is an efficient way of accessing the second table.
It is very important to ensure that the inner table is driven from (dependent on) the outer table. If the inner table's access path is independent of the outer table, then the same rows are retrieved for every iteration of the outer loop, degrading performance considerably. In such cases, hash joins joining the two independent row sources perform better.

A nested loop join involves the following steps:
  1. The optimizer determines the driving table and designates it as the outer table.
  2. The other table is designated as the inner table.
  3. For every row in the outer table, Oracle accesses all the rows in the inner table. The outer loop is for every row in the outer table and the inner loop is for every row in the inner table. The outer loop appears before the inner loop in the execution plan, as follows:
    NESTED LOOPS 
      outer_loop 
      inner_loop 
    

1.1 When the Optimizer Uses Nested Loop Joins

The optimizer uses nested loop joins when joining small number of rows, with a good driving condition between the two tables. You drive from the outer loop to the inner loop, so the order of tables in the execution plan is important.
The outer loop is the driving row source. It produces a set of rows for driving the join condition. The row source can be a table accessed using an index scan or a full table scan. Also, the rows can be produced from any other operation. For example, the output from a nested loop join can be used as a row source for another nested loop join.
The inner loop is iterated for every row returned from the outer loop, ideally by an index scan. If the access path for the inner loop is not dependent on the outer loop, then you can end up with a Cartesian product; for every iteration of the outer loop, the inner loop produces the same set of rows. Therefore, you should use other join methods when two independent row sources are joined together.

1.2 Nested Loop Join Hints

If the optimizer is choosing to use some other join method, you can use the USE_NL(table1 table2) hint, where table1 and table2 are the aliases of the tables being joined.
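
For illustration, a hinted version of the join used in the worked example at the end of this post (table aliases assumed):

SQL> select /*+ use_nl(e d) */ e.ename, d.dname
  2  from e, d
  3  where e.deptno = d.deptno;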


2 Hash Joins

Hash joins are used for joining large data sets. The optimizer uses the smaller of two tables or data sources to build a hash table on the join key in memory. It then scans the larger table, probing the hash table to find the joined rows.
This method is best used when the smaller table fits in available memory. The cost is then limited to a single read pass over the data for the two tables.

2.1 When the Optimizer Uses Hash Joins

The optimizer uses a hash join to join two tables if they are joined using an equijoin and if either of the following conditions are true:
  • A large amount of data needs to be joined.
  • A large fraction of a small table needs to be joined.

2.2 Hash Join Hints

Apply the USE_HASH hint to instruct the optimizer to use a hash join when joining two tables together.
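
Again, a hinted sketch using the demo tables from the example at the end of this post:

SQL> select /*+ use_hash(e d) */ e.ename, d.dname
  2  from e, d
  3  where e.deptno = d.deptno;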

3. Sort Merge Joins

Sort merge joins can be used to join rows from two independent sources. Hash joins generally perform better than sort merge joins. On the other hand, sort merge joins can perform better than hash joins if both of the following conditions exist:
  • The row sources are sorted already.
  • A sort operation does not have to be done.
However, if a sort merge join involves choosing a slower access method (an index scan as opposed to a full table scan), then the benefit of using a sort merge might be lost.
Sort merge joins are useful when the join condition between two tables is an inequality condition (but not a nonequality) like <, <=, >, or >=. Sort merge joins perform better than nested loop joins for large data sets. You cannot use hash joins unless there is an equality condition.
In a merge join, there is no concept of a driving table. The join consists of two steps:
  1. Sort join operation: Both the inputs are sorted on the join key.
  2. Merge join operation: The sorted lists are merged together.
If the input is already sorted by the join column, then a sort join operation is not performed for that row source. However, a sort merge join always creates a positionable sort buffer for the right side of the join so that it can seek back to the last match in the case where duplicate join key values come out of the left side of the join.

3.1 When the Optimizer Uses Sort Merge Joins

The optimizer can choose a sort merge join over a hash join for joining large amounts of data if any of the following conditions are true:
  • The join condition between two tables is not an equi-join.
  • Because of sorts already required by other operations, the optimizer finds it is cheaper to use a sort merge than a hash join.

3.2 Sort Merge Join Hints

To instruct the optimizer to use a sort merge join, apply the USE_MERGE hint. You might also need to give hints to force an access path.
There are situations where it is better to override the optimizer with the USE_MERGE hint. For example, the optimizer can choose a full scan on a table and avoid a sort operation in a query. However, there is an increased cost because a large table is accessed through an index and single block reads, as opposed to faster access through a full table scan.
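
A hinted sketch with the same demo tables:

SQL> select /*+ use_merge(e d) */ e.ename, d.dname
  2  from e, d
  3  where e.deptno = d.deptno;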


Note: on a broader look, a nested loop join is performed on small row sources, where an index on the driven (inner) table's join column gives it an edge; a hash join is used on large tables with no useful indexes and uses the PGA to build the hash table; and a sort merge join is typically used for medium-sized tables or non-equijoins.

The note above is referenced from Oracle internals documentation and manuals.


Example:

SQL> conn hr/*****
Connected.
SQL> create table e as select * from emp;
Table created.
SQL> create table d as select * from dept;
Table created.
SQL> create index e_deptno on e(deptno);
Index created.
Gather stats for D as-is:
SQL> exec dbms_stats.gather_table_stats('hr','D')
PL/SQL procedure successfully completed.

Set artificial stats for E:
SQL> exec dbms_stats.set_table_stats(ownname => 'hr', tabname => 'E', numrows => 100, numblks => 100, avgrlen => 124);
PL/SQL procedure successfully completed.

Set artificial stats for E_DEPTNO index
SQL> exec dbms_stats.set_index_stats(ownname => 'hr', indname => 'E_DEPTNO', numrows => 100, numlblks => 10);
PL/SQL procedure successfully completed.

Check out the plan:
A) With a small number of rows in E (100), you will see a nested loop being used.

SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno;
Execution Plan
----------------------------------------------------------
Plan hash value: 3204653704
------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          |   100 |  2200 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID | E        |    25 |   225 |     1   (0)| 00:00:01 |
|   2 |   NESTED LOOPS               |          |   100 |  2200 |     6   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | D        |     4 |    52 |     3   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | E_DEPTNO |    33 |       |     0   (0)| 00:00:01 |
------------------------------------------------------------------------------------------

B) Let's set some more artificial stats to see which plan gets used:

SQL> exec dbms_stats.set_table_stats(ownname => 'hr', tabname => 'E', numrows => 1000000, numblks => 10000, avgrlen => 124);
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.set_index_stats(ownname => 'hr', indname => 'E_DEPTNO', numrows => 1000000, numlblks => 1000);
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.set_table_stats(ownname => 'hr', tabname => 'D', numrows => 1000000,numblks => 10000 , avgrlen => 124);
PL/SQL procedure successfully completed.

Now we have 1,000,000 rows in both the E and D tables, and the index on E(DEPTNO) reflects the same.
The plan changes!
SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno;
Execution Plan
----------------------------------------------------------
Plan hash value: 51064926
-------------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  250G | 5122G |       | 3968K (100)| 13:13:45 |
|*  1 |  HASH JOIN          |      |  250G | 5122G |   20M | 3968K (100)| 13:13:45 |
|   2 |   TABLE ACCESS FULL | E    | 1000K | 8789K |       |  2246   (3)| 00:00:27 |
|   3 |   TABLE ACCESS FULL | D    | 1000K |   12M |       |  2227   (2)| 00:00:27 |
-------------------------------------------------------------------------------------

C) Now, to test the MERGE JOIN, we set a moderate number of rows and add an ORDER BY on the join column.
SQL> exec dbms_stats.set_table_stats(ownname => 'hr', tabname => 'E', numrows => 10000, numblks => 1000, avgrlen => 124);
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.set_index_stats(ownname => 'hr', indname => 'E_DEPTNO', numrows => 10000, numlblks => 100);
PL/SQL procedure successfully completed.
SQL> exec dbms_stats.set_table_stats(ownname => 'hr', tabname => 'D', numrows => 1000, numblks => 100, avgrlen => 124);
PL/SQL procedure successfully completed.
SQL> select e.ename,d.dname from e, d where e.deptno=d.deptno order by e.deptno;
Execution Plan
----------------------------------------------------------
Plan hash value: 915894881
-------------------------------------------------------------------------------------------
| Id  | Operation                    | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |          | 2500K |   52M |   167  (26)| 00:00:02 |
|   1 |  MERGE JOIN                  |          | 2500K |   52M |   167  (26)| 00:00:02 |
|   2 |   TABLE ACCESS BY INDEX ROWID| E        | 10000 | 90000 |   102   (1)| 00:00:02 |
|   3 |    INDEX FULL SCAN           | E_DEPTNO | 10000 |       |   100   (0)| 00:00:02 |
|*  4 |   SORT JOIN                  |          |  1000 | 13000 |    25   (4)| 00:00:01 |
|   5 |    TABLE ACCESS FULL         | D        |  1000 | 13000 |    24   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

 

Friday 4 May 2012

Invisible Indexes - 11g New Feature


Oracle 11g allows indexes to be marked as invisible. Invisible indexes are maintained like any other index, but they are ignored by the optimizer unless the OPTIMIZER_USE_INVISIBLE_INDEXES parameter is set to TRUE at the instance or session level. Indexes can be created as invisible by using the INVISIBLE keyword, and their visibility can be toggled using the ALTER INDEX command.

 1. Create a table t1 with 2 columns n1 and n2

Hint : Create table t1(n1 number,n2 number);

SQL> show parameter visible

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_use_invisible_indexes      boolean     FALSE
SQL> Create table t1(n1 number,n2 number);

Table created.



2. Populate Records

    Begin
          For i in 1..1000 loop
           Insert into t1 values(i,i);
          end loop;
   end;
/

SQL> Begin
  2    For i in 1..1000 loop
  3     Insert into t1 values(i,i);
  4    end loop;
  5  end;
  6  /

PL/SQL procedure successfully completed.

SQL>



3. Create an invisible index on column N1

Hint :-

SQL> create index t1_n1 on t1(n1) invisible;


4. Explain the query and check the plan:

SQL> explain plan for select count(*) from t1 where n1=:b1;



SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3724264953

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 |    13 |     2   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |      |     1 |    13 |            |          |
|*  2 |   TABLE ACCESS FULL| T1   |    10 |   130 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("N1"=TO_NUMBER(:B1))

Note
-----
   - dynamic sampling used for this statement (level=2)

18 rows selected.
As you can see above, the index is not used, because we created it in invisible mode.

5. Make the index visible:

SQL> alter index t1_n1 visible;


6. Define a bind variable B1:

sql> variable b1 number

sql> begin
        :b1:=5;
     end;
     /


7. SQL> explain plan for select count(*) from t1 where n1=:b1;

8. SQL> select * from table(dbms_xplan.display);
---------------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     1 |     5 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE    |       |     1 |     5 |            |          |
|*  2 |   INDEX RANGE SCAN | T1_N1 |     1 |     5 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
   2 - access("N1"=TO_NUMBER(:B1))



Running the explain plan once more shows the full output:

SQL> explain plan for select count(*) from t1 where n1=:b1;

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 73337487

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |     1 |    13 |     1   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |       |     1 |    13 |            |          |
|*  2 |   INDEX RANGE SCAN| T1_N1 |    10 |   130 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("N1"=TO_NUMBER(:B1))

Note
-----
   - dynamic sampling used for this statement (level=2)

18 rows selected.

SQL>

Benefits: invisible indexes can be useful for processes with specific indexing needs, where the presence of the indexes may adversely affect other functional areas. They are also useful for testing the impact of dropping an index.

View: the current visibility status of an index is indicated by the VISIBILITY column of the [DBA|ALL|USER]_INDEXES views.
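
For example, checking the demo index from above:

SQL> select index_name, visibility from user_indexes where index_name = 'T1_N1';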

Caveat: an invisible index is only invisible to the optimizer when it builds query plans; it is still maintained normally by DML, so it continues to add index-maintenance overhead.


Thursday 3 May 2012

Row Chaining and Row Migration - Performance Tuning

Concept: the data for a row in a table may be too large to fit into a single data block. There are two circumstances in which this can occur: row chaining and row migration.

Row Chaining: Occurs when the row is too large to fit into one data block when it is first inserted. In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.

Row Migration: Occurs when a row that originally fitted into one data block is updated so that the overall row length increases, and the block’s free space is already completely filled. In this case, Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new block. Oracle preserves the original row piece of a migrated row to point to the new block containing the migrated row: the rowid of a migrated row does not change.
When a row is chained or migrated, performance associated with this row decreases because Oracle must scan more than one data block to retrieve the information for that row.

o INSERT and UPDATE statements that cause migration and chaining perform poorly, because they perform additional processing.

o SELECTs that use an index to select migrated or chained rows must perform additional I/Os.

Detection: migrated and chained rows in a table or cluster can be identified by using the ANALYZE command with the LIST CHAINED ROWS option. This command collects information about each migrated or chained row and places it into a specified output table. To create the table that holds the chained rows:

1. Execute the script UTLCHAIN.SQL, i.e. @?/rdbms/admin/utlchain.sql --> creates the CHAINED_ROWS table, which will hold the data after the analyze.

2. ANALYZE TABLE scott.emp LIST CHAINED ROWS; --> analyze the table.

3. SELECT * FROM chained_rows; --> CHAINED_ROWS is now populated with data about row chaining and migration.

4. You can also detect migrated and chained rows by checking the 'table fetch continued row' statistic in the V$SYSSTAT view:

SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

NAME                       VALUE
-------------------------- ------
table fetch continued row    308

Although migration and chaining are two different things, internally they are represented by Oracle as one. When detecting migration and chaining of rows, you should analyze carefully what you are dealing with.

Resolving:
o In most cases chaining is unavoidable, especially when it involves tables with large columns such as LONGs, LOBs, etc. When you have a lot of chained rows in different tables and the average row length of these tables is not that large, you might consider rebuilding the database with a larger block size.

e.g.: you have a database with a 2K block size, and different tables have multiple large VARCHAR columns with an average row length of more than 2K. This means you will have a lot of chained rows because your block size is too small. Rebuilding the database with a larger block size can give you a significant performance benefit.

o Migration is caused by PCTFREE being set too low: there is not enough room in the block for updates. To avoid migration, all tables that are updated should have PCTFREE set so that there is enough space within the block for updates. Increase PCTFREE to avoid migrated rows; if you leave more free space available in the block, the row will have more room to grow.
SQL script to eliminate row migration:

-- Get the name of the table with migrated rows:
ACCEPT table_name PROMPT 'Enter the name of the table with migrated rows: '

-- Clean up from last execution
set echo off
DROP TABLE migrated_rows;
DROP TABLE chained_rows;

-- Create the CHAINED_ROWS table
@?/rdbms/admin/utlchain.sql
set echo on
spool fix_mig

-- List the chained and migrated rows
ANALYZE TABLE &table_name LIST CHAINED ROWS;

-- Copy the chained/migrated rows to another table
create table migrated_rows as
SELECT orig.*
FROM &table_name orig, chained_rows cr
WHERE orig.rowid = cr.head_rowid
AND cr.table_name = upper('&table_name');

-- Delete the chained/migrated rows from the original table
DELETE FROM &table_name WHERE rowid IN (SELECT head_rowid FROM chained_rows);

-- Copy the chained/migrated rows back into the original table
INSERT INTO &table_name SELECT * FROM migrated_rows;

spool off

spool off

Tips

1. Analyze the table and check the chained count for that particular table (here: 8671 chained rows):

analyze table tbl_tmp_transaction_details compute statistics;

select table_name,chain_cnt,pct_free,pct_used from dba_tables where table_name='TBL_TMP_TRANSACTION_DETAILS';

2. Increase PCTFREE to 30:

alter table tbl_tmp_transaction_details pctfree 30;

3. Regenerate the report (chained rows appear only when rows get updated):

tbl_report_generation_status

begin dbms_job.run(190); end;
/

4. Analyze the table again and check the chained count (now 0):

analyze table tbl_tmp_transaction_details compute statistics;

select table_name,chain_cnt,pct_free,pct_used from dba_tables where table_name='TBL_TMP_TRANSACTION_DETAILS';

Note:
If we want to follow the procedure of deleting the chained rows from the original table and re-inserting them, we need the CHAINED_ROWS table.
To create it, run utlchain.sql from $ORACLE_HOME/rdbms/admin.

Find the chained rows:

analyze table tbl_tmp_transaction_details list chained rows;

The above command records the chained rows in the CHAINED_ROWS table.
Based on the rowids in CHAINED_ROWS, we can move those records to a temp table, delete them from the original table, and then insert them back into the original table.

select * from tbl_tmp_transaction_details where rowid='AAAG8DAAGAAAGOKABD';


Example:

SQL> Create table frag_tab(code number,x1 char(2000),x2 char(2000),
  2                         x3 char(2000),x4 char(2000));

Table created.

SQL> Insert into frag_tab(code) values(1);
       Insert into frag_tab(code) values(2);
       Insert into frag_tab(code) values(3);
       commit;
1 row created.

SQL>
1 row created.

SQL>
1 row created.

SQL>

Commit complete.

SQL>
SQL>
SQL>
SQL> update frag_tab set x1='x1',x2='x2',x3='x3',x4='x4' where code=2;
   update frag_tab set x1='x1',x2='x2',x3='x3',x4='x4' where code=1;
   update frag_tab set x1='x1',x2='x2',x3='x3',x4='x4' where code=3;
commit;

1 row updated.

SQL>
1 row updated.

SQL>
1 row updated.

SQL>
Commit complete.

SQL>
SQL>
SQL>
SQL>
SQL> @?/rdbms/admin/utlchain.sql

Table created.

SQL> Analyze table frag_tab list chained rows;

Table analyzed.

SQL> select * from chained_rows;

OWNER_NAME  TABLE_NAME  CLUSTER_NAME  PARTITION_NAME  SUBPARTITION_NAME  HEAD_ROWID          ANALYZE_T
----------  ----------  ------------  --------------  -----------------  ------------------  ---------
SYS         FRAG_TAB                                  N/A                AAASOtAABAAAU9ZAAA  03-MAY-12
SYS         FRAG_TAB                                  N/A                AAASOtAABAAAU9ZAAB  03-MAY-12
SYS         FRAG_TAB                                  N/A                AAASOtAABAAAU9ZAAC  03-MAY-12





SQL> Create table duptab as select * from frag_tab where 1=0;

Table created.

SQL> Insert into duptab select * from frag_tab
  2   where rowid in(select head_rowid from chained_rows);

3 rows created.

SQL> delete frag_tab where rowid in(select head_rowid from chained_rows);

3 rows deleted.

SQL> Insert into frag_tab
  2                 as select * from duptab
  3  /
                   as select * from duptab
                   *
ERROR at line 2:
ORA-00926: missing VALUES keyword


SQL> Insert into frag_tab
  2                  select * from duptab
  3  /

3 rows created.

SQL>  commit;

Commit complete.

SQL> truncate table chained_rows;

Table truncated.

SQL> Create tablespace bigtbs
  2           datafile '/home/oracle/bigtbs.dbf' size 10m
  3           blocksize 16k;
Create tablespace bigtbs
*
ERROR at line 1:
ORA-29339: tablespace block size 16384 does not match configured block sizes


SQL> ALter system set db_16k_cache_size=10m;

System altered.

SQL> Create tablespace bigtbs
  2           datafile '/home/oracle/bigtbs.dbf' size 10m
  3           blocksize 16k;
Create tablespace bigtbs
*
ERROR at line 1:
ORA-01119: error in creating database file '/home/oracle/bigtbs.dbf'
ORA-27040: file create error, unable to create file
Linux Error: 2: No such file or directory


SQL> l
  1  Create tablespace bigtbs
  2           datafile '/home/oracle/bigtbs.dbf' size 10m
  3*          blocksize 16k
SQL> Create tablespace bigtbs
  2           datafile '/tmp/bigtbs.dbf' size 10m
  3           blocksize 16k;
Tablespace created.

SQL> Alter table frag_tab move tablespace bigtbs;

Table altered.

SQL> Analyze table frag_tab list chained rows;

Table analyzed.

SQL> select * from chained_rows
  2  /

no rows selected

SQL>

Wednesday 2 May 2012

Incremental Statistics Gathering Feature - 11g

Expensive global statistics collection


In a data warehouse environment it is very common to do a bulk load directly into one or more empty partitions. This will make the partition statistics stale and may also make the global statistics stale. Re-gathering statistics for the affected partitions and for the entire table can be very time consuming. Traditionally, statistics collection is done in a two-pass approach:
  • In the first pass we scan the table to gather the global statistics.
  • In the second pass we scan the partitions that have changed to gather their partition-level statistics.
The full scan of the table for global statistics collection can be very expensive depending on the size of the table. Note that the scan of the entire table is done even if we change a small subset of partitions.


In Oracle Database 11g, we avoid scanning the whole table when computing global statistics by deriving the global statistics from the partition statistics. Some of the statistics can be derived easily and accurately from partition statistics. For example, the number of rows at the global level is the sum of the numbers of rows of the partitions. Even a global histogram can be derived from partition histograms. But the number of distinct values (NDV) of a column cannot be derived from partition-level NDVs. So, Oracle maintains another structure called a synopsis for each column at the partition level. A synopsis can be considered a sample of distinct values. The NDV can be accurately derived from synopses, and multiple synopses can be merged into one. The global NDV is derived from the synopsis generated by merging all of the partition-level synopses. To summarize:

  1. Gather statistics and create synopses for the changed partitions only
  2. Oracle automatically merges partition level synopses into a global synopsis
  3. The global statistics are automatically derived from the partition level statistics and global synopses


The incremental maintenance feature is disabled by default. It can be enabled by changing the INCREMENTAL table preference to TRUE; it can also be enabled for a particular schema or at the database level.

Assume we have a table called SALES that is range partitioned by day on the SALES_DATE column. At the end of every day, data is loaded into the latest partition and partition statistics are gathered. Global statistics are only gathered at the end of every month, because gathering them is very time- and resource-intensive. Use the following steps to maintain global statistics after every load.
1 - Turn on the incremental feature for the table:

EXEC DBMS_STATS.SET_TABLE_PREFS('SH','SALES','INCREMENTAL','TRUE');

2 - At the end of every load, gather table statistics using the GATHER_TABLE_STATS command. You don't need to specify the partition name; also, do not specify the granularity parameter. The command will collect statistics for partitions where data has changed or statistics are missing, and update the global statistics based on the partition-level statistics and synopses.
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH','SALES');

Note that the incremental maintenance feature was introduced in Oracle Database 11g Release 1. However, Oracle also provides a solution in Oracle Database 10g Release 2 (10.2.0.4) that simulates the same behavior: a new value, 'APPROX_GLOBAL AND PARTITION', for the GRANULARITY parameter of the GATHER_TABLE_STATS procedures. It behaves the same as the incremental maintenance feature, except that the NDV for non-partitioning columns and the number of distinct keys of the index are not updated at the global level. For the partitioning column the NDV is updated as the sum of the NDVs at the partition level, and the NDV of columns of unique indexes is set to the number of rows of the table. In general, non-partitioning column NDVs at the global level become stale less often, so it may be possible to collect global statistics less frequently than the default (when the table changes 10%), since the APPROX_GLOBAL option maintains most of the global statistics accurately.

Let's take a look at an example of how you would effectively use the Oracle Database 10g approach.
After the data load is complete, gather statistics using DBMS_STATS.GATHER_TABLE_STATS for the last partition (say SALES_11FEB2009), specifying GRANULARITY => 'APPROX_GLOBAL AND PARTITION'. It will collect statistics for the specified partition and derive the global statistics from the partition statistics (except for NDV, as described before).

EXEC DBMS_STATS.GATHER_TABLE_STATS ('SH', 'SALES', 'SALES_11FEB2009', GRANULARITY => 'APPROX_GLOBAL AND PARTITION');
It is necessary to install the one-off patch for bug 8719831 if you are using the above features in 10.2.0.4 (patch 8877245) or in 11.1.0.7 (patch 8877251).


Let's take the ORDERS2 table, which is partitioned by month on ORDER_DATE. We will begin by enabling incremental statistics for the table and gathering statistics on it.
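
In commands, enabling the incremental preference and gathering statistics for ORDERS2 would look like this (SH schema assumed):

SQL> exec dbms_stats.set_table_prefs('SH','ORDERS2','INCREMENTAL','TRUE');
SQL> exec dbms_stats.gather_table_stats('SH','ORDERS2');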


After the statistics gather, the LAST_ANALYZED date for the table and all of its partitions shows 13-Mar-12.
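
This can be checked with a query along these lines (a sketch):

SQL> select partition_name, last_analyzed
  2  from dba_tab_partitions
  3  where table_name = 'ORDERS2'
  4  order by partition_position;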

And we now have the following column statistics for the ORDERS2 table.

We can also confirm that we really did use incremental statistics by querying the dictionary table sys.HIST_HEAD$, which should have an entry for each column in the ORDERS2 table.

So, now that we have established a good baseline, let's move on to the DML. Information is loaded into the latest partition of the ORDERS2 table once a month. Existing orders may also be updated to reflect changes in their status. Let's assume the following transactions take place on the ORDERS2 table this month.

After these transactions have occurred, we need to re-gather statistics, since the partition ORDERS_MAR_2012 now has rows in it and the number of distinct values and the maximum value of the STATUS column have also changed.

Now if we look at the LAST_ANALYZED date for the table and the partitions, we see that the global statistics, and the statistics on the partitions where rows changed due to the update (ORDERS_FEB_2012) and the data load (ORDERS_MAR_2012), have been updated.

The column statistics also reflect the changes, with the number of distinct values in the STATUS column increasing to reflect the update.



Note : Above info is referenced from Oracle blogs and Oracle Manuals.

Sunday 15 April 2012

Recreate Targets.xml File for Grid Control agent version : 10.2

Sometimes, when your targets.xml is corrupted or accidentally deleted, you need to recreate it manually. The steps are as follows:

1. Create a blank targets.xml file :
cd $ORACLE_HOME/sysman/emd
touch targets.xml

2. After creating the targets.xml file, copy the values from $ORACLE_HOME/sysman/config/emd.properties in the format below:

<Targets AGENT_SEED="agentseed from emd.properties">
               <Target TYPE="oracle_emd" NAME="hostname and port from emd.properties EMD_URL"/>
               <Target TYPE="host" NAME="hostname as it appears in the EMD_URL"/>
</Targets>

Example :

<Targets AGENT_SEED="256896864">
<Target TYPE="oracle_emd" NAME="hostname:3872"/>
<Target TYPE="host" NAME="hostname"/>
</Targets>

The first time you start the agent with the above file, it will convert the AGENT_SEED parameter to AGENT_TOKEN, and the value is stored in encrypted form.

3. Force target rediscovery using the method below:

Run agentca -d:

cd <AGENT_HOME>/bin
agentca -d
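
Then restart the agent and confirm its status with the standard emctl commands:

./emctl start agent
./emctl status agent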