Thursday, 17 April 2014

Initial Load - Instantiation - Oracle GoldenGate using Data Pump



Example 1: full database export using expdp

Enable Minimal Supplemental Logging in Oracle on source

SQLPLUS> alter database add supplemental log data ;

Enable Supplemental Logging at Table Level on source

GGSCI> dblogin userid xxxxx password xxxxxx
GGSCI> add trandata <schema>.<tablename>

Add Extract, Add Exttrail, Add Pump, Add Rmttrail on source

Start Extract, Start Pump on source
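
For concreteness, a minimal GGSCI sketch of these two steps might look like the following, where the group names (ext1, pump1) and trail prefixes (./dirdat/lt, ./dirdat/rt) are placeholders. (The pump's parameter file must also name the target with RMTHOST before the remote trail can be written.)

GGSCI> add extract ext1, tranlog, begin now
GGSCI> add exttrail ./dirdat/lt, extract ext1
GGSCI> add extract pump1, exttrailsource ./dirdat/lt
GGSCI> add rmttrail ./dirdat/rt, extract pump1
GGSCI> start extract ext1
GGSCI> start extract pump1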

Create a database directory:
SQLPLUS> create directory dumpdir as '<some directory>' ;
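
If the export will later run as a non-DBA user, read/write on the directory must be granted explicitly; the username below is a placeholder:

SQLPLUS> grant read, write on directory dumpdir to <username> ;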

Get the current SCN on the source database

SQLPLUS> select current_scn from v$database ;

CURRENT_SCN
-----------
   28318029


Run the Data Pump export using the flashback SCN you obtained in the previous step. The following example runs the expdp utility at a degree of parallelism (DOP) of 4. If you have sufficient system resources (CPU, memory and I/O), running at a higher DOP will reduce the export time (up to 4x for a DOP of 4). Note that expdp uses the Oracle Database parallel execution settings (e.g. parallel_max_servers), which must be set appropriately to take advantage of parallelism, and that other processes running in parallel may compete for those resources. See the Oracle documentation for more details.

>expdp directory=dumpdir full=y parallel=4 dumpfile=ora102_%u.dmp flashback_scn=28318029
Username: system
Password:

Note: any DB user with DBA privileges will do.

Note: The export log needs to be checked for errors.

Start an import into the target database using impdp when the export step is complete.
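A minimal sketch of that import, assuming the dump files have been copied to the target and a matching dumpdir directory object exists there:

>impdp directory=dumpdir full=y parallel=4 dumpfile=ora102_%u.dmp
Username: system
Password:
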
Add and Start Replicat after import is complete:

GGSCI> add replicat <repname>, exttrail ./dirdat/<xx>
GGSCI> start replicat <repname>, aftercsn <SCN value returned from the query above>
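
For example, with the SCN captured above and a hypothetical Replicat name rep1 reading the remote trail from the earlier sketch:

GGSCI> add replicat rep1, exttrail ./dirdat/rt
GGSCI> start replicat rep1, aftercsn 28318029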

Example 2: schema export using exp

Enable Minimal Supplemental Logging in Oracle on source

SQLPLUS> alter database add supplemental log data ;
Enable Supplemental Logging at Table Level on source

GGSCI> dblogin userid xxxxx password xxxxxx
GGSCI> add trandata <schema>.<tablename>


Add Extract, Add ExtTrail File, Add Pump, Add RmtTrail File on source

Start Extract, Start Pump on source

Get the current SCN on the source database:

SQLPLUS> select current_scn from v$database ;
CURRENT_SCN
-----------
28318029

Get a flashback snapshot from the SCN that you obtained in the previous step. You can do this with the exp or expdp utility. The following example shows the use of the export utility (exp):

>exp <username>/<password> owner=<schema_name> grants=n statistics=none triggers=n compress=n FLASHBACK_SCN=28318029

Note: UNDO_RETENTION has to be set high enough, and the export log needs to be checked for errors. You can speed up exp by running multiple sessions in parallel, but you have to manually configure subsets of the data you want to export (e.g. different tables for different export files). Make sure to use the same FLASHBACK_SCN for all export sessions to ensure a read-consistent export.
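
As a sketch, two parallel sessions over disjoint table sets (the table names here are hypothetical), run from two terminals and pinned to the same SCN, might look like:

>exp <username>/<password> tables=<schema>.tab1,<schema>.tab2 file=exp1.dmp FLASHBACK_SCN=28318029
>exp <username>/<password> tables=<schema>.tab3,<schema>.tab4 file=exp2.dmp FLASHBACK_SCN=28318029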

Start an import into the target database when the export is complete.
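
A minimal imp sketch on the target, assuming the default dump file name expdat.dmp (no FILE parameter was given to exp above) has been transferred there:

>imp <username>/<password> fromuser=<schema_name> touser=<schema_name> file=expdat.dmp ignore=y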

Add and Start Replicat:

GGSCI> add replicat <repname>, exttrail ./dirdat/<xx>
GGSCI> start replicat <repname>, aftercsn <SCN value returned from query on source>

Add tables to an Existing GoldenGate Configuration - Oracle GoldenGate




1.     Log into the source database via GGSCI

[oracle@blue ggate]$ ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.13 17435036 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_131002.1206_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Oct  3 2013 02:39:46

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.



GGSCI (blue.color) 2> dblogin userid ggate, password ggate
Successfully logged into database.

2.     Add trandata for the new table(s).

GGSCI (blue.color) 3> add trandata song.stone

2013-12-05 15:17:07  WARNING OGG-00869  No unique key is defined for table 'STONE'. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.

Logging of supplemental redo data enabled for table SONG.STONE.

GGSCI (blue.color) 4> add trandata song.greystreet

2013-12-05 15:17:15  WARNING OGG-00869  No unique key is defined for table 'GREYSTREET'. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.

Logging of supplemental redo data enabled for table SONG.GREYSTREET.
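
The OGG-00869 warnings above are expected because neither table has a primary or unique key. If a reliable pseudo-key exists, it can be declared with KEYCOLS in the TABLE and MAP statements; the column name id below is hypothetical:

TABLE song.stone, KEYCOLS (id);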

3.     Add the two tables to the Extract parameter file

GGSCI (blue.color) 5> edit params e_gg
...
TABLE song.stone;
TABLE song.greystreet;
...

4.     Restart the Extract e_gg

GGSCI (blue.color) 2> stop e_gg

Sending STOP request to EXTRACT E_GG ...
Request processed.


GGSCI (blue.color) 3> start e_gg

Sending START request to MANAGER ...
EXTRACT E_GG starting

5.     At this point, capture the current SCN from the source database. Do not lose this number; we need it later!

[oracle@blue ggate]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Thu Dec 5 15:28:31 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    1113396

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

6.     Take a Data Pump export of the two tables. Please note the use of FLASHBACK_SCN.

[oracle@blue ggate]$ expdp system/oracle directory=data_pump_dir dumpfile=additional_tables.dmp flashback_scn=1113396 tables=song.stone,song.greystreet

Export: Release 11.2.0.3.0 - Production on Thu Dec 5 15:32:04 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** directory=data_pump_dir dumpfile=additional_tables.dmp flashback_scn=1113396 tables=song.stone,song.greystreet
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
. . exported "SONG"."GREYSTREET"                         5.882 KB     100 rows
. . exported "SONG"."STONE"                              5.679 KB     100 rows
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/app/oracle/admin/sky/dpdump/additional_tables.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at 15:32:14

7.     Run the import on the Target database.

[oracle@blue ggate]$ impdp system/oracle directory=data_pump_dir dumpfile=additional_tables.dmp remap_schema=song:river

Import: Release 11.2.0.3.0 - Production on Thu Dec 5 15:45:29 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01":  system/******** directory=data_pump_dir dumpfile=additional_tables.dmp remap_schema=song:river
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "RIVER"."GREYSTREET"                        5.882 KB     100 rows
. . imported "RIVER"."STONE"                             5.679 KB     100 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Job "SYSTEM"."SYS_IMPORT_FULL_01" successfully completed at 15:45:31

8.     Here’s the important step: edit the Replicat’s parameter file and add the following FILTER clause to the MAP statements for the new tables. The @GETENV("TRANSACTION", "CSN") filter skips any transaction with a commit SCN at or below the export SCN, since those rows are already in the dump.

GGSCI (blue.color) 2> edit params r_gg
...
MAP song.stone, TARGET river.stone , FILTER ( @GETENV("TRANSACTION", "CSN") > 1113396);
MAP song.greystreet, TARGET river.greystreet, FILTER ( @GETENV("TRANSACTION", "CSN") > 1113396);
...

9.     Recycle the replicat.

GGSCI (blue.color) 4>  stop r_gg

Sending STOP request to REPLICAT R_GG ...
Request processed.


GGSCI (blue.color) 5> start r_gg

Sending START request to MANAGER ...
REPLICAT R_GG starting

GGSCI (blue.color) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     E_GG        00:00:00      00:00:09   
REPLICAT    RUNNING     R_GG        00:00:00      00:00:03   

10. Check the counts on both systems.

10.1. On the Source

SQL> select name from v$database;

NAME
---------
SKY

SQL> select count(1) from song.stone;

  COUNT(1)
----------
       100

10.2. On the Target

SQL> select name from v$database;

NAME
---------
WATER

SQL> select count(1) from river.stone;

  COUNT(1)
----------
       100

10.3. Insert data on the source.

SQL> INSERT INTO song.stone
  2  SELECT dbms_random.string('A', 2) RNDMSTR
  3    FROM all_objects
  4   WHERE ROWNUM < 101;

100 rows created.

SQL> commit;

Commit complete.

SQL> select count(1) from song.stone;

  COUNT(1)
----------
       200

10.4. Check the counts on the target table.

SQL> select count(1) from river.stone;

  COUNT(1)
----------
       200

10.5.  Check lag on the replicat.

GGSCI (blue.color) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                          
EXTRACT     RUNNING     E_GG        00:00:00      00:00:02   
REPLICAT    RUNNING     R_GG        00:00:00      00:00:08   

11.     Once the tables are caught up, remove the FILTER clauses from the replicat parameter file.

GGSCI (blue.color) 2> edit params r_gg
...
MAP song.stone, TARGET river.stone;
MAP song.greystreet, TARGET river.greystreet;
...

12.     Restart the replicat and we’re done!

GGSCI (blue.color) 4>  stop r_gg

Sending STOP request to REPLICAT R_GG ...
Request processed.


GGSCI (blue.color) 5> start r_gg

Sending START request to MANAGER ...
REPLICAT R_GG starting

GGSCI (blue.color) 6> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                          
EXTRACT     RUNNING     E_GG        00:00:00      00:00:06   
REPLICAT    RUNNING     R_GG        00:00:00      00:00:02   

Resynchronizing an out-of-sync table - Oracle GoldenGate



In this procedure, there are two Replicat groups:

● The original group, known as groupA
● A new group that you will create, known as groupB

To resynchronize a table

1. Comment out the out-of-sync table in the original Replicat parameter file.
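
GoldenGate parameter files treat lines starting with “--” as comments, so this step is just the following (placeholder names below):

GGSCI> edit params <groupA>
...
-- MAP <schema>.<out_of_sync_table>, TARGET <schema>.<out_of_sync_table>;
...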

2. Stop the original Replicat and start it again so it continues processing the tables that are not out of sync.

STOP REPLICAT <groupA>
START REPLICAT <groupA>

3. Write down the timestamp on the source system.

4. Try to resolve any long-running transactions on the source out-of-sync table. You will be copying the source data to the target, and if transactions are open while the copy is being made, they will not be captured.
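
A query along these lines (joining v$transaction to v$session) can help spot open transactions on the source:

SQLPLUS> select s.sid, s.username, t.start_time
           from v$transaction t, v$session s
          where t.ses_addr = s.saddr ;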

5. Start a copy of the source data.

6. Import the copy to the target table.
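
A hedged Data Pump sketch of steps 5 and 6, assuming the dump file is moved to the target's data_pump_dir in between; TABLE_EXISTS_ACTION=TRUNCATE replaces the stale target rows:

>expdp system/<password> directory=data_pump_dir dumpfile=resync.dmp tables=<schema>.<tablename>
>impdp system/<password> directory=data_pump_dir dumpfile=resync.dmp table_exists_action=truncate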

7. Create a new Replicat group for the out-of-sync table, using BEGIN to start at the source timestamp that you recorded in step 3, and using the existing trail for EXTTRAIL.

ADD REPLICAT <groupB>, EXTTRAIL <existing trail>, BEGIN <yyyy-mm-dd hh:mm:ss>

8. Create a parameter file for the new group and include HANDLECOLLISIONS.

EDIT PARAMS <groupB>
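
A minimal parameter file for the new group might look like the following; the login is a placeholder, and ASSUMETARGETDEFS assumes identical source and target table structures:

REPLICAT <groupB>
USERID ggate, PASSWORD ggate
ASSUMETARGETDEFS
HANDLECOLLISIONS
MAP <schema>.<out_of_sync_table>, TARGET <schema>.<out_of_sync_table>;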

9. Start the new Replicat group.

START REPLICAT <groupB>

10. View the lag of the new group until it shows “At EOF, no more records to process.”

SEND REPLICAT <groupB>, GETLAG

11. Stop the new Replicat group.

STOP REPLICAT <groupB>

12. Edit the parameter file of the new group to comment out or remove HANDLECOLLISIONS.

13. Start the new Replicat group.

START REPLICAT <groupB>

The next steps merge the table back with the others so that only one Replicat group is needed again.

14. Stop Extract.

STOP EXTRACT <extract_group>

15. View the lag of each Replicat group until it shows “At EOF, no more records to process.”

SEND REPLICAT <groupA>, GETLAG
SEND REPLICAT <groupB>, GETLAG

16. Stop both Replicat groups.

STOP REPLICAT <groupA>
STOP REPLICAT <groupB>

17. Uncomment the newly synchronized table in the original Replicat parameter file.

18. Start Extract.

START EXTRACT <extract_group>

19. Start the original Replicat.

START REPLICAT <groupA>

20. Delete the new Replicat group that you created.

DELETE REPLICAT <groupB>

Note: alternatively, we can find the CURRENT_SCN on the source database, take the export using the FLASHBACK_SCN parameter, copy the tables to the target, and then start the new Replicat group with AFTERCSN instead of a timestamp.