Migrating Data Using Transportable Tablespace (TTS)


Introduction

There are various ways of migrating data in Oracle 10g. Data Pump, moving it over a database link, RMAN duplicate, and transportable tablespaces are the most common ones. Based on my experience, I would classify them as follows:

Method           Difficulty   Best Suited For      Availability
Database Link    Easy         Less than 50 GB      High
Data Pump        Moderate     Over 50 GB           High
TTS              Moderate     Terabytes of data    Low
RMAN Duplicate   Hard         Full database        High

In this post I will explain how to do the job with TTS. Although the table lists TTS as best suited for terabytes of data, it is also the best option when system resources on the source are limited. That is because the other techniques (except the database link) require the source database to produce a processed copy of the original data, which costs CPU, disk, and so on, whereas TTS needs only a plain file system copy of the datafiles.

Preparing the Data Before the Move (Optional)

Preparing the data before moving it is important in two senses:

  1. Usually you do not need to move all of the data, so filtering and reducing it before the move will cut your transfer time.
  2. One drawback of TTS is that the tablespaces being moved must be placed in read-only mode for the duration of the move. If you copy the required subset of the data into a separate tablespace, only that copy needs to go read-only, which keeps the master data source available.

SQL> CREATE TABLESPACE STAGING DATAFILE
'/data08/asndb/staging1.dbf' SIZE 4G AUTOEXTEND ON NEXT 4G MAXSIZE 16G,
'/data08/asndb/staging2.dbf' SIZE 4G AUTOEXTEND ON NEXT 4G MAXSIZE 16G,
'/data08/asndb/staging3.dbf' SIZE 4G AUTOEXTEND ON NEXT 4G MAXSIZE 16G
BLOCKSIZE 8192 NOLOGGING ONLINE EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO FLASHBACK OFF;

SQL> CREATE TABLE STAGING_TAB PCTFREE 1 PARALLEL 3 TABLESPACE STAGING AS
SELECT <list of required columns> FROM ASN_USER.SMSC_LOG WHERE <filter predicate>;

In our case this filtering reduced the 330 GB of total data to 35 GB of migration data. Remember that filtering at the source is a best practice for any migration activity.
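To sanity-check how much data actually landed in the staging tablespace before moving it, a quick dictionary query helps (a minimal sketch using the standard DBA_SEGMENTS view; the tablespace name is the one created above):

SQL> SELECT ROUND(SUM(bytes)/1024/1024/1024, 1) AS size_gb
  2  FROM dba_segments
  3  WHERE tablespace_name = 'STAGING';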

TODO List on Source Database

There are a number of steps to be performed on the source side:

  1. Validating that the migration tablespace set is self-contained.
  2. Altering the migration tablespace to read-only mode.
  3. Exporting the metadata.

Validating the Self-Contained Property

A transportable tablespace set must be self-contained. This means that segments within the migration tablespace set cannot depend on a segment in a tablespace outside the transportable set. Although it sounds like a complex task to verify, it is handled by a single procedure in the DBMS_TTS package.

SQL> begin
  2    sys.dbms_tts.transport_set_check('STAGING', TRUE);
  3  end;
  4  /

PL/SQL procedure successfully completed.

SQL> select * from sys.transport_set_violations;

no rows selected

As you can see, the STAGING tablespace is self-contained. If it were not, you would either remove the offending dependencies (for example by dropping them) or add the tablespaces holding the referenced segments to the transportable set, as sketched below.
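TRANSPORT_SET_CHECK accepts a comma-separated list of tablespace names, so a dependent tablespace can simply be added to the checked set. A sketch (STAGING_IDX is a hypothetical index tablespace, not part of this example):

SQL> begin
  2    -- check both tablespaces as a single transportable set
  3    sys.dbms_tts.transport_set_check('STAGING,STAGING_IDX', TRUE);
  4  end;
  5  /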

ALTER Tablespaces to READ ONLY Mode

The reason for this requirement becomes obvious in the next steps, where we perform a file system copy of the tablespace datafiles. For those datafiles not to require recovery afterwards, they must remain consistent throughout the copy; in other words, redo generation against them has to stop.

SQL> alter tablespace STAGING read only;
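A quick sanity check against the data dictionary confirms the status change (standard DBA_TABLESPACES view; STATUS should now report READ ONLY):

SQL> SELECT tablespace_name, status
  2  FROM dba_tablespaces
  3  WHERE tablespace_name = 'STAGING';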

Export the Metadata

The final step on the source side is to take the metadata export. For this purpose you need a directory object; you can use an existing one such as the built-in ADMIN_DIR or create a new one.
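If you do create a new one, it is a one-liner; note that the path below simply matches where this example's files live, and the exporting user needs privileges on the directory:

SQL> CREATE OR REPLACE DIRECTORY ADMIN_DIR AS '/data08/asndb';
SQL> GRANT READ, WRITE ON DIRECTORY ADMIN_DIR TO temp_usr;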

bash-2.05$ expdp userid=temp_usr/****** directory=ADMIN_DIR dumpfile=staging_asn.dmp logfile=ADMIN_DIR:asn.log transport_tablespaces=staging transport_full_check=y

Export: Release 10.2.0.1.0 - 64bit Production on Friday, 11 July, 2008 13:02:42

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Starting "TEMP_USR"."SYS_EXPORT_TRANSPORTABLE_01":  userid=temp_usr/******** directory=ADMIN_DIR dumpfile=staging_asn.dmp logfile=ADMIN_DIR:asn.log transport_tablespaces=staging transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "TEMP_USR"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TEMP_USR.SYS_EXPORT_TRANSPORTABLE_01 is:
/data08/asndb/staging_asn.dmp
Job "TEMP_USR"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 13:03:29

TODO List on Target Database

The task list for the target database is simple:

  1. Copy the datafiles and the export file to the target server via FTP.
  2. Create the necessary schemas.
  3. Modify the db_n_cache_size parameter if necessary.
  4. Import the export file.

Copy the Datafiles via FTP

The very first step is to copy the datafiles and the metadata export file to the target server.

himalaya@oracle $ cd /export/home/oracle/hsensoy/datafile

himalaya@oracle $ ftp 10.x.x.x

Name (10.x.x.x:oracle): root

Password:

ftp> cd /data08/asndb

ftp> ls

asn.log
dump.log
dump.par
nohup.out
staging1.dbf
staging2.dbf
staging3.dbf
staging_asn.dmp


ftp> binary

ftp> get staging_asn.dmp

ftp> prompt
ftp> mget *.dbf

ftp> quit
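Because a truncated or corrupted transfer only shows up later as an import or recovery error, it is worth comparing checksums on both sides before going on (a minimal check, assuming the standard cksum utility exists on both servers):

himalaya@oracle $ cksum staging1.dbf staging2.dbf staging3.dbf staging_asn.dmp

Run the same command on the source server and compare the outputs.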

Create Required Schemas

Remember that the metadata export was taken at the tablespace level. When you import it into the new database, Data Pump will look for the schemas that own the transported objects, so you must create those schemas in the target database first.

SQL> CREATE USER TEMP_USR PROFILE DEFAULT IDENTIFIED BY ******
2 DEFAULT TABLESPACE USERS_BTS
3 TEMPORARY TABLESPACE PRIM_TMP_GRP
4 QUOTA UNLIMITED ON USERS_BTS QUOTA UNLIMITED ON STAGING_BTS
5 ACCOUNT UNLOCK;

User created.

Modify the db_n_cache_size (Optional)

If the source and target databases have different database block sizes, Oracle will not let you plug the tablespace in until a buffer cache exists for the foreign block size.
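You can spot the mismatch up front by checking the block size of the transported tablespace on the source (standard DBA_TABLESPACES view; the name is from this example):

SQL> SELECT tablespace_name, block_size
  2  FROM dba_tablespaces
  3  WHERE tablespace_name = 'STAGING';

Attempting the import on a target with no buffer cache configured for that block size fails: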

himalaya@oracle $ impdp userid=ddsbase/****** parfile=import.par

Import: Release 10.2.0.3.0 - 64bit Production on Friday, 11 July, 2008 14:38:56

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_01":  userid=ddsbase/******** parfile=import.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
ORA-39123: Data Pump transportable tablespace job aborted
ORA-29339: tablespace block size 8192 does not match configured block sizes

Job "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_01" stopped due to fatal error at 14:38:59

Fortunately, Oracle allows tablespaces with different block sizes to coexist in the same database. The only thing to do is to explicitly size a buffer cache for each non-default block size via the db_n_cache_size parameters:

himalaya@oracle $ srv

SQL*Plus: Release 10.2.0.3.0 - Production on Fri Jul 11 14:40:59 2008

Copyright (c) 1982, 2006, Oracle.  All Rights Reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set db_8k_cache_size=16M;

System altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

Import the Metadata

The final step on the target side is to import the tablespace metadata. For this purpose you can simply create a Data Pump parameter file using your favorite text editor.

himalaya@oracle $ vi import.par

directory=ADMIN_DIR
dumpfile=staging_asn.dmp
logfile=ADMIN_DIR:asn.log
transport_datafiles='/export/home/oracle/hsensoy/datafile/staging1.dbf','/export/home/oracle/hsensoy/datafile/staging2.dbf','/export/home/oracle/hsensoy/datafile/staging3.dbf'
keep_master=y

Then, using this file, you can perform the import:

himalaya@oracle $ impdp userid=ddsbase/****** parfile=import.par

Import: Release 10.2.0.3.0 - 64bit Production on Friday, 11 July, 2008 14:41:14

Copyright (c) 2003, 2005, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options
Master table "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_02" successfully loaded/unloaded
Starting "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_02":  userid=ddsbase/******** parfile=import.par
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "DDSBASE"."SYS_IMPORT_TRANSPORTABLE_02" successfully completed at 14:41:48

himalaya@oracle $

Your tablespace is ready to use in the target database.
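One last point: a transported tablespace is plugged in read-only, matching its state on the source. If applications on the target need to write to it, switch it back (skip this if read-only suits your use case):

SQL> alter tablespace STAGING read write;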


Comments

  1. Another way to do this is to use RMAN to create the TTS set from an existing backup. Although you need the space and it is fairly complicated, you then don't need to place your source tablespace in read-only mode.

  2. Sure Chris,
    As an RMAN-addicted DBA, I think your way is a more elegant and "highly available" way of doing the same thing. For people who haven't tried this method yet, Metalink Note 455593.1 may be a good starting point.

    Thanks for your comment.

  3. Hi,

    Let's also explain the limitations of TTS. Thanks for sharing this knowledge base.

    Regards,
    Gitesh
    http://www.dbametrix.com

  4. I really love the RMAN approach, since on most live production systems I guess you can't put the tablespace in read-only mode. It is a bit more complicated, but once you have tried it you'll love it.

  5. You haven't said anything about how you copy over the non-table objects from the source database, such as views, procedures, and packages belonging to the schemas that own the tables in the transported tablespaces.

    • Actually, the transportable tablespace approach I have explained is for data migration only. There are better ways to migrate schema objects like packages, procedures, etc.; you can use a Data Pump export to move them between databases.
