Sunday 25 March 2018

Schema Refresh Using DataPump


Schema refresh is a routine task in a DBA's life. To achieve this I am going to use Data Pump (expdp/impdp), a technology introduced in the Oracle 10g release that is faster than the traditional export/import. Using expdp/impdp requires a user with the EXP_FULL_DATABASE role assigned, or you can run it as the SYS user.

Below are the steps to perform a schema refresh from the SOURCE DATABASE to the TARGET DATABASE.

Source Database:-

1) Check whether the schema you want to export exists.

SQL> select USERNAME,ACCOUNT_STATUS,CREATED from dba_users where username='<YOUR_USERNAME>';

2) Check the size of the schema

SQL> SELECT sum(bytes)/1024/1024/1024 AS total_size_GB FROM  dba_segments WHERE owner='<YOUR_SCHEMA_NAME>';


3) Take a count of the objects present in the schema.

SQL> select object_type,status,count(*)  from dba_objects where owner='<YOUR_SCHEMA_NAME>' group by object_type,status;


4) Create an OS-level directory as well as a database directory object where the dump files will be stored.

OS LEVEL:-
mkdir -p /u01/<Directory_name>

DB Level:-

SQL> create or replace directory DATAPUMP as '/u01/<Directory_name>';


5) Check the free space on the mount point where the export will be written; make sure it is at least the size of the schema.

To check space on the mount point:

on a Linux operating system: df -h <mount_point>

on an AIX operating system: df -gt <mount_point>

6) Take an export of the schema.

nohup expdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &

The export will run in the background. To check its status:

ps -ef | grep expdp

or follow the progress in the log file.
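The same export can also be driven from a parameter file, which keeps long option lists readable. A minimal sketch using the DATAPUMP directory and placeholders from the steps above (the file name export.par is arbitrary):

```
# export.par -- Data Pump export parameter file
directory=DATAPUMP
dumpfile=<SCHEMANAME_DBNAME>.dmp
logfile=<SCHEMANAME_DBNAME>.log
schemas=<SCHEMA_NAME>
```

Run it with: nohup expdp \'/ as sysdba\' parfile=export.par &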

After the export completes, copy the dump file to the target server (if the target database is on the same server, no copy is needed). To copy from the source server to the target server use the scp OS command or the WinSCP tool.

Below is the scp command:

scp -p <SCHEMANAME_DBNAME>.dmp username@servername:<location_where_to_copy> (for servername give either the IP address or the hostname of the server)

This will prompt for the target server password; enter it.
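Whichever copy method you use, it is worth confirming the dump file was not corrupted in transit by comparing checksums on both servers. A minimal sketch (demonstrated on a throwaway file; in practice run md5sum on the real .dmp file on each server and compare the sums by eye):

```shell
# Simulate a copied dump file; replace these two files with the
# real .dmp on the source and target servers respectively.
echo "dummy dump data" > source_copy.dmp
cp source_copy.dmp target_copy.dmp

# Compute a checksum of each copy.
src_sum=$(md5sum source_copy.dmp | awk '{print $1}')
tgt_sum=$(md5sum target_copy.dmp | awk '{print $1}')

# Matching checksums mean the copy is intact.
if [ "$src_sum" = "$tgt_sum" ]; then
    echo "checksums match"
else
    echo "checksums differ - re-copy the file"
fi
```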

That's it on the source database.



TARGET DATABASE:-


1) Check the free mount point space on the target server; it should be more than the schema size.

2) Create the OS-level and database-level directories, the same as on the source database.

OS LEVEL:- mkdir -p /u01/<Directory_name>

DB Level:-

SQL> create or replace directory DATAPUMP as '/u01/<Directory_name>';

Note: before importing, take a backup export of the existing target schema (use a different dump file name so the source dump is not overwritten):

nohup expdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>_bkp.dmp logfile=<SCHEMANAME_DBNAME>_bkp.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &


3) Check object count in target database.

SQL> select object_type,status,count(*)  from dba_objects where owner='<YOUR_SCHEMA_NAME>' group by object_type,status;

4) Now drop the schema / objects. (Oracle has no DROP SCHEMA command; dropping the user removes the schema and all its objects.)

SQL> drop user <SCHEMA_NAME> cascade;


NOTE:- It is better to drop the objects instead of dropping the user, since the user's grants and settings are preserved. Use the queries below to generate the DROP statements.

set lines 200
set pages 500

Drop:- Tables, Indexes
SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('TABLE');

Drop:-

Sequences,Views, Packages,Functions & Procedures,Synonyms, & Triggers

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');

Drop:- Types

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('TYPE');
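The three queries above only print the DROP statements. One common way to execute them is to spool the output to a script file and run it; a sketch in SQL*Plus (the spool file name drop_objects.sql is arbitrary):

```
set heading off feedback off pages 0 lines 200 trimspool on
spool drop_objects.sql
-- run the three generator queries from above here
spool off
@drop_objects.sql
```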

5) Check the object count again; if it is zero, proceed with the import.

Check the count with the query below:

select owner,object_type,count(*)
 from dba_objects
 where owner in ('<SCHEMA_NAME>')
 group by owner,object_type;


6) Now import the dump file into the target database.

nohup impdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &

Monitor the import job in the log file, or at the OS level using:

ps -ef | grep impdp
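If the target schema name differs from the source, the import can remap it with the standard REMAP_SCHEMA parameter. A sketch using a parameter file (the schema names are placeholders):

```
# import.par -- Data Pump import parameter file
directory=DATAPUMP
dumpfile=<SCHEMANAME_DBNAME>.dmp
logfile=<SCHEMANAME_DBNAME>_imp.log
remap_schema=<SOURCE_SCHEMA>:<TARGET_SCHEMA>
```

Run it with: nohup impdp \'/ as sysdba\' parfile=import.par &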

7) After the import completes, check whether any invalid objects are present.

select owner, object_type, status, count(*)
from dba_objects
where status = 'INVALID'
group by owner, object_type, status;

If you find any invalid objects, recompile them:

SQL> @?/rdbms/admin/utlrp.sql

Hope this helps you.






