Tuesday, January 25, 2011

Starting and Stopping Oracle Application Server 10g: the correct sequence

Hi All,
The starting and stopping sequence for Oracle Application Server 10g (OAS 10g) is very important when you have to bring all the services up and running. If the sequence is changed, you may face issues where some services go down and cannot be brought back up. I faced this issue while working with Oracle Application Server 10g, where a weekly maintenance activity resulted in some applications being down. The correct order for starting and stopping Oracle Application Server 10g services is as follows:

Starting Oracle 10g Application server:

1) Start the listener.

Start the listener on the server where your application services are running:
$lsnrctl start

2) Start the database.

On a Windows server:
C:\Users\user>set ORACLE_SID=DB_NAME
C:\Users\user>sqlplus / as sysdba
SQL>startup

On a Unix-based server:
$export ORACLE_SID=DB_NAME
$sqlplus / as sysdba
SQL>startup

3) Start the Infrastructure services.
Make sure you are in the Infrastructure home's opmn/bin directory:
$cd Oracle10g/Infra/opmn/bin
$opmnctl startall

4) Start the middle-tier services.
Make sure you are in the middle-tier home's opmn/bin directory:
$cd Oracle10g/MidTier/opmn/bin
$opmnctl startall

Note 1: OPMN stands for Oracle Process Manager and Notification Server.
opmnctl startall: starts all application services such as HTTP Server, OC4J, Wireless, etc.
opmnctl stopall: stops all application services such as HTTP Server, OC4J, Wireless, etc.

Note 2: An individual service or component can also be started/stopped with opmnctl, for example:
$opmnctl startproc ias-component=OC4J process-type=home
$opmnctl stopproc ias-component=OC4J process-type=home
$opmnctl startproc ias-component=wireless
$opmnctl stopproc ias-component=wireless
$opmnctl startproc process-type=OC4J_Wireless
$opmnctl stopproc process-type=OC4J_Wireless
(Here "home" is the default OC4J instance name; substitute your own OC4J instance name as required.)
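
For repeatability, the whole startup order can be wrapped in one small shell script. A minimal sketch is below; the SID is a placeholder and the infrastructure/middle-tier paths are the relative example paths from the steps above (use absolute paths in practice):

#!/bin/sh
# start_oas10g.sh -- start OAS 10g services in the correct order (sketch)
ORACLE_SID=DB_NAME; export ORACLE_SID     # placeholder SID
INFRA_HOME=Oracle10g/Infra                # Infrastructure home (example path)
MID_HOME=Oracle10g/MidTier                # middle-tier home (example path)

# 1) Listener
lsnrctl start

# 2) Database
sqlplus -s "/ as sysdba" <<EOF
startup
exit
EOF

# 3) Infrastructure services
$INFRA_HOME/opmn/bin/opmnctl startall

# 4) Middle-tier services
$MID_HOME/opmn/bin/opmnctl startall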

Stopping Oracle 10g Application server:

When stopping the services, this order is reversed, so follow the steps below:

1) Stop the middle-tier services.
Make sure you are in the middle-tier home's opmn/bin directory:
$cd Oracle10g/MidTier/opmn/bin
$opmnctl stopall

2) Stop the Infrastructure services.
Make sure you are in the Infrastructure home's opmn/bin directory:
$cd Oracle10g/Infra/opmn/bin
$opmnctl stopall

3) Stop the database.
On a Windows server:
C:\Users\user>set ORACLE_SID=DB_NAME
C:\Users\user>sqlplus / as sysdba
SQL>shutdown immediate

On a Unix-based server:
$export ORACLE_SID=DB_NAME
$sqlplus / as sysdba
SQL>shutdown immediate

4) Stop the listener.
Stop the listener on the server where your application services are running:
$lsnrctl stop
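
The shutdown can be scripted in the same way, with the order reversed. A minimal sketch under the same assumptions as the startup script above:

#!/bin/sh
# stop_oas10g.sh -- stop OAS 10g services in the reverse order (sketch)
ORACLE_SID=DB_NAME; export ORACLE_SID     # placeholder SID
INFRA_HOME=Oracle10g/Infra                # Infrastructure home (example path)
MID_HOME=Oracle10g/MidTier                # middle-tier home (example path)

# 1) Middle-tier services
$MID_HOME/opmn/bin/opmnctl stopall

# 2) Infrastructure services
$INFRA_HOME/opmn/bin/opmnctl stopall

# 3) Database
sqlplus -s "/ as sysdba" <<EOF
shutdown immediate
exit
EOF

# 4) Listener
lsnrctl stop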

Hope it helps.

Best regards,

Rafi.

Saturday, January 22, 2011

Completed Oracle E-Business Suite R12 Applications Certification


Hi All,
I took the R12 DBA (Oracle E-Business Suite R12: Install, Patch and Maintain Applications) exam today and, by the Almighty's grace, I passed. It was the reward for some devoted, passionate and dedicated effort.

Brief about the exam:
There were 75 questions, and 90 minutes (1 hour 30 minutes) were allotted to complete the exam. I completed it within 1 hour, after which I reviewed my answers, which took ten minutes. After this there were some follow-up questions which do not carry any marks but are for Oracle University's reference purposes. I finished my exam successfully.

I'm now an Oracle E-Business Suite R12 Applications Database Administrator Certified Professional (Oracle 10g/11g OCP + R12 install/patch/cloning = Oracle R12 Apps EBS Certified Professional). The link you can refer to for completing this certification is given below:

http://education.oracle.com/pls/web_prod-plq-dad/db_pages.getpage?page_id=210

To prepare for this exam:
The books I followed:
1) Oracle instructor-led training material for R12 install, patch & clone applications.
2) An Apps R12 book from Amazon.com

The links I followed:
1) Metalink notes and FAQs on Applications installation, AutoConfig, Concurrent Manager, cloning, patching, shared APPL_TOP and load balancing
2) The Online Apps DBA site by Atul
3) Oracle Applications installation, patching & cloning documents from Oracle

Hope it helps.

Best regards,

Rafi.

Tuesday, January 4, 2011

Database refresh process

Hi,
A database refresh can be done using various techniques, but the main idea is to get fresh production data into your development, test or performance database environment so that developers/QAs can make use of that data and carry out changes and testing depending on the requirement. Below are the steps I followed.

REFRESH PROCESS 1:

In refresh process 1, I have used the exp/imp and expdp/impdp utilities to transfer the data from the source to the target database.

Steps we followed for refresh:
Step 1: Check whether the tablespaces exist, and check the tablespace sizes in the source and destination databases, for the tablespaces mentioned below:
Source: STARQA_GDC    Destination: orcl (my desktop database)
Tablespaces: STAR02D, STAR02I, STAR01D & STAR01I
SELECT F.TABLESPACE_NAME,
       TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE), '999,999') "USEDMB",
       TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
       TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
       TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)), '999') || ' %' FREE
FROM   (SELECT TABLESPACE_NAME,
               ROUND (SUM (BLOCKS * (SELECT VALUE/1024
                                     FROM   V$PARAMETER
                                     WHERE  NAME = 'db_block_size')/1024)) FREE_SPACE
        FROM   DBA_FREE_SPACE
        GROUP  BY TABLESPACE_NAME) F,
       (SELECT TABLESPACE_NAME,
               ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
        FROM   DBA_DATA_FILES
        GROUP  BY TABLESPACE_NAME) T
WHERE  F.TABLESPACE_NAME = T.TABLESPACE_NAME;


Check whether these tablespaces exist; if so, check their sizes and add space if required, then capture the output of the above query for orcl.

SOURCE DATABASE:STARQA_GDC
--------------------
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
------------------------------ -------- -------- -------- ------
STAR01D 18,055 14,673 32,728 45 %
STAR01I 2,067 933 3,000 31 %
STAR02D 32,706 2,004 34,710 6 %
STAR02I 3,003 1,497 4,500 33 %

DESTINATION DATABASE:ORCL
-----------------------
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
------------------------------ -------- -------- -------- ------
STAR01D 7,729 11,371 19,100 60 %
STAR01I 1,898 143 2,041 7 %
STAR02D 23,813 1,136 24,949 5 %
STAR02I 2,969 159 3,128 5 %
Check the USEDMB column in the source database (STARQA_GDC) and the TOTALMB column in the destination database (ORCL) carefully and verify the space needed for the refresh. If the space in a particular tablespace is insufficient, increase the size of that tablespace so that the import operation, and hence the database refresh, succeeds.
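
If a tablespace does need more space, you can either resize an existing datafile or add a new one. A minimal sketch against the orcl desktop database; the datafile names, paths and sizes below are examples only and must be changed to match your environment:

-- Grow an existing datafile of the STAR02D tablespace (example path/size):
ALTER DATABASE DATAFILE 'D:\ORADATA\ORCL\STAR02D01.DBF' RESIZE 30G;

-- Or add one more datafile to the same tablespace (example path/size):
ALTER TABLESPACE STAR02D ADD DATAFILE 'D:\ORADATA\ORCL\STAR02D02.DBF'
SIZE 5G AUTOEXTEND ON MAXSIZE 10G;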

Step 2: Drop the users whose data needs to be refreshed.
Use the script below for this; it is located in /work/Rafijobs/Dropstarusers.sql
DROP USER STARTXN CASCADE;
DROP USER STARREP CASCADE;
DROP USER STARREPAPP CASCADE;
DROP USER STARMIG CASCADE;
DROP USER STARTXNAPP CASCADE;
Run the script Dropstarusers.sql at the SQL prompt:
SQL>@Dropstarusers.sql

Step 3: Create the users whose data needs to be refreshed.
Use the script below for this; it is located in /work/Rafijobs/createstarusers.sql. Alternatively, generate the script from the TOAD tool and create the users with the required privileges and grants.
We need to create the 5 users STARTXN, STARREP, STARREPAPP, STARMIG & STARTXNAPP whose data needs to be refreshed; a sketch of such a script follows.
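
A minimal sketch of what createstarusers.sql might contain. The passwords, default tablespaces and grants below are assumptions for illustration; the real privileges depend on what the application schemas need:

-- createstarusers.sql (sketch: passwords, tablespaces and grants are examples only)
CREATE USER STARTXN IDENTIFIED BY star DEFAULT TABLESPACE STAR02D TEMPORARY TABLESPACE TEMP;
CREATE USER STARREP IDENTIFIED BY star DEFAULT TABLESPACE STAR01D TEMPORARY TABLESPACE TEMP;
CREATE USER STARREPAPP IDENTIFIED BY star DEFAULT TABLESPACE STAR01D TEMPORARY TABLESPACE TEMP;
CREATE USER STARMIG IDENTIFIED BY star DEFAULT TABLESPACE STAR02D TEMPORARY TABLESPACE TEMP;
CREATE USER STARTXNAPP IDENTIFIED BY star DEFAULT TABLESPACE STAR02D TEMPORARY TABLESPACE TEMP;
GRANT CONNECT, RESOURCE TO STARTXN, STARREP, STARREPAPP, STARMIG, STARTXNAPP;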

Step 4: Copy the export dump files to the destination location using the WinSCP utility.

Step 5:Create the directory for the Datapump import:
create directory IMP_DP2_JUN as 'D:\GDC_21_JUNE_BKP';
grant read,write on directory imp_dp2_Jun to public;
(or)
grant read,write on directory imp_dp2_Jun to system;
The grant to SYSTEM suffices since the import is done as the SYSTEM user.

Step 6: Import the dump files on the desktop.
The import commands for the 5 users are below (the directory IMP_DP2_JUN was already created in Step 5). The Data Pump import is done for the dump files received from the GDC team, after unzipping them:
impdp system/database directory=IMP_DP2_JUN dumpfile=STARMIG_210610.dmp logfile=STARMIG_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREP_210610.dmp logfile=STARREP_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREPAPP_210610.dmp logfile=STARREPAPP_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXN_210610.dmp logfile=STARTXN_210610.log
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXNAPP_210610.dmp logfile=STARTXNAPP_210610.log

Step 7: Verify the log files for each import.
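
A quick way to do this is to scan every import log for ORA- error codes. For example, from the dump directory created in Step 5:

On the Windows desktop:
D:\GDC_21_JUNE_BKP>findstr /i "ORA-" *.log

On a Unix-based server:
$grep -i "ORA-" *.log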

Step 8: Export from the orcl database:
starmig:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starmig_23jun10.dmp' log=/work/Rafi/exp_orcl_starmig_23jun10.log owner=starmig statistics=none
vi imp_starmig.sh => add the exp command here
To run it in the background without interruption:
nohup sh imp_starmig.sh > a.out &
startxn:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxn_23jun10.dmp' log=/work/Rafi/exp_orcl_startxn_23jun10.log owner=startxn statistics=none
nohup sh imp_startxn.sh > b.out &
startxnapp:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp' log=/work/Rafi/exp_orcl_startxnapp_23jun10.log owner=startxnapp statistics=none
nohup sh imp_startxnapp.sh > c.out &
starrep:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrep_25jun10.dmp' log=/work/Rafi/exp_orcl_starrep_25jun10.log owner=starrep statistics=none
nohup sh imp_starrep25.sh > r.out &
starrepapp:
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp' log=/work/Rafi/exp_orcl_starrepapp_23jun10.log owner=starrepapp statistics=none
nohup sh imp_starrepapp.sh > e.out &
Step 9: Import into the 36 box (the STARDEV database):
starmig:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starmig_23jun10.dmp' log=imp_starmig_stardev23jun10.log fromuser=starmig touser=starmig
nohup sh imp_starmigDEV.sh > f.out &
startxn:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxn_23jun10.dmp' log=imp_startxn_stardev23jun10.log fromuser=startxn touser=startxn
nohup sh imp_startxnDEV.sh > g.out &
startxnapp:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp' log=imp_startxnapp_stardev23jun10.log fromuser=startxnapp touser=startxnapp
nohup sh imp_startxnappDEV.sh > h.out &
starrep:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starrep_25jun10.dmp' log=imp_starrep_stardev25jun10.log fromuser=starrep touser=starrep
nohup sh imp_starrepDEV.sh > i.out &
starrepapp:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp' log=imp_starrepapp_stardev23jun10.log fromuser=starrepapp touser=starrepapp
nohup sh imp_starrepappDEV.sh > j.out &

Step 10: Verify the objects and tables that were imported.

Connect to SQL PLUS with each user:
1)startxn:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with startxn Schema in STARQA_GDC Database.
2)Starrep:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starrep Schema in STARQA_GDC Database.
3)Startxnapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with startxnapp Schema in STARQA_GDC Database.
4)Starrepapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;

Compared this with starrepapp Schema in STARQA_GDC Database.
5)Starmig:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starmig Schema in STARQA_GDC Database.

Step 11: Compare the schemas in case of differences in the objects.
Open the source and destination database TOAD sessions separately.
If we find a schema difference, use the TOAD utility to compare the schemas and check for objects and tables that did not get imported. In TOAD, go to the DBA tab and click on Compare Schemas, then select the objects to compare. This lists the objects that are missing from the particular destination schema.

Step 12: Use the exp utility interactively to export the missing objects.

REFRESH PROCESS 2:

In refresh process 2, the indirect method is avoided because we came across the parameter version=10.2. When the export is done with this parameter, you can import directly into the lower-version database without losing any data, using the expdp and impdp utilities.

Steps for the refresh from the source (11.1.0.7) to the target (11.1.0.6):

Step 1: Check whether the tablespaces exist, and check the tablespace sizes in the source and target databases, for the tablespaces mentioned below:
Source: STARQA_GDC    Target: STARTST
Tablespaces: STAR02D, STAR02I, STAR01D & STAR01I
SELECT F.TABLESPACE_NAME,
       TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE), '999,999') "USEDMB",
       TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
       TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
       TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)), '999') || ' %' FREE
FROM   (SELECT TABLESPACE_NAME,
               ROUND (SUM (BLOCKS * (SELECT VALUE/1024
                                     FROM   V$PARAMETER
                                     WHERE  NAME = 'db_block_size')/1024)) FREE_SPACE
        FROM   DBA_FREE_SPACE
        GROUP  BY TABLESPACE_NAME) F,
       (SELECT TABLESPACE_NAME,
               ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
        FROM   DBA_DATA_FILES
        GROUP  BY TABLESPACE_NAME) T
WHERE  F.TABLESPACE_NAME = T.TABLESPACE_NAME;


Check whether these tablespaces exist; if so, check their sizes and add space if required, then capture the output of the above query for STARTST.

SOURCE DATABASE:STARQA_GDC
--------------------
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
------------------------------ -------- -------- -------- ------
STAR01D 18,055 14,673 32,728 45 %
STAR01I 2,067 933 3,000 31 %
STAR02D 32,706 2,004 34,710 6 %
STAR02I 3,003 1,497 4,500 33 %

DESTINATION DATABASE:STARTST
-----------------------
TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
------------------------------ -------- -------- -------- ------
STAR01D 7,729 11,371 19,100 60 %
STAR01I 1,898 143 2,041 7 %
STAR02D 23,813 1,136 24,949 5 %
STAR02I 2,969 159 3,128 5 %
Check the USEDMB column in the source database (STARQA_GDC) and the TOTALMB column in the target database (STARTST) carefully and verify the space needed for the refresh. If the space in a particular tablespace is insufficient, increase the size of that tablespace (see the resize sketch in refresh process 1) so that the import operation, and hence the database refresh, succeeds.

Step 2: Drop the users whose data needs to be refreshed.
Use the script below for this; it is located in /work/Rafijobs/Dropstarusers.sql
DROP USER STARTXN CASCADE;
DROP USER STARREP CASCADE;
DROP USER STARREPAPP CASCADE;
DROP USER STARMIG CASCADE;
DROP USER STARTXNAPP CASCADE;
Run the script Dropstarusers.sql at the SQL prompt:
SQL>@Dropstarusers.sql

Step 3: Create the users whose data needs to be refreshed.
Use the script below for this; it is located in /work/Rafijobs/createstarusers.sql. Alternatively, generate the script from the TOAD tool and create the users with the required privileges and grants.
We need to create the 5 users STARTXN, STARREP, STARREPAPP, STARMIG & STARTXNAPP whose data needs to be refreshed (see the sketch in refresh process 1).

Step 4: Copy the export dump files to the destination location using WinSCP.

Note: Step 5 is only possible when the expdp export is done with version=10.2. Since the source database version is 11.1.0.7 and the target database version is 11.1.0.6, an import from a higher to a lower version is only possible when the export is done with the version parameter; an example export command is sketched below.
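
A minimal sketch of such an export, run on the source (STARQA_GDC) side. The credentials follow the style used elsewhere in this post, and the directory object EXP_DP2_JUN is an assumed name that must exist on the source database:

expdp system/database directory=EXP_DP2_JUN schemas=STARMIG dumpfile=STARMIG_210610.dmp logfile=exp_STARMIG_210610.log version=10.2

Repeat the same command for the STARREP, STARREPAPP, STARTXN and STARTXNAPP schemas.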

Step 5:Create the directory for the Datapump import:
create directory IMP_DP2_JUN as 'D:\GDC_21_JUNE_BKP';
grant read,write on directory imp_dp2_Jun to public;
(or)
grant read,write on directory imp_dp2_Jun to system;

The grant to SYSTEM suffices since the import is done as the SYSTEM user. Now run the imports with version=10.2:
impdp system/database directory=IMP_DP2_JUN dumpfile=STARMIG_210610.dmp logfile=STARMIG_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREP_210610.dmp logfile=STARREP_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARREPAPP_210610.dmp logfile=STARREPAPP_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXN_210610.dmp logfile=STARTXN_210610.log version=10.2
impdp system/database directory=IMP_DP2_JUN dumpfile=STARTXNAPP_210610.dmp logfile=STARTXNAPP_210610.log version=10.2

Step 6: Verify the log files for each import.

Step 7: Verify the objects and tables that were imported.
Connect to SQL PLUS with each user:
1)startxn:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with startxn Schema in STARQA_GDC Database.
2)Starrep:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starrep Schema in STARQA_GDC Database.
3)Startxnapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with startxnapp Schema in STARQA_GDC Database.
4)Starrepapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;

Compared this with starrepapp Schema in STARQA_GDC Database.
5)Starmig:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starmig Schema in STARQA_GDC Database.

Step 8: Compare the schemas in case of differences in the objects.
Open the source and destination database TOAD sessions separately.
If we find a schema difference, use the TOAD utility to compare the schemas and check for objects and tables that did not get imported. In TOAD, go to the DBA tab and click on Compare Schemas, then select the objects to compare. This lists the objects that are missing from the particular destination schema.

Hope this helps.


Best regards,

Rafi.

Releasing locks

Hi,
Steps for releasing a lock on a table:

Step 1: To verify the locked objects, here are the important queries:
---------------------------------------------------------------

Find the tables (objects) that are locked:

SELECT o.owner, o.object_name, o.object_type, o.last_ddl_time, o.status, l.session_id, l.oracle_username, l.locked_mode
FROM dba_objects o, gv$locked_object l
WHERE o.object_id = l.object_id;

Find the locks held by your own session (run this from the session in question):

select type, id1, id2, lmode, request
from v$lock
where sid = (select sid from v$mystat where rownum = 1)
;
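
To see which session is blocking another (rather than your own session's locks), the blocking_session column of v$session can be used (available from Oracle 10g onwards); a minimal sketch:

SELECT sid, serial#, username, blocking_session
FROM v$session
WHERE blocking_session IS NOT NULL;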

Step 2: Kill the session holding the lock.
---------
Find the serial# for the sessions holding the lock:

SQL> select SERIAL# from v$session where SID=667;

SERIAL#
----------
21091

SQL> alter system kill session '667,21091';

System altered.
Usually locks are released when the DML statements are rolled back or committed. There are 2 lock modes for row-level locks:

1) lmode 3 => row exclusive lock
2) lmode 2 => row shared lock

For complete details and concept building on Oracle locks, read the article below from Jonathan Lewis:

http://jonathanlewis.wordpress.com/2010/02/15/lock-horror/


Best regards,

Rafi.

Sunday, January 2, 2011

Tablespace critical threshold message

Hi,
A DBA's main responsibilities also include checking tablespace space regularly. We can check this space at any time with the help of some useful views provided by Oracle, such as dba_data_files. But this gets difficult when a developer loads data and you suddenly find the space almost fully occupied. So I decided to use the script below, which sends me an alert mail whenever a tablespace becomes 85% full (the critical threshold); this is very helpful for making sure each tablespace has enough space.

The script is as given below:

Tablespace critical message when it becomes 85% occupied:
----------------------------------------------------------


#!/bin/ksh

export PATH=/star/app/oracle/product/11.1.0/db_1/bin:/usr/sbin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/star/bin
export ORACLE_BASE=/star/app/oracle
export LD_LIBRARY_PATH=/star/app/oracle/product/11.1.0/db_1/lib:/lib:/usr/lib
export ORACLE_HOME=/star/app/oracle/product/11.1.0/db_1


MAILLIST=`cat /home/star/scripts/email.lst`
LOGFILE=/tmp/chk_ts_$1.log

echo $1

CHKUP=`ps -ef | grep pmon | grep $1 | wc -l`

if [ "${CHKUP}" -eq 1 ]
then
echo "The Pmon Process is running on host"
else
echo " ****************************************************************" >> $LOGFILE
echo " " `date` " Pmon Proces is not found" >> $LOGFILE
echo " ****************************************************************" >> $LOGFILE
exit 0
fi

export ORACLE_SID=$1

sqlplus -s "/ as sysdba" <<EOF
WHENEVER SQLERROR EXIT SQL.SQLCODE
set echo off
set trimspool on
set trimout on
set verify off
set feedback off
column TABLESPACE_NAME format a18
column "%ocup" format a6
set lines 200
spool /tmp/chk_ts_$1.spl
SELECT total.tablespace_name tablespace_name,
ROUND (tot / 1024 / 1024) total,
ROUND ((tot - tot_l) / 1024 / 1024) occupied,
ROUND (tot_l / 1024 / 1024) remain,
ROUND (max_l / 1024 / 1024) max_extent,
ROUND ((tot - tot_l) * 100 / tot) || '%' AS "%ocup"
FROM (SELECT tablespace_name, SUM (BYTES) tot_l, MAX (BYTES) max_l
FROM dba_free_space
GROUP BY tablespace_name
UNION
SELECT d.tablespace_name,
f.bytes_free + f.bytes_used - NVL (p.bytes_used, 0) tot_l,
0 max_l
FROM SYS.v_\$temp_space_header f,
dba_temp_files d,
SYS.v_\$temp_extent_pool p
WHERE f.tablespace_name(+) = d.tablespace_name AND f.file_id(+) = d.file_id
AND p.file_id(+) = d.file_id) libre,
(SELECT tablespace_name, SUM (BYTES) tot
FROM dba_data_files
GROUP BY tablespace_name
UNION
(SELECT tablespace_name, SUM (BYTES) tot
FROM dba_temp_files
GROUP BY tablespace_name)) total,
dba_tablespaces dba_t
WHERE total.tablespace_name = libre.tablespace_name(+)
AND total.tablespace_name = dba_t.tablespace_name
AND ROUND (tot_l / 1024 / 1024) < 1024
AND ROUND ((tot - tot_l) * 100 / tot) > 85
AND dba_t.tablespace_name NOT IN ('RBS')
ORDER BY ROUND ((tot - tot_l) * 100 / tot) DESC;
spo off
EOF
if [ -s /tmp/chk_ts_$1.spl ]; then
cat /tmp/chk_ts_$1.spl | mailx -s "Critical : ($1 on machinename.abcd.com.sg) Missing free space on tablespace of the database $1 -- Immediate Action" $MAILLIST
echo "---------------------------------------------------------------------------" >> $LOGFILE
echo `date` >> $LOGFILE
cat /tmp/chk_ts_$1.spl >> $LOGFILE
echo "---------------------------------------------------------------------------" >> $LOGFILE
rm -fr /tmp/chk_ts_$1.spl
fi
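
To use the script, save it (for example as /home/star/scripts/chk_ts.ksh), make it executable and pass the ORACLE_SID as the first argument; it can then be scheduled from cron. The path, SID and schedule below are examples only:

$chmod +x /home/star/scripts/chk_ts.ksh
$/home/star/scripts/chk_ts.ksh STARTST

Example crontab entry to run the check every 30 minutes:
0,30 * * * * /home/star/scripts/chk_ts.ksh STARTST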



Best regards,


Rafi.

Executing Export and import jobs

Hi,
Below are the methods you can use to run your export and import jobs when you are working in a Unix environment such as Linux, Solaris, HP-UX or IBM AIX. When you want to transfer data from one database to another on the same machine, you can follow steps 1 & 3 below. If you want to transfer data from a database running on one machine to a database running on a different machine, you can use ftp or WinSCP to move the dump files.

Step 1: Export
In the example below I have exported five schemas from one database and imported them into another database.


Exporting the Data:
-------------------


1)starmig:A user whose data we have to export.
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starmig_23jun10.dmp' log=/work/Rafi/exp_orcl_starmig_23jun10.log owner=starmig statistics=none
vi imp_starmig.sh => add the exp command here
To run it in the background without interruption:
nohup sh imp_starmig.sh > a.out &

2)startxn:A user whose data we have to export.
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxn_23jun10.dmp' log=/work/Rafi/exp_orcl_startxn_23jun10.log owner=startxn statistics=none
nohup sh imp_startxn.sh > b.out &

3)startxnapp:A user whose data we have to export.
exp system/database@orcl.world file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp' log=/work/Rafi/exp_orcl_startxnapp_23jun10.log owner=startxnapp statistics=none
nohup sh imp_startxnapp.sh > c.out &

4)starrep:A user whose data we have to export.
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrep_25jun10.dmp' log=/work/Rafi/exp_orcl_starrep_25jun10.log owner=starrep statistics=none
nohup sh imp_starrep25.sh > r.out &

5)starrepapp:A user whose data we have to export.
exp system/database@orcl.world file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp' log=/work/Rafi/exp_orcl_starrepapp_23jun10.log owner=starrepapp statistics=none
nohup sh imp_starrepapp.sh > e.out &
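
Since the five exports differ only in the schema name, they can also be driven from a single loop rather than five separate scripts. A minimal sketch, assuming the same connect string, dump location and a single date suffix (adjust these for your environment):

#!/bin/sh
# exp_all.sh -- run the five schema exports one after another (sketch)
for u in starmig startxn startxnapp starrep starrepapp
do
  exp system/database@orcl.world file=/work/Rafi/exp_orcl_${u}_23jun10.dmp \
      log=/work/Rafi/exp_orcl_${u}_23jun10.log owner=$u statistics=none
done

Run it in the background the same way:
nohup sh exp_all.sh > exp_all.out &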

Step 2: Copy the dumps using ftp or WinSCP.
Note: if you are using WinSCP, copy the files using WinSCP's binary transfer mode.

Step 3: Import the data:
1)starmig:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starmig_23jun10.dmp' log=imp_starmig_stardev23jun10.log fromuser=starmig touser=starmig
nohup sh imp_starmigDEV.sh > f.out &

2)startxn:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxn_23jun10.dmp' log=imp_startxn_stardev23jun10.log fromuser=startxn touser=startxn
nohup sh imp_startxnDEV.sh > g.out &

3)startxnapp:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_startxnapp_23jun10.dmp' log=imp_startxnapp_stardev23jun10.log fromuser=startxnapp touser=startxnapp
nohup sh imp_startxnappDEV.sh > h.out &

4)starrep:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starrep_25jun10.dmp' log=imp_starrep_stardev25jun10.log fromuser=starrep touser=starrep
nohup sh imp_starrepDEV.sh > i.out &

5)starrepapp:
imp system/star@STARDEV file='/work/Rafi/exp_orcl_starrepapp_23jun10.dmp' log=imp_starrepapp_stardev23jun10.log fromuser=starrepapp touser=starrepapp
nohup sh imp_starrepappDEV.sh > j.out &

Step 4: Verify the objects and tables that were imported.
-------
Connect to SQL PLUS with each user:

1)starmig:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starmig Schema in STARDEV Database.

2)startxn:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with startxn Schema in STARDEV Database.

3)Startxnapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with startxnapp Schema in STARDEV Database.

4)Starrep:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starrep Schema in STARDEV Database.

5)Starrepapp:
SQL>Select count(*) from user_objects;
SQL>Select count(*) from user_tables;
Compared this with starrepapp Schema in STARDEV Database.

Hope this helps while doing your export/import activities, which we do frequently as DBAs.


Best regards,

Rafi.

Comparing two schemas

Hi,
We can compare two schemas with the help of tools like TOAD. But what if you don't have such a tool? We can do it with the help of the queries below:

/* Query 1:To compare the two users in terms of tables: */

SELECT TABLE_NAME FROM ALL_TABLES
WHERE TABLE_NAME NOT IN
(SELECT TABLE_NAME FROM ALL_TABLES WHERE OWNER='EMP1')
AND OWNER='EMP2';

/* Query 2:To compare the two users in terms of objects */

SELECT OBJECT_NAME||' '||OBJECT_TYPE FROM ALL_OBJECTS
WHERE OBJECT_NAME NOT IN
(SELECT OBJECT_NAME FROM ALL_OBJECTS WHERE OWNER='EMP1')
AND OWNER='EMP2';
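
The same comparison can also be written with the MINUS set operator, which returns all the differences in one pass. A sketch using the same example owners EMP1 and EMP2 as above:

/* Query 3: Objects present under EMP2 but missing under EMP1 */
SELECT OBJECT_NAME, OBJECT_TYPE FROM ALL_OBJECTS WHERE OWNER='EMP2'
MINUS
SELECT OBJECT_NAME, OBJECT_TYPE FROM ALL_OBJECTS WHERE OWNER='EMP1';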

The above queries can be very helpful when we transfer data from one schema to another and want to verify the schemas after a successful transfer (i.e. after doing Data Pump or export/import activities). Hope it helps.


Best regards,

Rafi.