Hi,
As a DBA, we may come across the below task for the APEX component in our database from Oracle 11g onwards.
Before uninstalling the APEX component from an Oracle Database (Oracle 11g Release 2), discuss it with the team and communicate clearly; once this is confirmed, follow the steps below. By default, an Oracle 11g Release 2 database has the APEX component installed (APEX version 3.2.1.00.10).
SQL> select * from v$version;
BANNER
-------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
SQL>SELECT COMP_NAME||' '||VERSION||' '||STATUS FROM DBA_REGISTRY;
COMP_NAME||''||VERSION||''||STATUS
--------------------------------------------------------------------------------
OWB 11.2.0.1.0 VALID
Oracle Application Express 3.2.1.00.10 VALID
Oracle Enterprise Manager 11.2.0.1.0 VALID
OLAP Catalog 11.2.0.1.0 VALID
Spatial 11.2.0.1.0 VALID
Oracle Multimedia 11.2.0.1.0 VALID
Oracle XML Database 11.2.0.1.0 VALID
Oracle Text 11.2.0.1.0 VALID
Oracle Expression Filter 11.2.0.1.0 VALID
Oracle Rules Manager 11.2.0.1.0 VALID
Oracle Workspace Manager 11.2.0.1.0 VALID
Oracle Database Catalog Views 11.2.0.1.0 VALID
Oracle Database Packages and Types 11.2.0.1.0 VALID
JServer JAVA Virtual Machine 11.2.0.1.0 VALID
Oracle XDK 11.2.0.1.0 VALID
Oracle Database Java Packages 11.2.0.1.0 VALID
OLAP Analytic Workspace 11.2.0.1.0 VALID
Oracle OLAP API 11.2.0.1.0 VALID
Apex Uninstallation:
Apex uninstallation can be done as follows:
[oracle@node1 apex]$ cd $ORACLE_HOME/apex
SQL> @apxremov.sql
...Removing Application Express
old 1: alter session set current_schema = &APPUN
new 1: alter session set current_schema = APEX_030200
Session altered.
begin
*
ERROR at line 1:
ORA-04063: package body "APEX_030200.WWV_FLOW_UPGRADE" has errors
ORA-06508: PL/SQL: could not find program unit being called:
"APEX_030200.WWV_FLOW_UPGRADE"
ORA-06512: at line 2
no rows selected
old 2: if '&UPGRADE' = '1' then
new 2: if '1' = '1' then
begin
*
ERROR at line 1:
ORA-04063: package body "APEX_030200.WWV_FLOW_UPGRADE" has errors
ORA-06508: PL/SQL: could not find program unit being called:
"APEX_030200.WWV_FLOW_UPGRADE"
ORA-06512: at line 3
Session altered.
PL/SQL procedure successfully completed.
old 1: drop user &APPUN cascade
new 1: drop user APEX_030200 cascade
User dropped.
old 2: if '&UPGRADE' = '1' then
new 2: if '1' = '1' then
begin
*
ERROR at line 1:
ORA-01940: cannot drop a user that is currently connected
ORA-06512: at line 4
old 5: if '&UPGRADE' = '1' then
new 5: if '1' = '1' then
declare
*
ERROR at line 1:
ORA-24231: database access descriptor (DAD) APEX not found
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_EPG", line 67
ORA-06512: at "SYS.DBMS_EPG", line 261
ORA-06512: at line 15
old 2: if '&UPGRADE' = '1' then
new 2: if '1' = '1' then
begin
*
ERROR at line 1:
ORA-04043: object VALIDATE_APEX does not exist
ORA-06512: at line 3
...Application Express Removed
After running the above script, if some APEX schemas and public synonyms still exist in our database, we have to drop them manually. Below is the query to find the public synonyms and generate the drop statements; spool it into a SQL file to make this task simpler.
SQL> spool Drop_pub_synonym.sql
SQL>select 'drop public synonym ' || synonym_name || CHR(10) || '/'
from sys.dba_synonyms
where table_owner in ('FLOWS_010500','FLOWS_010600','FLOWS_020000',
'FLOWS_020100','FLOWS_020200','FLOWS_030000','FLOWS_030100','APEX_030200',
'APEX_040000','FLOWS_FILES');
'DROPPUBLICSYNONYM'||SYNONYM_NAME||CHR(10)||'/'
----------------------------------------------------
drop public synonym APEX_ACTIVITY_LOG
/
drop public synonym APEX_APPLICATION
/
drop public synonym APEX_APPLICATION_FILES
/
drop public synonym APEX_APPLICATION_GLOBAL
/
drop public synonym APEX_APPLICATION_INSTALL
/
drop public synonym APEX_COLLECTION
/
drop public synonym APEX_COLLECTIONS
/
drop public synonym APEX_CSS
/
drop public synonym APEX_CUSTOM_AUTH
/
drop public synonym APEX_DEBUG_MESSAGE
/
drop public synonym APEX_FEEDBACK_TYPES
/
drop public synonym APEX_INSTANCE_ADMIN
/
drop public synonym APEX_ITEM
/
drop public synonym APEX_JAVASCRIPT
/
drop public synonym APEX_LANG
/
drop public synonym APEX_LDAP
/
drop public synonym APEX_LOGIN
/
drop public synonym APEX_MAIL
/
drop public synonym APEX_MAIL_ATTACHMENTS
/
drop public synonym APEX_MAIL_LOG
/
drop public synonym APEX_MAIL_QUEUE
/
drop public synonym APEX_PLSQL_JOB
/
drop public synonym APEX_PLSQL_JOBS
/
drop public synonym APEX_PLUGIN
/
drop public synonym APEX_PLUGIN_UTIL
/
drop public synonym APEX_REST
/
drop public synonym APEX_SITE_ADMIN_PRIVS
/
drop public synonym APEX_UI_DEFAULT
/
drop public synonym APEX_USER_ACCESS_LOG
/
drop public synonym APEX_UTIL
/
drop public synonym APEX_WEB_SERVICE
/
drop public synonym HTMLDB_ACTIVITY_LOG
/
drop public synonym HTMLDB_APPLICATION
/
drop public synonym HTMLDB_APPLICATION_FILES
/
drop public synonym HTMLDB_APPLICATION_GLOBAL
/
drop public synonym HTMLDB_COLLECTION
/
drop public synonym HTMLDB_COLLECTIONS
/
drop public synonym HTMLDB_LDAP
/
drop public synonym HTMLDB_MAIL
/
drop public synonym HTMLDB_MAIL_LOG
/
drop public synonym HTMLDB_MAIL_QUEUE
/
drop public synonym HTMLDB_PLSQL_JOB
/
drop public synonym HTMLDB_PLSQL_JOBS
/
Check that no synonyms remain owned by the APEX schemas, for example:
SQL> select synonym_name from sys.dba_synonyms where owner = 'FLOWS_FILES';
no rows selected
SQL>SPOOL OFF
SQL>@Drop_pub_synonym.sql
All the synonyms are dropped.
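The spooled file above will also contain the query text and column headings, so it may need manual cleanup before it can run. A cleaner variant (a small sketch, assuming plain SQL*Plus) suppresses headings and feedback before spooling, so the generated file contains only the DROP statements:
-- Sketch: generate a clean drop-public-synonym script (same owner list as above)
set heading off feedback off pagesize 0 linesize 200 trimspool on
spool Drop_pub_synonym.sql
select 'drop public synonym ' || synonym_name || ';'
from sys.dba_synonyms
where table_owner in ('FLOWS_010500','FLOWS_010600','FLOWS_020000',
'FLOWS_020100','FLOWS_020200','FLOWS_030000','FLOWS_030100','APEX_030200',
'APEX_040000','FLOWS_FILES');
spool off
set heading on feedback on pagesize 14
@Drop_pub_synonym.sql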
We have to make sure that all the APEX-related schemas below are dropped, or else drop them manually. In my case I had upgraded to APEX 4.x, so the additional user APEX_040000 is also present:
SQL>DROP USER APEX_040000 CASCADE;
SQL>DROP USER APEX_PUBLIC_USER CASCADE;
SQL> DROP USER FLOWS_FILES CASCADE;
SQL> DROP USER APEX_030200 CASCADE;
Now our Apex Uninstallation is successful.
Apex Installation:
1) Go to the $ORACLE_HOME/apex path: for an Oracle 11g Release 2 database, the APEX component is already present under '$ORACLE_HOME/apex' once the software is installed on the operating system.
$cd $ORACLE_HOME/apex
$pwd
/u01/oracle11g/product/11.2.0/dbhome_1/apex
2) Connect to SQL*Plus as SYSDBA and execute apexins to install APEX:
SQL>@apexins SYSAUX SYSAUX TEMP /i/
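In this four-argument form (as documented for APEX 4.x), the arguments are, in order, the tablespace for the APEX schema, the tablespace for the APEX files user, the temporary tablespace, and the virtual directory for images (/i/); the script is run while connected as SYSDBA. Once it completes, the component can be re-checked in the registry, for example:
SQL> SELECT comp_name, version, status FROM dba_registry WHERE comp_name = 'Oracle Application Express';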
Apply any APEX upgrade, if required, by following the same process as above.
Apex Patching:
For patching the APEX component, run apxpatch.sql.
For example:
SQL>@apxpatch.sql
Apex ADMIN user password change:
We might get a request from the APEX team to change the APEX ADMIN user password after the APEX installation. We can change it by executing 'apxchpwd.sql'.
$cd $ORACLE_HOME/apex
SQL>@apxchpwd.sql
Enter a value below for the password for the Application Express ADMIN user.
Enter a password for the ADMIN user []
Session altered.
...changing password for ADMIN
PL/SQL procedure successfully completed.
Commit complete.
Hope this helps with APEX administration. If I come across other APEX-related tasks, I will post them as well.
Best regards,
Rafi.
Monday, June 20, 2011
Automate Startup and Shutdown of Oracle Database, Listener and HTTP Services on Linux
Hi,
Automating regular database tasks is a key priority for a DBA. With the script below we can automate the start and stop of databases, listeners and HTTP services.
Step 1: Please make sure that oratab file is correct and complete.
Check for oratab file either in /etc/oratab or in /u01/app/oracle/oratab.
Database entries in the oratab file have the following format:
$ORACLE_SID:$ORACLE_HOME:[Y|N]
Here Y indicates that the database can be started up and shutdown using dbstart/dbshut script.
If there are two databases named testdb1 and testdb2, then my oratab file will contain entries like:
testdb1:/u01/app/oracle/product/10.2.0/db_1:Y
testdb2:/u01/app/oracle/product/10.2.0/db_1:Y
where /u01/app/oracle/product/10.2.0/db_1 is the $ORACLE_HOME of my databases.
Step 2: Create a script to call dbstart and dbshut.
In this example I will create one script that handles both the startup and shutdown operations. I will name this script dbora and place it in '/etc/init.d'.
a) Login as 'root'.
b) Change directories to /etc/init.d
$cd /etc/init.d
c) Create an empty file called 'dbora' with the 'touch' command and set its permissions to 750 with chmod.
# touch dbora
# chmod 750 dbora
d)Edit the dbora file and make the contents of it like below.
dbora:
#!/bin/bash
#
# chkconfig: 35 99 10
# description: Starts and stops Oracle processes
#
ORA_HOME=/u01/app/oracle/product/10.2.0/db_1
ORA_OWNER=oracle
case "$1" in
'start')
# Start the TNS Listener
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl start"
# Start the Oracle databases:
su - $ORA_OWNER -c $ORA_HOME/bin/dbstart
# Start the Intelligent Agent
if [ -f $ORA_HOME/bin/emctl ];
then
su - $ORA_OWNER -c "$ORA_HOME/bin/emctl start agent"
elif [ -f $ORA_HOME/bin/agentctl ]; then
su - $ORA_OWNER -c "$ORA_HOME/bin/agentctl start"
else
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl dbsnmp_start"
fi
# Start Management Server
if [ -f $ORA_HOME/bin/emctl ]; then
su - $ORA_OWNER -c "$ORA_HOME/bin/emctl start dbconsole"
elif [ -f $ORA_HOME/bin/oemctl ]; then
su - $ORA_OWNER -c "$ORA_HOME/bin/oemctl start oms"
fi
# Start HTTP Server
if [ -f $ORA_HOME/Apache/Apache/bin/apachectl ]; then
su - $ORA_OWNER -c "$ORA_HOME/Apache/Apache/bin/apachectl start"
fi
touch /var/lock/subsys/dbora
;;
'stop')
# Stop HTTP Server
if [ -f $ORA_HOME/Apache/Apache/bin/apachectl ]; then
su - $ORA_OWNER -c "$ORA_HOME/Apache/Apache/bin/apachectl stop"
fi
# Stop the TNS Listener
su - $ORA_OWNER -c "$ORA_HOME/bin/lsnrctl stop"
# Stop the Oracle databases:
su - $ORA_OWNER -c $ORA_HOME/bin/dbshut
rm -f /var/lock/subsys/dbora
;;
esac
# End of script dbora
Step 3:As 'root' user perform the following to create symbolic links:
# ln -s /etc/init.d/dbora /etc/rc3.d/S99oracle
# ln -s /etc/init.d/dbora /etc/rc0.d/K01oracle
Alternatively we can register the Service using
/sbin/chkconfig --add dbora
This action registers the service to the Linux service mechanism.
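Once registered with chkconfig, the service can also be checked and driven through the standard init tooling (a small sketch; the exact output varies by distribution):
# /sbin/chkconfig --list dbora
# service dbora start
# service dbora stop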
Step 4: Test the script to make sure it works.
The real test is to reboot the server and then see whether Oracle starts up automatically.
However to test the script created in step 2, without rebooting, do the following:
Login as root and then,
# /etc/init.d/dbora start (for startup)
# /etc/init.d/dbora stop (for shutdown)
If restarting the server and starting and stopping the Oracle database work as expected, then we are almost done. The real beauty of the above shell script is that it will start and stop the listener, databases and HTTP services whenever the server is rebooted.
Note: The above shell script can be modified as per our convenience. For example, if we want to start only the databases and listener services, comment out the portion of the script that starts the HTTP services, as shown below.
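A minimal sketch of the 'start' section with the HTTP Server portion disabled would look like this:
# Start HTTP Server (disabled)
#if [ -f $ORA_HOME/Apache/Apache/bin/apachectl ]; then
#    su - $ORA_OWNER -c "$ORA_HOME/Apache/Apache/bin/apachectl start"
#fi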
Hope it helps.
Best regards,
Rafi
Tuesday, June 14, 2011
Scheduling RMAN cold backup and copy controlfile
Hi,
RMAN cold backup has its own advantages compared to a traditional cold backup:
1) RMAN, as we all know, backs up only used blocks, so it is fast. Here I'm backing up the entire database.
2) In my script below I'm also copying the control file, so that the latest copy is available in case of recovery.
This is because of the following:
-------------------------------------------------------------------------
"copy current controlfile"
Personally I like and use this, mainly for restore after a total loss. I run it as the *final* command of my backup:
run { allocate channel c1 type disk; copy current controlfile to '/some OS location'; }
Effectively, I'm backing up my "backup metadata" at the end of each RMAN backup.
My RMAN shell scripts look like this:
--------------------------------------------
1) Back up the database in the MOUNT state, because RMAN expects the database to be mounted for a cold backup.
2) Copy the current controlfile to some location.
I'm not saying do not use other options; just be sure we know what we are getting and whether we are able to recover the database. Test your backups on a test server and be sure that you can do a restore and recovery in case of failure.
3) In my RMAN scripts I've used:
%U = to get a unique name for each backup piece
%T = the date of the backup
4) Snapshot controlfile:
The snapshot controlfile is used by RMAN to get a consistent view of the control file during the backup. Personally speaking, it is not strictly required if we keep a copy of the current controlfile; here I have configured it to be kept in the same RMAN backup location.
5) Retention policy:
This is the period of time for which you want to keep backups in the RMAN repository. Decide this policy well in advance and inform the team; in my case backups remain available for restore and recovery for 7 days.
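With a 7-day recovery window, backups older than the window become obsolete rather than being removed automatically; they can be reviewed and purged from the RMAN prompt when needed (a small sketch, not part of the scheduled script):
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;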
Below are my scripts:
------------------------
1)run_TESTDB.txt:
---------------------
run_TESTDB.txt contains the settings I'm going to configure for RMAN. We have to create this file first, and then the actual shell script that takes the RMAN backup.
-- RMAN Configured
run{
shutdown immediate;
startup mount;
allocate channel ch1 device type disk format '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/TESTDB_BK_SET1_%U_%T' maxpiecesize 5G;
allocate channel ch2 device type disk format '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/TESTDB_BK_SET2_%U_%T';
backup database TAG='TESTDB_BACKUP_WEEKLY';
allocate channel c1 type disk;
copy current controlfile to '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/TESTDB_ctrl_%U_%T';
Release channel c1;
configure retention policy to recovery window of 7 days;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u05/DB_BACKUP_TESTDB/RMAN_COLDBACKUP_WEEKLY/snapcf_TESTDB.f';
CROSSCHECK BACKUP;
release channel ch1;
release channel ch2;
alter database open;
}
2)rman_TESTDB.sh:
--------------------
rman_TESTDB.sh consists of setting the Oracle environment variables and calling the run_TESTDB.txt file to take the RMAN cold backup.
-- TESTDB RMAN COLD BACKUP
export ORACLE_SID=TESTDB
export ORACLE_HOME=/u01/oracle11g/product/11.2.0/dbhome_1
export ORACLE_BASE=/u01/oracle11g
PATH=$ORACLE_HOME/bin:$PATH
LD_LIBRARY_PATH=$ORACLE_HOME/lib
rman target sys/TESTDBdba @/u05/DB_BACKUP_TESTDB/Backup_Scipts/run_TESTDB.txt log=/u05/DB_BACKUP_TESTDB/logs/rman_cold_weekly_backup_logs/rman_TESTDB.log
Scheduling RMAN Backup job:
-------------------------------
If I want to schedule this backup every Sunday at 9 pm, the cronjob script will be:
$vi crontab.oracle
#Script for RMAN COLD BACKUP WEEKLY TESTDB DATABASE
################################################################
00 21 * * 0 /u05/DB_BACKUP_TESTDB/SCRIPTS/rman_TESTDB.sh >/u05/DB_BACKUP_TESTDB/logs/CRONJOBS_LOGS/testdb_rman_cron.log 2>&1
$crontab crontab.oracle => schedules the job using the 'crontab' command at 9 pm every Sunday.
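For reference, the five leading fields of a crontab entry are minute, hour, day of month, month and day of week (0 = Sunday), so the entry above fires at 21:00 every Sunday:
# minute  hour  day-of-month  month  day-of-week  command
# 00      21    *             *      0            /u05/DB_BACKUP_TESTDB/SCRIPTS/rman_TESTDB.sh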
Verifying cronjob:
-----------------------
$crontab -l => lists the jobs scheduled with cron. Verify the RMAN job as well if it was scheduled as above.
Hope it helps.
Best regards,
Rafi.
Friday, June 10, 2011
Moving Database files from one location to another in a Planned way
Hi,
Whenever we face an issue like a mount point running short of disk space, we may decide to move the database files to a separate mount point. While doing this task, all the end users or affected users should be informed well in advance, and we have to request a suitable downtime window considering the size of the data being moved from one mount point to the other; if possible, such a task should be done during off-peak hours of database usage. Planning and creation of the scripts should be done well in advance, so that the actual process can be completed on time or even ahead of time.
Steps 1 and 2 can be planned and implemented well before the actual process starts. I'm doing this on a small test environment; remember that when doing such a task on a very large database, planning and implementing these small steps on time is very crucial.
1)Check the source location where Database files reside:
-------------------------------------------------------------------
SQL> select file_name||' '||tablespace_name||' '||bytes/1024/1024 as MB from dba_data_files;
MB
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/TESTDB/users01.dbf USERS 203.75
/u01/app/oracle/oradata/TESTDB/undotbs01.dbf UNDOTBS1 235
/u01/app/oracle/oradata/TESTDB/sysaux01.dbf SYSAUX 970
/u01/app/oracle/oradata/TESTDB/system01.dbf SYSTEM 940
/u01/app/oracle/oradata/TESTDB/example01.dbf EXAMPLE 102.5
/u01/app/oracle/oradata/TESTDB/apex01.dbf APEX 100
6 rows selected.
SQL> select file_name||' '||tablespace_name||' '||bytes/1024/1024 as MB from dba_temp_files;
MB
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/TESTDB/temp01.dbf TEMP 52
[oracle@host1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
18G 14G 2.6G 85% /
/dev/sda1 99M 20M 75M 21% /boot
tmpfs 940M 0 940M 0% /dev/shm
/dev/sdb1 20G 3.0G 16G 17% /u02
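Note that only the datafiles and the tempfile are being moved here; the control files and redo logs stay where they are. If they also need to be relocated, their current locations can be checked first (a quick sketch):
SQL> select name from v$controlfile;
SQL> select member from v$logfile;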
Step 2:Create the directory structure in /u02 where we can move database files.
-------
$cd /u02
$mkdir -p app/oracle/oradata/TESTDB
$cd app/oracle/oradata/TESTDB/
$pwd
/u02/app/oracle/oradata/TESTDB
Step 3: Shutdown the Database
---------
SQL> select name from v$database;
NAME
---------
TESTDB
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Step 4: Using the 'mv' command, move all the database (.dbf) files.
-------
$ cd /u01/app/oracle/oradata/TESTDB/
$ pwd
/u01/app/oracle/oradata/TESTDB
$ ls -altr
total 3016160
-rw-r----- 1 oracle oinstall 10043392 Jun 3 2010 control03.ctl
-rw-r----- 1 oracle oinstall 10493964 Sep 23 2010 test_01.dfb__old
-rw-r----- 1 oracle oinstall 10493874 Sep 23 2010 test_01.dfb
drwxr-x--- 4 oracle oinstall 4096 Mar 17 14:19 ..
drwxr-x--- 2 oracle oinstall 4096 Jun 2 19:51 .
-rw-r----- 1 oracle oinstall 52429312 Jun 9 19:56 redo02.log
-rw-r----- 1 oracle oinstall 52429312 Jun 9 19:56 redo02a.log
-rw-r----- 1 oracle oinstall 52429312 Jun 10 08:00 redo03.log
-rw-r----- 1 oracle oinstall 52429312 Jun 10 08:00 redo03a.log
-rw-r----- 1 oracle oinstall 54534144 Jun 10 15:30 temp01.dbf
-rw-r----- 1 oracle oinstall 213655552 Jun 10 15:43 users01.dbf
-rw-r----- 1 oracle oinstall 246423552 Jun 10 15:43 undotbs01.dbf
-rw-r----- 1 oracle oinstall 985669632 Jun 10 15:43 system01.dbf
-rw-r----- 1 oracle oinstall 1017126912 Jun 10 15:43 sysaux01.dbf
-rw-r----- 1 oracle oinstall 52429312 Jun 10 15:43 redo01.log
-rw-r----- 1 oracle oinstall 52429312 Jun 10 15:43 redo01a.log
-rw-r----- 1 oracle oinstall 107487232 Jun 10 15:43 example01.dbf
-rw-r----- 1 oracle oinstall 104865792 Jun 10 15:43 apex01.dbf
-rw-r----- 1 oracle oinstall 10043392 Jun 10 15:43 control01.ctl
$ mv *.dbf /u02/app/oracle/oradata/TESTDB/
Step 5:Verify at the target location(/u02)
-------
[oracle@host1 TESTDB]$ cd /u02/app/oracle/oradata/TESTDB/
[oracle@host1 TESTDB]$ pwd
/u02/app/oracle/oradata/TESTDB
[oracle@host1 TESTDB]$ ls -altr
total 2668428
-rw-r----- 1 oracle oinstall 54534144 Jun 10 15:30 temp01.dbf
drwxr-xr-x 3 oracle oinstall 4096 Jun 10 15:31 ..
-rw-r----- 1 oracle oinstall 213655552 Jun 10 15:43 users01.dbf
-rw-r----- 1 oracle oinstall 246423552 Jun 10 15:43 undotbs01.dbf
-rw-r----- 1 oracle oinstall 985669632 Jun 10 15:43 system01.dbf
-rw-r----- 1 oracle oinstall 1017126912 Jun 10 15:43 sysaux01.dbf
-rw-r----- 1 oracle oinstall 107487232 Jun 10 15:43 example01.dbf
-rw-r----- 1 oracle oinstall 104865792 Jun 10 15:43 apex01.dbf
Step 6: Use the 'alter database' command to rename the files to their new location while in the MOUNT state
--------
$ sqlplus
SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 10 15:54:05 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Enter user-name: /as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area 764157952 bytes
Fixed Size 2217184 bytes
Variable Size 335547168 bytes
Database Buffers 423624704 bytes
Redo Buffers 2768896 bytes
Database mounted.
-- For single file The below command can be used.
SQL>alter database rename file '/u01/app/oracle/oradata/TESTDB/users01.dbf' to '/u02/app/oracle/oradata/TESTDB/users01.dbf';
Rename_datafile_TESTDB.sql:
---------------------------
-- Renaming Database files for TESTDB Database
alter database rename file '/u01/app/oracle/oradata/TESTDB/system01.dbf' to '/u02/app/oracle/oradata/TESTDB/system01.dbf';
alter database rename file '/u01/app/oracle/oradata/TESTDB/sysaux01.dbf' to '/u02/app/oracle/oradata/TESTDB/sysaux01.dbf';
alter database rename file '/u01/app/oracle/oradata/TESTDB/undotbs01.dbf' to '/u02/app/oracle/oradata/TESTDB/undotbs01.dbf';
alter database rename file '/u01/app/oracle/oradata/TESTDB/users01.dbf' to '/u02/app/oracle/oradata/TESTDB/users01.dbf';
alter database rename file '/u01/app/oracle/oradata/TESTDB/example01.dbf' to '/u02/app/oracle/oradata/TESTDB/example01.dbf';
alter database rename file '/u01/app/oracle/oradata/TESTDB/apex01.dbf' to '/u02/app/oracle/oradata/TESTDB/apex01.dbf';
alter database rename file '/u01/app/oracle/oradata/TESTDB/temp01.dbf' to '/u02/app/oracle/oradata/TESTDB/temp01.dbf';
[oracle@host1 SCRIPTS]$ chmod -R 775 /u02/SCRIPTS/
Make sure Rename_datafile_TESTDB.sql has the required permissions.
[oracle@host1 SCRIPTS]$ sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.1.0 Production on Fri Jun 10 16:08:21 2011
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> select name from v$database;
NAME
---------
TESTDB
SQL> @Rename_datafile_TESTDB.sql
Database altered.
Database altered.
Database altered.
Database altered.
Database altered.
Database altered.
Database altered.
Step 7:Open the Database and verify the new location for your Database files
-------
SQL> alter database open;
Database altered.
SQL> select tablespace_name||' '||file_name||' '||bytes/1024/1024 as MB from dba_data_files;
MB
--------------------------------------------------------------------------------
USERS /u02/app/oracle/oradata/TESTDB/users01.dbf 203.75
UNDOTBS1 /u02/app/oracle/oradata/TESTDB/undotbs01.dbf 235
SYSAUX /u02/app/oracle/oradata/TESTDB/sysaux01.dbf 970
SYSTEM /u02/app/oracle/oradata/TESTDB/system01.dbf 940
EXAMPLE /u02/app/oracle/oradata/TESTDB/example01.dbf 102.5
APEX /u02/app/oracle/oradata/TESTDB/apex01.dbf 100
6 rows selected.
SQL> select tablespace_name||' '||file_name||' '||bytes/1024/1024 as MB from dba_temp_files;
MB
--------------------------------------------------------------------------------
TEMP /u02/app/oracle/oradata/TESTDB/temp01.dbf 52
[oracle@host1 TESTDB]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
18G 12G 5.2G 69% /
/dev/sda1 99M 20M 75M 21% /boot
tmpfs 940M 0 940M 0% /dev/shm
/dev/sdb1 20G 5.6G 14G 30% /u02
Note: Moving or renaming a datafile while the database is OPEN is also possible, but we have to take the tablespace offline. When a full shutdown is not acceptable, we can go for the steps below.
The steps involved are:
-------------------------
1)Take the tablespace OFFLINE.
2)Use operating system commands to move or rename the files.
3)Use the ALTER TABLESPACE command to rename the file within the database.
4)Bring the tablespace back ONLINE.
sqlplus "/ as sysdba"
SQL> alter tablespace TEST_TB offline;
SQL> !mv /u05/app/oradata/TESTDB/user_data_01.dbf /u06/app/oradata/TESTDB/user_data_01.dbf
SQL> alter tablespace TEST_TB
rename datafile '/u05/app/oradata/TESTDB/user_data_01.dbf' to '/u06/app/oradata/TESTDB/user_data_01.dbf';
Bring the tablespace back online.
SQL> alter tablespace TEST_TB online;
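After bringing the tablespace back online, the new datafile location and status can be confirmed with a quick check:
SQL> select file_name, status from dba_data_files where tablespace_name = 'TEST_TB';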
SQL> exit
Hope it helps.
Best regards,
Rafi.
Wednesday, June 8, 2011
Scheduling expdp and impdp backup jobs using cronjob
Hi,
We can schedule expdp and impdp jobs using cron to take logical backups of our databases on a daily basis, or as per the requirement. In the article below I'm taking two databases, TESTDB_1 and TESTDB_2, and the shell scripts below schedule an expdp job using cron. We perform the tasks below for 'expdp'; however, we can do the same for 'impdp' by changing the scripts accordingly:
1) Setting the Oracle environment,
2) Creating an expdp script for taking a Data Pump export backup (logical database backup),
3) Moving the log files to a separate directory on the server,
4) Giving the required permissions to the dump file backups,
5) Compressing the dump files with the gzip utility,
6) Deleting dump files that are more than 6 days old, and
7) Using cron to schedule this task daily at 9 pm and 11 pm.
Prerequisite steps:
-----------------------
1) TESTDB_1: Before executing and scheduling this script, perform the below task on the database side. Create a directory for the Data Pump export backup and grant the required permissions to the user that will be used for the export. In my case I used the 'system' user.
Creating Directory:
--------------------
CREATE DIRECTORY expdp_TESTDB_1_dir as '/u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup';
GRANT READ,WRITE ON DIRECTORY expdp_TESTDB_1_dir TO SYSTEM;
Note: If you are creating or planning to use a separate user for the Data Pump export backup, then grant it the required privileges.
Eg: Grant connect,exp_full_database,imp_full_database to Test_user; or Grant DBA to Test_user;
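To confirm the directory object exists and points to the expected path, it can be checked in the dictionary (a quick verification; directory names are stored in upper case):
SQL> SELECT directory_name, directory_path FROM dba_directories WHERE directory_name = 'EXPDP_TESTDB_1_DIR';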
Shell script for TESTDB_1 Database is below:
--------------------------------------------
backup TESTDB_1.sh:
--------------------------
#!/bin/bash
#Script to Perform Datapump Export backup Every Day
################################################################
#Change History
#================
#DATE AUTHOR http://rafioracledba.blogspot.com LAST-CHANGE
#--------- ----------------------- --------------------------------- -----------
#26-MAY-2011 RAFI ALVI SCRIPT FOR FULL EXPORT TESTDB_1 01-JUN-2011
#
#
#
################################################################
export ORACLE_BASE=/u01/oracle11g
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=TESTDB_1
export PATH=$PATH:$ORACLE_HOME/bin
expdp system/TESTDB_1dba@TESTDB_1 dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=expdp_TESTDB_1_dir logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log FULL=Y EXCLUDE=STATISTICS CONTENT=ALL JOB_NAME=FULL_EXPDP_TESTDB_1
############################################################################
#Moving log files
mv /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup/expdp*.log /u05/DB_BACKUP_TESTDB_1/logs/expdp_fulldb_backup_logs
###############################################################################3
#Granting permissions for Dump files
chmod -R 775 /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup/*
#################################################################################
#Compressing files
gzip /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup/*.*
#################################################################################33
#Removing 7 days old dump files
find /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup -mtime +6 -exec rm {} \;
######################################################################################
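Since the find/rm step deletes files unconditionally, it is worth previewing what would be removed before relying on it (a one-off manual check, not part of the script):
$ find /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup -mtime +6 -print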
cronjob script:
----------------------
##############################################################
#Script for full export backup TESTDB_1 DATABASE
###############################################################
00 21 * * * /u05/DB_BACKUP_TESTDB_1/Backup_Scipts/expdp_FULL_BACKUP_TESTDB_1.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/UGal_cron.log 2>&1
Similarly, you can do this for any database. Below, I'm doing it for the second database, 'TESTDB_2'.
2)TESTDB_2:
------------
SQL>CREATE DIRECTORY expdp_TESTDB_2_dir as '/u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily';
SQL>GRANT READ,WRITE ON DIRECTORY expdp_TESTDB_2_dir TO SYSTEM;
backup TESTDB_2.sh:
--------------------------------
#!/bin/bash
#Script to Perform Datapump Export Every Day
################################################################
#Change History
#================
#DATE AUTHOR http://rafioracledba.blogspot.com LAST-CHANGE
#--------- ----------------------- --------------------------------- -------------
#26-MAY-2011 RAFI ALVI SCRIPT FOR FULL EXPORT TESTDB_2 01-JUN-2011
#
#
#
################################################################
export ORACLE_BASE=/u01/oracle11g
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=TESTDB_2
export PATH=$PATH:$ORACLE_HOME/bin
expdp system/TESTDB_2dba@TESTDB_2 dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=expdp_TESTDB_2_dir logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log FULL=Y EXCLUDE=STATISTICS CONTENT=ALL JOB_NAME=FULL_EXPDP_TESTDB_2
###################################################################
#Moving log files
mv /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily/expdp*.log /u05/DB_BACKUP_TESTDB_2/logs/expdp_fulldb_backup_logs
#####################################################################
#Granting Permission to dump files
chmod -R 775 /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily/*
########################################################################
#Compressing the files
gzip /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily/*.*
###########################################################################
#Removing 7 days old dump files
find /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily -mtime +6 -exec rm {} \;
cronjob script:
--------------------------
##############################################################
#Script for full export backup TESTDB_2 DATABASE
###############################################################
00 23 * * * /u05/DB_BACKUP_TESTDB_2/SCRIPTS/expdp_FULL_DB_TESTDB_2.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/wgal_cron.log 2>&1
Scheduling task using Cronjob:
----------------------------------------
We have to schedule the two Data Pump export jobs using cron. To do this, perform the steps described below:
1) Create a separate directory for all your DBA/cron tasks and grant the permissions needed so that the scripts can execute.
$mkdir /u04/DBA_TASKS
$chmod -R 775 /u04/DBA_TASKS
2)All the shell scripts and Cronjob script you can keep in /u04/DBA_TASKS.
3)We can create a separate file or edit the existing crontab file.
In my example I'm creating a separate file 'crontab.oracle' where I will put the below cronjob scripts:
vi crontab.oracle
##############################################################
#Script for full export backup TESTDB_1 DATABASE
###############################################################
00 21 * * * /u05/DB_BACKUP_TESTDB_1/Backup_Scipts/expdp_FULL_BACKUP_TESTDB_1.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/UGal_cron.log 2>&1
##############################################################
#Script for full export backup TESTDB_2 DATABASE
###############################################################
00 23 * * * /u05/DB_BACKUP_TESTDB_2/SCRIPTS/expdp_FULL_DB_TESTDB_2.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/wgal_cron.log 2>&1
:wq
Save and quit
(or) crontab -e
##############################################################
#Script for full export backup TESTDB_1 DATABASE
###############################################################
00 21 * * * /u05/DB_BACKUP_TESTDB_1/Backup_Scipts/expdp_FULL_BACKUP_TESTDB_1.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/UGal_cron.log 2>&1
##############################################################
#Script for full export backup TESTDB_2 DATABASE
###############################################################
00 23 * * * /u05/DB_BACKUP_TESTDB_2/SCRIPTS/expdp_FULL_DB_TESTDB_2.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/wgal_cron.log 2>&1
4)Schedule the job for execution using cronjob:
--------------------------------------------------
$crontab crontab.oracle
=> This command will schedule the jobs for execution; in the above case at 9 pm and 11 pm daily.
For more details regarding Cronjob Check my link Label 'UNIX FOR DBA'.
5)For verifying the cronjobs scheduled for execution:
$crontab -l
This command will list all jobs scheduled for execution.
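As mentioned above, the same approach works for impdp; a minimal sketch of the corresponding import command (the dump file name and directory here are illustrative, and the dump would need to be gunzipped first if it was compressed) could look like:
impdp system/TESTDB_1dba@TESTDB_1 dumpfile=expdp-01062011_210000.dmp directory=expdp_TESTDB_1_dir logfile=impdp-`date '+%d%m%Y_%H%M%S'`.log FULL=Y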
Hope it helps.
Best regards,
Rafi.
We can schedule expdp and impdp using cronjob to have logical backup of our Databases on daily basis or depending on the requirement..In the below article
I'm taking two Databases TESTDB_1 and TESTDB_2.In the below shell scripts,I'm doing tasks for scheduling expdp job using
cronjob. In the below article we are doing below tasks for 'expdp',However similary we can do for 'impdp' by changing impdp scripts:
1)I'm setting the Oracle Environment,
2)Creating expdp script for taking export Datapump backup(Logical Database backup).
3)Moving log files to a separate Directory on the server.
4)Giving all the required permission for the dumpfile backup.
5)Zipping the dumpfiles using gunzip utility.
6)Deleting the dumpfiles which are more than 6 days(7days) old. &
7)Using Cronjob to execute and schedule this task Daily at 9pm & 11pm.
Pre-requistics steps:
-----------------------
1)TESTDB_1: Before executing and scheduling this script perform the below task in the Database side.Create a directory
for taking the datapump export backup and grant the required permissions to the user which you are going to use for
doing this export backup.In my case I took 'system' user.
Creating Directory:
--------------------
CREATE DIRECTORY expdp_TESTDB_2_dir as '/u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily';
GRANT READ,WRITE ON DIRECTORY expdp_TESTDB_2_dir TO SYSTEM;
Note: If you a creating a separate user or plannning to use separate user for taking the datapump export backup than,
Grant the required privileges.
Eg: Grant connect,exp_full_database,imp_full_database to Test_user; or Grant DBA to Test_user;
Shell script for TESTDB_1 Database is below:
--------------------------------------------
backup TESTDB_1.sh:
--------------------------
#!/bin/bash
#Script to Perform Datapump Export backup Every Day
################################################################
#Change History
#================
#DATE AUTHOR http://rafioracledba.blogspot.com LAST-CHANGE
#--------- ----------------------- --------------------------------- -----------
#26-MAY-2011 RAFI ALVI SCRIPT FOR FULL EXPORT TESTDB_1 01-JUN-2011
#
#
#
################################################################
export ORACLE_BASE=/u01/oracle11g
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=TESTDB_1
export PATH=$PATH:$ORACLE_HOME/bin
expdp system/TESTDB_1dba@TESTDB_1 dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=expdp_TESTDB_1_dir logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log FULL=Y EXCLUDE=STATISTICS CONTENT=ALL JOB_NAME=FULL_EXPDP_TESTDB_1
############################################################################
#Moving log files
mv /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup/expdp*.log /u05/DB_BACKUP_TESTDB_1/logs/expdp_fulldb_backup_logs
###############################################################################3
#Granting permissions for Dump files
chmod -R 775 /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup/*
#################################################################################
#Compressing files
gzip /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup/*.*
#################################################################################33
#Removing 7 days old dump files
find /u05/DB_BACKUP_TESTDB_1/expdp_fulldb_backup -mtime +6 -exec rm {} \;
######################################################################################
cronjob script:
----------------------
##############################################################
#Script for full export backup TESTDB_1 DATABASE
###############################################################
00 21 * * * /u05/DB_BACKUP_TESTDB_1/Backup_Scipts/expdp_FULL_BACKUP_TESTDB_1.sh 2>&1 >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/UGal_cron.log
Similary you can do for any Database.In the below case I'm doing it for the 2nd Database 'TESTDB_2'.
2)TESTDB_2:
------------
SQL>CREATE DIRECTORY expdp_TESTDB_2_dir as '/u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily';
SQL>GRANT READ,WRITE ON DIRECTORY expdp_TESTDB_2_dir TO SYSTEM;
backup TESTDB_2.sh:
--------------------------------
#!/bin/bash
#Script to Perform Datapump Export Every Day
################################################################
#Change History
#================
#DATE AUTHOR http://rafioracledba.blogspot.com LAST-CHANGE
#--------- ----------------------- --------------------------------- -------------
#26-MAY-2011 RAFI ALVI SCRIPT FOR FULL EXPORT TESTDB_2 01-JUN-2011
#
#
#
################################################################
export ORACLE_BASE=/u01/oracle11g
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=TESTDB_2
export PATH=$PATH:$ORACLE_HOME/bin
expdp system/TESTDB_2dba@TESTDB_2 dumpfile=expdp-`date '+%d%m%Y_%H%M%S'`.dmp directory=expdp_TESTDB_2_dir logfile=expdp-`date '+%d%m%Y_%H%M%S'`.log FULL=Y EXCLUDE=STATISTICS CONTENT=ALL JOB_NAME=FULL_EXPDP_TESTDB_2
###################################################################
#Moving log files
mv /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily/expdp*.log /u05/DB_BACKUP_TESTDB_2/logs/expdp_fulldb_backup_logs
#####################################################################
#Granting Permission to dump files
chmod -R 775 /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily/*
########################################################################
#Compressing the files
gzip /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily/*.*
###########################################################################
#Removing 7 days old dump files
find /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily -mtime +6 -exec rm {} \;
cronjob script:
--------------------------
##############################################################
#Script for full export backup TESTDB_2 DATABASE
###############################################################
00 23 * * * /u05/DB_BACKUP_TESTDB_2/SCRIPTS/expdp_FULL_DB_TESTDB_2.sh 2>&1 >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/wgal_cron.log
Scheduling task using Cronjob:
----------------------------------------
We have to schedule the 2 export Datapump job using cronjob for doing this task perform the steps described below:
1)Create one separate directory for all your DBA/Cronjob tasks and grant all the permissions so that scripts can execute.
$mkdir DBA_TASKS
$chmod -R 775 /u04/DBA_TASKS
2)All the shell scripts and Cronjob script you can keep in /u04/DBA_TASKS.
3)We can create a separate file or edit the existing crontab file.
In my example I'm creating a separate file 'crontab.oracle' where I will put the below cronjob scripts:
vi crontab.oracle
##############################################################
#Script for full export backup TESTDB_1 DATABASE
###############################################################
00 21 * * * /u05/DB_BACKUP_TESTDB_1/Backup_Scipts/expdp_FULL_BACKUP_TESTDB_1.sh 2>&1 >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/UGal_cron.log
##############################################################
#Script for full export backup TESTDB_2 DATABASE
###############################################################
00 23 * * * /u05/DB_BACKUP_TESTDB_2/SCRIPTS/expdp_FULL_DB_TESTDB_2.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/wgal_cron.log 2>&1
:wq
Save and quit
(or) crontab -e
##############################################################
#Script for full export backup TESTDB_1 DATABASE
###############################################################
00 21 * * * /u05/DB_BACKUP_TESTDB_1/Backup_Scipts/expdp_FULL_BACKUP_TESTDB_1.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/UGal_cron.log 2>&1
##############################################################
#Script for full export backup TESTDB_2 DATABASE
###############################################################
00 23 * * * /u05/DB_BACKUP_TESTDB_2/SCRIPTS/expdp_FULL_DB_TESTDB_2.sh >/u05/DB_BACKUP_TESTDB_1/logs/CRONJOBS_LOGS/wgal_cron.log 2>&1
4)Schedule the job for execution using cronjob:
--------------------------------------------------
$crontab crontab.oracle
=>This command schedules the jobs for execution; in the above case at 9 PM and 11 PM daily.
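Note that 'crontab crontab.oracle' replaces the user's existing crontab with the contents of that file, so if other jobs are already scheduled it is worth saving them first, for example:
$crontab -l > /u04/DBA_TASKS/crontab_backup.txt
$crontab crontab.oracle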
For more details regarding cron, check the label 'UNIX FOR DBA' on my blog.
5)For verifying the cronjobs scheduled for execution:
$crontab -l
This command lists all jobs scheduled for execution.
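After the scheduled time has passed, you can also confirm that the export actually ran by looking at the newest dump files and the tail of the Data Pump log (paths as used in the TESTDB_2 example above):
$ls -ltr /u05/DB_BACKUP_TESTDB_2/expdp_fulldb_backup_daily | tail -3
$tail -20 /u05/DB_BACKUP_TESTDB_2/logs/expdp_fulldb_backup_logs/*.log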
Hope it helps.
Best regards,
Rafi.
Monday, June 6, 2011
How to recover Database when accidentally deleted datafile without backup
How to recover Database when accidentally deleted datafile without backup:
------------------------------------------------------------------------------
If you have accidentally deleted a datafile from physical storage and have no backup, you can recover the database by dropping that particular datafile and its tablespace as follows. However, if the datafile is crucial, you can try the approach in the link below:
http://gmdba.blogspot.com/2009/12/oracle-recover-deleted-datafile-enen.html
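Before taking anything offline, it can help to confirm which datafile the database is complaining about; one quick check (not in the original post) is:
SQL> SELECT d.file#, d.name, r.error
     FROM v$datafile d, v$recover_file r
     WHERE d.file# = r.file#;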
Step 1: Take the tablespace offline so that you can shut down the database.
-------
SQL>alter tablespace test11 offline immediate;
OR
$ rman target /
Recovery Manager: Release 11.2.0.1.0 - Production on Tue Jun 7 10:26:54 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
connected to target database: ORCL (DBID=1248484971)
RMAN> run
{
sql 'alter tablespace test11 offline immediate';
}
using target database control file instead of recovery catalog
sql statement: alter tablespace test11 offline immediate
SQL> shut immediate
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Step 2: In the mount state, drop the datafile and then drop the tablespace including contents.
-------
SQL> startup mount
ORACLE instance started.
Total System Global Area 764157952 bytes
Fixed Size 2217184 bytes
Variable Size 335547168 bytes
Database Buffers 423624704 bytes
Redo Buffers 2768896 bytes
Database mounted.
SQL> ALTER DATABASE DATAFILE '/u02/SCRIPTS/test11_data_01.dbf' OFFLINE DROP;
Database altered.
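A quick note on the command above: OFFLINE DROP is the form used when the database runs in NOARCHIVELOG mode; if the database is in ARCHIVELOG mode the file can simply be taken offline instead, for example:
SQL> ALTER DATABASE DATAFILE '/u02/SCRIPTS/test11_data_01.dbf' OFFLINE;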
SQL> select file_name||' '||tablespace_name from dba_data_files;
FILE_NAME||''||TABLESPACE_NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/orcl/users01.dbf USERS
/u01/app/oracle/oradata/orcl/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/orcl/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/orcl/system01.dbf SYSTEM
/u01/app/oracle/oradata/orcl/example01.dbf EXAMPLE
/u01/app/oracle/oradata/orcl/apex01.dbf APEX
/u02/SCRIPTS/test11_data_01.dbf TEST11
7 rows selected.
SQL> DROP TABLESPACE TEST11 INCLUDING CONTENTS;
Tablespace dropped.
SQL> select file_name||' '||tablespace_name from dba_data_files;
FILE_NAME||''||TABLESPACE_NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/orcl/users01.dbf USERS
/u01/app/oracle/oradata/orcl/undotbs01.dbf UNDOTBS1
/u01/app/oracle/oradata/orcl/sysaux01.dbf SYSAUX
/u01/app/oracle/oradata/orcl/system01.dbf SYSTEM
/u01/app/oracle/oradata/orcl/example01.dbf EXAMPLE
/u01/app/oracle/oradata/orcl/apex01.dbf APEX
Note 1: If a datafile is dropped accidentally or taken offline, RMAN cannot take a full database backup, because RMAN will look for that file when taking the physical backup.
Note 2: I tried this experiment on my test (competency) server. It is always advisable to have a backup for complete recovery and to avoid data loss; never try such an experiment on development or production systems.
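Regarding Note 1: if a full RMAN backup must still run while a datafile is missing or inaccessible, one commonly used workaround (not covered in this post) is the SKIP INACCESSIBLE clause:
RMAN> BACKUP DATABASE SKIP INACCESSIBLE;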
Hope it helps.
Best regards,
Rafi.
ORA-00845: MEMORY_TARGET not supported on this system
Hi,
As we know, MEMORY_TARGET is a new dynamic parameter added in Oracle 11g for automatic SGA and PGA management. Working on a Linux OS, you might face the error ORA-00845.
Error:
ORA-00845: MEMORY_TARGET not supported on this system
ORA-00845 can arise for the following two reasons on a Linux system:
1)The shared memory mapped to the /dev/shm directory is smaller than MEMORY_TARGET or MEMORY_MAX_TARGET.
or
2)Shared memory is not mapped to the /dev/shm directory.
In my case the first condition applied (MEMORY_TARGET=1G), so I decided to increase /dev/shm to 2G, because of the following factor:
•AMM (Automatic Memory Management) in 11g manages both SGA and PGA together via the MMAN process, so we need extra memory and this size must be generous. If you have 2 databases running on your machine you can keep MEMORY_TARGET=2G; for 3 or 4 databases, MEMORY_TARGET=2.5G should be fine.
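Before changing anything, it is worth checking how big /dev/shm currently is and whether it is mounted at all, for example:
[root@node1 /]# df -h /dev/shm
[root@node1 /]# grep shm /etc/fstab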
Resolving ORA-00845:
As the root user, remount /dev/shm with a larger size:
mount -t tmpfs shmfs -o size=2g /dev/shm
To make the setting persistent, so that it still applies after the machine is restarted, add an entry in /etc/fstab similar to the following:
vi /etc/fstab
shmfs /dev/shm tmpfs size=2g 0 0
:wq
Save and quit
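If a reboot is not convenient, the new size can also be applied immediately with a remount and then verified (my addition, not part of the original steps):
[root@node1 /]# mount -o remount,size=2g /dev/shm
[root@node1 /]# df -h /dev/shm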
Note 1: In the above example I've created the shmfs (shared memory file system) with a size of 2G, as that is the size of the buffer cache I am planning to use. The other elements of the SGA are placed in regular memory, not in this shared memory file system, so they should not be included when deciding on the size of the shmfs. It is recommended to size it slightly bigger than the actual size needed; here I've used a 2G shmfs for a 2G buffer cache.
Note 2: The memory_target parameter can grow up to memory_max_target, but not beyond it.
SQL> show parameter %MEMORY%
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
hi_shared_memory_address integer 0
memory_max_target big integer 1G
memory_target big integer 1G
[root@node1 /]# reboot
Broadcast message from root (pts/3) (Thu May 19 19:28:02 2011):
The system is going down for reboot NOW
Check after reboot:
-------------------
[root@node1 ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda6 5952252 4117872 1527140 73% /
/dev/sda7 5080796 4292008 526532 90% /u01
/dev/sda3 6605824 1807356 4457492 29% /u03
/dev/sda2 35550448 26956160 6759264 80% /u02
/dev/sda1 497829 16695 455432 4% /boot
tmpfs 2097152 0 2097152 0% /dev/shm
/dev/sdb1 39381744 35555696 1825556 96% /u04
/dev/sdc1 51605436 1233976 47750056 3% /u05
shmfs 2097152 0 2097152 0% /dev/shm
Check after starting database:
-------------------------------
[oracle@node1 ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 5.7G 4.0G 1.5G 73% /
/dev/sda7 4.9G 4.1G 515M 90% /u01
/dev/sda3 6.3G 1.8G 4.3G 29% /u03
/dev/sda2 34G 26G 6.5G 80% /u02
/dev/sda1 487M 17M 445M 4% /boot
tmpfs 2.0G 1.4G 714M 66% /dev/shm
/dev/sdb1 38G 34G 1.8G 96% /u04
/dev/sdc1 50G 1.2G 46G 3% /u05
shmfs 2.0G 1.4G 714M 66% /dev/shm
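Finally, if you also want the instance itself to use more memory rather than just having headroom in /dev/shm, remember that memory_max_target is not dynamic; a minimal sketch, assuming an spfile is in use:
SQL> ALTER SYSTEM SET memory_max_target=2G SCOPE=SPFILE;
SQL> ALTER SYSTEM SET memory_target=2G SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP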
Hope it helps.
Best regards,
Rafi.