Wednesday, March 24, 2010

Difference between DBMS and RDBMS (IMPORTANT INTERVIEW QUESTION)

Hi,
This is an important interview question that has been asked to me in many interviews:

What is the difference between DBMS and RDBMS?

1) A Database Management System (DBMS) is software for managing data so that it can be stored and retrieved efficiently.
Ex: Sybase, FoxPro
A database system that stores data in relations (tables) and retrieves information through those relations is known as an RDBMS.
Ex: SQL Server, Oracle, MySQL

2) In DBMS, the client-server concept is not present.
In RDBMS, client-server architecture is present, i.e., the client sends a request to the server and the server responds to that particular request.

3) DBMS does not impose constraints or security with regard to data manipulation; it is the user's or programmer's responsibility to ensure the ACID properties of the database.
RDBMS defines integrity constraints for the purpose of upholding the ACID properties.
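
For example, an RDBMS lets you declare such constraints once and then enforces them for you (a minimal sketch with hypothetical tables):

CREATE TABLE dept (
  deptno NUMBER PRIMARY KEY,
  dname  VARCHAR2(30) NOT NULL);

CREATE TABLE emp (
  empno  NUMBER PRIMARY KEY,
  deptno NUMBER REFERENCES dept (deptno));  -- foreign key: no orphan rows allowed

Any insert into emp with a deptno that does not exist in dept is rejected by the database itself, with no application code involved.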

4) In DBMS, the normalization process is not present.
In RDBMS, normalization is present to keep the database tables consistent.

5) DBMS treats data as files internally.
RDBMS treats data as tables internally.

6) DBMS supports only about 3 of E. F. Codd's 12 rules.
RDBMS supports a minimum of 6 of E. F. Codd's rules.

7) DBMS does not support distributed databases.
RDBMS supports distributed databases.

8) DBMS is typically seen in small organizations with small amounts of data.
RDBMS is designed to handle large amounts of data.

9) DBMS stores only flat data.
RDBMS maintains relations between entities.

10) DBMS supports a single user.
RDBMS supports multiple users.

Best regards,

Rafi.

Monday, March 22, 2010

ORACLE DBA GENERAL INTERVIEW QUESTIONS

Hi,
Below are some general interview questions which can be useful:

Q 1) What are a tablespace and a quota? If I create a user, how much quota is allocated to it? How do we check whether a user has been granted unlimited quota on a particular tablespace? (TCS INTERVIEW QUESTION)

Ans: TABLESPACE
A tablespace is a logical storage unit within an Oracle database. It is logical because a tablespace is not visible in the file system of the machine on which the database resides. A tablespace, in turn, consists of at least one datafile, which is physically located in the file system of the server. A datafile belongs to exactly one tablespace. Each table, index and so on that is stored in an Oracle database belongs to a tablespace. The tablespace builds the bridge between the Oracle database and the file system in which the table's or index's data is stored. There are three types of tablespaces in Oracle:
•Permanent tablespaces
•Undo tablespaces
•Temporary tablespaces

Quotas on tablespaces
Users can have (space) quotas on tablespaces. This is a means to limit how much space a user uses on a tablespace. This quota can be set using ALTER USER ... QUOTA ... Use DBA_TS_QUOTAS/USER_TS_QUOTAS to find out a user's quota on a tablespace as well as how much of it is already occupied.

If we just create a user and query dba_ts_quotas it will result in no rows as given below:
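
A minimal sketch (hypothetical user name and tablespace; output abbreviated):

SQL> CREATE USER test_usr IDENTIFIED BY test_usr;
SQL> SELECT tablespace_name, bytes, max_bytes
     FROM dba_ts_quotas WHERE username = 'TEST_USR';

no rows selected

SQL> ALTER USER test_usr QUOTA UNLIMITED ON users;
SQL> SELECT tablespace_name, bytes, max_bytes
     FROM dba_ts_quotas WHERE username = 'TEST_USR';

TABLESPACE_NAME      BYTES  MAX_BYTES
--------------- ---------- ----------
USERS                    0         -1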

http://forums.oracle.com/forums/thread.jspa?threadID=1046926&tstart=0
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:3396797633001

A value of -1 in the MAX_BYTES column of DBA_TS_QUOTAS tells you that unlimited quota has been allocated on that particular tablespace.
Refer to the links above for more details. This is something interesting.

Q 2) What are batch files? (TCS interview question)
Ans:
In DOS, OS/2, and Microsoft Windows, a batch file is a text file containing a series of commands intended to be executed by the command interpreter. When a batch file is run, the shell program (usually COMMAND.COM or cmd.exe) reads the file and executes its commands, normally line-by-line. Batch files are useful for running a sequence of executables automatically and are often used by system administrators to automate tedious processes. Unix-like operating systems (such as Linux) have a similar type of file called a shell script.
DOS batch files have the filename extension .bat. Batch files for other environments may have different extensions, e.g. .cmd or .bat in the Microsoft Windows NT family of operating systems and OS/2, or .btm in 4DOS and 4NT related shells. The Windows 9x family of operating systems only recognizes the .bat extension.
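
The explanation below refers to this classic 'Hello World' batch file (reconstructed here to match the description that follows):

@ECHO OFF
ECHO Hello World!
ECHO.
PAUSE
ECHO ON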

Explanation
Batch files are executed line by line, in order, until the end is reached or something stops execution (such as the key shortcut for terminating batch processing, 'Ctrl' + 'C'). This batch file first turns off the 'echo' with ECHO OFF. This stops the display of input from the batch file and limits the display to output from commands only. Since this command is executed before input is silenced, the @ symbol is used at the start of the command, which stops that command line from showing its input.

Then the ECHO command is used again in the form ECHO Hello World!, which outputs the line Hello World!. Then the command ECHO. is used, which adds the empty line below Hello World!, using the . so that the command doesn't output the input display's state (ECHO is on. or ECHO is off.) and just outputs an empty line. The . can also be used to prevent the ECHO command from confusing an attempt to output a line beginning with 'ON' or 'OFF' with a change to the state of showing input.

Then the PAUSE command is used, which pauses execution until the user presses a key. The Press any key to continue . . . prompt is output by the command. Lastly, after the user presses a key, the command ECHO ON is used, which turns the prompt and input display on again, so that if the file is executed from the Command Prompt rather than Windows Explorer, the user can see the prompt again when execution ends. After the last line is reached, the batch file ends execution automatically. If it was started from the command prompt (by entering the name of the file when in its directory) then the window remains when finished, but when started from Windows Explorer the window automatically closes upon the end of execution.

Q 3) How can we upgrade a database from one version to another with zero or low downtime in a 24/7 or RAC environment?
Ans: Rolling upgrade is a zero-downtime method for upgrading the Oracle software.

The term rolling upgrade refers to upgrading different databases or different instances of the same database (in a Real Application Clusters environment) one at a time, without stopping the database.

The advantage of a RAC rolling upgrade is that it enables at least some instances of the RAC installation to be available during the scheduled outage required for patch upgrades. Only the RAC instance that is currently being patched needs to be brought down. The other instances can continue to remain available. This means that the impact on the application downtime required for such scheduled outages is further minimized. Oracle's opatch utility enables the user to apply the patch successively to the different instances of the RAC installation.
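
A minimal sketch of the per-node flow (hypothetical database name ORCL and instance ORCL1; assumes the patch is certified as rolling-capable):

$ srvctl stop instance -d ORCL -i ORCL1    # take down only this node's instance
$ cd /path/to/patch_directory              # hypothetical staging location
$ opatch apply -local                      # patch only this node's Oracle home
$ srvctl start instance -d ORCL -i ORCL1   # bring it back, then repeat on the next node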

Rolling upgrade is available only for patches that have been certified by Oracle to be eligible for rolling upgrades. Typically, patches that can be installed in a rolling upgrade include:
Patches that do not affect the contents of the database such as the data dictionary
Patches not related to RAC internode communication
Patches related to client-side tools such as SQL*PLUS, Oracle utilities, development libraries, and Oracle Net
Patches that do not change shared database resources such as datafile headers, control files, and common header definitions of kernel modules
Rolling upgrade of patches is currently available for one-off patches only. It is not available for patch sets.
Rolling patch upgrades are not available for deployments where the Oracle Database software is shared across the different nodes. This is the case where the Oracle home is on Cluster File System (CFS) or on shared volumes provided by file servers or NFS-mounted drives. The feature is only available where each node has its own copy of the Oracle Database software.


Q 4) What is the difference between DELETE and TRUNCATE? (HCL interview question)
Ans:
Deletes perform normal DML. That is, they take locks on rows, they generate redo (lots of it), and they require segments in the UNDO tablespace. Deletes clear records out of blocks carefully. If a mistake is made, a rollback can be issued to restore the records prior to a commit. A delete does not relinquish segment space; thus a table in which all records have been deleted retains all of its original blocks.

Truncates are DDL and, in a sense, cheat. A truncate moves the High Water Mark of the table back to zero. No row-level locks are taken, and no redo or rollback is generated. All extents bar the initial are de-allocated from the table (if you have MINEXTENTS set to anything other than 1, then that number of extents is retained rather than just the initial). By re-positioning the High Water Mark, a truncate prevents reading of any table data, so it has the same effect as a delete, but without all the overhead. Just one slight problem: a truncate is a DDL command, so you can't roll it back if you decide you made a mistake. (It's also true that you can't selectively truncate; no WHERE clause is permitted, unlike with deletes, of course.)

There is, however, one more aspect of a truncate that must be kept in mind: because a truncate is DDL, it issues a COMMIT before it acts and another COMMIT afterward, so no rollback of the transaction is possible.
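
A quick demonstration of the rollback difference (hypothetical table t):

SQL> CREATE TABLE t AS SELECT * FROM all_objects;
SQL> DELETE FROM t;           -- DML: generates undo and redo
SQL> ROLLBACK;                -- the rows come back
SQL> SELECT COUNT(*) FROM t;  -- original row count
SQL> TRUNCATE TABLE t;        -- DDL: implicit commit, HWM reset
SQL> ROLLBACK;                -- has no effect now
SQL> SELECT COUNT(*) FROM t;  -- 0 rows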


Q 5) When your recovery catalog gets corrupted, how do you recover your database? (PATNI INTERVIEW QUESTION)
Ans: I answered as per my knowledge: if you have backups of the datafiles and redolog files, you can create the control file by using the CREATE CONTROLFILE command and open the database with the RESETLOGS option if required.

To be on the safer side, we should back up our recovery catalog as well.

The below link might be useful for backing up the recovery catalog:


http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmcatdb.htm#CIHFEIIH
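
As a simple additional safeguard, the catalog schema itself can be exported (a sketch; the catalog owner rman and connect string catdb are hypothetical):

$ exp rman/rman@catdb owner=rman file=rman_catalog.dmp log=rman_catalog.log

If the catalog is later lost, the schema can be imported back and then RESYNC CATALOG run from RMAN to bring it up to date with the target database's control file.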

Q 6) When your current redolog gets corrupted, how do you recover your database?
(** PATNI INTERVIEW QUESTION **)
Ans:
Recovery From Current Redolog Corruption
1. Startup error message:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\TEST\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) cannot find file
5.1.2 Current Redolog Corruption
There are two cases:
A. The database was shut down cleanly and there are no transactions to recover: recreate the log group with 'alter database clear unarchived logfile group n'.
B. Active transactions exist: the database needs media recovery and the log group must be resynchronized. There are two methods:
a. Incomplete recovery. This keeps the database consistent, but requires archivelog mode and a valid backup.
b. Forced recovery. This may leave the database inconsistent.
5.1.2.1 Recovery with a backup
1. An error is encountered when opening the database:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\TEST\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) cannot find file
2. Check V$LOG and notice that the current redolog is corrupted:
SQL> select group#,sequence#,archived,status from v$log;

GROUP# SEQUENCE# ARCHIVED STATUS
---------- ---------- -------- ----------------
1 1 NO CURRENT
2 2 YES INACTIVE
3 3 YES INACTIVE
3. The current redolog cannot be cleared:
SQL> alter database clear unarchived logfile group 1;
alter database clear unarchived logfile group 1
*
ERROR at line 1:
ORA-01624: log 1 needed for crash recovery of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\TEST\REDO01.LOG'
4. Restore the datafiles from a full backup and recover the database UNTIL SCN or UNTIL CANCEL:
recover database until cancel
Choose AUTO to apply all the valid archivelogs and redologs, then run
recover database until cancel
again and enter CANCEL to complete the incomplete recovery (recover is issued twice).
Example:
SQL> recover database until cancel;
Auto
...
SQL> recover database until cancel;
Cancel
5. Open the database with: alter database open resetlogs
Description:
1. Incomplete recovery may lose the data in the current redolog.
2. Archivelog mode and a full backup are required.
3. Back up the database after the recovery.
4. It is strongly recommended to mirror the redolog members on different disks.
5.1.2.2 Forced recovery without a backup
1. An error is encountered:
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\TEST\REDO01.LOG'
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2)
2. Notice that the current redolog is corrupted:
SQL> select group#,sequence#,archived,status from v$log;

GROUP# SEQUENCE# ARCHIVED STATUS
---------- ---------- -------- ----------------
1 1 NO CURRENT
2 2 YES INACTIVE
3 3 YES INACTIVE
3. The log cannot be cleared:
SQL> alter database clear unarchived logfile group 1;
alter database clear unarchived logfile group 1
*
ERROR at line 1:
ORA-01624: log 1 needed for crash recovery of thread 1
ORA-00312: online log 1 thread 1: 'D:\ORACLE\ORADATA\TEST\REDO01.LOG'
4. Shut down the database:
SQL>shutdown immediate
5. Add the following parameter to init.ora:
_allow_resetlogs_corruption=TRUE
6. Restart the database and recover with 'until cancel':
SQL>recover database until cancel;
Cancel

SQL>alter database open resetlogs;
7. Take a full export of the database after it opens successfully.
8. Shut down the database and remove the init parameter
_allow_resetlogs_corruption
9. Recreate the database.
10. Import the dump and finish the recovery.
11. It is recommended to run
ANALYZE TABLE ... VALIDATE STRUCTURE CASCADE;
Description:
1. Use this method only when there is no other option, because it may leave the database inconsistent.
2. It causes data loss, but less than recovering from a backup, because uncommitted data is recovered too.
3. It is strongly recommended to perform a complete recreation and validation check.
4. The database should be backed up fully afterwards.
5. It is strongly recommended to mirror the redologs on different disks when no data loss is acceptable in a production environment.
Reference http://www.itpub.net/thread-126320-1-1.html


Q 7) What are the parameters responsible for generating the execution plan? What is db_file_multiblock_read_count?
Ans: The important parameters are:
1)optimizer_mode
2)db_file_multiblock_read_count


DB_FILE_MULTIBLOCK_READ_COUNT is one of the parameters you can use to minimize I/O during table scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan depends on such factors as the size of the table, the multiblock read count, and whether parallel execution is being utilized for the operation.

As of Oracle Database 10g Release 2, the default value of this parameter corresponds to the maximum I/O size that can be performed efficiently. This value is platform-dependent and is 1MB for most platforms. Because the parameter is expressed in blocks, it is set to a value equal to the maximum efficient I/O size divided by the standard block size. Note that if the number of sessions is extremely large, the multiblock read count value is decreased to avoid flooding the buffer cache with too many table scan buffers.

Even though the default may be large, the optimizer will not favor large I/Os unless you explicitly set this parameter to a large value.

Online transaction processing (OLTP) and batch environments typically have values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to choose a full table scan over an index if the value of this parameter is high.

The maximum value is the operating system's maximum I/O size expressed as Oracle blocks ((max I/O size)/DB_BLOCK_SIZE). If you set this parameter to a value greater than the maximum, Oracle uses the maximum.

The value of db_file_multiblock_read_count is operating-system dependent and in turn depends on three things:
a) Sequential read time
b) Scattered read speed
c) CPU consumption
Apart from optimizer_mode and db_file_multiblock_read_count, we have parameters like optimizer_index_cost_adj, optimizer_index_caching and parallel_automatic_tuning which help the optimizer generate execution plans for SQL statements.
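
A quick way to inspect and adjust these parameters (illustrative values only; ALTER SYSTEM with SCOPE=BOTH assumes an spfile is in use):

SQL> SHOW PARAMETER db_file_multiblock_read_count
SQL> SHOW PARAMETER optimizer_mode
SQL> ALTER SYSTEM SET db_file_multiblock_read_count = 16 SCOPE = BOTH;
SQL> ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

Always test such changes on a non-production database first, since they influence every execution plan.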

MY RMAN practices

RMAN backups can be written to different locations by using the 'format' keyword. Here is how we can take an RMAN backup to a specific location:

RUN
{
DELETE NOPROMPT OBSOLETE;
CROSSCHECK BACKUP;
CROSSCHECK ARCHIVELOG ALL;
DELETE NOPROMPT EXPIRED BACKUP;
DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;
ALLOCATE CHANNEL C1 DEVICE TYPE DISK MAXPIECESIZE 100M format 'C:\RMANBKP\%U';
ALLOCATE CHANNEL C2 DEVICE TYPE DISK MAXPIECESIZE 100M format 'C:\RMANBKP\%U';
ALLOCATE CHANNEL C3 DEVICE TYPE DISK MAXPIECESIZE 100M format 'C:\RMANBKP\%U';
BACKUP INCREMENTAL LEVEL 0 DATABASE TAG Weekly_Friday_0_Hotpsms_bkp;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
ALLOCATE CHANNEL B1 DEVICE TYPE DISK MAXPIECESIZE 100M format 'C:\RMANBKP\%U';
ALLOCATE CHANNEL B2 DEVICE TYPE DISK MAXPIECESIZE 100M format 'C:\RMANBKP\%U';
BACKUP ARCHIVELOG ALL DELETE ALL INPUT TAG Weekly_Friday_0_Hotpsms_bkp;
BACKUP VALIDATE DATABASE;
BACKUP VALIDATE DATABASE ARCHIVELOG ALL;
}


Best regards,

Rafi.

Friday, March 19, 2010

BUFFER BUSY WAITS AND REMEDIES

UNDERSTANDING BUFFER BUSY WAITS AND REMEDIES


Hi,
Buffer busy waits often appear in the top 5 wait events. Below are the details of dealing with them.

The Buffer Busy Waits Oracle metric occurs when an Oracle session needs to access a block in the buffer cache, but cannot because the buffer copy of the data block is locked. This buffer busy wait condition can happen for either of the following reasons:

The block is being read into the buffer by another session, so the waiting session must wait for the block read to complete.

Another session has the buffer block locked in a mode that is incompatible with the waiting session's request.
Because buffer busy waits are due to contention between particular blocks, there's nothing you can do until you know which blocks are in conflict and why the conflicts are occurring. Tuning therefore involves identifying and eliminating the cause of the block contention.
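
One way to see which class of block is suffering is the standard V$WAITSTAT view (a simple sketch):

SQL> SELECT class, count, time
     FROM v$waitstat
     ORDER BY count DESC;

A high count for 'data block' usually points at hot tables or indexes (consider freelists or ASSM), while 'segment header' or 'undo header' waits point at freelist or undo contention respectively.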

One of the most confounding problems with Oracle is the resolution of buffer busy wait events. Buffer busy waits are common in an I/O-bound Oracle system, as evidenced by any system with read (sequential/scattered) waits in the top-five waits in the Oracle STATSPACK report, like this:
Top 5 Timed Events
% Total
Event Waits Time (s) Ela Time
--------------------------- ------------ ----------- -----------
db file sequential read 2,598 7,146 48.54
db file scattered read 25,519 3,246 22.04
library cache load lock 673 1,363 9.26
CPU time 2,154 934 7.83
log file parallel write 19,157 837 5.68

The main way to reduce buffer busy waits is to reduce the total I/O on the system. This can be done by tuning the SQL to access rows with fewer block reads (i.e., by adding indexes), adjusting the database writer, or adding freelists to tables and indexes. Even if we have a huge db_cache_size, we may still see buffer busy waits, and increasing the buffer size won't help.


The most common remedies for high buffer busy waits include database writer (DBWR) contention tuning, adding freelists (or using ASSM), and adding missing indexes.

Best regards,

Rafi.

RAC QUESTIONS AND ANSWERS

Q 1) How do we find the locations of the OCR and voting disks?

Ans: The location of OCR and Voting disks can be found as follows:

OCR location:

[oracle@dbcl1n1 AUCS1 ~]$ cat /etc/oracle/ocr.loc
O/P:
ocrconfig_loc=/export/ocw/oracle/ocr1
ocrmirrorconfig_loc=/export/ocw/oracle/ocr2
local_only=FALSE

Voting disk location:

[oracle@dbcl1n1 AUCS1 ~]$ cd $ORA_CRS_HOME/bin
[oracle@dbcl1n1 AUCS1 bin]$ ./crsctl query css votedisk
0. 0 /export/ocw/oracle/vote1
1. 0 /export/ocw/oracle/vote2
2. 0 /export/ocw/oracle/vote3


Q 3) What is Cache Fusion?
Ans: Cache Fusion is the shipping of blocks from one instance's cache to another instance's cache in a RAC environment. Due to Cache Fusion and the elimination of disk writes that occur when other instances request blocks for modifications, the performance overhead of managing shared data between instances is greatly diminished. Not only do Cache Fusion's concurrency controls greatly improve performance, but they also reduce the administrative effort for Real Application Clusters environments.

Cache Fusion addresses several types of concurrency as described under the following headings:

Concurrent Reads on Multiple Nodes
Concurrent Reads and Writes on Different Nodes
Concurrent Writes on Different Nodes

Q 4) Which process is responsible for the Cache Fusion mechanism?
Ans: The Global Cache Service (GCS).

A read request from an instance for a block that was modified by another instance and not yet written to disk can be a request for either the current version of the block or for a read-consistent version. In either case, the Global Cache Service Processes (LMSn) transfer the block from the holding instance's cache to the requesting instance's cache over the interconnect.
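
The resulting interconnect block traffic can be observed in the instance statistics (a sketch; the exact statistic names vary slightly between Oracle versions):

SQL> SELECT inst_id, name, value
     FROM gv$sysstat
     WHERE name LIKE 'gc%blocks%';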

Q 5) If we perform a DML activity in a 2-node RAC environment and that node disconnects due to some reason, what will be the result?
Ans: The DML statement will execute successfully, since the database is on the shared storage.

Note: Thanks Junad for noticing the above answer and helping me clarify what actually happens:
We need to write callout functions using Fast Application Notification (FAN) for RAC. The callout can be written in OCI or, for example, Java for DML failover to work.

http://download.oracle.com/docs/cd/B19306_01/rac.102/b14197/hafeats.htm

If you are using TAF (Transparent Application Failover) with RAC, then only session failover (ALTER SESSION settings are not failed over), SELECT failover, pre-connect and basic failover are supported.

http://download.oracle.com/docs/cd/B28359_01/network.111/b28316/advcfg.htm#NETAG338
http://www.scribd.com/doc/19211546/Failover-for-Rac

In Oracle 11g rel2:
http://www.oracle.com/technetwork/database/app-failover-oracle-database-11g-173323.pdf

Q 6) How do we back up the OCR and voting disk?
Ans: VOTING DISK BACKUP:

To make a backup copy of the voting disk, use the Linux dd command. Perform this operation on every voting disk, where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which you want to back up the voting disk contents:

$dd if=voting_disk_name of=backup_file_name
If your voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:

dd if=/dev/sdd1 of=/tmp/voting.dmp

When you use the dd command for making backups of the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking a backup of the voting disk.

OCR BACKUP:

Viewing Available OCR Backups
To find the most recent backup of the OCR, on any node in the cluster, use the following command:

$ocrconfig -showbackup

Backing Up the OCR
Because of the importance of OCR information, Oracle recommends that you use the ocrconfig tool to make copies of the automatically created backup files at least once a day.

In addition to using the automatically created OCR backup files, you should also export the OCR contents to a file before and after making significant configuration changes, such as adding or deleting nodes from your environment, modifying Oracle Clusterware resources, or creating a database. Exporting the OCR contents to a file lets you restore the OCR if your configuration changes cause errors. For example, if you have unresolvable configuration problems, or if you are unable to restart your cluster database after such changes, then you can restore your configuration by importing the saved OCR content from the valid configuration.

To export the contents of the OCR to a file, use the following command, where backup_file_name is the name of the OCR backup file you want to create:

$ocrconfig -export backup_file_name

Q 7) What are the Global Cache Service, Global Enqueue Service and Global Resource Directory?
Ans: Global Cache Service
GCS is the main controlling process that implements Cache Fusion. GCS tracks the location and the status (mode and role) of the data blocks, as well as the access privileges of various instances. GCS is the mechanism which guarantees data integrity by employing global access levels. GCS maintains the block modes for data blocks in the global role. It is also responsible for block transfers between the instances. Upon a request from an instance, GCS organizes the block shipping and appropriate lock mode conversions. The Global Cache Service is implemented by various background processes, such as the Global Cache Service Processes (LMSn) and the Global Enqueue Service Daemon (LMD).

Global Enqueue Service

The Global Enqueue Service (GES) manages or tracks the status of all the Oracle enqueuing mechanisms. This involves all non-Cache Fusion inter-instance operations. GES performs concurrency control on dictionary cache locks, library cache locks and transactions. GES does this for resources that are accessed by more than one instance.

GES/GCS Areas

GES and GCS have memory structures associated with global resources, distributed across all instances in a cluster. This area is located in the variable or shared pool section of the SGA. The list below shows the additions.

Global Resource Directory
The Global Resource Directory (GRD) is the internal database that records and stores the current status of the data blocks. Whenever a block is transferred out of a local cache to another instance's cache, the GRD is updated. The following resource information is available in the GRD:

* Data Block Identifiers (DBA)
* Location of the most current version
* Modes of the data blocks ((N)Null, (S)Shared, (X)Exclusive)
* The roles of the data blocks (local or global) held by each instance
* Buffer caches on multiple nodes in the cluster

Alert Log Scraping with Oracle's ADRCI Utility

By James Koopmann



Oracle’s new ADR with command interface shows promise for database administrators who like to script their own solution for quickly scraping the alert log and automatically looking for errors.

Oracle's alert log is a chronological log of important messages and errors that are generated by an Oracle database. Oftentimes it is the first place DBAs look to see if errors are being generated or to check whether the Oracle database is healthy, or at least not spitting out errors and complaining about something. If you have yet to pry open the hood of Oracle and have never seen an alert log, below is a snippet of some of the errors (ORA-) and messages that Oracle keeps in the alert log. Notice that I've included two different snippets. One is what most DBAs are familiar with, a pure text file, and one that is new to many, an XML alert log. Both are generated, but the former is, as suggested by Oracle, not being maintained and not receiving the same messaging as the XML alert log.

From the text alert log:

Wed Dec 09 14:55:16 2009
Errors in file /opt/app/oracle/diag/rdbms/db11/db11/trace/db11_ora_14798.trc:
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: '/oradata/db11/redo03.log'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3
From the XML alert log:

client_id='' type='UNKNOWN' level='16'
module='sqlplus@ludwig (TNS V1-V3)' pid='14798'>
Errors in file /opt/app/oracle/diag/rdbms/db11/db11/trace/db11_ora_14798.trc:
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: '/oradata/db11/redo03.log'
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory
Additional information: 3



Clearly, they are very similar except for the XML tags. These XML tags cause some interesting problems for those of us that like to scrape the alert log with system utilities. It isn’t as easy as pointing the scripts at a new file out on disk. Thankfully Oracle has provided a utility (ADRCI) that can be used to extract diagnostic information generated from an Oracle database, including the alert log.

When used interactively by issuing the “adrci” command from a system prompt, which is often the way it is used, a user can easily extract alert log detail with very simple commands such as the following:

SHOW ALERT -TAIL; To display the last 10 lines of the alert log.
SHOW ALERT -TAIL 50; To display the last 50 lines of the alert log.
SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-%'"; To search for alert lines that have ORA- in them.
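
The same commands can also be run non-interactively, which is the basis for scripting against ADRCI (a sketch; the homepath value is illustrative):

$ adrci exec="set homepath diag/rdbms/db11/db11; show alert -tail 20"
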
However, like many other DBA tasks, we clearly hate logging into a database server and issuing commands. What is more expected is to have some set of commands that can be executed on a schedule to check and alert us, such as by email. Additionally, if we schedule something to check on a regular interval, say 10 minutes, it becomes a better and more reliable monitoring methodology than having a DBA waste time logging into 10s or 100s of database servers every 10 minutes. Therefore, and to not belabor the point, here are the scripts that I've started to develop. I hope that you can use them:

This solution makes use of a very simple directory structure under the oracle user’s home directory. Here are the directories/files used:

/home/oracle/alert
This is where the main script and supporting files exist

/home/oracle/alert/CHKALERT
This file contains ORA- errors, or any other errors we are interested in checking for in the alert log

/home/oracle/alert/CHKALERT.sh
This is the script that will need to be executed

/home/oracle/alert/CHKALERT.curr
File that contains information on the last time the alert log was scanned; containing information about the errors found and put into the CHKALERT.yyyy-mm-dd files

/home/oracle/alert/lst
The directory where output is generated

/home/oracle/alert/lst/CHKALERT.adrci
Is dynamically generated during runtime and will be used as a script to the ADRCI utility

/home/oracle/alert/lst/CHKALERT.lst
Is dynamically generated during runtime and will be used as output to a SQL*Plus call to get some variables to be used when calling ADRCI

/home/oracle/alert/lst/CHKALERT.sql
Is the SQL*Plus SQL used to generate the CHKALERT.lst output

/home/oracle/alert/lst/CHKALERT.tmp
Contains header output from the alert log to get a timezone

/home/oracle/alert/lst/CHKALERT.yyyy-mm-dd
Files generated for each day, containing the ORA- errors for that day


Below is the script itself. To use, just execute by typing in at the command prompt:

[oracle@ludwig alert]$ ./CHKALERT.sh

The file you will be interested in looking at, if you've set up the /home/oracle/alert/CHKALERT file properly to look for specific ORA errors, will be the /home/oracle/alert/lst/CHKALERT.log file. This file contains each of the ORA errors found for the last run and looks like the following:

CHKALERT.log
::::::::::::::
001:2010:03:04:02:32:29:ORA-00312: online log 1 thread 1: '/oradata/db11/redo01.log'
001:2010:03:04:02:32:29:ORA-00312: online log 2 thread 1: '/oradata/db11/redo02.log'
001:2010:03:04:02:32:29:ORA-00312: online log 3 thread 1: '/oradata/db11/redo03.log'
To fully automate the process (not covered in this article), you would interrogate this file and, if it contains something, send it to the appropriate DBAs to figure out what to do with the errors found. Have fun with the script. Change it as you see fit and begin to take advantage of Oracle's new ADRCI utility for scraping the alert logs.

#!/bin/sh
#--File: CHKALERT.sh

#--setup
PGM="CHKALERT"
ALRT=$HOME/alert
LST=${ALRT}/lst
LOG=${LST}/${PGM}.log
CURR=${ALRT}/${PGM}.curr

#--Unix environment variables
ECHO=echo; export ECHO
CAT=/bin/cat; export CAT
RM=/bin/rm; export RM
TOUCH=/bin/touch; export TOUCH
GREP=/bin/grep; export GREP
AWK=/bin/awk; export AWK
WC=/usr/bin/wc; export WC
TAIL=/usr/bin/tail; export TAIL
HEAD=/usr/bin/head; export HEAD
SUM=/usr/bin/sum; export SUM

#--Oracle environment variables
ORACLE_SID=db11; export ORACLE_SID
ORACLE_HOME=`${GREP} ${ORACLE_SID}: /etc/oratab | ${AWK} -F: '{print $2}'`; export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH; export PATH

#--code
start=`date "+%Y:%m:%d:%H:%M:%S"`
${RM} -f ${LOG}   #-- -f avoids an error on the first run, when no log exists yet
${TOUCH} ${LOG}

#--execute SQL to get some diagnostic variables
echo "set echo off" > ${LST}/${PGM}.sql
echo "set feedback off" >> ${LST}/${PGM}.sql
echo "set heading off" >> ${LST}/${PGM}.sql
echo "set linesize 40" >> ${LST}/${PGM}.sql
echo "set pagesize 55" >> ${LST}/${PGM}.sql
echo "set verify off" >> ${LST}/${PGM}.sql
echo "set linesize 300" >> ${LST}/${PGM}.sql
echo "SELECT 'homepath:'||replace(homepath.value,adrbase.value||'/','')" >> ${LST}/${PGM}.sql
echo " FROM v\$diag_info homepath, v\$diag_info adrbase" >> ${LST}/${PGM}.sql
echo " WHERE homepath.name = 'ADR Home'" >> ${LST}/${PGM}.sql
echo " AND adrbase.name = 'ADR Base';" >> ${LST}/${PGM}.sql
echo "SELECT 'day:'||to_char(sysdate ,'yyyy-mm-dd') FROM dual;" >> ${LST}/${PGM}.sql
echo "SELECT 'nextday:'||to_char(sysdate+1,'yyyy-mm-dd') FROM dual;" >> ${LST}/${PGM}.sql
echo "SELECT 'prevday:'||to_char(sysdate-1,'yyyy-mm-dd') FROM dual;" >> ${LST}/${PGM}.sql
echo "exit" >> ${LST}/${PGM}.sql
sqlplus -s '/as sysdba' @${LST}/${PGM}.sql > ${LST}/${PGM}.lst

#-- get diag information variables just queried from the database
homepath=`${GREP} homepath ${LST}/${PGM}.lst | ${AWK} -F":" '{print $2}'`
day=`${GREP} "^day" ${LST}/${PGM}.lst | ${AWK} -F":" '{print $2}'`
nextday=`${GREP} nextday ${LST}/${PGM}.lst | ${AWK} -F":" '{print $2}'`
prevday=`${GREP} prevday ${LST}/${PGM}.lst | ${AWK} -F":" '{print $2}'`

#-- get the timezone from the alert log (safest place to get)
#-- the proper timezone is needed to properly filter the alert log for date ranges you
#-- want to look at
echo "set echo off" > ${LST}/${PGM}.adrci
echo "set termout off" >> ${LST}/${PGM}.adrci
echo "set homepath ${homepath}" >> ${LST}/${PGM}.adrci
echo "spool ${LST}/${PGM}.tmp" >> ${LST}/${PGM}.adrci
echo "show alert -tail 1" >> ${LST}/${PGM}.adrci
echo "spool off" >> ${LST}/${PGM}.adrci
adrci script=${LST}/${PGM}.adrci 1>/dev/null 2>/dev/null
timezone=`${HEAD} -1 ${LST}/${PGM}.tmp | ${AWK} -F" " '{print $3}'`

#-- extract alert log errors for the current day (today) and previous day (yesterday)
#-- previous day alerts will be used if the current file has yesterday's day as last day;
#-- meaning that we have had a switch to a new day and might have errors still to
#-- process from the previous day
echo "set echo off" > ${LST}/${PGM}.adrci
echo "set termout off" >> ${LST}/${PGM}.adrci
echo "set homepath ${homepath}" >> ${LST}/${PGM}.adrci
echo "spool ${LST}/${PGM}.${day}" >> ${LST}/${PGM}.adrci
echo "show alert -P \"ORIGINATING_TIMESTAMP BETWEEN '${day} 00:00:00.000000 ${timezone}' AND
'${nextday} 00:00:00.000000 ${timezone}' AND MESSAGE_TEXT LIKE '%ORA-%'\" -term" >>
${LST}/${PGM}.adrci
echo "spool off" >> ${LST}/${PGM}.adrci
echo "spool ${LST}/${PGM}.${prevday}" >> ${LST}/${PGM}.adrci
echo "show alert -P \"ORIGINATING_TIMESTAMP BETWEEN '${prevday} 00:00:00.000000 ${timezone}' AND
'${day} 00:00:00.000000 ${timezone}' AND MESSAGE_TEXT LIKE '%ORA-%'\" -term" >>
${LST}/${PGM}.adrci
echo "spool off" >> ${LST}/${PGM}.adrci
adrci script=${LST}/${PGM}.adrci 1>/dev/null 2>/dev/null

#-- get current contents of the current file
#-- default to current day if no current file
if [ -r "${CURR}" ]
then
#-- if the current exists then get the information it contains
daychecksum=`${GREP} day ${CURR} | ${AWK} -F":" '{print $2}'`
daylastline=`${GREP} day ${CURR} | ${AWK} -F":" '{print $3}'`
daylastday=`${GREP} day ${CURR} | ${AWK} -F":" '{print $4}'`
else
#-- if the current does not exist then default to today
daychecksum=0
daylastline=3
daylastday=${day}
fi

#-- set the days to search through for alerts
#-- if last day in current file was yesterday then include previous day
#-- if last day in current file is not yesterday then just scan today's alerts
if [ "${daylastday}" = "${prevday}" ]
then
alertdays="${prevday} ${day}"
else
alertdays="${day}"
fi

#-- for each of the days to scan for alerts
for theday in ${alertdays}
do
#-- check alert errors for the last day.
if [ -r "${LST}/${PGM}.${theday}" ]
then
#-- If the checksum generated is DIFFERENT we should start reporting from the top.
#--
#-- If the checksum generated is the SAME we should start reporting from end of
#-- the previously generated output.
new_daychecksum=`${HEAD} -4 ${LST}/${PGM}.${theday} | ${SUM} | ${AWK} '{print $1}'`
if [ ${new_daychecksum} -ne ${daychecksum} ]
then
daychecksum=${new_daychecksum}
daylastline=3
fi

#-- get the number of lines in the generated errors so we can report to the
#-- end of the file and we know where to start next time.
new_daylastline=`${WC} -l ${LST}/${PGM}.${theday} | ${AWK} -F" " '{print $1}'`

#-- if the number of lines in the output is 3 then there are no errors found.
if [ ${new_daylastline} -ne 3 ]
then
#-- if number of lines in extracted alerts is the same as last time then no new alerts
if [ ${new_daylastline} -ne ${daylastline} ]
then
#-- find the line to begin reporting new alerts from
fromline=`expr ${new_daylastline} - ${daylastline}`
#-- produce alert lines for alerts defined in file CHKALERT
${TAIL} -${fromline} ${LST}/${PGM}.${theday} |
while read LINE
do
for ORAS in `${CAT} ${ALRT}/CHKALERT`
do
ora=`${ECHO} ${LINE} | ${GREP} ${ORAS}`
if [ $? -eq 0 ]
then
#-- you might want to do something here
#-- that is specific to certain ORA- errors
err="001"
echo "${err}:${start}:${LINE}" >> ${LOG}
fi
done
done
fi
fi
daylastline=${new_daylastline}
#-- update the current file only if the day being processed is current day
if [ "${theday}" = "${day}" ]
then
${ECHO} "day:"${daychecksum}":"${daylastline}":"${day} > ${CURR}
fi
fi
done

OPTIMIZER HINTS

What are hints and what are the different types of hints? (INTERVIEW Question)


Ans: Hints (optimizer hints) are used to alter the execution plan of SQL statements.

We should first get the explain plan of our SQL and determine what changes can be done to make the code operate without using hints if possible. However, Oracle hints such as ORDERED, LEADING, INDEX, FULL, and the various AJ and SJ Oracle hints can tame a wild optimizer and give you optimal performance.

With hints one can influence the optimizer. The usage of hints (with the exception of the RULE hint) causes Oracle to use the cost-based optimizer.

The following syntax is used for hints:

select /*+ HINT */ name
from emp
where id =1;

Where HINT is replaced by the hint text.
When the syntax of the hint text is incorrect, the hint text is ignored and will not be used.

Hints for Optimization Approaches and Goals

ALL_ROWS The ALL_ROWS hint explicitly chooses the cost-based approach to optimize a statement block with a goal of best throughput (that is, minimum total resource consumption).

FIRST_ROWS The FIRST_ROWS hint explicitly chooses the cost-based approach to optimize a statement block with a goal of best response time (minimum resource usage to return the first row). In newer Oracle versions you should give a parameter with this hint: FIRST_ROWS(n) means that the optimizer will determine an execution plan to give a fast response for returning the first n rows.

CHOOSE The CHOOSE hint causes the optimizer to choose between the rule-based approach and the cost-based approach for a SQL statement, based on the presence of statistics for the tables accessed by the statement.

RULE The RULE hint explicitly chooses rule-based optimization for a statement block. This hint also causes the optimizer to ignore any other hints specified for the statement block. The RULE hint does not work any more in Oracle 10g.
Hints for Access Paths

FULL The FULL hint explicitly chooses a full table scan for the specified table. The syntax of the FULL hint is FULL(table) where table specifies the alias of the table (or table name if alias does not exist) on which the full table scan is to be performed.

ROWID The ROWID hint explicitly chooses a table scan by ROWID for the specified table. The syntax of the ROWID hint is ROWID(table) where table specifies the name or alias of the table on which the table access by ROWID is to be performed. (This hint is deprecated in Oracle 10g.)

CLUSTER The CLUSTER hint explicitly chooses a cluster scan to access the specified table. The syntax of the CLUSTER hint is CLUSTER(table) where table specifies the name or alias of the table to be accessed by a cluster scan.

HASH The HASH hint explicitly chooses a hash scan to access the specified table. The syntax of the HASH hint is HASH(table) where table specifies the name or alias of the table to be accessed by a hash scan.

HASH_AJ The HASH_AJ hint transforms a NOT IN subquery into a hash anti-join to access the specified table. The syntax of the HASH_AJ hint is HASH_AJ(table) where table specifies the name or alias of the table to be accessed. (Deprecated in Oracle 10g.)

INDEX The INDEX hint explicitly chooses an index scan for the specified table. The syntax of the INDEX hint is INDEX(table index) where table specifies the name or alias of the table associated with the index to be scanned, and index specifies an index on which an index scan is to be performed. This hint may optionally specify one or more indexes.

NO_INDEX The NO_INDEX hint explicitly disallows a set of indexes for the specified table. The syntax of the NO_INDEX hint is NO_INDEX(table index).

INDEX_ASC The INDEX_ASC hint explicitly chooses an index scan for the specified table. If the statement uses an index range scan, Oracle scans the index entries in ascending order of their indexed values.

INDEX_COMBINE If no indexes are given as arguments for the INDEX_COMBINE hint, the optimizer will use on the table whatever boolean combination of bitmap indexes has the best cost estimate. If certain indexes are given as arguments, the optimizer will try to use some boolean combination of those particular bitmap indexes. The syntax of INDEX_COMBINE is INDEX_COMBINE(table index).

INDEX_JOIN Explicitly instructs the optimizer to use an index join as an access path. For the hint to have a positive effect, a sufficiently small number of indexes must exist that contain all the columns required to resolve the query.

INDEX_DESC The INDEX_DESC hint explicitly chooses an index scan for the specified table. If the statement uses an index range scan, Oracle scans the index entries in descending order of their indexed values.

INDEX_FFS This hint causes a fast full index scan to be performed rather than a full table scan.

NO_INDEX_FFS Do not use fast full index scan (from Oracle 10g)

INDEX_SS Use an index skip scan for the specified table (from Oracle 10g)

INDEX_SS_ASC Use an index skip scan, scanning the index entries in ascending order (from Oracle 10g)

INDEX_SS_DESC Use an index skip scan, scanning the index entries in descending order (from Oracle 10g)

NO_INDEX_SS The NO_INDEX_SS hint causes the optimizer to exclude a skip scan of the specified indexes on the specified table. (from Oracle 10g)

Hints for Query Transformations

NO_QUERY_TRANSFORMATION Prevents the optimizer performing query transformations. (from Oracle 10g)

USE_CONCAT The USE_CONCAT hint forces combined OR conditions in the WHERE clause of a query to be transformed into a compound query using the UNION ALL set operator. Normally, this transformation occurs only if the cost of the query using the concatenations is cheaper than the cost without them.

NO_EXPAND The NO_EXPAND hint prevents the optimizer from considering OR-expansion for queries having OR conditions or IN-lists in the WHERE clause. Usually, the optimizer considers using OR expansion and uses this method if it decides that the cost is lower than not using it.

REWRITE The REWRITE hint forces the optimizer to rewrite a query in terms of materialized views, when possible, without cost consideration. Use the REWRITE hint with or without a view list. If you use REWRITE with a view list and the list contains an eligible materialized view, then Oracle uses that view regardless of its cost.

NOREWRITE / NO_REWRITE In Oracle 10g renamed to NO_REWRITE. The NOREWRITE/NO_REWRITE hint disables query rewrite for the query block, overriding the setting of the parameter QUERY_REWRITE_ENABLED.

MERGE The MERGE hint lets you merge views in a query.

NO_MERGE The NO_MERGE hint causes Oracle not to merge mergeable views. This hint is most often used to reduce the number of possible permutations for a query and make optimization faster.

FACT The FACT hint indicates that the table should be considered as a fact table. This is used in the context of the star transformation.

NO_FACT The NO_FACT hint is used in the context of the star transformation to indicate to the transformation that the hinted table should not be considered as a fact table.

STAR_TRANSFORMATION The STAR_TRANSFORMATION hint makes the optimizer use the best plan in which the transformation has been used. Without the hint, the optimizer could make a query optimization decision to use the best plan generated without the transformation, instead of the best plan for the transformed query.

NO_STAR_TRANSFORMATION Do not use star transformation (from Oracle 10g)
UNNEST The UNNEST hint specifies subquery unnesting.
NO_UNNEST Use of the NO_UNNEST hint turns off unnesting for specific subquery blocks.

Hints for Join Orders

LEADING Give this hint to indicate the leading table in a join. This will indicate only 1 table. If you want to specify the whole order of tables, you can use the ORDERED hint. Syntax: LEADING(table)

ORDERED The ORDERED hint causes Oracle to join tables in the order in which they appear in the FROM clause. If you omit the ORDERED hint from a SQL statement performing a join , the optimizer chooses the order in which to join the tables. You may want to use the ORDERED hint to specify a join order if you know something about the number of rows selected from each table that the optimizer does not. Such information would allow you to choose an inner and outer table better than the optimizer could.
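
A small sketch (hypothetical emp and dept tables) combining a join-order hint with a join-method hint:

SELECT /*+ LEADING(d) USE_NL(e) */ e.ename, d.dname
FROM dept d, emp e
WHERE e.deptno = d.deptno
AND d.dname = 'SALES';

Here LEADING(d) asks the optimizer to start the join with DEPT, and USE_NL(e) asks for a nested loops join with EMP as the inner table.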

Hints for Join Operations

USE_NL The USE_NL hint causes Oracle to join each specified table to another row source with a nested loops join using the specified table as the inner table. The syntax of the USE_NL hint is USE_NL(table table) where table is the name or alias of a table to be used as the inner table of a nested loops join.

NO_USE_NL Do not use nested loop (from Oracle 10g)

USE_NL_WITH_INDEX Specifies a nested loops join. (from Oracle 10g)

USE_MERGE The USE_MERGE hint causes Oracle to join each specified table with another row source with a sort-merge join. The syntax of the USE_MERGE hint is USE_MERGE(table table) where table is a table to be joined to the row source resulting from joining the previous tables in the join order using a sort-merge join.

NO_USE_MERGE Do not use merge (from Oracle 10g)

USE_HASH The USE_HASH hint causes Oracle to join each specified table with another row source with a hash join. The syntax of the USE_HASH hint is USE_HASH(table table) where table is a table to be joined to the row source resulting from joining the previous tables in the join order using a hash join.

NO_USE_HASH Do not use hash (from Oracle 10g)
Hints for Parallel Execution

PARALLEL The PARALLEL hint allows you to specify the desired number of concurrent query servers that can be used for the query. The syntax is PARALLEL(table number number). The PARALLEL hint must use the table alias if an alias is specified in the query. The PARALLEL hint can then take two values separated by commas after the table name. The first value specifies the degree of parallelism for the given table, the second value specifies how the table is to be split among the instances of a parallel server. Specifying DEFAULT or no value signifies the query coordinator should examine the settings of the initialization parameters (described in a later section) to determine the default degree of parallelism.

NOPARALLEL / NO_PARALLEL The NOPARALLEL hint allows you to disable parallel scanning of a table, even if the table was created with a PARALLEL clause. In Oracle 10g this hint was renamed to NO_PARALLEL.

PQ_DISTRIBUTE The PQ_DISTRIBUTE hint improves the performance of parallel join operations. It does this by specifying how rows of joined tables should be distributed among producer and consumer query servers. Using this hint overrides decisions the optimizer would normally make.

NO_PARALLEL_INDEX The NO_PARALLEL_INDEX hint overrides a PARALLEL attribute setting on an index to avoid a parallel index scan operation.
Additional Hints

APPEND When the APPEND hint is used with the INSERT statement, data is appended to the table. Existing free space in the blocks is not used. If a table or an index is specified with NOLOGGING, this hint applied to an insert statement produces a direct-path insert, which reduces the generation of redo.

NOAPPEND Overrides the append mode.

CACHE The CACHE hint specifies that the blocks retrieved for the table in the hint are placed at the most recently used end of the LRU list in the buffer cache when a full table scan is performed. This option is useful for small lookup tables, and the hint overrides the table's default caching specification.

NOCACHE The NOCACHE hint specifies that the blocks retrieved for this table are placed at the least recently used end of the LRU list in the buffer cache when a full table scan is performed. This is the normal behavior of blocks in the buffer cache.

PUSH_PRED The PUSH_PRED hint forces pushing of a join predicate into the view.

NO_PUSH_PRED The NO_PUSH_PRED hint prevents pushing of a join predicate into the view.

PUSH_SUBQ The PUSH_SUBQ hint causes nonmerged subqueries to be evaluated at the earliest possible place in the execution plan.

NO_PUSH_SUBQ The NO_PUSH_SUBQ hint causes non-merged subqueries to be evaluated as the last step in the execution plan.

QB_NAME Specifies a name for a query block. (from Oracle 10g)

CURSOR_SHARING_EXACT Oracle can replace literals in SQL statements with bind variables, if it is safe to do so. This is controlled with the CURSOR_SHARING startup parameter. The CURSOR_SHARING_EXACT hint causes this behavior to be switched off. In other words, Oracle executes the SQL statement without any attempt to replace literals by bind variables.

DRIVING_SITE The DRIVING_SITE hint forces query execution to be done for the table at a different site than that selected by Oracle.

DYNAMIC_SAMPLING The DYNAMIC_SAMPLING hint lets you control dynamic sampling to improve server performance by determining more accurate predicate selectivity and statistics for tables and indexes. You can set the value of DYNAMIC_SAMPLING to a value from 0 to 10. The higher the level, the more effort the compiler puts into dynamic sampling and the more broadly it is applied. Sampling defaults to cursor level unless you specify a table.

SPREAD_MIN_ANALYSIS This hint omits some of the compile time optimizations of the rules, mainly detailed dependency graph analysis, on spreadsheets. Some optimizations such as creating filters to selectively populate spreadsheet access structures and limited rule pruning are still used. (from Oracle 10g)
Hints with unknown status

MERGE_AJ The MERGE_AJ hint transforms a NOT IN subquery into a merge anti-join to access the specified table. The syntax of the MERGE_AJ hint is MERGE_AJ(table) where table specifies the name or alias of the table to be accessed. (Deprecated in Oracle 10g.)

AND_EQUAL The AND_EQUAL hint explicitly chooses an execution plan that uses an access path that merges the scans on several single-column indexes. The syntax of the AND_EQUAL hint is AND_EQUAL(table index index) where table specifies the name or alias of the table associated with the indexes to be merged, and index specifies an index on which an index scan is to be performed. You must specify at least two indexes. You cannot specify more than five. (Deprecated in Oracle 10g.)
STAR The STAR hint forces the large table to be joined last using a nested loops join on the index. The optimizer will consider different permutations of the small tables. (Deprecated in Oracle 10g.)

BITMAP Usage: BITMAP(table_name index_name). Uses a bitmap index to access the table. (Deprecated?)

HASH_SJ Use a hash semi-join to evaluate an EXISTS sub-query. Use this hint in the sub-query, not in the main query. For a high-volume NOT IN sub-query that is using a FILTER or NESTED LOOPS join, use HASH_AJ, and try MERGE_AJ if HASH_AJ refuses to work. (Deprecated in Oracle 10g.)

NL_SJ Use a nested loops semi-join to evaluate an EXISTS sub-query. (Deprecated in Oracle 10g.)

NL_AJ Use a nested loops anti-join to evaluate a NOT IN sub-query. (Deprecated in Oracle 10g.)
ORDERED_PREDICATES (Deprecated in Oracle 10g.)

EXPAND_GSET_TO_UNION (Deprecated in Oracle 10g.)



Best regards,

Rafi.

Thursday, March 18, 2010

Database upgrade from 9i to 10g (Manual Method)

Hi,

Here are the steps to upgrade the Database from ORACLE 9i to ORACLE 10g:

1. Connect to the database to be upgraded and run utlu101i.sql to determine the preupgrade
tasks to be completed

SQL> spool /tmp/upgrade.txt
SQL>@$ORACLE_HOME/rdbms/admin/utlu101i.sql

2. Resize the redo log files if they are smaller than 4 MB

3. Adjust the size of the tablespaces where the dictionary objects are stored.

4. Perform a cold backup of the database.

5. Shut down the database (do not perform a SHUTDOWN ABORT; perform only SHUTDOWN
IMMEDIATE or SHUTDOWN NORMAL). On Windows you will have to do NET STOP, ORADIM -DELETE
from the old Oracle home directory and ORADIM -NEW from the new Oracle 10g
home directory.

6. Copy the parameter file (initDB.ora or spfileDB.ora) and password file from the old Oracle home directory to the Oracle 10g Oracle home directory. The default location for parameter file is $ORACLE_HOME/dbs on Unix platforms and ORACLE_HOME\database on Windows. Adjust the following parameters:

* Adjust the COMPATIBLE parameter; the minimum value required is 9.2.0 for the upgrade.
If you set this to 10.0, you will never be able to downgrade the database to 9i.

* Update the initialization parameters. You must remove obsolete parameters.

* Set the DB_DOMAIN parameter properly.

* Make sure memory parameters have at least the minimum size required for upgrade:
SHARED_POOL_SIZE (96MB for 32-bit platforms, 144MB for 64-bit), PGA_AGGREGATE_
TARGET (24MB), JAVA_POOL_SIZE (48MB), and LARGE_POOL_SIZE (8MB). Use the
sizes recommended by the preinstall verification utility.

7. Make sure all the environment variables are set to correctly reference the Oracle 10g Oracle
home. On Unix, verify ORACLE_HOME, PATH, ORA_NLS33, and LD_LIBRARY_PATH.

8. Use SQL*Plus, and connect to the database using the SYSDBA privilege. Start the instance by using the STARTUP UPGRADE mode.

9. Create the SYSAUX tablespace with the following attributes:
* online
* permanent
* read write
* extent management local
* segment space management auto

The syntax could be as follows:

CREATE TABLESPACE sysaux
DATAFILE '/ora01/oradata/OR0109/sysaux.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

10. Run the upgrade script from the $ORACLE_HOME/rdbms/admin directory. Based on the version of the old database, the name of the upgrade script varies. The following lists the old release and the upgrade script name:

Database Version    Script to Run
8.0.6               u0800060.sql
8.1.7               u0801070.sql
9.0.1               u0900010.sql
9.2.0               u0902000.sql

For example, to upgrade an Oracle 8.1.7 database to Oracle 10g, you must run
u0801070.sql.
SQL> spool ora8i7upg.log
SQL> @?/rdbms/admin/u0801070.sql
SQL> spool off

If you get any errors during the upgrade script execution, re-execute the script after fixing the error. The post-upgrade status utility, utlu101s.sql, gives the name of the specific script to run to fix the failed component.

11. Run the utlu101s.sql utility with the TEXT option. It queries DBA_SERVER_REGISTRY to determine the upgrade status and provides information about invalid or incorrectly upgraded components, along with the names of scripts to rerun to fix the errors.

Here is an example (output truncated to fit in single line):

ora10g> @$ORACLE_HOME/rdbms/admin/utlu101s.sql TEXT
PL/SQL procedure successfully completed.

12. Shut down and restart the instance to reinitialize the system parameters for normal operation.
The restart also performs Oracle 10g database initialization for JServer Java Virtual
Machine and other components. Perform a clean shutdown (SHUTDOWN IMMEDIATE);
starting the instance flushes all caches, clears buffers, and performs other housekeeping activities. This is an important step to ensure the integrity and consistency of the upgraded database.

13. Run the utlrp.sql script to recompile all invalid objects.
ora10g> @$ORACLE_HOME/rdbms/admin/utlrp.sql

14. Update the listener.ora file with the new database information.

15. Back up the database.

After upgrading the database there may be some degradation in performance, so check the performance through AWR and ADDM.


Best regards,

Rafi.

MY INTERVIEW WITH YODLEE

Hi All,
           This post is to share my experience of interview with yodlee:

Q1)What are the activities you do daily?
Ans:                        I replied:
Let me introduce myself. My name is Rafiuddin Alvi and I have been working as an Oracle DBA for 3 years. Currently I am working with OPENLANE; previously I worked with PENTA-C. I have done my Oracle 9i, 10g & 11g OCP certification and I am a RAC Certified Expert. I have worked on 4 platforms, HP-UX, Linux, Solaris & IBM-AIX, as well as Windows servers. My job responsibilities include validating backups, checking the performance of the databases, checking tablespace space requirements, designing the database and making schema changes, starting and shutting down the database as required, refreshing databases as required, and helping the developers tune their SQL statements.


Q 2)How can we recover a dropped table?
Ans: I explained a few cases:

1)Restore the entire database to the time just before the table was dropped -> Not recommended
2)Restore the tablespace -> Better option
3)Restore and recover a subset of the database as a DUMMY database, then export the table data and import it into the primary database. This is the best option, as only the dropped table goes back in time to before the drop.

I explained the third case briefly:
Refer to a separate post for the details of the third case.

Q 3)How do we recover the database when the current redo log file is lost?
Ans: This question is really a tricky one. By mistake I asked the interviewer whether it was archived or not, definitely a silly question, because when the group is CURRENT it cannot be archived, since the log writer is still writing to it. Then I said I would just switch it by using alter system switch logfile;

Finally I answered:
We have to make use of the below command:
SQL>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP group_number;

This is half correct: CLEAR UNARCHIVED LOGFILE works when the logfile group is corrupt, and it skips archiving that group. If this does not work, you have to go for incomplete recovery and open with RESETLOGS:


RESTORE CONTROLFILE FROM '';
ALTER DATABASE MOUNT;
RESTORE DATABASE;
RECOVER DATABASE UNTIL TIME "to_date('MAR 18 2010 14:57:00','MON DD YYYY HH24:MI:SS')";
ALTER DATABASE OPEN RESETLOGS;

Q 4)When users complain that the database is slow, what will be your reaction?
Ans: My answer:
We can first check whether it is a system resource issue (CPU, memory, I/O) by using tools like vmstat and iostat.

Then she asked which columns I would look at. I did not remember exactly, but these are the columns:

vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
4 3 70928 28364 33488 19395224 0 0 2553 1290 0 0 22 2 66 10

iostat
Linux 2.6.9-55.ELsmp (dbdwh01.autc.com) 03/19/2010

avg-cpu: %user %nice %sys %iowait %idle
22.30 0.00 1.72 10.39 65.60

Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
cciss/c0d0 67.78 140.94 45.85 2280376542 741888688

I should have mentioned a few more utilities like:
*top
*sar

Top command functionality:
The Unix top command is designed to help users determine which processes are running and which applications are using more memory or processing power than they should be. It is an extremely useful command for those looking to reduce the memory footprint of their Unix installation or for those who like a more responsive, more streamlined computing environment. Furthermore, by default, top updates its findings on a regular basis, usually every few seconds, so that a user can get a real-time picture of what the computer is doing at any given moment.

The top command is very easy to use and requires little to no prior knowledge of how Unix commands work. It can be run by anyone, so long as they know how to enter text into the command line or terminal.


The columns I should look at are:

PID, USER, %CPU, %MEM; then compare them with the Oracle users and sessions in v$session.
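For instance, a quick sketch of the kind of session check I mean (standard v$session columns):

SQL> SELECT sid, serial#, username, status, osuser, machine
     FROM v$session
     WHERE username IS NOT NULL;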

I continued with my answer that we can use Statspack to monitor the performance of the database. But then the question arises: what do we usually look at in a Statspack report?

I answered that we should identify the top 5 wait events. For example:

Top 5 Wait Events
~~~~~~~~~~~~~~~~~
                                                        Wait    % Total
Event                                     Waits     Time (cs)   Wt Time
--------------------------------------- ----------- ---------- -------
db file sequential read                   1,547,030    167,785    68.61
buffer busy waits                           316,992     40,103    16.40
latch free                                   17,757     17,784     7.27
db file parallel read                         6,995      8,257     3.38
db file scattered read                        9,969      4,945     2.02
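Outside of Statspack, a rough sketch of pulling the top timed events straight from the dynamic views (10g onwards; the event names and timings will differ on every system):

SQL> SELECT *
     FROM (SELECT event, total_waits, time_waited
           FROM v$system_event
           WHERE wait_class <> 'Idle'
           ORDER BY time_waited DESC)
     WHERE ROWNUM <= 5;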

"What are buffer busy waits?" she asked, which I didn't answer properly; I will explain it in a separate post.

Then I said I use AWR reports instead of Statspack reports. She replied: then explain what you see there.

I answered: identifying the wait events for different test cases and testing the performance. But she kept focusing on what I actually do...

Suddenly she asked: have you heard of hints?

Yes, I have heard of them; optimizer hints are used to alter the execution plan of a SQL statement.

Q 5)What hints have you used?

I remembered only one hint: /*+ ALL_ROWS */

So I will be studying hints and posting about them separately.
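For the record, a minimal sketch of how that one hint is placed (the table and columns are illustrative):

SQL> SELECT /*+ ALL_ROWS */ empno, ename
     FROM emp
     WHERE deptno = 10;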

Q 6)How do you tune your SQL statements, and what do you do if a user complains that a query is running slow?
Ans: Explained in my post, I answered. But what you see in an explain plan, or how you interpret the explain plan result, is the other question.
Though I didn't explain it properly at the time, let me tell you the steps for interpreting the output of explain plan or autotrace:

What's an explain plan?

An explain plan is a representation of the access path that is taken when a query is executed within Oracle.

Query processing can be divided into 7 phases:

[1] Syntactic - checks the syntax of the query
[2] Semantic - checks that all objects exist and are accessible
[3] View Merging - rewrites the query as a join on base tables as opposed to using views
[4] Statement Transformation - rewrites the query, transforming some complex constructs into simpler ones where appropriate (e.g. subquery merging, IN/OR transformation)
[5] Optimization - determines the optimal access path for the query to take. With the Rule Based Optimizer (RBO) it uses a set of heuristics to determine the access path. With the Cost Based Optimizer (CBO) we use statistics to analyze the relative costs of accessing objects.
[6] QEP Generation - generates the Query Evaluation Plan (QEP)
[7] QEP Execution - executes the QEP

I will explain with an example in a separate post; a quick sketch is below.
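A minimal sketch of generating and displaying a plan (the table name is illustrative):

SQL> EXPLAIN PLAN FOR
     SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);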

Q 7)What is cache fusion?
Ans :
Oracle RAC Cache Fusion and I/O Bandwidth

Oracle RAC Cache Fusion uses a high-speed IPC interconnect to provide cache-to-cache transfers of data blocks between instances in a cluster. This is called data block shipping. This eliminates the disk I/O and optimizes read/write concurrency. Block reads take advantage of the speed of IPC and an interconnecting network.

The cache-to-cache data transfer is performed through the high speed IPC interconnect. The Oracle Global Cache Service (GCS) tracks blocks that were shipped to other instances by retaining block copies in memory. Each such copy is called a past image (PI). The GCS, via the LMSx background process, tracks one or more past image versions (PI) for a block in addition to the traditional GCS resource roles and modes. In the event of a node failure, Oracle can reconstruct the current version of a block by using a saved PI.


Q 8)If I have a 2-node RAC cluster in which each node has one instance, and I perform a delete operation on one node and then that node suddenly disconnects, will the operation complete successfully?

Ans: Yes, I said it should complete successfully.

Reason:
Yes. The database, remember, is on a shared device. The disconnection of one node has no bearing on the data on the shared disks, only on one means of accessing it. In theory, as long as the transaction has been committed, the change is permanent in the database.

This was not a great interview for me, but it really helped me learn many things. That's why I say:

GIVING INTERVIEWS IMPROVES YOUR KNOWLEDGE.







Best regards,

Rafi.

Monday, March 15, 2010

How do I complete a massive DELETE operation in my Oracle database without having to pay the heavy performance overhead?


Perform the massive DELETE operation as a direct-path (direct-load) INSERT (an INSERT with the APPEND hint): insert the rows you want to keep into a table whose logging parameter is set to NOLOGGING, then swap it for the original. This operation completes significantly faster than a DELETE and with minimum logging, but you have to take a backup afterwards to establish a new baseline.

Direct-path INSERT is a special database operation. Like SQL*Loader, it writes data directly to the database files, bypassing the buffer cache. It does this with minimum logging, recording only data dictionary changes. The logic behind this method is that because the data files are already up-to-date when an instance failure occurs, logging isn't necessary.


The two distinct cases in which direct-path INSERT is important are when:
1. The database is in NOARCHIVELOG mode. Media recovery is not possible anyway, so the redo data is not needed.
2. The database is in ARCHIVELOG mode. It logs redo blocks for media recovery by default. However, if you explicitly set a table to NOLOGGING mode, the database won't log redo blocks for direct-path operations on it.

Therefore, with the direct-path INSERT when the database is in noarchivelog mode or when it is in archivelog mode and the table is in NOLOGGING mode, it performs only minimum redo logging—to protect the data dictionary.
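A minimal sketch of the idea (the table name and the keep-predicate are hypothetical; re-create indexes, constraints, and grants after the swap, and test before relying on NOLOGGING in production):

-- copy only the rows we want to keep; everything else is 'deleted'
SQL> CREATE TABLE big_table_keep NOLOGGING
     AS SELECT * FROM big_table WHERE 1 = 0;

SQL> INSERT /*+ APPEND */ INTO big_table_keep
     SELECT * FROM big_table
     WHERE status <> 'OBSOLETE';

SQL> COMMIT;

-- swap the new table in for the original
SQL> DROP TABLE big_table;
SQL> RENAME big_table_keep TO big_table;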

Proof:
A practical test to show that a direct-path (NOLOGGING) INSERT is faster than a conventional one, and hence a faster way to "delete":






Step 1: Create a table named TestData with a primary key, one non-unique index, and a check constraint; we will populate it with about 2.4 million rows. We will use this table in all of the examples and performance tests. It takes its data from the all_objects view.



create table TestData
(
objectId integer not null,
objectName varchar2(30) not null,
objectType varchar2(30) not null,
runId integer not null,
createDate date,
status varchar2(7),
description varchar2(255)
)
nologging;

Table created.

Step 2:

SQL> select * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Prod
PL/SQL Release 10.2.0.1.0 - Production
CORE 10.2.0.1.0 Production
TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
NLSRTL Version 10.2.0.1.0 - Production

Next, populate it with data using the following block (the all_objects view in Oracle 10.2.0.1.0 has 49K+ rows, so running the insert 49 times gives about 2.4 million rows in the TestData table):

declare
  vCount pls_integer := 49;
begin
  for nIndex in 1..vCount
  loop
    insert /*+ APPEND */
    into TestData
    (
      objectId,
      objectName,
      objectType,
      runId,
      createDate,
      status,
      description
    )
    select object_id,
           object_name,
           object_type,
           nIndex,
           created,
           status,
           object_name
    from all_objects;

    commit;
  end loop;
end;
/

PL/SQL procedure successfully completed.

Step 3:
Check the row count:

SQL> select count(*) from TestData;
=> 2,442,209 rows

Add the primary key on the objectId and runId columns, one non-unique index on the objectName and objectType columns, and one check constraint on the runId:

alter table TestData add constraint TestData_PK
primary key(objectId,runId);

create index TestData_NameType on TestData
(
objectName,objectType
);

alter table TestData add constraint TestData_CK
check (runId > 0);

Table altered…

Step 4: Gather the statistics using the following block:

begin
  dbms_stats.gather_table_stats
  (
    ownname => user,
    tabname => 'TestData',
    method_opt => 'FOR ALL INDEXED COLUMNS SIZE AUTO',
    degree => 4,
    cascade => true
  );
end;
/

PL/SQL procedure successfully completed.

Step 5:
Now, create two tables, TestData_Logging and TestData_Nologging, with identical structures to TestData but with no indexes or constraints:

create table TestData_Logging as select * from TestData
where 1 = 0;

create table TestData_Nologging nologging as select * from TestData where 1 = 0;

TestData_Logging is in LOGGING mode, while TestData_Nologging is in NOLOGGING mode:

select table_name,logging from user_tables
where table_name like 'TESTDATA\_%' escape '\';

TABLE_NAME LOGGING
------------------------------ -------
TESTDATA_LOGGING YES
TESTDATA_NOLOGGING NO

Take a snapshot of the redo size before doing a regular INSERT:

select a.name, b.value from v$statname a,v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';

NAME VALUE
----------------- -----------
redo size 175103192

Perform the INSERT into the TestData_Logging table (Scenario 1):

set timing on
insert into TestData_Logging
select * from TestData;
2442209 rows created.

Elapsed:00:00:57.00

Take a snapshot of redo and undo after the INSERT:

select a.name,
b.value from v$statname a,
v$mystat b
where a.statistic# = b.statistic#
and a.name = 'redo size';

NAME VALUE
----------------- -----------
redo size 514476836

select used_ublk
from v$transaction;

USED_UBLK
----------
1032

Populating the TestData_Logging table with 2,442,209 rows in LOGGING mode took 57 seconds and forced the database to generate about 324 MB (514,476,836 - 175,103,192 = 339,373,644 bytes) of redo and 1,032 undo blocks.

Now, repeat the same steps for the table in NOLOGGING mode (Scenario 2):

Obtain performance results before load:
select a.name,b.value
from v$statname a,v$mystat b
where a.statistic# = b.statistic# and a.name = 'redo size';

NAME VALUE
----------------- -----------

redo size 514477004

insert /*+ APPEND */
into TestData_Nologging
select * from TestData;

2442209 rows created.

Elapsed:00:00:41.87

Obtain performance results after load:

select a.name,b.value
from v$statname a,v$mystat b
where a.statistic# = b.statistic# and a.name = 'redo size';

NAME VALUE
----------------- -----------
redo size 514971616

select used_ublk from v$transaction;

USED_UBLK
----------
         1

Populating the TestData_Nologging table with the same 2,442,209 rows in NOLOGGING mode took 41.87 seconds and generated only about 0.5 MB (514,971,616 - 514,477,004 = 494,612 bytes) of redo and 1 undo block.




Below are the performance numbers for both scenarios:

Scenario                                      Elapsed Time (sec.)   Redo Size   Undo Blocks
Scenario 1: Regular INSERT, LOGGING mode      57                    ~324 MB     1,032
Scenario 2: Direct-path INSERT, NOLOGGING     ~42                   ~0.5 MB     1

Hence, a direct-path insert is a faster way to load data, and (indirectly) a faster way to delete data.

Best regards,

Rafi.

Dedicated or Shared Server Connections

Dedicated or Shared Connections:

Hi ,
When should we choose dedicated connections and when shared connections?
This is best described with the explanation below:

A user connected with a dedicated server connection has a dedicated resource: whether or not the user is doing work, the server process remains allocated to that user. Thus, if your user is idle and network or server resources are scarce, this user is blocking other users from connecting or having their requests executed, forcing their work to be queued. This type of connection is preferable when you have plenty of resources, or for administrative users like "SYS" doing administrative work, and for batch jobs.


With a shared server, the user connection is shared: users connect through dispatchers, and when a user is idle their resources can be used by other users, thus lessening the load on the system. This suits an environment where users are mostly idle, like an order entry system.

Within a shared server configuration, dedicated connections are still possible, for example for admin users.

If your environment is OLTP, then shared server is preferable, because the tasks are short.

For data warehouse, OLAP, or data mining workloads, users run time-consuming, heavy queries, and it is best to use dedicated server.

Everything depends on your needs: if you are satisfied with the load, your users are satisfied, and you have plenty of resources, you can go for dedicated.

If you have fewer resources but must support more concurrent users, shared is best.
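As a minimal sketch (the values are illustrative, not a recommendation), shared server is enabled with the SHARED_SERVERS and DISPATCHERS parameters, and an individual client can still force a dedicated connection through its TNS entry:

SQL> ALTER SYSTEM SET shared_servers = 5;
SQL> ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';

In tnsnames.ora, SERVER=DEDICATED forces a dedicated connection even when shared server is enabled (host and service name are hypothetical):

MYDB_DED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MYDB)(SERVER = DEDICATED))
  )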



Best regards,

Rafi.

Thursday, March 11, 2010

My Performance tuning practices

Generating Automatic Workload Repository Report:

Generating Automatic Workload Repository through Enterprise Manager Grid control:

Step 1:Log in to the Enterprise Manager Grid Control account with a user having the DBA role (or grant that user the DBA role).
user:ralvi
passwd:***********

Step 2:Choose the database whose AWR report you want to generate. In my case, I'm generating an AWR report for our development database AUCD between 3pm and 4pm (i.e. for an interval of 1 hour).

Step 3:Navigate in the following order:
Choose: Performance -> Snapshots -> Automatic Workload Repository -> Run AWR Report

Step 4:Select the range of time for which you want the AWR report:

Select Beginning Snapshot:
Go To Time: 2/12/10 3pm
->Click Go
->Click Ok

Select Ending Snapshot:
Go To Time: 2/12/10 4pm
->Click Go
->Click Ok

You will see on the screen:
Processing AWR report.....
This will give you the
WORKLOAD REPOSITORY report for your database.

Note: Please follow all the above steps carefully.
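If Grid Control is not available, the same report can be generated from SQL*Plus (a sketch; the script prompts you for the report type, snapshot range, and report name):

SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql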
 
 
 
Best regards,
 
Rafi.

My Expdp and impdp practices

Use of Expdp & Impdp on Windows:


Note: We have to be in the correct path while exporting or importing using expdp and impdp.

I exported the tables from the scott user using expdp and imported them into the test user using impdp.

The steps I followed are as follows:

Step 1: Create a directory (folder) named expdptest in the path shown below:
C:\oracle\product\10.2.0>cd expdptest
C:\oracle\product\10.2.0\expdptest>dir
Volume in drive C has no label.
Volume Serial Number is 74A1-B8A4
Directory of C:\oracle\product\10.2.0\expdptest

Step 2: Create an Oracle directory object for Data Pump to use for the export and import, and grant the users the DBA role or exp_full_database/imp_full_database as per your usage.

sqlplus '/as sysdba'
SQL> create directory mydir3 as 'C:\oracle\product\10.2.0\expdptest';
SQL> grant read,write on directory mydir3 to public;
SQL> select * from dba_directories;
This will show the MYDIR3 directory.

Step 3: Now use the expdp utility to export schema scott. Specify the DIRECTORY and DUMPFILE keywords together on a single command line (and use a new dump file name) to avoid the errors shown below:
C:\oracle\product\10.2.0\expdptest>expdp system/manager directory=MYDIR3

Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 18:18:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31641: unable to create dump file "C:\oracle\product\10.2.0\expdptest\expdat.dmp"
ORA-27038: created file already exists
OSD-04010: option specified, file already exists

C:\oracle\product\10.2.0\expdptest>dumpfile=expdptest5.dmp schemas=scott 'dumpfile' is not recognized as an internal or external command,
operable program or batch file.

C:\oracle\product\10.2.0\expdptest>expdp system/manager DIRECTORY=MYDIR3 DUMPFILE=expschema.dmp schemas=scott

Export: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 18:20:17
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01": system/******** DIRECTORY=MYDIR3 DUMP
FILE=expschema.dmp schemas=scott
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "SCOTT"."DEPT" 5.656 KB 4 rows
. . exported "SCOTT"."EMP" 7.820 KB 14 rows
. . exported "SCOTT"."SALGRADE" 5.585 KB 5 rows
. . exported "SCOTT"."BONUS" 0 KB 0 rows

Master table "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_SCHEMA_01 is:
C:\ORACLE\PRODUCT\10.2.0\EXPDPTEST\EXPSCHEMA.DMP
Job "SYSTEM"."SYS_EXPORT_SCHEMA_01" successfully completed at 18:20:32

Step 4: Making use of import:
We have to use the remap_schema keyword to import what was exported from the scott user into the test user.

C:\oracle\product\10.2.0\expdptest>impdp system/manager DIRECTORY=MYDIR3 DUMPFILE=expschema.dmp remap_schema=scott:test

Import: Release 10.2.0.1.0 - Production on Monday, 15 February, 2010 18:24:15
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Master table "SYSTEM"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_FULL_01": system/******** DIRECTORY=MYDIR3 DUMPFILE=expschema.dmp remap_schema=scott:test

Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"TEST" already exists

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

. . imported "TEST"."DEPT" 5.656 KB 4 rows
. . imported "TEST"."EMP" 7.820 KB 14 rows
. . imported "TEST"."SALGRADE" 5.585 KB 5 rows
. . imported "TEST"."BONUS" 0 KB 0 rows

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS

Job "SYSTEM"."SYS_IMPORT_FULL_01" completed with 1 error(s) at 18:24:20

Step 5:Check and confirm:

C:\oracle\product\10.2.0\expdptest>sqlplus
SQL*Plus: Release 10.2.0.1.0 - Production on Mon Feb 15 18:24:31 2010
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Enter user-name: test/test
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> select *from tab;

TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
DEPT TABLE
EMP TABLE
BONUS TABLE
SALGRADE TABLE






Best regards,

Rafi.

Monday, March 8, 2010

Performance: partitioned table vs partitioned table with local index (by Hemant and Girish)

Hi All,
          I like this explanation very much.......

Hi all, I want to ask about the performance (elapsed time) of a partitioned table vs. a partitioned table with a local index.




These are the results of the queries:

Partitioned table, no index:

select *
from
equ_param_monitoringrange where id_equ_parameter=19

call       cpu    elapsed       disk       rows
-------  ------  ---------  ---------  ---------
Parse      0.00       0.00          0          0
Execute    0.00       0.00          0          0
Fetch      0.93       2.51      24378     236383
-------  ------  ---------  ---------  ---------
total      0.93       2.51      24378     236383

Parsing user id: 61 (SKENARIO1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT MODE: ALL_ROWS
      0    PARTITION RANGE (SINGLE) PARTITION: START=2 STOP=2
      0      TABLE ACCESS MODE: ANALYZED (FULL) OF
             'EQU_PARAM_MONITORINGRANGE' (TABLE) PARTITION: START=2 STOP=2







Partitioned table with local index:

select *
from
equ_param_monitoringrangex where id_equ_parameter=19

call       cpu    elapsed       disk       rows
-------  ------  ---------  ---------  ---------
Parse      0.00       0.00          0          0
Execute    0.00       0.00          0          0
Fetch      1.09       3.91       8756     236383
-------  ------  ---------  ---------  ---------
total      1.09       3.91       8756     236383

Parsing user id: 61 (SKENARIO1)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT MODE: ALL_ROWS
      0    PARTITION RANGE (SINGLE) PARTITION: START=2 STOP=2
      0      TABLE ACCESS MODE: ANALYZED (BY LOCAL INDEX ROWID) OF
             'EQU_PARAM_MONITORINGRANGEX' (TABLE) PARTITION: START=2 STOP=2
      0        INDEX MODE: ANALYZED (RANGE SCAN) OF 'RANGE_IX' (INDEX)
               PARTITION: START=2 STOP=2









Why is the elapsed time for the partitioned table with an index longer than for the partitioned table without an index?


236383 rows via an Index would have meant a large number of single block reads.


These are unlikely to perform faster than multiblock reads for a full partition read.



Hemant K Chitale
Thanks for your reply.

So, the difference between the partitioned table and the partitioned table with an index is:

* the partitioned table without an index uses multiblock reads
* the partitioned table with an index uses single-block reads

Is that right?
No, that is not what I said.


An Indexed Read is done with Single Block Read Calls, generally.

A FullTableScan is done with Multi Block Read Calls.



I was pointing out that the number of blocks read were different and the corresponding number of read calls were different. These would go towards the difference in execution times.



You have drawn the wrong inference.



Hemant K Chitale



Let me say again




I was pointing out that the number of blocks read were different and the corresponding number of read calls were different. These would go towards the difference in execution times.



If you use an Index to read 10,000 rows and each of the 10,000 rows is in a separate data block, you are making very many separate single-block read calls to the OS. These take time to execute.



If you do a FullTableScan you do fewer multiblock read calls to the OS. These can, in many cases, be faster.



Say an Index Leaf Block points to 40 different Table Blocks for 40 ROWIDs referenced for the same Index Key value. After having read the Index Leaf Block, your process has to make 40 different read calls to the OS to get those 40 table blocks. Then, for the next set of 40 rows, there will be another 40 different calls and so on. Each call has to be setup and executed to fetch a single block.



Multiblock read calls may have to be set up only once for every 8 or up to 128 blocks read together. Thus, the overheads are fewer, and these calls may execute faster.



That is the reason for the common "advice": if you are reading a few (X%) rows, use an Index; if you are reading many rows, use a FullTableScan. Note: the X% rule isn't to be taken literally. I am only pointing out where the underlying logic for that "advice" comes from. There are various other factors (concurrency, hardware performance, extent sizes, disk layout, etc.) that come into play in the real world.



Hemant K Chitale

To get some knowledge on partitioned tables with local indexes, I have collected some lines below:




1.When you create a partitioned table you should create an index on the table. The index may be partitioned according to the same range values that were used to partition the table. The LOCAL keyword in the index partitioning tells Oracle to create a separate index for each partition of the table. The GLOBAL clause in the CREATE INDEX command allows you to create a non-partitioned index, or to specify ranges for the index values that are different from the ranges of the table partitions. Local indexes may be easier to manage than global indexes; however, global indexes may perform uniqueness checks faster than local (partitioned) indexes do.

Source:http://www.geekinterview.com/question_details/43556



2.Separate indexes for each partition. A local index can be unique. However, in order for a local index to be unique, the partitioning key of the table must be part of the index's key columns. Unique local indexes are useful for OLTP environments. You cannot explicitly add a partition to a local index. Instead, new partitions are added to local indexes only when you add a partition to the underlying table.

Source:http://www.psoug.org/reference/partitions.html



3.http://myorastuff.blogspot.com/2008/08/local-index-versus-global-index-on.html



Now, after reading and understanding the text and links above, I conclude: since there is a separate index for each partition, the optimizer must first decide which partition's index is to be used, or whether to use an index at all (there is no index scan if the cost of the query is lower with a table scan), and only then scan that partition's index, and so on. This takes time; that's why you get a longer time for the partitioned table with a local index.
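To make the syntax concrete, a minimal sketch (the table, columns, and partition bounds are hypothetical):

SQL> CREATE TABLE sales
     (
       sale_id     NUMBER,
       sale_date   DATE,
       customer_id NUMBER
     )
     PARTITION BY RANGE (sale_date)
     (
       PARTITION p2009 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY')),
       PARTITION p2010 VALUES LESS THAN (MAXVALUE)
     );

SQL> CREATE INDEX sales_cust_ix ON sales (customer_id) LOCAL;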



Regards

Girish Sharma