Wednesday, April 28, 2010

STATSPACK and AWR Statistics Comparison (**an important interview question**)

The AWR repository holds all of the statistics available in STATSPACK, as well as additional statistics that STATSPACK does not capture. The following points cover statistics stored in the AWR that are not part of STATSPACK:

1) STATSPACK does not store the Active Session History (ASH) statistics available in the AWR dba_hist_active_sess_history view. ASH allows DBAs to perform time-series analysis of wait events for a particular session's history.

2) An important difference between STATSPACK and the AWR is that STATSPACK does not store history for the new metric statistics introduced in Oracle 10g. The key AWR views, dba_hist_sysmetric_history and dba_hist_sysmetric_summary, help the DBA build time-series reports for important database performance metrics such as Total Time Waited or Response Time Per Txn.

The AWR also contains views such as dba_hist_service_stat, dba_hist_service_wait_class and dba_hist_service_name, which store history for cumulative performance statistics tracked for specific services.

3) The latest version of STATSPACK, included with Oracle 10g, contains a set of specific tables which track the history of statistics reflecting the performance of the Oracle Streams feature. These tables are stats$streams_capture, stats$streams_apply_sum, stats$buffered_subscribers, stats$rule_set, stats$propagation_sender, stats$propagation_receiver and stats$buffered_queues. The AWR does not contain equivalent tables reflecting Oracle Streams activity; therefore, if a DBA relies heavily on the Oracle Streams feature, it would be useful to monitor its performance using the STATSPACK utility.

Oracle Apps DBA - Interview Questions

1. What is the utility used to change the password of a schema in Oracle Apps?
Ans: FNDCPASS
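Typical FNDCPASS invocations look like this (a syntax sketch; the passwords, and GL/VISION as example schema and user names, are placeholders):

```
# change the APPS/APPLSYS password
FNDCPASS apps/<appspwd> 0 Y system/<systempwd> SYSTEM APPLSYS <newpwd>

# change a single product schema password (GL as an example)
FNDCPASS apps/<appspwd> 0 Y system/<systempwd> ORACLE GL <newpwd>

# change an application user's password (VISION as an example)
FNDCPASS apps/<appspwd> 0 Y system/<systempwd> USER VISION <newpwd>
```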

2. What are the mandatory schemas in Oracle Apps?
Ans: APPLSYS, APPLSYSPUB, APPS

3. What, simply, is the Oracle Apps architecture?
Ans: Desktop Tier, Application Tier, Database Tier

4. What are the components in the Application Tier?
Ans: Apache (HTTP)
JServ (JRE)
Forms server (f60srv)
Metrics server (d2ls)
Metrics client (d2lc)
Reports server (rwm60)
Concurrent server (FNDLIBR)
Discoverer

5. What are the main file systems in Oracle Apps?
Ans: APPL_TOP, COMMON_TOP,
DB_TOP, ORA_TOP

6. What components are in the desktop tier?
Ans: Internet Browser, JInitiator

7. What is the location of JInitiator in the Desktop Tier?
Ans: c:\program files\oracle\Jinitiator

8. What is the location of the client cache?
Ans: c:\documents and settings\<user>\Oracle Jar Cache

9. What is the location of Server cache?
Ans: $COMMON_TOP/_pages

10. Which package is used for user validation by the PL/SQL agent?
Ans: oraclemypage

11. What are the AD utilities, and where are they located?
Ans: They are located in $AD_TOP/bin:
1. adadmin
2. adpatch
3. adsplice
4. adident
5. adrelink
6. adlicmgr

12. What is the location of the Java files?
Ans: $JAVA_TOP, plus each <PRODUCT>_TOP/java/jar

13. What is the name of the XML (context) file of Apps, and its location?
Ans: <context_name>.xml, in $APPL_TOP/admin

14. What is the location of the Apps environment file, and its name?
Ans: <context_name>.env, in $APPL_TOP

15. In how many ways can JAR files be generated?
Ans: Two: normal and force.

16. Once JAR files are generated, which files are affected?
Ans: All <PRODUCT>_TOP/java/jar files, and two files in $JAVA_TOP: appsborg.zip and
appsborg2.zip

17. How do you see the files inside a zip file?
Ans: unzip -v <file>.zip

18. How do you generate JAR files?
Ans: Using adadmin, option 5 (generate product JAR files)

19. How do you start the Apps services?
Ans: $COMMON_TOP/admin/scripts/<context_name>/adstrtal.sh apps/<appspwd>

20. What is the executable to generate jar files?
Ans: adjava

21. How do you relink an executable of a product?
Ans: Using the relink option in adadmin, or adrelink

22. How do you relink an AD product executable, and what is the usage?
Ans: adrelink.sh; e.g. adrelink.sh force=y "ad adsplice"

23. When do you relink?
Ans: 1. When an executable file is missing
2. When there is a problem with an executable file
3. When a product executable gets corrupted

24. What is a DAD?
Ans: A Database Access Descriptor: a file that stores the Apps password in hard-coded form, i.e. wdbsvr.app

25. How do you relink OS files and libraries?
Ans: Using the make command

26. What is the compile schema option in adadmin?
Ans: This option is used to compile and resolve invalid objects

27. Where do you get information about invalid objects?
Ans: From dba_objects, where status = 'INVALID'

28. How do you compile an object?
Ans: alter <object_type> <object_name> compile; e.g. alter package FND_GLOBAL compile;

29. How do you see the errors of a view or stored object?
Ans: select text from dba_errors where name = 'EMP_VIEW';

30. How do you see the errors of the last compiled object in SQL*Plus?
Ans: show errors

31. How do you compile a schema?
Ans: Using utlrp.sql (located in ?/rdbms/admin/), or
via the compile schema option in adadmin

32. How do you know how many invalid objects are in a specific schema?
Ans: select owner, count(*) from dba_objects where status='INVALID' group by owner;

34. How do you know the package version?
Ans: select text from dba_source where name='PACKAGE_NAME' and type='PACKAGE BODY' and rownum<10;

35. How do you load a Java class into the database?
Ans: loadjava

36. What are restart files, and where are they located?
Ans: These files contain the previous session information for adadmin. Location: $APPL_TOP/admin/<SID>/restart/*.rf9

37. How do you validate the APPS schema?
Ans: Validation checks synonyms, missing synonyms and all grants. You can do it from adadmin; after validating, it produces
a report in $APPL_TOP/admin/<SID>/out/*.out

38. How do you enable maintenance mode?
Ans: Using adadmin, or by running the script adsetmmd.sql ENABLE/DISABLE (located in $AD_TOP/patch/115/sql)

39. What is the APPS_MRC schema?
Ans: It is the schema used for Multiple Reporting Currencies (MRC), kept synchronized with the APPS schema.

40. How do you see the version of a script, form, report, etc.?
Ans: grep Header adsetmmd.sql, or adident Header adsetmmd.sql
For binaries: strings -a GLXSTEA.fmx | grep Header, or adident Header GLXSTEA.fmx
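The mechanism can be sketched with a synthetic file (the $Header line below imitates the revision markers Oracle embeds in shipped files; the version number is made up):

```shell
# Write a file carrying a $Header revision marker, then read the
# version field back with grep/awk, much as adident would.
printf '/* $Header: adsetmmd.sql 115.5 2005/01/01 12:00:00 appldev ship $ */\n' > /tmp/ver_demo.sql
grep Header /tmp/ver_demo.sql | awk '{print $4}'   # prints 115.5
```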

41. What is the location of the adadmin log?
Ans: $APPL_TOP/admin/<SID>/log

42. What are the Oracle homes in Apps?
Ans: The 8.0.6 ORACLE_HOME (Developer 6i products) and IAS_ORACLE_HOME (Apache)

43. How do you configure hostname/IP resolution on the client side and the server side?
Ans: c:\windows\system32\drivers\etc\hosts on the client, and /etc/hosts on the server

44. What is the location of the database-server-side scripts?
Ans: $ORACLE_HOME/appsutil/scripts/<context_name>

45. What is the utility to clean the concurrent manager tables?
Ans: @cmclean.sql (you have to download it from Metalink)

46. How do you stage the 11.5.10 Apps software?
Ans: using adautostg.pl

47. What is the location of the source files of forms?
Ans: $AU_TOP/forms/US

48. What is the executable to generate forms?
Ans: f60gen



Best regards,

Rafi.

Tuesday, April 20, 2010

Cognizant interview questions

Hi,

Q 1) What are the differences between RAID 0, RAID 1, RAID 0+1, RAID 1+0, RAID 5 and RAID 10? (Cognizant)
Ans: Please refer to my post on RAID types.


Q 2) How do you take a backup in ASM? (Cognizant)

Ans: We have to use RMAN to back up files stored in ASM; the BACKUP AS COPY command does this.
Eg:
TEST USING TRANSPORTABLE TABLESPACES, STEP BY STEP
1. Create tablespace accounts
2. Create table accounts on tablespace accounts, insert some records on it
3. Create a Directory for Data Pump and grant privileges on it to Public
4. Check that the tablespace can be part of a transportable tablespace set
5. Flush online logs to disk
6. Set tablespace accounts read only
7. Export tablespace accounts using data pump
8. Set tablespace accounts offline
9. Backup the tablespace using Rman backup as copy set keep until sysdate + 2 years
10. Drop tablespace accounts including datafiles
11. Confirm that you cannot access the table on the dropped tablespace
Restore tablespace steps
12. Restore the datafile backup to ASM
13. Import tablespace accounts back into the database using data pump
14. Set tablespace accounts online and check the control table

1. Create tablespace accounts
SQL> create tablespace accounts;
Tablespace created.
SQL> select file_name from dba_data_files where tablespace_name='ACCOUNTS';
FILE_NAME
-----------------------------------------------

2. Create table accounts on tablespace accounts, insert some records on it
SQL> conn avargas/oracle
Connected.
SQL> create table accounts tablespace accounts as select * from dba_users;
Table created.
SQL> insert into accounts select * from accounts;
15 rows created.
SQL> /
30 rows created.
SQL> commit;
Commit complete.
SQL> select count(*) from accounts;
COUNT(*)
----------
60
3. Create a Directory for Data Pump and grant privileges on it to Public
SQL> create or replace directory XTTS as '/oradisk/app01/oracle/scripts/av';

Directory created.
SQL> grant read, write on directory XTTS to public;
Grant succeeded.
4. Check that the tablespace can be part of a transportable tablespace set
SQL> EXECUTE sys.DBMS_TTS.TRANSPORT_SET_CHECK('accounts',true);
PL/SQL procedure successfully completed.
SQL> select * from sys.transport_set_violations;
no rows selected
5. Flush online logs to disk
SQL> alter system archive log current;
System altered.
6. Set tablespace accounts read only
SQL> alter tablespace accounts read only;
Tablespace altered.
7. Export tablespace accounts using data pump

{oracle} /oradisk/app01/oracle/scripts/av [pollux.com] > expdp system/oracle
dumpfile=xtts.dmp directory=XTTS transport_tablespaces=accounts
Export: Release 10.2.0.3.0 - Production on Thursday, 03 January, 2008 21:34:20
Copyright (c) 2003, 2005, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

8. Set tablespace accounts offline
SQL> alter tablespace accounts offline;
Tablespace altered.
9. Backup the tablespace using Rman backup as copy
Note that in order to be able to setup the keep time for a period longer than the setup of the Flash Recovery Area
we must make this backup outside the Flash Recovery Area
RMAN> backup as copy datafile '+DB1DATADG/redx/datafile/accounts.287.642998077'
2> tag accounts_DF_CP
3> format '/oradisk/app01/oracle/scripts/av/%U'
4> keep until time='sysdate + 730' nologs;
Starting backup at 03/01/2008 21:44:33
using channel ORA_DISK_1
backup will be obsolete on

10. Drop tablespace accounts
SQL> drop tablespace accounts including contents and datafiles;
Tablespace dropped.
11. Confirm that you cannot access the table on the dropped tablespace
SQL> select count(*) from accounts;
select count(*) from accounts
*
ERROR at line 1:
ORA-00942: table or view does not exist
SQL> select tablespace_name from dba_tablespaces where tablespace_name='ACCOUNTS';

12. Restore the datafile backup to ASM
In this step we need to use the RMAN COPY command instead of RESTORE. RESTORE requires that the tablespace
exist, and we dropped it as part of the TTS procedure.
Take note of the output filename given by RMAN; you will need to use this name in the next step.
RMAN> copy datafilecopy '/oradisk/app01/oracle/scripts/av/data_D-REDX_I-
2305343033_TS-ACCOUNTS_FNO-6_0pj58r62' to '+DB1DATADG';
Starting backup at 03/01/2008 22:15:35
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input is copy of datafile 00006: /oradisk/app01/oracle/scripts/av/data_D-REDX_I-2305343033_TS-ACCOUNTS_FNO-6_0pj58r62
output filename=+DB1DATADG/redx/datafile/accounts.287.643068939
tag=ACCOUNTS_DF_CP recid=21 stamp=643068957
channel ORA_DISK_1: datafile copy complete,

13. Import tablespace accounts back into the database using data pump
[pollux.com] > impdp system/oracle dumpfile=xtts.dmp directory=XTTS
transport_datafiles='+DB1DATADG/redx/datafile/accounts.287.643068939'
Import: Release 10.2.0.3.0 - Production on Thursday, 03 January, 2008 22:18:14
Copyright (c) 2003, 2005, Oracle. All rights reserved.

14. Set tablespace accounts online and check the control table
On the previous check our current log sequence was 27; now we are at sequence 29. This check is made to be
sure that, even though the database has advanced its SCN, we will not need to recover the offline tablespace when
setting it online again.
SQL> alter tablespace accounts online;
Tablespace altered.
SQL> select count(*) from avargas.accounts

Q 4) What is client-side and server-side load balancing in RAC? Explain briefly, with implementation. (Cognizant)
Ans: Client-side load balancing, by definition, "evenly spreads new connection requests across all listeners": "Oracle Database randomly selects an address in the address list and connects to that node's listener. This provides a balancing of client connections across the available listeners in the cluster." Another definition: "whereby if more than one listener services a single database, a client can randomly choose between the listeners for its connect requests. This randomization enables all listeners to share the burden of servicing incoming connect requests."
Client-side load balancing is configured by adding LOAD_BALANCE=ON to the tnsnames.ora file. The Net Services Reference tells us how and where to embed this parameter: "embed this parameter under either the DESCRIPTION_LIST parameter, the DESCRIPTION parameter, or the ADDRESS_LIST parameter." Note, however, that whether you place LOAD_BALANCE under the ADDRESS_LIST or outside it makes a difference: it determines whether whole ADDRESS_LISTs or individual ADDRESSes are load balanced.

Configuration
I have a 2-node 10g RAC cluster, and for load balancing, I have the following entry in my tnsnames.ora file.
MODDB=
(description=
(load_balance=on)
(address=(protocol=tcp)(host=moddb1-vip)(port=2521))
(address=(protocol=tcp)(host=moddb2-vip)(port=2521))
(connect_data=
(service_name=MODDB)
)
)
I want to trace the client-side load balancing behaviour from the client side, so I enable Oracle Net tracing by adding the following lines to sqlnet.ora; I should then find trace files for each physical connection under $ORACLE_HOME/network/trace.
TRACE_LEVEL_CLIENT = USER
TRACE_FILE_CLIENT = SQLTRC
Testing a connection
We can first test one connection…
$ sqlplus -s tester/tester@MODDB

exit
Now take a look at the trace file produced under $ORACLE_HOME/network/trace. An inspection of the trace file shows two entries that are of interest (it is possible to observe server-side load balancing as well, but that's a topic for another post):

niotns: Calling address: (description=(load_balance=on)(failover=on)(address=(protocol=tcp)(host=moddb1-vip)(port=2521))(address=(protocol=tcp)(host=moddb2-vip)(port=2521))(connect_data=(service_name=MODDB)(CID=(PROGRAM=e:\oracle\product\10.2.0\client_1\bin\sqlplus.exe)(HOST=LADDO)(USER=tester))))
nsc2addr: (DESCRIPTION=(load_balance=on)(failover=on)(address=(protocol=tcp)(host=moddb2-vip)(port=2521))(connect_data=(service_name=MODDB)(CID=(PROGRAM=e:\oracle\product\10.2.0\client_1\bin\sqlplus.exe)(HOST=LADDO)(USER=tester))))
Unfortunately, I couldn’t find much documentation on how to read a trace file, I can only guess…
* niotns entry shows a lookup of tns information based on the service name I supplied “MODDB”.
* nsc2addr entry shows Oracle Net connects to listener on moddb2-vip.
It's difficult to observe load balancing with just one connection, so I wrote a Bash script to make 1000 connections:
#!/bin/bash

for a in {1..1000}
do
echo $a
sqlplus -s tester/tester@MODDB <<EOF
exit
EOF
done

exit 0
Examining the traces
Extending the idea above, we can use a combination of grep, and wc to see the result of load balancing. For some reason, each connection produced 2 trace files, so we have 2000 trace files for 1000 connections.
$ ls -l *.trc |wc -l
2000

$ grep nsc2addr *.trc | grep load_balance |grep moddb1-vip |wc -l
498

$ grep nsc2addr *.trc | grep load_balance |grep moddb2-vip |wc -l
502
We can see that of the 1000 connections, 498 were made to the moddb1-vip listener and 502 to moddb2-vip: a fairly even distribution. But just how random is it? Using the trace file output, I plotted the first 100 connections on a scatter chart; the first thing one can observe is that the connections are not made in round-robin fashion. (1 = moddb1-vip; 2 = moddb2-vip)
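The grep/wc counting technique can be sketched on fabricated trace files (the nsc2addr line format below is an assumption modeled on the real traces):

```shell
# Fabricate five tiny trace files, three resolving to moddb1-vip and two
# to moddb2-vip, then count per-listener connections the same way.
dir=$(mktemp -d)
for i in 1 2 3; do
  echo 'nsc2addr: (DESCRIPTION=(load_balance=on)(address=(host=moddb1-vip)))' > "$dir/c$i.trc"
done
for i in 4 5; do
  echo 'nsc2addr: (DESCRIPTION=(load_balance=on)(address=(host=moddb2-vip)))' > "$dir/c$i.trc"
done
grep nsc2addr "$dir"/*.trc | grep load_balance | grep -c moddb1-vip   # prints 3
grep nsc2addr "$dir"/*.trc | grep load_balance | grep -c moddb2-vip   # prints 2
```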
LOAD BALANCING: An Oracle RAC system can distribute load over many nodes; this feature is called load balancing.

There are, broadly, two methods of load balancing:
1. Client load balancing
2. Server load balancing


Client load balancing distributes new connections among Oracle RAC nodes so that no single server is overloaded with connection requests. It is configured at the net service name level by providing multiple descriptions in a description list, or multiple addresses in an address list. For example, if a connection fails over to another node after a failure, client load balancing ensures that the redirected connections are distributed among the remaining nodes in the RAC.

Configure Client-side connect-time load balancing by setting LOAD_BALANCE=ON in the corresponding client side TNS entry.

TESTRAC =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = testdb.oracleracexpert.com))
)

Server load balancing distributes processing workload among Oracle RAC nodes. It divides the connection load evenly between all available listeners, directing new user session connection requests to the least-loaded listener(s) based on the total number of sessions already connected. Each listener communicates with the other listener(s) via each database instance's PMON process.

Configure the server-side connect-time load balancing feature by setting the REMOTE_LISTENER initialization parameter of each instance to a TNS name that describes the list of all available listeners.

TESTRAC_LISTENERS =
(DESCRIPTION =
(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1)(PORT = 1521)))
(ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2)(PORT = 1521)))
)

Set *.remote_listener='TESTRAC_LISTENERS' in the database's shared SPFILE, and add the TESTRAC_LISTENERS entry to the TNSNAMES.ORA file in the Oracle home of each node in the cluster.

Once you configure server-side connect-time load balancing, each database's PMON process will automatically register the database with its local listener and cross-register it with the listeners on all other nodes in the cluster. The nodes themselves then decide which node is least busy and connect the client to that node.

FAILOVER:

An Oracle RAC system can protect against failures caused by O/S or server crashes or hardware failures. When a node failure occurs in a RAC system, connection attempts can fail over to the surviving nodes in the cluster; this feature is called failover.

There are two methods of failover
1. Connection Failover
2. Transparent Application Failover (TAF)


Connection failover - If a connection failure occurs at connect time, the client fails the connection over to another active node in the cluster. This feature enables the client to connect to another listener if the initial connection to the first listener fails.

Enable client-side connect-time Failover by setting FAILOVER=ON in the corresponding client side TNS entry.

TESTRAC =
(DESCRIPTION =
(ADDRESS_LIST=
(LOAD_BALANCE = ON)
(FAILOVER = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = testdb.oracleracexpert.com))
)

If LOAD_BALANCE is set to ON, clients randomly attempt connections to any of the nodes. If a client attempts a connection to a down node, it must wait until it learns that the node is not accessible before trying an alternate address in the ADDRESS_LIST.

Transparent Application Failover (TAF) - If a connection failure occurs after a connection is established, the connection fails over to a surviving node. Any uncommitted transactions are rolled back, and server-side program variables and session properties are lost. With TYPE=SELECT, in-flight SELECT statements are automatically re-executed on the new connection with the cursor positioned on the row it was on prior to the failover (i.e. SELECTs can survive failover in RAC, but DML does not).

TESTRAC =
(DESCRIPTION =
(LOAD_BALANCE = ON)
(FAILOVER = ON)
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC1-VIP)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = TESTRAC2-VIP)(PORT = 1521))
(CONNECT_DATA =
(SERVICE_NAME = testdb.oracleracexpert.com)
(FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
)
)
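A side note on the FAILOVER_MODE values above: BASIC failover with RETRIES=180 and DELAY=5 gives the client a reconnection budget of RETRIES * DELAY seconds, which can be checked quickly:

```shell
# Maximum time TAF keeps retrying before giving up:
retries=180   # RETRIES from FAILOVER_MODE
delay=5       # DELAY (seconds between attempts)
echo $(( retries * delay ))   # 900 seconds, i.e. a 15-minute window
```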

Q 6) Specify the order for stopping and starting the clusterware, node applications and database (including the applications). (Cognizant)

Ans: In Oracle 10g Release 2, when stopping the cluster component by component, first stop the database, then ASM, then the node applications; as we know, the clusterware is the base for both the database and ASM. All components can also be stopped at once using the 'crsctl stop crs' command. In the example below, an OCFS2 cluster filesystem is used and ASM is not in play.

The ORDER IS AS FOLLOWS:

Steps : For Stopping the things:

Step 1:First the applications should be shutdown

The corresponding service should be down in order to make specific application down.

Step 2:Database should be shutdown

# status
srvctl status database -d AUCD

# stop
srvctl stop database -d AUCD

Step 3:Nodes should be shutdown

# status
srvctl status nodeapps -n dbcl1n1

# stop
srvctl stop nodeapps -n dbcl1n1

# status
# Run as root, or you must have sudo permission in the /etc/sudoers file.

ssh dbcl1n1 /home/app/crs/bin/crsctl check crs

Details below:

[oracle@dbperf01 AUCTX1 ~]$ cd $ORA_CRS_HOME/bin
[oracle@dbperf01 AUCTX1 bin]$ pwd
/home/app/crs/bin

[oracle@dbperf01 AUCTX1 bin]$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

(OR)
Step II): The whole clusterware stack can be shut down at once:
$ sudo /home/app/crs/bin/crsctl stop crs => stops all the clusterware components, i.e.
services, instances and node applications

This command should be run as root; since we do not have the root password, and the oracle account has the sudo privilege, just add the "sudo" keyword in front of the command.

The order for starting the clusterware components is as follows:
For starting things:

Step I) Start the Clusterware components:

Note: When any host in the cluster boots, Clusterware should be started.

The scripts listed below help make this happen:
/etc/inittab
/etc/rc5.d/S96init.crs
/etc/init.d/init.crs
/etc/init.d/init.crsd
/etc/init.d/init.cssd
/etc/init.d/init.evmd
Be aware of this for two reasons:
1. It may be unnecessary for you to start Clusterware; it might already be started.
2. The script /etc/init.d/init.crs can be used to start Clusterware if you want to use it.
To start Clusterware manually, you may use this script instead of init.crs:
#!/bin/sh
# Run as root
# start
ssh dbcl1n1 /home/app/crs/bin/crsctl start crs
or
$ sudo /home/app/crs/bin/crsctl start crs => starts all the Clusterware components,
i.e. services, instances and node applications

(OR)

Step II
Step 1: Start nodeapps

After you start Clusterware, you may start nodeapps.
Use this script to start nodeapps:
#! /bin/sh
# start
srvctl start nodeapps -n dbcl1n1

Step 2:Start Database

After you start the nodeapps, you may start the Databases.
Use this script to start the databases:
#! /bin/sh
# ps
ssh dbcl1n1 ps -ef | grep dbw

# status
srvctl status database -d AUCD

# start
srvctl start database -d AUCD

Step 3: Start the applications after starting the database

Hope these questions help everyone.

Happy interviews!

Best regards,

Rafi.

Sunday, April 4, 2010

ASM INTERVIEW QUESTIONS

Top 10 ASM Questions
ASM Architecture


Q.1) What init.ora parameters does a user need to configure for ASM instances?
A. The default parameter settings work perfectly for ASM. The only parameters needed for 11g ASM:
• PROCESSES*
• ASM_DISKSTRING*
• ASM_DISKGROUPS
• INSTANCE_TYPE
• ASM is a very passive instance: it does not have many concurrent transactions or queries, so its memory footprint is quite small.
• Even if you have 20 databases connected to ASM, the ASM SGA does not need to change, because the ASM metadata is not directly tied to the number of clients.
• The 11g MEMORY_TARGET default value is more than sufficient.
• The PROCESSES parameter may need to be modified. Use this formula to determine an appropriate value:

processes = 40 + (10 + [max number of concurrent database file creations and file extend operations possible]) * n

where n is the number of databases connecting to ASM (ASM clients).
The sources of concurrent file creations include:
• Several concurrent CREATE TABLESPACE commands
• Creation of a partitioned table with several tablespace creations
• RMAN backup channels
• Concurrent archive logfile creations
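A worked example of the sizing formula (n = 3 client databases and a peak of 8 concurrent file creations/extends are assumed numbers, not recommendations):

```shell
# processes = 40 + (10 + max concurrent file creations/extends) * n
n=3         # number of databases connecting to ASM
max_ops=8   # assumed peak of concurrent file creation/extend operations
echo $(( 40 + (10 + max_ops) * n ))   # 94
```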

Q.2) How does the database interact with the ASM instance, and how do I make ASM go faster?
A. ASM is not in the I/O path, so ASM does not impede database file access. Since the RDBMS instance performs raw I/O, the I/O is as fast as possible.
• Covers the ASM instance architecture
• Covers ASM communication via ASMB
• The database communicates with the ASM instance using the ASMB (umbilicus) process. Once the database obtains the necessary extents from the extent map, all subsequent database I/O is processed by the database processes, bypassing ASM. Thus we say ASM is not really in the I/O path. So, to the question of how to make ASM go faster: you don't have to.

RDBMS and ASM Instance Interaction

(1) Database opens file: (1A) OPEN, (1B) Extent map
(2) Database reads file: (2A) READ, (2B) I/O completes
(3) Database creates file: (3A) CREATE, (3B) Allocates file, (3C) Extent map, (3D) Initializes file, (3E) Commit

1A. The database issues an open of a database file.
1B. ASM sends the extent map for the file to the database instance. Starting with 11g, the RDBMS receives only the first 60 extents; the remaining extents in the extent map are paged in on demand, providing a faster open.
2A/2B. The database now reads directly from disk.
3A. An RDBMS foreground initiates, for example, a create tablespace.
3B. ASM does the allocation, essentially reserving the allocation units for the file creation.
3C. Once the allocation phase is done, the extent map is sent to the RDBMS.
3D. The RDBMS initialization phase kicks in: the RDBMS initializes all the reserved AUs.
3E. If the file creation is successful, the RDBMS commits it.
Going forward, all I/O is done by the RDBMS.

Q.3) Do I need to set the RDBMS FILESYSTEMIO_OPTIONS parameter when I use ASM?
A. No. The RDBMS does I/O directly to the raw disk devices; the FILESYSTEMIO_OPTIONS parameter is only for filesystems.
To review, FILESYSTEMIO_OPTIONS applies to filesystem/block storage and controls which I/O options are used. The value may be any of the following:
*asynch - allows asynchronous I/O to be used where supported by the OS.
*directio - allows direct I/O to be used where supported by the OS; direct I/O bypasses the Unix buffer cache.
*setall - enables both async and direct I/O.
*none - disables both async I/O and direct I/O, so Oracle uses normal synchronous writes without any direct I/O options.
The RDBMS does raw I/O against the ASM disks, so there is no need for FILESYSTEMIO_OPTIONS. The only parameter that needs to be set is DISK_ASYNCH_IO=TRUE, which is the default. If ASMLIB is used, even DISK_ASYNCH_IO does not need to be set.
ASM is also supported with NFS files as ASM disks; in such cases, the required NFS mount options eliminate the need to set FILESYSTEMIO_OPTIONS.

Q.4) I read in the ASM best-practices paper that Oracle recommends two diskgroups. Why?
A. Oracle recommends two diskgroups to provide a balance of manageability, utilization and performance.
To reduce the complexity of managing ASM and its diskgroups, Oracle recommends that generally no more than two diskgroups be maintained and managed per RAC cluster or single ASM instance:
o Database work area: where active database files such as datafiles, control files, online redo logs and change-tracking files used in incremental backups are stored. This location is indicated by DB_CREATE_FILE_DEST.
o Flash recovery area: where recovery-related files are created, such as multiplexed copies of the current control file and online redo logs, archived redo logs, backup sets and flashback log files. This location is indicated by DB_RECOVERY_FILE_DEST.
• Having one DATA container means a single place to store all your database files; it obviates the need to juggle datafiles or decide where to place a new tablespace. Having one container for all your files also means better storage utilization, making the IT director very happy. If more storage or I/O capacity is needed, just add an ASM disk: all activities are online.
You have to ensure that this storage pool houses enough spindles to accommodate the I/O rate of all the database objects.
Bottom line: one container means one pool to manage, monitor and track.
Note, however, that additional diskgroups may be added to support tiered storage classes in Information Lifecycle Management (ILM) or Hierarchical Storage Management (HSM) deployments.
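In init.ora terms, the two-diskgroup layout above boils down to two destination parameters; a minimal sketch (the diskgroup names +DATA and +FRA, and the size, are illustrative assumptions):

```
db_create_file_dest='+DATA'       # database work area
db_recovery_file_dest='+FRA'      # flash recovery area
db_recovery_file_dest_size=200G   # quota for the recovery area
```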

Q.5) We have a 16 TB database. I'm curious about the number of disk groups we should use; e.g. one large disk group, a couple of disk groups, or otherwise?
A. For VLDBs you will probably end up with different storage tiers; e.g. some of our large customers have Tier 1 (RAID 10 FC), Tier 2 (RAID 5 FC), Tier 3 (SATA), etc. Each of these is mapped to a diskgroup.
These customers map certain tablespaces to specific tiers; e.g. SYSTEM, rollback and SYSAUX plus latency-sensitive tablespaces on Tier 1, less I/O-critical tablespaces on Tier 2, and so on.
For 10g VLDBs it is best to set an AU size of 16 MB; this is more for metadata space efficiency than for performance, and the 16 MB recommendation is only necessary if the diskgroup is going to be used by 10g databases. In 11g, variable-size extents were introduced to solve the metadata problem; this requires compatible.rdbms and compatible.asm to be set to 11.1.0.0. With 11g you should set your AU size to the largest I/O that you wish to issue for sequential access (other parameters need to be set to increase the I/O size issued by Oracle). For random small I/Os the AU size does not matter very much, as long as every file is broken into many more extents than there are disks.

Q.6) We have a new app and don't know our access pattern, but assuming mostly sequential access, what size would be a good AU fit?
A. For 11g ASM/RDBMS it is recommended to create disk groups with a 4 MB ASM AU size. Metalink Note 810484.1 covers this.

Q.7) Would it be better to use BIGFILE tablespaces or standard tablespaces with ASM?
A. The use of bigfile tablespaces has no bearing on ASM (or vice versa); in fact, most database-object-related decisions are transparent to ASM.
Nevertheless, bigfile tablespaces have benefits:
Fewer datafiles, which means a faster database open (fewer files to open),
faster checkpoints, and fewer files to manage. But you will need to consider
carefully the backup and recovery of these large datafiles.

Q.8) What is the best LUN size for ASM?
A. There is no best size! In most cases the storage team will dictate it to you based on their standardized LUN size. The ASM administrator merely has to communicate the ASM best practices and the application characteristics to the storage folks:
• Equally sized, equally performing LUNs
• A minimum of 4 LUNs
• The capacity requirement
• The workload characteristics (random r/w, sequential r/w) and any response-time SLA
Using this information, say a 10 TB capacity requirement, and their own standards, the storage folks should build a nice LUN group set for you.
The standardized LUN size is based on several factors, including RAID LUN set builds (concatenated, striped, hypers, etc.). Having too many LUNs elongates boot time and is very hard to manage (zoning, provisioning, masking, etc.): there is a $/LUN barometer! On the flip side, having too few LUNs makes array cache management difficult to control and creates unmanageably large LUNs, which are difficult to expand.

Q.9) In 11g RAC we want to separate ASM admins from
DBAs and create different users and groups. How do
we set this up?
A. For clarification:
• Use a separate Oracle Home for ASM and for the RDBMS.
• The RDBMS instance connects to ASM using the OSDBA group of the ASM instance.
Thus, the software owner for each RDBMS instance connecting to ASM must be
a member of ASM's OSDBA group.
• Choose a different OSDBA group for the ASM instance (asmdba) than for the
RDBMS instance (dba).
• In 11g, the ASM administrator has to be a member of a separate OSASM group
(which grants the SYSASM privilege) to separate ASM admins from DBAs.
Operating system authentication using membership in the group or groups
designated as OSDBA, OSOPER, and OSASM is valid on all Oracle platforms.
A typical deployment could be as follows:
ASM administrator:
User : asm
Group: oinstall, asmdba(OSDBA), asmadmin(OSASM)
Database administrator:
User : oracle
Group: oinstall, asmdba(OSDBA of ASM), dba(OSDBA)
ASM disk ownership: asm:oinstall
Remember that the database instance connects to the ASM instance as
SYSDBA. The user id the database instance runs as needs to be in the
OSDBA group of the ASM instance.
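On Linux, the deployment above could be set up roughly as follows (a sketch only; run as root, and adjust user and group names to your own standards):

```shell
# Create the OS groups used in the example deployment.
groupadd oinstall
groupadd asmdba
groupadd asmadmin
groupadd dba

# ASM administrator: OSASM membership via asmadmin.
useradd -g oinstall -G asmdba,asmadmin asm

# Database administrator: member of ASM's OSDBA group (asmdba) plus dba.
useradd -g oinstall -G asmdba,dba oracle
```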


Q.10) Can my RDBMS and ASM instances run different
versions?
A. Yes. ASM can be at a higher or lower version
than its client databases. There are two
components of compatibility:
• Software compatibility
• Disk group compatibility attributes:
• compatible.asm
• compatible.rdbms
This is a disk-group-level change, not an instance-level change; no
rolling upgrade here!

Disk Group Compatibility Example
• Start with 10g ASM and RDBMS
• Upgrade ASM to 11g
• Advance compatible.asm
• ALTER DISKGROUP data
SET ATTRIBUTE
'compatible.asm' = '11.1.0.7.0'
• 10g RDBMS instances are still supported
• 10g ASM instance can no longer mount the disk
group
22
Disk Group Compatibility Example
• Upgrade RDBMS to 11g
• In the RDBMS instance set initialization parameter
• compatible = 11.0
• Advance compatible.rdbms
• ALTER DISKGROUP data
SET ATTRIBUTE
'compatible.rdbms' = '11.1.0.7.0'
• New capabilities enabled
• Variable size extents
• Fast mirror resync
• Preferred read
• AUs > 16MB
• 10g RDBMS instances can no longer access the disk group
23
Disk Group Compatibility Example
• Compatibility may be set during disk group creation
• CREATE DISKGROUP data
DISK '/dev/sd[bcd]1'
ATTRIBUTE
'compatible.asm' = '11.1.0.7.0',
'compatible.rdbms' = '11.1.0.7.0',
'au_size' = '4M'
•compatible.asm and compatible.rdbms cannot
be reversed
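To check where a disk group currently stands, the attributes can be queried from the ASM instance in 11g, e.g.:

```sql
-- Shows compatible.asm, compatible.rdbms, and au_size per disk group.
SELECT g.name AS diskgroup, a.name, a.value
FROM   v$asm_diskgroup g
JOIN   v$asm_attribute a ON a.group_number = g.group_number
WHERE  a.name LIKE 'compatible%' OR a.name = 'au_size';
```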

Q.11) Where do I run my database listener from; i.e., ASM
HOME or DB HOME?
A. It is recommended to run the listener from the ASM
HOME. This is particularly important in a RAC environment,
since the listener is a node-level resource. In this
configuration, you can create additional [user] listeners from
the database homes as needed.
- This allows multiple databases on the node to register with the
listener without being tied to a specific database home.
- From a configuration tool standpoint (netca), we promote the best practice
of creating one listener per node with a node name suffix (registered
with CRS); subsequent tools that create/upgrade databases will
register instances with that listener. One can always create multiple
listeners in different homes and use them, but that would
complicate the configuration.
Backups for ASM:

Top 10 ASM Questions
Q.1) How do I back up my ASM instance?
A. Not applicable! ASM has no files to back up.
Unlike the database, ASM does not require a controlfile-type structure or
any other external metadata to bootstrap itself. All the data ASM needs
to start up is in on-disk structures (disk headers and other disk group
metadata).
A Disk Group is the fundamental object managed by ASM. It is
composed of multiple ASM disks. Each Disk Group is self-describing,
like a standard file system. All the metadata about the usage of the
space in the disk group is completely contained within the disk group.
If ASM can find all the disks in a disk group, it can provide access to
the disk group without any additional metadata.
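Because each disk header records its own disk group metadata, it can be inspected directly with the kfed utility shipped in the Oracle Home (a sketch; the device path is illustrative):

```shell
# Dump the ASM disk header; dskname/grpname show which group the disk belongs to.
$ORACLE_HOME/bin/kfed read /dev/sdb1 | egrep 'kfdhdb.dskname|kfdhdb.grpname'
```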

Q.2) When should I use RMAN and when should I use
ASMCMD copy?
A. RMAN is the recommended and most complete and
flexible method to back up and transport database files
in ASM.
ASMCMD copy is good for copying single files:
• Supports all Oracle file types
• Can be used to instantiate a Data Guard environment
• Does not update the controlfile
• Does not create OMF files
RMAN is the most complete and versatile means to back up databases
stored in ASM. However, many customers use BCV/split-mirrors as backups for
ASM-based databases, and many combine BCV mirrors with RMAN backups
of the mirrors. Why would you want to do that? RMAN ensures the
integrity of the database data blocks by running sanity checks as it
backs up the blocks.
Now most of you are wondering about the 11g asmcmd copy
command and how that fits in here. asmcmd cp is not intended for
wholesale backups (plus you would have to put the database in hot
backup mode).
In 10g the possible ways to migrate files are DBMS_FILE_TRANSFER, RMAN (copy vs.
backup), or XML DB FTP.
In 11g, we introduced the asmcmd copy command. The key point here is that copying
files out is great for:
1. Archive logs
2. Controlfiles
3. Datafiles for debugging
4. Dump sets (can be done across platforms)
Copying files in is useful for TTS (transportable tablespaces); only supported file
types can be copied in.
ASMCMD Copy
ASMCMD> ls
+fra/dumpsets/expdp_5_5.dat
ASMCMD> cp expdp_5_5.dat sys@rac1.orcl1:+DATA/dumpsets/expdp_5_5.dat
source +fra/dumpsets/expdp_5_5.dat
target +DATA/dumpsets/expdp_5_5.dat
copying file(s)...
file, +DATA/dumpsets/expdp_5_5.dat, copy committed.

Top 10 ASM Questions
Migration

Top 10 ASM Questions
Q. I'm going to add disks to my ASM disk group;
how long will this rebalance take?
A. Rebalance time is heavily driven by three items:
• The amount of data currently in the disk group
• The I/O bandwidth available on the server
• ASM_POWER_LIMIT or the rebalance power level
Use v$asm_operation to monitor progress.
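A typical way to watch rebalance progress from the ASM instance:

```sql
-- EST_MINUTES gives a rough time-to-completion for the running rebalance.
SELECT group_number, operation, state, power,
       sofar, est_work, est_rate, est_minutes
FROM   v$asm_operation;
```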

Q. We are migrating to a new storage array. How do I
move my ASM database from storage A to storage B?
A. Given that the new and old storage are both visible to
ASM, simply add the new disks to the ASM disk group
and drop the old disks. ASM rebalance will migrate
data online. If this is a RAC environment, Note 428681.1
covers how to move the OCR/voting disks to the new
storage array.

ASM_SQL> alter diskgroup DATA
drop disk
data_legacy1, data_legacy2, data_legacy3
add disk
'/dev/sddb1', '/dev/sddc1', '/dev/sddd1';

ASM Rebalancing
• Automatic online rebalance whenever the storage
configuration changes
• Only moves data proportional to the storage added
• No need for manual I/O tuning
• Enables online migration to new storage (Disk Group
DATA moves from the legacy disks to the new disks
with no downtime)


Top 10 ASM Questions
Q. Is it possible to unplug an ASM disk group from one
platform and plug into a server on another platform
(for example, from Solaris to Linux)?
A. No. Cross-platform disk group migration is not
supported. To move datafiles between platforms of
different endianness, you need to use XTTS, Data Pump,
or Streams.
The first problem you run into here is that Solaris and Linux
format their disks differently; Solaris and Linux do not recognize each
other's partitions, etc.
ASM does track the endianness of its data. However, currently the
ASM code does not handle disk groups whose endianness does not
match that of the ASM binary.
Experiments have shown that ASM disk groups can be migrated
between platforms that share a common format and endianness
(e.g., Windows to Linux), but this functionality is not currently officially
supported because it is not regularly tested.
The following links show how to migrate across platforms:
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b25159/outage.htm#CACFFIDD
http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_PlatformMigrationTTS.pdf

Top 10 ASM Questions
3rd Party Software
Top 10 ASM Questions
Q. How does ASM work with multipathing software?
A. It works great! Multipathing software is at a layer
lower than ASM, and thus is transparent.
You may need to adjust ASM_DISKSTRING to specify
only the path to the multipathing pseudo devices.
Multipathing tools provide the following benefits:
• A single block device interface for a multi-pathed LUN
• Detection of any component failure in the I/O path; e.g., fabric port, channel adapter, or HBA
• When a loss of path occurs, I/Os are re-routed to the available paths with no process
disruption
• Automatic reconfiguration of the multipaths when events occur
• Failed paths are revalidated as soon as possible, with auto-failback capabilities
• Multipaths can be configured to maximize performance using various load-balancing
methods; e.g., round robin, least I/Os queued, or least service time
When a given disk has several paths defined, each one will be presented as a unique path
name at the OS level; e.g., /dev/rdsk/c3t19d1s4 and /dev/rdsk/c7t22d1s4 could be pointing
to the same disk device. ASM, however, can only tolerate the discovery of one unique device
path per disk. For example, if the asm_diskstring is '/dev/rdsk/*', then several paths to the
same device will be discovered, and ASM will produce an error message stating this.
When using a multipath driver, which sits above the SCSI block layer, the driver will
generally produce a pseudo device that virtualizes the sub-paths. For example, in the
case of EMC's PowerPath, you can use an asm_diskstring setting of
'/dev/rdsk/emcpower*'. When I/O is issued to this disk device, the multipath driver will
intercept it and provide the necessary load balancing to the underlying sub-paths.
As long as ASM can open/read/write to the multipathing pseudo device, it should
work. Almost all multipathing products are known to work with ASM. But remember,
Oracle does not certify multipathing products; though we have a list of products that
work with ASM, it is more a guide of what's available by platform/OS.
Examples of multipathing software include EMC PowerPath, Veritas DMP, Sun Traffic
Manager, Hitachi HDLM, and IBM SDDPCM. Linux 2.6 has kernel-based multipathing
(device-mapper multipath).
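For example, with EMC PowerPath the discovery string could be restricted to the pseudo devices only (the path is illustrative; adjust it for your multipathing product and platform):

```sql
-- Make ASM discover only the multipath pseudo devices, not the sub-paths.
ALTER SYSTEM SET asm_diskstring = '/dev/rdsk/emcpower*' SCOPE=SPFILE;
```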

[Diagram: the ASM/RDBMS instance issues I/O to a multipath pseudo device under /dev/; the multipath driver routes it over sub-paths /dev/sda1 and /dev/sdb1, through redundant controllers and array cache, to the disk.]

Top 10 ASM Questions
Q. Is ASM constantly rebalancing to manage “hot
spots”?
A. No… No… Nope!!
That's a bad rumor. ASM provides even distribution of extents across all disks
in a disk group. Since each disk will have an equal number of extents, no
single disk will be hotter than another. Thus the answer is no: ASM does
not dynamically move hot spots, because hot spots simply do not
occur in ASM configurations.
Rebalance occurs only on storage configuration changes (e.g., adding,
dropping, or resizing disks).
I/O Distribution
• ASM spreads file extents evenly across all disks in a disk group
• Since ASM distributes extents evenly, there are no hot spots
[Chart: average IOPS per disk during an OLTP workload (scale 0-300), showing near-identical IOPS for every disk (cciss/c0d2 through cciss/c0d6) across failure groups FG1-FG4.]
As indicated, ASM implements the S.A.M.E. (Stripe And Mirror Everything) policy,
striping and mirroring files across all the disks in a disk group. If the disks are
highly reliable, as may be the case with a high-end array, mirroring can optionally
be disabled for a particular disk group. This policy of striping and mirroring
across all disks in a disk group greatly simplifies storage management and
provides balanced performance.

Key Value Propositions
• Manageability
  • Simple provisioning
  • Storage array migration
  • VM/FS co-existence
  • SQL, EM, command line
  • Consolidation
  • Self-tuning
• Performance
  • Distributes load across all available storage
  • No ASM code in the data path
• Availability
  • Automatic mirror rebuild
  • Automatic bad block correction
  • Rolling upgrades
  • Online patches
  • RAC and clusterware support
• Cost Savings
  • Shared storage pool
  • Just-in-time provisioning
  • No license fees
  • No support fees
Summary:
• ASM requires very few parameters to run
• ASM-based databases inherently leverage raw disk performance
• No additional database parameters are needed to support ASM
• Mixed ASM/database version support
• Facilitates online storage changes
• RMAN is recommended for backing up ASM-based databases
• Spreads I/O evenly across all disks to maximize performance
and eliminate hot spots
ASM provides filesystem and volume manager capabilities built into the Oracle database kernel. With
this capability, ASM simplifies storage management tasks, such as creating/laying out databases and
disk space management. Since ASM allows disk management to be done using familiar
create/alter/drop SQL statements, DBAs do not need to learn a new skill set or make crucial decisions
on provisioning.
The following are some key benefits of ASM:
• ASM spreads I/O evenly across all available disk drives to prevent hot spots and
maximize performance.
• ASM eliminates the need for overprovisioning and maximizes storage resource
utilization, facilitating database consolidation.
• Inherent large file support.
• Performs automatic online redistribution after the incremental addition or removal
of storage capacity.
• Maintains redundant copies of data to provide high availability, or leverages 3rd
party RAID functionality.
• Supports Oracle Database as well as Oracle Real Application Clusters (RAC).
• Capable of leveraging 3rd party multipathing technologies.
• For simplicity and easier migration to ASM, an Oracle database can contain both ASM
and non-ASM files. Any new files can be created as ASM files whilst existing files can
also be migrated to ASM.
• RMAN commands enable non-ASM managed files to be relocated to an ASM disk group.
• Enterprise Manager Database Control or Grid Control can be used to manage ASM disk
and file activities.
ASM reduces Oracle Database cost and complexity without compromising performance or availability.
ASM Collateral and Content
http://www.oracle.com/technology/asm
• ASM 11g New Features
• ASM Best Practices
• ASM vendor papers
• ASM-RAC Customer Case Studies

Top 10 ASM Questions
Extra credit questions

Top 10 ASM Questions
Q. Is ASMLIB required on Linux systems and are there
any benefits to using it?
A. ASMLIB is not required to run ASM, but it is certainly
recommended.
ASMLIB has the following benefits:
• Simplified disk discovery
• Persistent disk names
• Efficient use of system resources
o Reduced overhead
ASMLIB provides the capability for a process (RBAL) to perform a global open/close
on the disks that are being dropped or closed.
This reduces the number of open file descriptors on the system, and thus reduces the
probability of running out of global file descriptors. Also, the open
and close operations are reduced, ensuring orderly cleanup of file descriptors when
storage configuration changes occur.
A side benefit of the aforementioned items is a faster startup of the database.
o Disk management and discovery
With ASMLib, the ASM disk name is automatically taken from the name given to it by the
administrative tool. This simplifies adding disks and correlating OS names with ASM
names, and eliminates erroneous disk management activities, since disks are
already pre-named.
The default discovery string for ASM is NULL; however, if ASMLIB is used, the
ASMLIB default string replaces the NULL string, making disk discovery much more
straightforward. Note that disk discovery has been one of the big challenges for
administrators.
The ASMLib permissions are persistent across reboots and in the event of major/minor
device number changes.
In RAC environments, disk identification and discovery are as simple as in single
instance. Once the disks are labeled on one node, the other clustered nodes simply use
the default disk discovery string, and discovery is seamless.
o No requirement to set up raw links
With ASMLib, there is no requirement to modify the initialization scripts (e.g., under
/etc/init.d) to set up raw device bindings.
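Disk labeling with ASMLIB looks roughly like this (a sketch; run as root, and the device name is illustrative):

```shell
# Label a partition for ASM; the name persists across reboots.
/etc/init.d/oracleasm createdisk DATA1 /dev/sdb1

# List the labeled disks; they appear under /dev/oracleasm/disks.
/etc/init.d/oracleasm listdisks

# ASM then discovers them via the default ASMLIB discovery string 'ORCL:*'.
```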

Top 10 ASM Questions
Q. Is it possible to do rolling upgrades of ASMLIB in a
RAC configuration?
A. ASMLIB is independent of Oracle Clusterware and
Oracle Database, and thus can be upgraded on its
own.
Upgrading ASMLIB on a given node requires that ASMLIB be
disabled/stopped, which in turn requires that the database and ASM also be
shut down on that node. Once ASMLIB is upgraded, the stack can be
restarted.



Best regards,

Rafi.