Wednesday, September 8, 2010

INTERVIEW QUESTIONS OF IBM

Hi,
These are some interview questions from IBM:

Q 1)What is the ORA-01555 error?
Ans: ORA-01555 ("snapshot too old") is raised when the undo data needed to build a read-consistent view of a block has been overwritten, typically during a long-running query. This is one of the favourite interview questions and is asked by many companies.
Tom Kyte has explained it beautifully; check the link below:
http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:275215756923
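
A common mitigation is to size the undo tablespace adequately and raise UNDO_RETENTION; a minimal sketch (the 900-second value is only an illustration):

SQL> SHOW PARAMETER undo_retention
SQL> ALTER SYSTEM SET UNDO_RETENTION = 900;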

Q 2)How will you kill a process completely?
Ans: We use the command:
SQL> ALTER SYSTEM KILL SESSION 'sid,serial#' IMMEDIATE;
The process might still exist at the OS level; in that case we use kill -9 <spid>.
Through SQL we can get the OS process id (the SPID column) by joining the views v$process and v$session.
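
A minimal sketch of that join (sid 667 is only an illustrative value):

SQL> SELECT p.spid
     FROM v$process p, v$session s
     WHERE p.addr = s.paddr
     AND s.sid = 667;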

Q 3)What are the new features of RMAN in Oracle 10g?
Ans: The top 10 new features of RMAN in Oracle 10g are:
1)Incrementally Updated Backups
Using this feature, all changes between the SCN of the original image copy and the SCN of the incremental backup are applied to the image copy, winding it forward to make the equivalent of a new database image copy without the overhead of such a backup. The following example shows how this can be used:

RUN {
RECOVER COPY OF DATABASE WITH TAG 'incr_backup' UNTIL TIME 'SYSDATE - 7';
BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_backup' DATABASE;
}
The RECOVER COPY... line will not do anything until the script has been running for more than 7 days. The BACKUP INCREMENTAL line will perform a complete backup (level 0) the first day it is run, with all subsequent backups being level 1 incremental backups. After 7 days, the RECOVER COPY... line will start to take effect, merging all incremental backups older than 7 days into the level 0 backup, effectively moving the level 0 backup forward. The effect of this is that you will permanently have a 7 day recovery window with a 7 day old level 0 backup and 6 level 1 incremental backups. Notice that the tag must be used to identify which incremental backups apply to which image copies.
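
Because the image copy is continually rolled forward, it can also be used directly if disaster strikes: instead of restoring from it, RMAN can switch the database to the copy (a sketch; the database must be mounted, and RECOVER DATABASE still applies the remaining redo):

SWITCH DATABASE TO COPY;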

2)Fast Incremental Backups
There are performance issues associated with incremental backups, as the whole of each datafile must be scanned to identify changed blocks. In Oracle 10g it is possible to track changed blocks using a change tracking file. Enabling change tracking does produce a small overhead, but it greatly improves the performance of incremental backups. The current change tracking status can be displayed using the following query:

SELECT status FROM v$block_change_tracking;
Change tracking is enabled using the ALTER DATABASE command:

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
Change tracking is disabled using:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
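
By default the change tracking file is created as an Oracle Managed File; an explicit location can be given when enabling it (the path below is only an illustration):

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/oradata/MYSID/change_tracking.f';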

3)BACKUP for Backupsets and Image Copies
In Oracle 10g the BACKUP command has been extended to allow it to initiate backups of image copies in addition to backupsets. As a result, the COPY command has been deprecated in favour of this new syntax.

BACKUP AS COPY DATABASE;
BACKUP AS COPY TABLESPACE users;
BACKUP AS COPY DATAFILE 1;

4)Cataloging Backup Pieces
It is now possible to manually catalog a backup piece using the CATALOG command in RMAN. This allows backup files to be moved to alternate locations or manually archived to tape and brought back for restore operations. In Oracle 9i this functionality was only available for controlfile copies, archivelog copies and datafile copies. In addition, there are some shortcuts to allow multiple files to be cataloged using a single command. The following examples give the general idea:

# Catalog specific backup piece.
CATALOG BACKUPPIECE '/backup/MYSID/01dmsbj4_1_1.bcp';

# Catalog all files and the contents of directories which
# begin with the pattern "/backup/MYSID/arch".
CATALOG START WITH '/backup/MYSID/arch';

# Catalog all files in the current recovery area.
CATALOG RECOVERY AREA NOPROMPT;

# Catalog all files in the current recovery area.
# This is an exact synonym of the previous command.
CATALOG DB_RECOVERY_FILE_DEST NOPROMPT;

5)Automatic Instance Creation for RMAN TSPITR
If a tablespace point-in-time recovery (TSPITR) is initiated with no reference to an auxiliary instance, RMAN now automatically creates one. The auxiliary instance configuration is based on that of the target database. As a result, any channels required for the restore operations must be present in the target database so they are configured correctly in the auxiliary instance. The location of the datafiles for the auxiliary instance is specified using the AUXILIARY DESTINATION clause shown below:

RECOVER TABLESPACE users
UNTIL LOGSEQ 2400 THREAD 1
AUXILIARY DESTINATION '/u01/oradata/auxdest';

The tablespace is taken offline, restored from a backup, recovered to the specified point in time in the auxiliary instance and re-imported into the target database. The tablespace in the target database should then be backed up and brought back online:

BACKUP TABLESPACE users;
SQL "ALTER TABLESPACE users ONLINE";On successful completion the auxillary instance will be cleaned up automatically. In the event of errors the auxillary instance is left intact to aid troubleshooting.

6)Cross-Platform Tablespace Conversion
The CONVERT TABLESPACE command allows tablespaces to be transported between platforms with different byte orders. The mechanism for transporting a tablespace is unchanged; this command merely converts the datafiles to allow the transport to work.

The platforms of the source and destination servers can be identified using the V$TRANSPORTABLE_PLATFORM view. The platform of the local server is not listed, as no conversion is necessary for a matching platform.

SQL> SELECT platform_name FROM v$transportable_platform;

PLATFORM_NAME
------------------------------------
Solaris[tm] OE (32-bit)
...
...
Microsoft Windows 64-bit for AMD

15 rows selected.

The tablespace conversion can take place on either the source or the destination server. The following examples show how the command is used in each case:

# Conversion on a Solaris source host to a Linux destination file.
CONVERT TABLESPACE my_tablespace
TO PLATFORM 'Linux IA (32-bit)'
FORMAT='/tmp/transport_linux/%U';

# Conversion on a Linux destination host from a Solaris source file.
CONVERT DATAFILE=
'/tmp/transport_solaris/my_ts_file01.dbf',
'/tmp/transport_solaris/my_ts_file02.dbf'
FROM PLATFORM 'Solaris[tm] OE (32-bit)'
DB_FILE_NAME_CONVERT
'/tmp/transport_solaris','/u01/oradata/MYDB';

In the first example the converted files are placed in the directory specified by the FORMAT clause. In the second example the specified datafiles are converted to the local server's platform and placed in the directory specified by the DB_FILE_NAME_CONVERT clause.

7)Enhanced Stored Script Commands
Scripts can now be defined as global, allowing them to be accessed by all databases within the recovery catalog. The syntax for global script manipulation is the same as that for regular scripts, with the addition of the GLOBAL keyword prior to the word SCRIPT. Examples of its usage are shown below:

CREATE GLOBAL SCRIPT full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE FORCE NOPROMPT OBSOLETE;
}

CREATE GLOBAL SCRIPT full_backup FROM FILE 'full_backup.txt';

RUN { EXECUTE GLOBAL SCRIPT full_backup; }

PRINT GLOBAL SCRIPT full_backup;

LIST GLOBAL SCRIPT NAMES;
LIST ALL SCRIPT NAMES; # Global and local scripts.

REPLACE GLOBAL SCRIPT full_backup
{
BACKUP DATABASE PLUS ARCHIVELOG;
DELETE FORCE NOPROMPT OBSOLETE;
}

REPLACE GLOBAL SCRIPT full_backup FROM FILE 'full_backup.txt';

DELETE GLOBAL SCRIPT 'full_backup';

8)Backupset Compression
The AS COMPRESSED BACKUPSET option of the BACKUP command allows RMAN to perform binary compression of backupsets. The resulting backupsets do not need to be uncompressed during recovery. It is most useful in the following circumstances:

You are performing disk-based backup with limited disk space.
You are performing backups across a network where network bandwidth is the limiting factor.
You are performing backups to tape, CD or DVD where hardware compression is not available.
The following examples assume that some persistent parameters are configured in a similar manner to those listed below:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backups/MYSID/%d_DB_%u_%s_%p';
The AS COMPRESSED BACKUPSET option can be used explicitly in the backup command:

# Whole database and archivelogs.
BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

# Datafiles 1 and 5 only.
BACKUP AS COMPRESSED BACKUPSET DATAFILE 1,5;

Alternatively the option can be defined using the CONFIGURE command:

# Configure compression.
CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET;

# Whole database and archivelogs.
BACKUP DATABASE PLUS ARCHIVELOG;

Compression requires additional CPU cycles which may affect the performance of the database. For this reason it should not be used for tape backups where hardware compression is available.

9)Restore Preview
The PREVIEW option of the RESTORE command allows you to identify the backups required to complete a specific restore operation. The output generated by the command is in the same format as that of the LIST command. In addition, the PREVIEW SUMMARY option can be used to produce a summary report in the same format as the LIST SUMMARY command. The following examples show how these commands are used:

# Preview
RESTORE DATABASE PREVIEW;
RESTORE TABLESPACE users PREVIEW;

# Preview Summary
RESTORE DATABASE PREVIEW SUMMARY;
RESTORE TABLESPACE users PREVIEW SUMMARY;

10)Managing Backup Duration and Throttling
The DURATION clause of the BACKUP command restricts the total time available for a backup to complete. At the end of the time window the backup is interrupted, with any incomplete backupsets discarded. All complete backupsets are kept and used for future restore operations. The following examples show how it is used:

BACKUP DURATION 2:00 TABLESPACE users;
BACKUP DURATION 5:00 DATABASE PLUS ARCHIVELOG;
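
The DURATION clause can be refined further: PARTIAL suppresses the error that is otherwise raised when the window expires, while MINIMIZE LOAD spreads the I/O evenly over the window (a sketch):

# Run at a reduced rate so the backup uses the whole 4-hour window.
BACKUP DURATION 4:00 PARTIAL MINIMIZE LOAD DATABASE;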

Q4)What are the advantages of DATAPUMP over export/import? Why is Data Pump so fast compared to export/import?
Ans: The top 10 differences between exp/imp (original export/import) and expdp/impdp (Data Pump export and import) are:

1)Data Pump Export and Import operate on a group of files called a dump file set
rather than on a single sequential dump file.

2)Data Pump Export and Import access files on the server rather than on the client.
This results in improved performance. It also means that directory objects are
required when you specify file locations (see the example after this list).

3)The Data Pump Export and Import modes operate symmetrically, whereas original
export and import did not always exhibit this behavior.
For example, suppose you perform an export with FULL=Y, followed by an import using SCHEMAS=HR. This will produce the same results as if you performed an
export with SCHEMAS=HR, followed by an import with FULL=Y.

4)Data Pump Export and Import use parallel execution rather than a single stream of
execution, for improved performance. This means that the order of data within
dump file sets and the information in the log files is more variable.

5)Data Pump Export and Import represent metadata in the dump file set as XML
documents rather than as DDL commands. This provides improved flexibility for
transforming the metadata at import time.

6)Data Pump Export and Import are self-tuning utilities. Tuning parameters that
were used in original Export and Import, such as BUFFER and RECORDLENGTH,
are neither required nor supported by Data Pump Export and Import.

7)At import time there is no option to perform interim commits during the
restoration of a partition. This was provided by the COMMIT parameter in original
Import.

8)There is no option to merge extents when you re-create tables. In original Import,
this was provided by the COMPRESS parameter. Instead, extents are reallocated
according to storage parameters for the target table.

9)Sequential media, such as tapes and pipes, are not supported.

10)The Data Pump method for moving data between different database versions is
different than the method used by original Export/Import. With original Export,
you had to run an older version of Export (exp) to produce a dump file that was
compatible with an older database version. With Data Pump, you can use the
current Export (expdp) version and simply use the VERSION parameter to specify the target database version.
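
As an illustration of points 2 and 4 above, a minimal Data Pump export could look like this (the directory name dp_dir, its path and the scott schema are assumptions for the example):

SQL> CREATE DIRECTORY dp_dir AS '/u01/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dp_dir TO scott;

$ expdp scott/tiger SCHEMAS=scott DIRECTORY=dp_dir DUMPFILE=scott_%U.dmp LOGFILE=scott_exp.log PARALLEL=4

The %U substitution allows Data Pump to create one dump file per parallel worker in the dump file set.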


Q5)What is a lock? What are row-level and table-level locks?
Ans: Locks are mechanisms that prevent destructive interaction between transactions accessing the same resource.
The most common scenario is a developer coming to you saying "my session got locked, can you please release the lock". As a DBA you just need to identify the session actually holding the lock with the help of the query below:


Step 1: Identify the locked objects with this important query:
---------------------------------------------------------------


SELECT o.owner, o.object_name, o.object_type, o.last_ddl_time, o.status, l.session_id, l.oracle_username, l.locked_mode
FROM dba_objects o, gv$locked_object l
WHERE o.object_id = l.object_id;

Step 2:
---------
Find the serial# for the sessions holding the lock:

SQL> select SERIAL# from v$session where SID=667;

SERIAL#
----------
21091

SQL> alter system kill session '667,21091';

System altered.


Practically speaking, all locks acquired by statements within a transaction are held for the duration of that transaction.

Oracle releases all locks acquired by statements within a transaction when an explicit or implicit commit or rollback is executed.
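
To illustrate row-level versus table-level locking, a minimal sketch (the employees table and employee_id value are assumptions for the example):

-- Row-level lock (TX): only the selected rows are locked until commit/rollback.
SELECT * FROM employees WHERE employee_id = 100 FOR UPDATE;

-- Table-level lock (TM): the whole table is locked in exclusive mode.
LOCK TABLE employees IN EXCLUSIVE MODE;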






Q6)What are the databases you are handling? What is the maximum size?
Ans: I'm handling Production, Standby, Data Warehouse, Performance, Test, Development, Demo and SAP databases.

The maximum size among my databases (the data warehouse database) is 750 GB. Usually a data warehouse database (OLAP => Online Analytical Processing) will be larger in size compared to transactional databases (OLTP => Online Transaction Processing).




Best regards,

Rafi.
