
Sunday, January 20, 2013

WARNING: inbound connection timed out ORA-3136

Hi,

Recently I was troubleshooting this error and finally resolved it. First, let me explain what exactly it means, the cause of this error, possible methods of diagnosis, and finally the resolution.

INBOUND_CONNECT_TIMEOUT:
The INBOUND_CONNECT_TIMEOUT parameter specifies the time, in seconds, that a client has to complete its connect request to the listener after the network connection has been established.
In other words, it is the timeout for a new connection to be established. On the database side this setting is called SQLNET.INBOUND_CONNECT_TIMEOUT.

If the listener does not receive the client request in the time specified, it terminates the connection. Check listener.log for details.

In Oracle 9i the value was unlimited by default; in 10g the default became 60 seconds, and it remains 60 seconds in 11g.

Causes:

1) Malicious client connections.
2) A connection that takes a long time to complete.
3) The database is heavily loaded and cannot process the request in the allotted time.

Diagnosis:

1) Check the alert log file to see where the connection is coming from.
2) Check that the listener is up and running.
3) Ping the server and make sure tnsping is working.
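The alert-log check in step 1 can be partly scripted. Below is a minimal sketch that counts the inbound-timeout warnings in an alert-log excerpt; the log lines here are illustrative samples written to a temp file, and in a real diagnosis you would point LOG at your actual alert log:

```shell
#!/bin/sh
# Illustrative alert-log excerpt; in a real diagnosis set LOG to the
# database alert log path instead of a temp file with sample lines.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Sun Jan 20 04:10:11 2013
WARNING: inbound connection timed out (ORA-3136)
Sun Jan 20 04:22:45 2013
WARNING: inbound connection timed out (ORA-3136)
Sun Jan 20 05:02:03 2013
Completed: ALTER DATABASE OPEN
EOF

# Count how many inbound-timeout warnings the excerpt contains.
COUNT=$(grep -c 'ORA-3136' "$LOG")
echo "inbound timeout warnings: $COUNT"
rm -f "$LOG"
```

A sudden jump in this count is a good hint that clients are timing out faster than the database can authenticate them.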

Resolution:

To identify the listener name and ORACLE_HOME, we can use the command below.

$ ps -eaf|grep tns
oracle    1643     1  0  2012 ?        04:51:49 /data01/home/oracle/product/10.2.0/bin/tnslsnr LISTENER -inherit
oracle   31682 16935  0 04:54 pts/0    00:00:00 grep tns


Check whether a sqlnet.ora file exists under $ORACLE_HOME/network/admin; if not, create one.

Adjust the value of INBOUND_CONNECT_TIMEOUT in both the listener.ora and the server-side sqlnet.ora, then reload the listener configuration:

sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=60
listener.ora: INBOUND_CONNECT_TIMEOUT_LISTENER=60
where the listener name is LISTENER.
The default value for both settings is 60 seconds.


Below are the details:

LSNRCTL>set  INBOUND_CONNECT_TIMEOUT_LISTENER=60
[oracle@hostname admin]$ cat sqlnet.ora
#SQLNET.ORA Network Configuration File: /u01/home/oracle/product/10.2.0/network/admin/sqlnet.ora
#To eliminate inbound connection timeout
SQLNET.INBOUND_CONNECT_TIMEOUT=60

#sqlnet.authentication_services = (NONE)

#SQLNET.EXPIRE_TIME = 0

#SQLNET.ENCRYPTION_SERVER = requested

#SQLNET.ENCRYPTION_CLIENT = requested

NAMES.DIRECTORY_PATH= (TNSNAMES,hostname)
[oracle@s259722ch3el19 admin]$

[oracle@s259722ch3el19 admin]$ lsnrctl reload LISTENER

LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 19-JAN-2013 05:01:25

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=10.48.1.511)(PORT=1521)))
The command completed successfully


Keep  monitoring - alerts have stopped for now.

If the alerts continue to appear, then you have to increase the value of INBOUND_CONNECT_TIMEOUT as follows:

sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=300
listener.ora: INBOUND_CONNECT_TIMEOUT_LISTENER=300
where the listener name is LISTENER.

Hope it helps.

Happy DBA tasks and troubleshooting.


Best regards,

Rafi.

Tuesday, November 1, 2011

MDS-01370: MetadataStore configuration for metadata-store-usage "soa-infra-store" is invalid

Hi,
On Windows 7 we might face an issue where SOA_INFRA is down even after starting the SOA managed server. In this case we need to be aware of the error below, which can be resolved as given here.

MDS-01370: MetadataStore configuration for metadata-store-usage "soa-infra-store" is invalid

Error Message:

Caused By: oracle.mds.exception.MDSExceptionList: MDS-01329: unable to load element "persistence-config"
MDS-01370: MetadataStore configuration for metadata-store-usage "soa-infra-store" is invalid.
ORA-04063: package body "DEV_MDS.MDS_INTERNAL_SHREDDED" has errors
ORA-06508: PL/SQL: could not find program unit being called: "DEV_MDS.MDS_INTERNAL_SHREDDED"
ORA-06512: at line 1

Resolution:

Step 1: Uninstall the XE database by launching the Oracle XE graphical utility.

Step 2: Install the XE database again.

Step 3: Set the environment variable RCU_JDBC_TRIM_BLOCKS=TRUE, then go to the BIN directory under the Repository Creation Utility home (rcuHome) and execute the rcu.bat batch file:


C:\Users\RafiAlvi>set RCU_JDBC_TRIM_BLOCKS=TRUE

C:\Users\RafiAlvi>cd D:\Rafi_docs\Oracle_SOA_INSTALL\ofm_rcu_win_11.1.1.4.0_disk1_1of1\rcuHome\BIN

C:\Users\RafiAlvi>D:

D:\Rafi_docs\Oracle_SOA_INSTALL\ofm_rcu_win_11.1.1.4.0_disk1_1of1\rcuHome\BIN>rcu.bat

Step 4: Drop all the schema owners and tablespaces, then run RCU again to re-create the SOA Suite schemas and tablespaces.

The list of components, schema owners and tablespaces is given below:

Component               Schema Owner   Tablespace
SOA Infrastructure      DEV_SOAINFRA   DEV_SOAINFRA
BAM                     DEV_ORABAM     DEV_ORABAM
User Messaging Service  DEV_ORASDPM    DEV_IAS_ORASDPM

Step 5: Find and compile the invalid objects in the XE database.

SQL> COLUMN object_name FORMAT A30
SQL> SELECT owner,
object_type,
object_name,
status
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;

OWNER OBJECT_TYPE
------------------------------ -------------------
OBJECT_NAME STATUS
------------------------------ -------
DEV_ESS PACKAGE BODY
ESSPURGE INVALID

DEV_ESS PROCEDURE
CLOSE_FILE INVALID

DEV_ESS PROCEDURE
DEBUG_PURGE INVALID


OWNER OBJECT_TYPE
------------------------------ -------------------
OBJECT_NAME STATUS
------------------------------ -------
DEV_ESS PROCEDURE
LOG_ERROR INVALID

DEV_ESS PROCEDURE
LOG_MESSAGE INVALID

DEV_ESS PROCEDURE
WRITE_LINE INVALID


OWNER OBJECT_TYPE
------------------------------ -------------------
OBJECT_NAME STATUS
------------------------------ -------
DEV_SOAINFRA PROCEDURE
PURGE_B2B_INSTANCES_DOWN INVALID

DEV_SOAINFRA PROCEDURE
PURGE_RUNTIME_WITH_TIMEOUT INVALID


8 rows selected.

SQL> conn DEV_SOAINFRA
Enter password:
Connected.
SQL> ALTER PROCEDURE PURGE_B2B_INSTANCES_DOWN COMPILE;

Procedure altered.

SQL> ALTER PROCEDURE PURGE_RUNTIME_WITH_TIMEOUT COMPILE;

Procedure altered.


SQL> CONN DEV_MDS
Enter password:
Connected.
SQL> EXEC DBMS_UTILITY.compile_schema(schema => 'DEV_MDS');

PL/SQL procedure successfully completed.

Step 6: Now bounce the WebLogic Admin Server and the SOA managed server.


weblogic_server_stop.bat:

cd D:\oracle_middleware\user_projects\domains\Soa_suitedev_domain\bin
D:
stopWebLogic.cmd

weblogic_server_start.bat:

cd D:\oracle_middleware\user_projects\domains\Soa_suitedev_domain\bin
D:
startWebLogic.cmd

soa_server_stop.bat:

cd D:\oracle_middleware\user_projects\domains\Soa_suitedev_domain\bin
D:
stopManagedWeblogic.cmd soa_server1

soa_server_start.bat:

cd D:\oracle_middleware\user_projects\domains\Soa_suitedev_domain\bin
D:
startManagedWeblogic.cmd soa_server1


Happy SOA learning...

Best regards,

Rafi.

Tuesday, October 25, 2011

Resolving ORA-01652

Hi,

The steps we can follow to resolve ORA-01652 are as follows:

Solution 1:

We can check for held TEMP segments with this query:
SQL> set linesize 1000
SQL>select
srt.tablespace,
srt.segfile#,
srt.segblk#,
srt.blocks,
a.sid,
a.serial#,
a.username,
a.osuser,
a.status
from
v$session a,
v$sort_usage srt
where
a.saddr = srt.session_addr
order by
srt.tablespace, srt.segfile#, srt.segblk#,
srt.blocks;


TABLESPACE SEGFILE# SEGBLK# BLOCKS SID SERIAL# USERNAME OSUSER STATUS
------------------------------- ---------- ---------- ---------- ---------- ---------- ------------------------------ ------------------------------ --------
TEMP 201 169216 70912 15 5 TEST1 oracle ACTIVE
TEMP 201 178688 128 9 1450 TEST2 INACTIVE
TEMP 201 240128 128 150 299 APEX_PUBLIC_USER apex_admin INACTIVE



This command may release a held TEMP segment; try:

alter tablespace xxxxx coalesce;

alter tablespace TEMP coalesce;

(OR)

Solution 2:

We can also use a drop segments event to remove temporary space from a tablespace:

ALTER SESSION SET EVENTS 'immediate trace name drop_segments level &x';

where:
x is the tablespace number (ts# from v$tablespace) of the target tablespace plus 1.

(OR)

Solution 3: Add a tempfile to the temporary tablespace

SQL> SELECT FILE_NAME||' '||TABLESPACE_NAME||' '||BYTES/1024/1024
2 FROM DBA_TEMP_FILES;

FILE_NAME||''||TABLESPACE_NAME||''||BYTES/1024/1024
---------------------------------------------------
/u04/TESTDB/TESTDB_temp_01.dbf TEMP 1900

SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '/u04/TESTDB/TESTDB_temp_02.dbf' SIZE 200M;

Tablespace altered.


SQL> SELECT TABLESPACE_SIZE/1024/1024||' '||ALLOCATED_SPACE/1024/1024||' '||FREE_SPACE/1024/1024
FROM DBA_TEMP_FREE_SPACE;

TABLESPACE_SIZE/1024/1024||''||ALLOCATED_SPACE/1024/1024||''||FREE_SPACE/1024/1024
--------------------------------------------------------------------------------------------------------------------------
2100 1898 1519
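Those three figures can be turned into a quick fill-percentage check outside SQL*Plus. A minimal awk sketch using the numbers from this output (all in MB: tablespace size, allocated space, free space):

```shell
#!/bin/sh
# Compute how full TEMP is from the DBA_TEMP_FREE_SPACE figures above
# (all sizes in MB: tablespace size, allocated space, free space).
echo "2100 1898 1519" | awk '{
    printf "TEMP: %d MB total, %d MB allocated (%.0f%% of total)\n",
           $1, $2, ($2 / $1) * 100
}'
```

When the allocated percentage stays near 100 even after sessions finish, that is usually the point to add a tempfile or a new temporary tablespace.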

SQL> select tablespace_name, total_blocks, used_blocks, free_blocks
2 FROM v$sort_segment;

TABLESPACE_NAME TOTAL_BLOCKS USED_BLOCKS FREE_BLOCKS
------------------------------- ------------ ----------- -----------
TEMP 242688 74112 168576


SQL> set linesize 1000

SQL> SELECT tablespace_name, SUM(bytes_used)/1024/1024, SUM(bytes_free)/1024/1024
FROM V$temp_space_header
GROUP BY tablespace_name;

TABLESPACE_NAME SUM(BYTES_USED)/1024/1024 SUM(BYTES_FREE)/1024/1024
------------------------------ ------------------------- -------------------------
TEMP 1898 202


Then resize the existing datafile related to the tablespace if required, or add a new datafile to the tablespace.


Adding a datafile to an existing tablespace:
ALTER TABLESPACE RMRTM ADD DATAFILE '/u05/TESTDB/RMRTM_DATA_01.DBF' SIZE 200M;


SQL> SELECT * FROM DBA_DATA_FILES
WHERE TABLESPACE_NAME LIKE '%RM%';

FILE_NAME FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS ONLINE_
---------------------------------------------------------------------------
/u04/TESTDB/RMRTM_DATA_01.DBF 39 RMRTM 3221225472 393216 AVAILABLE 39 YES 3221225472 393216 12800 3220176896 393088 ONLINE



SQL> SET LINESIZE 1000
SQL> SELECT F.TABLESPACE_NAME,
TO_CHAR ((T.TOTAL_SPACE - F.FREE_SPACE),'999,999') "USEDMB",
TO_CHAR (F.FREE_SPACE, '999,999') "FREEMB",
TO_CHAR (T.TOTAL_SPACE, '999,999') "TOTALMB",
TO_CHAR ((ROUND ((F.FREE_SPACE/T.TOTAL_SPACE)*100)),'999')||' %' FREE
FROM (SELECT TABLESPACE_NAME,
ROUND (SUM (BLOCKS*(SELECT VALUE/1024
FROM V$PARAMETER
WHERE NAME = 'db_block_size')/1024) ) FREE_SPACE
FROM DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME ) F,
(
SELECT TABLESPACE_NAME,
ROUND (SUM (BYTES/1048576)) TOTAL_SPACE
FROM DBA_DATA_FILES
GROUP BY TABLESPACE_NAME ) T
WHERE F.TABLESPACE_NAME = T.TABLESPACE_NAME;


TABLESPACE_NAME USEDMB FREEMB TOTALMB FREE
------------------------------ -------- -------- -------- ------

RMRTM 2,748 324 3,072 11 %


SQL>ALTER TABLESPACE RMRTM ADD DATAFILE '/u05/TESTDB/RMRTM_DATA_01.DBF' SIZE 200M;

Tablespace altered

(OR)

We can resize the existing datafile if the mount point has enough space.

Note:
Resizing a datafile can be done as given below:

ALTER DATABASE DATAFILE 'D:\ORACLEXE\ORADATA\XE\USERS.DBF' RESIZE 200M;
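Before resizing, it is worth confirming that the mount point actually has headroom. A minimal sketch; MOUNT here is the current directory for illustration, and you would point it at the filesystem holding the datafile:

```shell
#!/bin/sh
# Check free space (in MB) on the filesystem that holds the datafile.
# MOUNT is illustrative; point it at the datafile's real mount point.
MOUNT=.
FREE_MB=$(df -Pm "$MOUNT" | awk 'NR == 2 { print $4 }')
echo "free space on $MOUNT: ${FREE_MB} MB"
```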

(OR)

Solution 4: Create a new temporary tablespace and make it the default temporary tablespace for our database.


Step 1: Create the new temporary tablespace

SQL> select name from v$database;

NAME
---------
TESTDB

SQL> select status from v$instance;

STATUS
------------
OPEN

SQL> CREATE TEMPORARY TABLESPACE TEMP02 TEMPFILE '/u05/TESTDB/TEMP02.dbf' SIZE 256M
REUSE AUTOEXTEND ON NEXT 256M MAXSIZE 8192M EXTENT MANAGEMENT LOCAL;

Tablespace created.

Step 2: Make this temporary tablespace the default at the database level

SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE TEMP02;

Database altered.

Where:
'TEMP02' is the DEFAULT TEMPORARY TABLESPACE name in the above case.

Step 3: Bounce the database so that the temporary space is actually released from the old TEMP tablespace, then drop the TEMP tablespace including its contents.

We have to make sure the old temp datafiles are deleted and the actual space is recovered.

SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup

ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size 2220200 bytes
Variable Size 855641944 bytes
Database Buffers 205520896 bytes
Redo Buffers 5554176 bytes
Database mounted.
Database opened.
SQL> Drop tablespace temp including contents;

Tablespace dropped.

SQL> select *from dba_temp_files;

FILE_NAME
--------------------------------------------------------------------------------
FILE_ID TABLESPACE_NAME BYTES BLOCKS STATUS
---------- ------------------------------ ---------- ---------- -------
RELATIVE_FNO AUT MAXBYTES MAXBLOCKS INCREMENT_BY USER_BYTES USER_BLOCKS
------------ --- ---------- ---------- ------------ ---------- -----------
/u05/TESTDB/TEMP02.dbf
3 TEMP02 268435456 32768 ONLINE
1 YES 8589934592 1048576 32768 267386880 32640

Now we can see that the new temporary tablespace is 'TEMP02', with enough free space.

Note: We can adopt any one of these solutions.

Hope it helps...


Happy troubleshooting...


Best regards,

Rafi.

Saturday, August 20, 2011

Setting the Oracle Environment for various Oracle Database versions

Hi,
When we install different Oracle software versions on our machine, it is very important to set the correct PATH and ORACLE_HOME environment variables in order to connect to each of them. Below is one such approach, which will be helpful.

Error messages:

C:\Windows\system32>sqlplus
Error 6 initializing SQL*Plus
Message file sp1.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory

Prerequisite step:

On Windows OS:

Delete the environment variable 'PATH' by going to Computer => Properties => Advanced system settings => Advanced => Environment Variables => Path, delete it, and click OK.

On linux OS:
We have to remove the 'PATH' variable from .bash_profile, or we can set up a separate .bash_profile for each Oracle Database version; we also have to use the export command in place of set.

C:\>set ORACLE_HOME=
C:\>set PATH=%ORACLE_HOME%\bin;%PATH%
C:\>set ORACLE_SID=
C:\>sqlplus "/ as sysdba"

Oracle 11g:

Prerequisite step: Make sure the Oracle 11g listener and database services are started on your machine (run services.msc from the Run prompt and start the Oracle 11g listener and database services).

set ORACLE_HOME=D:\app\RafiAlvi\product\11.2.0\dbhome_1
set PATH=%ORACLE_HOME%\bin;%PATH%
set ORACLE_SID=ORCL
sqlplus "/as sysdba"

Oracle 10g:

Prerequisite step: Make sure the Oracle 10g listener and database services are started on your machine.

set ORACLE_HOME=D:\oracle\product\10.2.0\db_1
set PATH=%ORACLE_HOME%\bin;%PATH%
set ORACLE_SID=TESTDB
sqlplus "/as sysdba"
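On Linux the same idea uses export instead of set, as noted above. A minimal sketch; the ORACLE_HOME path and SID below are illustrative, not from an actual install:

```shell
#!/bin/sh
# Illustrative Linux equivalent of the Windows 'set' commands above;
# the path and SID are examples, adjust them to your installation.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_SID=ORCL
echo "ORACLE_HOME=$ORACLE_HOME"
echo "ORACLE_SID=$ORACLE_SID"
# sqlplus "/ as sysdba"   # would now be resolved from $ORACLE_HOME/bin
```

Putting one such block per database version in its own profile script makes switching versions a single `source` command.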

Hope it helps.


Best regards,

Rafi.





Sunday, July 17, 2011

Resolving ORA-31626,ORA-31633 while running Datapump job

Resolving ORA-31626,ORA-31633 while running Datapump job:
-------------------------------------------------------------
When I checked the expdp logs (Datapump backup logs) and backup for one of my development databases, I found both the previous day's log and backup missing, yet the cron job log showed the job had actually run. After some thought I decided to run the expdp (Datapump) job once again manually, with the help of one of my favourite commands, 'nohup', as follows:
$nohup sh expdp_FULL_DB_DEVDB.sh>a.out&

In a.out (the log file for the nohup job used to run the expdp job) I found the error below:

ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.FULL_EXPDP_DEVDB"

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00955: name is already used by an existing object

After some research, I found the cause of this error, as given below:

Cause: Job creation failed because a master table (a table created when a Datapump job starts and dropped when it completes) and its indexes could not be created, most commonly due to the pre-existence of a table with the same name (the job name) in the user schema.

Solution: I decided to change the JOB_NAME in the expdp script and run the script once again with the help of 'nohup', as given below:

$vi expdp_FULL_DB_DEVDB.sh
JOB_NAME=EXPD_JOB_NEW_NAME
:wq

$nohup sh expdp_FULL_DB_DEVDB.sh>a.out&

This time the 'nohup' command ran successfully; I checked that the 'expdp' job started successfully and the log file also looked fine.
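One way to keep the ORA-00955 collision from recurring is to make JOB_NAME unique per run instead of editing the script by hand each time. A minimal sketch; the base name follows the job name in this post, and the timestamp format is my own choice:

```shell
#!/bin/sh
# Append a timestamp so each run creates a differently named master
# table, avoiding ORA-00955 if an old master table was left behind.
JOB_NAME="FULL_EXPDP_DEVDB_$(date +%Y%m%d_%H%M%S)"
echo "$JOB_NAME"
# The expdp parameter line would then read: job_name=$JOB_NAME
```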

$ps -eaf|grep expdp
=>Datapump(expdp) job found running

$tail -f expdp-18072011_095042.log
=>show running log file and I've verified here that expdp job is running fine and completed successfully.

After some time the expdp job completed successfully. My only concern is that this issue should not occur next time; I am not getting any clue why it happened, so if you have any, please reply to this post.

Hope it helps...


Best regards,

Rafi.

Wednesday, July 13, 2011

Resolving DRG-10502 and ORA-06512

Hi,
When I checked my Datapump export backup (logical backup) log file, I saw the error message below. In this post I explain the cause and the possible solution to get rid of this error in our next Datapump export backup. Remember, we need to check our log files regularly to resolve such errors and to make sure our backups complete error-free, as this is vital for a DBA to avoid future issues.

Error message:

ORA-39127: unexpected error from call to local_str := SYS.DBMS_EXPORT_EXTENSION.GET_DOMAIN_INDEX_METADATA('WWV_FLOW_OH_IDX','APEX_040000','TEXTINDEXMETHODS','CTXSYS',11.02.00.00.00,newblock,0)
ORA-20000: Oracle Text error:
DRG-10502: index APEX_040000"."WWV_FLOW_OH_IDX does not exist
ORA-06512: at "SYS.DBMS_EXPORT_EXTENSION", line 270
ORA-06512: at line 1
ORA-06512: at "SYS.DBMS_METADATA", line 6300


Cause:

We got this error in my 11g Release 2 database for the following reason:

The errors are thrown because the domain index has status FAILED (visible in the USER_/DBA_INDEXES views) and no entry exists for it in the Oracle Text data dictionary (the ctxsys schema). Only the metadata of valid domain indexes is exported.
I tried to rebuild the index, but the same error was thrown, so I finally decided to drop it.

SQL> select name from v$database;

NAME
---------
TESTDB

SQL> select index_name, status, domidx_status, domidx_opstatus
from dba_indexes
where index_type = 'DOMAIN'
and domidx_opstatus = 'FAILED'; 2 3 4

INDEX_NAME STATUS DOMIDX_STATU DOMIDX
------------------------------ -------- ------------ ------
WWV_FLOW_OH_IDX VALID VALID FAILED

SQL> ALTER INDEX APEX_040000.WWV_FLOW_OH_IDX REBUILD ONLINE;
ALTER INDEX APEX_040000.WWV_FLOW_OH_IDX REBUILD ONLINE
*
ERROR at line 1:
ORA-29858: error occurred in the execution of ODCIINDEXALTER routine
ORA-20000: Oracle Text error:
DRG-10561: index WWV_FLOW_OH_IDX is not valid for requested operation
ORA-06512: at "CTXSYS.DRUE", line 160
ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 614

Solution:
To implement the solution, please execute the following steps:

* if the indexes are required in your application drop and re-create them
* if the indexes are not used by your application then drop them

Resolution:
I dropped the index because I know it is not used by our application; one valid reason is that ctxsys, the owner of Oracle Text (formerly interMedia Text), is not used by our application.

Note: Before dropping WWV_FLOW_OH_IDX, please make sure it is not used by your application; otherwise drop and recreate it.

SQL> DROP INDEX APEX_040000.WWV_FLOW_OH_IDX;

Index dropped.

SQL> select index_name, status, domidx_status, domidx_opstatus
from dba_indexes
where index_type = 'DOMAIN'
and domidx_opstatus = 'FAILED'; 2 3 4

no rows selected

References: My Oracle Support, Oracle Forums

Hope it helps...


Best regards,

Rafi.

Wednesday, July 6, 2011

Resolving UDE-12545 and ORA-12545 error

Hi,
Oracle networking errors can test our patience sometimes. Below is one such error: changing the host or domain name of a server can lead to UDE-12545 and ORA-12545 errors. Below I discuss the cause of this error and possible ways to diagnose and resolve it in a very simple manner.

Error message:

UDE-12545: operation generated ORACLE error 12545
ORA-12545: Connect failed because target host or object does not exist

Diagnosis:
We can solve this error as follows:

1) Check that the listener is up and running

[oracle@node1 Backup_Scipts]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 06-JUL-2011 10:55:48

Copyright (c) 1991, 2009, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=node1.in.company.com)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.1.0 - Production
Start Date 02-JUL-2011 11:00:36
Uptime 3 days 23 hr. 55 min. 13 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/oracle11g/product/11.2.0/dbhome_1/network/admin/listener.ora
Listener Log File /u01/oracle11g/diag/tnslsnr/node1/listener/alert/log.xml
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=node1.in.company.com)(PORT=1521)))
Services Summary...
Service "TESTDB" has 1 instance(s).
Instance "TESTDB", status UNKNOWN, has 1 handler(s) for this service...
Service "TESTDB2" has 1 instance(s).
Instance "TESTDB2", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

2) Use tnsping to check whether the connection string (TNS entry) for the database is working, i.e. to check network connectivity:

Here we get the error as given below:

[oracle@node1 Backup_Scipts]$ tnsping TESTDB

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 06-JUL-2011 10:55:56

Copyright (c) 1997, 2009, Oracle. All rights reserved.

Used parameter files:
/u01/oracle11g/product/11.2.0/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node1.apac.company.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = TESTDB)))
TNS-12545: Connect failed because target host or object does not exist

Note: Here, TESTDB is the connection string (TNS entry) present in the tnsnames.ora file.

Resolution:
Now we have to check our hostname and verify it against the connection string present in the tnsnames.ora file.

Verify the host details and connection string details and validate:

[oracle@node1 Backup_Scipts]$ hostname
node1.in.company.com
As we noticed, we need to change the domain name from 'apac' to 'in' in the connection string present in the tnsnames.ora file, save it, and then try 'tnsping' again:
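A quick way to spot this kind of mismatch is to extract the HOST value from the TNS entry and compare it with the server's hostname. A minimal sketch; the tnsnames fragment below is an illustrative sample written to a temp file, not a real configuration:

```shell
#!/bin/sh
# Illustrative tnsnames.ora fragment with a stale 'apac' domain,
# written to a temp file so the extraction is self-contained.
TNS_FILE=$(mktemp)
cat > "$TNS_FILE" <<'EOF'
TESTDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1.apac.company.com)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = TESTDB)))
EOF

# Pull out the HOST value and compare it with the real hostname.
TNS_HOST=$(sed -n 's/.*HOST *= *\([^)]*\)).*/\1/p' "$TNS_FILE" | tr -d ' ')
echo "TNS entry points at: $TNS_HOST"
if [ "$TNS_HOST" != "$(hostname)" ]; then
    echo "MISMATCH: update the HOST entry in tnsnames.ora"
fi
rm -f "$TNS_FILE"
```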

[oracle@node1 admin]$ tnsping TESTDB

TNS Ping Utility for Linux: Version 11.2.0.1.0 - Production on 06-JUL-2011 10:57:58

Copyright (c) 1997, 2009, Oracle. All rights reserved.

Used parameter files:
/u01/oracle11g/product/11.2.0/dbhome_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = node1.in.company.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = TESTDB)))
OK (10 msec)

Now we can perform tasks related to RMAN, Datapump or anything else dealing with the connection string, and it will work fine.

Hope it helps.

Best regards,

Rafi.

Friday, May 6, 2011

Troubleshooting ORA-39083 & ORA-02270

Hi,
When I checked my impdp job logfile, I saw the errors ORA-39083 and ORA-02270.
You see such errors when the primary key of a table was disabled during export.
Below is one such scenario and the possible solution:


Error details from logfile of impdp:
------------------------------------


ORA-02270: no matching unique or primary key for this column-list
Failing sql is:
ALTER TABLE "TESTOOL"."TEST_LANDSCAPE_DETAIL_QKIT" ADD CONSTRAINT "PRLD_PRLM_FK" FOREIGN KEY ("PRLD_PRLM_FK") REFERENCES "PI OOL"."TEST_LANDSCAPE_MASTER_QKIT" ("PRLM_PK") DISABLE
ORA-39083: Object type REF_CONSTRAINT failed to create with error:

Process of troubleshooting:
---------------------------

Compare the source and target environments, diagnose the issue and resolve it.

Step 1: Check the tables which are having primary key disabled
-------


SQL> select owner||' '||constraint_name||' '||constraint_type||' '||status from dba_constraints where table_name='TEST_LANDSCAPE_MASTER_QKIT';

OWNER||''||CONSTRAINT_NAME||''||CONSTRAINT_TYPE||''||STATUS
------------------------------------------------------------------------
TESTOOL PRLM_PK P DISABLED
TESTOOL SYS_C0015976 C DISABLED
TESTOOL SYS_C0015975 C DISABLED
TESTOOL SYS_C0015974 C DISABLED
TESTOOL SYS_C0015973 C DISABLED
TESTOOL SYS_C0015972 C DISABLED

6 rows selected.

SQL> select owner||' '||constraint_name||' '||constraint_type||' '||status from dba_constraints where table_name='TEST2_LINE_MASTER_QKIT';

OWNER||''||CONSTRAINT_NAME||''||CONSTRAINT_TYPE||''||STATUS
------------------------------------------------------------------------
TESTOOL PLM_PK P DISABLED
TESTOOL SYS_C0015986 C DISABLED
TESTOOL SYS_C0015985 C DISABLED
TESTOOL SYS_C0015984 C DISABLED
TESTOOL SYS_C0015983 C DISABLED
TESTOOL PLM_FK R DISABLED

6 rows selected.

Step 2: Enable the primary keys
------


SQL> ALTER TABLE TESTOOL.TEST_LANDSCAPE_MASTER_QKIT
2 enable CONSTRAINT PRLM_PK;

Table altered.

SQL> ALTER TABLE TESTOOL.TEST2_LINE_MASTER_QKIT
2 enable CONSTRAINT PLM_PK;

Table altered.

SQL> select owner||' '||constraint_name||' '||constraint_type||' '||status from dba_constraints where table_name='TEST_LANDSCAPE_MASTER_QKIT';

OWNER||''||CONSTRAINT_NAME||''||CONSTRAINT_TYPE||''||STATUS
------------------------------------------------------------------------
TESTOOL PRLM_PK P ENABLED
TESTOOL SYS_C0015976 C DISABLED
TESTOOL SYS_C0015975 C DISABLED
TESTOOL SYS_C0015974 C DISABLED
TESTOOL SYS_C0015973 C DISABLED
TESTOOL SYS_C0015972 C DISABLED

6 rows selected.
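When many constraints are disabled, the ENABLE statements can be generated from a saved listing rather than typed one by one. A minimal sketch; the input lines are illustrative and in an OWNER TABLE CONSTRAINT TYPE STATUS shape that I chose for the example, not the exact output of the query above:

```shell
#!/bin/sh
# Illustrative listing: OWNER TABLE CONSTRAINT TYPE STATUS.
LIST=$(mktemp)
cat > "$LIST" <<'EOF'
TESTOOL TEST_LANDSCAPE_MASTER_QKIT PRLM_PK P DISABLED
TESTOOL TEST_LANDSCAPE_MASTER_QKIT SYS_C0015976 C DISABLED
TESTOOL TEST2_LINE_MASTER_QKIT PLM_PK P DISABLED
EOF

# Emit an ENABLE statement for every disabled primary key (type P).
awk '$4 == "P" && $5 == "DISABLED" {
    printf "ALTER TABLE %s.%s ENABLE CONSTRAINT %s;\n", $1, $2, $3
}' "$LIST"
rm -f "$LIST"
```

The generated statements can then be spooled into a script and run in SQL*Plus.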


Step 3: Now do the export, copy the dump file, and import it into the target database. Remember that the primary keys must be enabled at both the source and target side.

In the final step, copy the new dump file and import it into the target database. This time you won't get ORA-02270.

Hope it helps,


Best regards,

Rafi.

Friday, March 18, 2011

Resolving ORA-39083, USER DEFINED TYPE FAILED WHILE IMPORTING INTO THE SAME DATABASE:

Hi,
A few days back, when I checked my log file after doing an import, I saw error ORA-39083. After doing some research I resolved it as given below. This error can occur while using the imp utility or the impdp utility; I have mentioned the possible resolution for both cases.

Issue:
A user-defined data type failed to create when an import was performed into the same database (source and destination the same).

Error:
ORA-39083: Object type TYPE failed to create with error:
ORA-02304: invalid object identifier literal

Cause:
The OID of the user-defined type already exists.

Resolution:

1) With the imp utility:
Use the parameter toid_novalidate.

2) With the impdp utility:
Use the parameter TRANSFORM=oid:n.

Test case:

1) Import utility:
---------------------

imp system fromuser=prod touser=dev log=test.log ignore=y file=exp.dmp toid_novalidate=dev.T_MERGE_KEY_COLUMN,dev.T_MERGE_CONFIGURATION,dev.T_MERGE_KEY_COLUMN_LIST,dev.T_MERGE_CONFIGURATION_LIST

Where the following are the tables, which contains user defined datatype.

dev.T_MERGE_KEY_COLUMN,
dev.T_MERGE_CONFIGURATION,
dev.T_MERGE_KEY_COLUMN_LIST,
dev.T_MERGE_CONFIGURATION_LIST

The import details show as follows:

Export file created by EXPORT:V09.02.00 via conventional path
import done in US7ASCII character set and UTF8 NCHAR character set
import server uses UTF8 character set (possible charset conversion)
. importing PROD's objects into DEV
. . skipping TOID validation on type DEV.T_MERGE_KEY_COLUMN
. . skipping TOID validation on type DEV.T_MERGE_KEY_COLUMN_LIST
. . skipping TOID validation on type DEV.T_MERGE_CONFIGURATION
. . skipping TOID validation on type DEV.T_MERGE_CONFIGURATION_LIST
. . importing table "CFG_ETL_MERGE_PROCESS" 62 rows imported
. . importing table "MERGE_PROCESS_CONFIGURATION" 1116 rows imported
. . importing table "MERGE_PROCESS_KEY_COLUMN" 66 rows imported
About to enable constraints...
Import terminated successfully without warnings.


2)Impdp Utility:
-----------------

impdp emptest/pwd REMAP_SCHEMA=emptest:emphp TRANSFORM=oid:n DIRECTORY=IMP_DP_EMPTEST NETWORK_LINK=emptest LOGFILE=emptestdata5_29122010.log

Here we used impdp over a network link; we can use this for a normal impdp as well.

Note: The parameter TRANSFORM=oid:n is available from Oracle 10g Release 2 onwards.

Hope it helps.


Best regards,

Rafi.

Tuesday, May 11, 2010

ORA-01555:SNAPSHOT TOO OLD ERROR BY TOM(IMPORTANT INTERVIEW QUESTION)

Tom,
Would you tell me what the 'snapshot too old' error is? When does it happen? What are the possible
causes? How do I fix it?

Thank you very much.

Jane


and we said...
I think this support note covers the topic very well:

ORA-01555 "Snapshot too old" - Detailed Explanation
===================================================

Overview
~~~~~~~~

This article will discuss the circumstances under which a query can return the Oracle
error ORA-01555 "snapshot too old (rollback segment too small)". The article will then
proceed to discuss actions that can be taken to avoid the error and finally will provide
some simple PL/SQL scripts that illustrate the issues discussed.

Terminology
~~~~~~~~~~~

It is assumed that the reader is familiar with standard Oracle terminology such as
'rollback segment' and 'SCN'. If not, the reader should first read the Oracle Server
Concepts manual and related Oracle documentation.

In addition to this, two key concepts are briefly covered below which help in the
understanding of ORA-01555:

1. READ CONSISTENCY:
====================

This is documented in the Oracle Server Concepts manual and so will not be discussed
further. However, for the purposes of this article this should be read and understood if
not understood already.

Oracle Server has the ability to have multi-version read consistency which is invaluable
to you because it guarantees that you are seeing a consistent view of the data (no 'dirty
reads').


2. DELAYED BLOCK CLEANOUT:
==========================

This is best illustrated with an example: Consider a transaction that updates a million
row table. This obviously visits a large number of database blocks to make the change to
the data. When the user commits the transaction Oracle does NOT go back and revisit these
blocks to make the change permanent. It is left for the next transaction that visits any
block affected by the update to 'tidy up' the block (hence the term 'delayed block
cleanout').

Whenever Oracle changes a database block (index, table, cluster) it stores a pointer in
the header of the data block which identifies the rollback segment used to hold the
rollback information for the changes made by the transaction. (This is required if the
user later elects to not commit the changes and wishes to 'undo' the changes made.)

Upon commit, the database simply marks the relevant rollback segment header
entry as committed. Now, when one of the changed blocks is revisited, Oracle
examines the header of the data block, which indicates that it has been changed
at some point. The database needs to confirm whether the change has been
committed or is still uncommitted. To do this, Oracle determines the rollback
segment used for the previous transaction (from the block's header) and then
checks the rollback segment header to see whether that transaction has been
committed.

If it is found that the block is committed then the header of the data block is updated
so that subsequent accesses to the block do not incur this processing.

This behaviour is illustrated in a very simplified way below. Here we walk through the
stages involved in updating a data block.

STAGE 1 - No changes made

Description: This is the starting point. At the top of the data block we
             have an area used to link active transactions to a rollback
             segment (the 'tx' part), and the rollback segment header has
             a table that stores information on all the latest transactions
             that have used that rollback segment.

             In our example, we have two active transaction slots (01 and 02)
             and the next free slot is slot 03, since committed transactions
             can be freely overwritten.

   Data Block 500              Rollback Segment Header 5
+----+--------------+     +----------------------+---------+
| tx | None         |     | transaction entry 01 |ACTIVE   |
+----+--------------+     | transaction entry 02 |ACTIVE   |
| row 1             |     | transaction entry 03 |COMMITTED|
| row 2             |     | transaction entry 04 |COMMITTED|
| ... ..            |     | ... ... ..           | ...     |
| row n             |     | transaction entry nn |COMMITTED|
+-------------------+     +--------------------------------+

STAGE 2 - Row 2 is updated

Description: We have now updated row 2 of block 500. Note that the data
             block header is updated to point to rollback segment 5,
             transaction slot 3 (5.3), and that the change is marked
             uncommitted (ACTIVE).

   Data Block 500              Rollback Segment Header 5
+----+--------------+      +----------------------+---------+
| tx |5.3uncommitted|-+    | transaction entry 01 |ACTIVE   |
+----+--------------+ |    | transaction entry 02 |ACTIVE   |
| row 1             | +--->| transaction entry 03 |ACTIVE   |
| row 2 *changed*   |      | transaction entry 04 |COMMITTED|
| ... ..            |      | ... ... ..           | ...     |
| row n             |      | transaction entry nn |COMMITTED|
+-------------------+      +--------------------------------+

STAGE 3 - The user issues a commit

Description: Next the user issues a commit. Note that all this does is
             update the corresponding transaction slot in the rollback
             segment header to COMMITTED. It does *nothing* to the data
             block.

   Data Block 500              Rollback Segment Header 5
+----+--------------+      +----------------------+---------+
| tx |5.3uncommitted|-+    | transaction entry 01 |ACTIVE   |
+----+--------------+ |    | transaction entry 02 |ACTIVE   |
| row 1             | +--->| transaction entry 03 |COMMITTED|
| row 2 *changed*   |      | transaction entry 04 |COMMITTED|
| ... ..            |      | ... ... ..           | ...     |
| row n             |      | transaction entry nn |COMMITTED|
+-------------------+      +--------------------------------+

STAGE 4 - Another user selects data block 500

Description: Some time later another user (or the same user) revisits data
             block 500. According to the data block's header there is an
             uncommitted change in the block.

             Oracle then uses the data block header to look up the
             corresponding rollback segment transaction table slot, sees
             that the transaction has been committed, and updates data
             block 500 to reflect its true state (i.e. it performs delayed
             block cleanout).

   Data Block 500              Rollback Segment Header 5
+----+--------------+     +----------------------+---------+
| tx | None         |     | transaction entry 01 |ACTIVE   |
+----+--------------+     | transaction entry 02 |ACTIVE   |
| row 1             |     | transaction entry 03 |COMMITTED|
| row 2             |     | transaction entry 04 |COMMITTED|
| ... ..            |     | ... ... ..           | ...     |
| row n             |     | transaction entry nn |COMMITTED|
+-------------------+     +--------------------------------+
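The cleanout activity walked through above can be observed indirectly. As a sketch only (it requires SELECT access to V$SYSSTAT, the exact statistic names vary between Oracle releases so they are matched loosely, and the table scanned here is assumed to be one that was just heavily updated and committed, such as the bigemp demo table from the Examples section of this article):

```sql
rem * Sketch: snapshot the cleanout-related statistics.
select name, value
  from v$sysstat
 where name like '%cleanout%';

rem * In another session: update many rows of the table and commit.
rem * Then scan the table so that this session performs the cleanout:
select count(*) from bigemp;

rem * Re-run the first query; if the cleanout counters have risen, the
rem * SELECT has done the tidy-up work that the COMMIT deferred.
select name, value
  from v$sysstat
 where name like '%cleanout%';
```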


ORA-01555 Explanation
~~~~~~~~~~~~~~~~~~~~~

There are two fundamental causes of the error ORA-01555 that are a result of
Oracle trying to attain a 'read consistent' image. These are:

o The rollback information itself is overwritten, so that Oracle is unable to
rollback the (committed) transaction entries to attain a sufficiently old
version of the block.

o The transaction slot in the rollback segment's transaction table (stored in the
rollback segment's header) is overwritten, and Oracle cannot rollback the transaction
header sufficiently to derive the original rollback segment transaction slot.

Both of these situations are discussed below with the series of steps that
cause the ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is short
for 'Query Environment': the environment that existed when the query was first
started, and of which Oracle is trying to attain a read consistent image.
Associated with this environment is the SCN (System Change Number) at that
time; hence, QENV 50 is the query environment with SCN 50.

CASE 1 - ROLLBACK OVERWRITTEN

This breaks down into two cases: another session overwriting the rollback
information that the current session requires, or the current session
overwriting the rollback information that it itself requires. The latter is
discussed in this article because it is usually the harder one to understand.

Steps:

1. Session 1 starts query at time T1 and QENV 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 does some other work that generates rollback information.

5. Session 1 commits the changes made in steps '3' and '4'.
(Now other transactions are free to overwrite this rollback information)

6. Session 1 revisits the same block B1 (perhaps for a different row).

Now, Oracle can see from the block's header that it has been changed at a
point later than the required QENV (which was 50). Therefore Oracle needs to
get an image of the block as of this QENV.

If an old enough version of the block can be found in the buffer cache then we
will use this, otherwise we need to rollback the current block to generate another
version of the block as at the required QENV.

It is under this condition that Oracle may not be able to get the required
rollback information, because Session 1's later changes have generated rollback
information that has overwritten it; Oracle then returns the ORA-01555 error.

CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Session 1 starts query at time T1 and QENV 50

2. Session 1 selects block B1 during this query

3. Session 1 updates the block at SCN 51

4. Session 1 commits the changes
(Now other transactions are free to overwrite this rollback information)

5. A session (Session 1, another session, or a number of other sessions) then
uses the same rollback segment for a series of committed transactions.

These transactions each consume a slot in the rollback segment transaction table
such that it eventually wraps around (the slots are written to in a circular fashion) and
overwrites all the slots. Note that Oracle is free to reuse these slots since all
transactions are committed.

6. Session 1's query then visits a block that has been changed since the initial QENV
was established. Oracle therefore needs to derive an image of the block as at that point
in time.

Next Oracle attempts to lookup the rollback segment header's transaction slot
pointed to by the top of the data block. It then realises that this has been overwritten
and attempts to rollback the changes made to the rollback segment header to get the
original transaction slot entry.

If it cannot rollback the rollback segment transaction table sufficiently it will
return ORA-1555 since Oracle can no longer derive the required version of the data block.


It is also possible to encounter a variant of the transaction slot being
overwritten during block cleanout. This is briefly described below:

Session 1 starts a query at QENV 50. After this, another process updates the
blocks that Session 1 will require. When Session 1 encounters these blocks it
determines that the blocks have changed and have not yet been cleaned out (via
delayed block cleanout). Session 1 must therefore determine whether the rows in
the block existed at QENV 50 or were changed after it.

In order to do this, Oracle must look at the relevant rollback segment
transaction table slot to determine the committed SCN. If this SCN is after the
QENV then Oracle must try to construct an older version of the block; if it is
before, then the block merely needs cleaning out to be good enough for the
QENV.

If the transaction slot has been overwritten and the transaction table cannot
be rolled back to a sufficiently old version, then Oracle cannot derive the
block image and will return ORA-01555.

(Note: Normally Oracle can use an algorithm for determining a block's SCN during block
cleanout even when the rollback segment slot has been overwritten. But in this case
Oracle cannot guarantee that the version of the block has not changed since the start of
the query).

Solutions
~~~~~~~~~

This section lists some of the solutions that can be used to avoid the ORA-01555 problems
discussed in this article. It addresses the cases where rollback segment information is
overwritten by the same session and when the rollback segment transaction table entry is
overwritten.

It is worth highlighting that if a single session experiences the ORA-01555
and it is not one of the special cases listed at the end of this article, then
the session must be using an Oracle extension whereby fetches across commits
are tolerated. This does not follow the ANSI model, and in the rare cases where
ORA-01555 is returned one of the solutions below must be used.
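Several of the solutions below involve resizing or adding rollback segments. As a starting point, the following sketch (assuming access to the V$ dynamic performance views) shows each online rollback segment's size and how often it has wrapped around its extents; frequent wraps relative to the workload suggest the segments are too small for the rollback information being generated:

```sql
rem * Sketch: rollback segment size and reuse activity.
rem * WRAPS counts how often writes have wrapped from one extent into
rem * another; SHRINKS counts shrinks back towards OPTIMAL.
select n.name, s.rssize, s.extents, s.wraps, s.shrinks
  from v$rollname n, v$rollstat s
 where n.usn = s.usn;
```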

CASE 1 - ROLLBACK OVERWRITTEN

1. Increase the size of the rollback segment, which will reduce the likelihood
of overwriting rollback information that is needed.

2. Reduce the number of commits (same reason as 1).

3. Run the processing against a range of data rather than the whole table. (Same
reason as 1).

4. Add additional rollback segments. This will allow the updates etc. to be spread
across more rollback segments thereby reducing the chances of overwriting required
rollback information.

5. If fetching across commits, the code can be changed so that this is not done.

6. Ensure that the outer select does not revisit the same block at different
times during the processing. This can be achieved by:

- Using a full table scan rather than an index lookup
- Introducing a dummy sort so that we retrieve all the data, sort it and then
sequentially visit these data blocks.
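As a sketch of the dummy sort in point 6 (using the bigemp demo table from the Examples section later in this article; the ORDER BY column is arbitrary), an ORDER BY forces all qualifying rows to be read and sorted when the cursor is opened, so the query fetches from the sort results rather than revisiting the data blocks later in its life:

```sql
rem * Sketch: a dummy sort reads (and sorts) every qualifying row up
rem * front, so the data blocks are visited once rather than being
rem * revisited, and possibly rolled back, later in the query.
select rowid, bigemp.*
  from bigemp
 where a < 20
 order by b;
```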

CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

1. Use any of the methods outlined above except for '6'. This will allow
transactions to spread their work across multiple rollback segments, thereby
reducing the likelihood of rollback segment transaction table slots being
consumed.

2. If it is suspected that the block cleanout variant is the cause, then force
block cleanout to occur prior to the transaction that returns the ORA-01555.
This can be achieved by issuing the following in SQL*Plus, SQL*DBA or Server
Manager:

alter session set optimizer_goal = rule;
select count(*) from table_name;

If indexes are being accessed then the problem may be an index block, and
cleanout can be forced by ensuring that all of the index is traversed. E.g., if
the index is on a numeric column with a minimum value of 25 then the following
query will force cleanout of the index:

select index_column from table_name where index_column > 24;

Examples
~~~~~~~~

Listed below are some PL/SQL examples that can be used to illustrate the
ORA-01555 cases given above. Before these PL/SQL examples will return this
error the database must be configured as follows:

o Use a small buffer cache (db_block_buffers).

REASON: You do not want the session executing the script to be able to find old
versions of the block in the buffer cache which can be used to satisfy a block visit
without requiring the rollback information.

o Use one rollback segment other than SYSTEM.

REASON: You need to ensure that the work being done is generating rollback
information that will overwrite the rollback information required.

o Ensure that the rollback segment is small.

REASON: See the reason for using one rollback segment.
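One way to arrange such a configuration (a sketch only; the tablespace name, storage sizes and segment names are assumptions to adapt to the test database, and the small buffer cache is set separately via the db_block_buffers init.ora parameter) is to create a single small non-SYSTEM rollback segment, bring it online, and take any others offline:

```sql
rem * Sketch: one small rollback segment for the demos. The names and
rem * storage values used here are examples only.
create rollback segment small_rbs
  tablespace rbs
  storage (initial 4k next 4k minextents 2 maxextents 4);

alter rollback segment small_rbs online;

rem * Take other non-SYSTEM rollback segments (e.g. a hypothetical
rem * RBS01) offline so that all work goes through SMALL_RBS:
alter rollback segment rbs01 offline;
```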

ROLLBACK OVERWRITTEN

rem * 1555_a.sql -
rem * Example of getting ora-1555 "Snapshot too old" by
rem * session overwriting the rollback information required
rem * by the same session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

drop table dummy1;
create table dummy1 (a varchar2(200));

rem * Populate the example tables.
begin
  for i in 1..4000 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      insert into dummy1 values ('ssssssssssss');
      commit;
    end if;
  end loop;
  commit;
end;
/

rem * Ensure that table is 'cleaned out'.
select count(*) from bigemp;

declare
  -- Must use a predicate so that we revisit a changed block at a different
  -- time.
  -- If another tx is updating the table then we may not need the predicate.
  cursor c1 is select rowid, bigemp.* from bigemp where a < 20;
begin
  for c1rec in c1 loop
    update dummy1 set a = 'aaaaaaaa';
    update dummy1 set a = 'bbbbbbbb';
    update dummy1 set a = 'cccccccc';
    update bigemp set done = 'Y' where c1rec.rowid = rowid;
    commit;
  end loop;
end;
/

ROLLBACK TRANSACTION SLOT OVERWRITTEN

rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
rem * overwriting the transaction slot in the rollback
rem * segment header. This just uses one session.

drop table bigemp;
create table bigemp (a number, b varchar2(30), done char(1));

rem * Populate demo table.
begin
  for i in 1..200 loop
    insert into bigemp values (mod(i,20), to_char(i), 'N');
    if mod(i,100) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
/

drop table mydual;
create table mydual (a number);
insert into mydual values (1);
commit;

rem * Cleanout demo table.
select count(*) from bigemp;

declare
  cursor c1 is select * from bigemp;
begin
  -- The following update is required to illustrate the problem if block
  -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
  -- out then the update and commit statements can be commented out too, and
  -- the script will fail with ORA-01555 for the block cleanout variant.
  update bigemp set b = 'aaaaa';
  commit;

  for c1rec in c1 loop
    for i in 1..20 loop
      update mydual set a = a;
      commit;
    end loop;
  end loop;
end;
/

Special Cases
~~~~~~~~~~~~~

There are other special cases that may result in an ORA-01555. These are given
below but are rare and so are not discussed in this article:

o Trusted Oracle can return this if configured in OS MAC mode. Decreasing
LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.

o If a query visits a data block that has been changed by using the Oracle discrete
transaction facility then it will return ORA-01555.

o It is feasible that a rollback segment created with the OPTIMAL clause may
cause a query to return ORA-01555 if it has shrunk during the life of the
query, causing rollback segment information required to generate consistent
read versions of blocks to be lost.

Summary
~~~~~~~

This article has discussed the reasons behind the error ORA-01555 "Snapshot too old", has
provided a list of possible methods to avoid the error when it is encountered, and has
provided simple PL/SQL scripts that illustrate the cases discussed.