Tuesday, December 15, 2009

Segment Comparison (structure) in Oracle11g

There are a bunch of third-party products on the market which do a pretty good job of finding structural differences between objects or schemas. Toad comes to mind, which does an outstanding job...but it costs $$$! With Oracle11g, it's quite easy to figure out the differences between two objects. Please note that use of this feature needs a Change Management Pack license. It is available with 11.1.0.7; it is not documented in the 11gR1 docs, but it does get a mention in the 11gR2 docs. The API/package is DBMS_METADATA_DIFF. Here is how it works:

Let's say we have two schemas in different databases and would like to compare the table structures:

SQL> Select dbms_metadata_diff.compare_alter('TABLE','APP_DATA','APP_DATA_TEMP', 'CHANDRA','CHANDRA',null,'BDBE01') DIFF from dual;

DIFF
--------------------------------------------------------------------------------
ALTER TABLE "CHANDRA"."APP_DATA" ADD ("CREATE_DATE" DATE)
ALTER TABLE "CHANDRA"."APP_DATA" ADD ("UPDATE_DATE" DATE)
ALTER TABLE "CHANDRA"."APP_DATA" RENAME TO "APP_DATA_TEMP"


Explanation of the arguments used in the function call: object_type, table_name (source), table_name (target), schema_owner (source), schema_owner (target), db_link_for_source (if null, then local), db_link_for_target.
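For a quick sanity check, both database-link arguments can also be left null to compare two tables within the local database; a minimal sketch re-using the table and schema names from the example above:

-- compare two tables in the local database: both db-link arguments are null
Select dbms_metadata_diff.compare_alter('TABLE','APP_DATA','APP_DATA_TEMP', 'CHANDRA','CHANDRA',null,null) DIFF from dual;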

If the order of the table names and the database links (source and target) is swapped:

SQL> Select dbms_metadata_diff.compare_alter('TABLE','APP_DATA_TEMP','APP_DATA', 'CHANDRA','CHANDRA','BDBE01',null) DIFF from dual;

DIFF
--------------------------------------------------------------------------------
ALTER TABLE "CHANDRA"."APP_DATA_TEMP" DROP ("CREATE_DATE")
ALTER TABLE "CHANDRA"."APP_DATA_TEMP" DROP ("UPDATE_DATE")
ALTER TABLE "CHANDRA"."APP_DATA_TEMP" RENAME TO "APP_DATA"

Interestingly, it even generates an ALTER to rename the table so that both end up the same in all respects.

I don't see an option to include or exclude various segment attributes such as storage, tablespace, etc. - maybe because the intent of this API is different. Still, I thought this Oracle-provided utility would come in handy to quickly find structural differences.

Monday, September 14, 2009

Oracle11gR2: Deferred Segment Creation and export behavior!

I guess by now most of us know what Oracle11gR2's Deferred Segment Creation feature really means. This post is about the impact this feature is going to have when exporting a schema or tables using the traditional export. By default, any new table is created with the SEGMENT CREATION DEFERRED option (unless you are using CTAS). This default behavior can be turned off by setting DEFERRED_SEGMENT_CREATION to FALSE. The impact is that if you export any tables which do not have any rows (i.e., empty tables without segments), they will NOT be exported. This is only true with the legacy/old export, not with the Data Pump export utility (10g and above). Something to remember if we can't get rid of the old habit of using the old export! Of course, the old export is deprecated anyway.
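If you are stuck with the legacy export, one possible workaround (just a sketch - the owner and table names are illustrative) is to find the segment-less tables and force segment creation before exporting:

-- list tables whose segments have not been created yet (11gR2)
Select table_name from dba_tables where owner='CHANDRA' and segment_created='NO';

-- force segment creation so the legacy export picks the table up
Alter table chandra.app_data allocate extent;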

Oracle11gR2 Table / Tablespace Compression

As far as I remember, Oracle introduced the basic compression feature with Oracle9i, but what's new with 11g is the ability to also compress data generated by OLTP transactions (almost all DML) - previously compression was available only for bulk operations. Another neat feature of 11g is the ability to evaluate whether or not a table is a good candidate for compression and, if so, what the compression benefits would be in terms of space utilization. This is done using DBMS_COMPRESSION (the Compression Advisor). I wanted to use this procedure to see how accurately it estimates the space savings, and also to see what the impact on performance would be in general.

I am using an 18-million-row table for this example.

1. Space usage for the uncompressed table:


SQL> select segment_name, bytes, a.blocks, compression, compress_for from
2 dba_segments a, dba_tables b
3 where a.segment_name = b.table_name and a.segment_name='T1';

SEGMENT_NAME BYTES BLOCKS COMPRESS COMPRESS_FOR
------------------------------ ---------- ---------- -------- ------------
T1 1476395008 180224 DISABLED



2. Execute the DBMS_COMPRESSION procedure to get information on space savings:

(Note that this example works only in 11gR2; the function names and arguments are different in 11gR1.)

SQL> declare
2 lv_cmp_ratio number;
3 lv_comptype_str varchar2(300);
4 lv_BLKCNT_CMP number;
5 lv_BLKCNT_UNCMP number;
6 lv_ROW_CMP number;
7 lv_ROW_UNCMP number;
8 begin
9 dbms_compression.GET_COMPRESSION_RATIO(
10 SCRATCHTBSNAME=>'USERS',
11 OWNNAME=>'CHANDRA',
12 TABNAME=>'T1',
13 PARTNAME =>null,
14 COMPTYPE =>2, ---2 means OLTP
15 BLKCNT_CMP=>lv_BLKCNT_CMP,
16 BLKCNT_UNCMP=>lv_BLKCNT_UNCMP,
17 ROW_CMP =>lv_ROW_CMP,
18 ROW_UNCMP =>lv_ROW_UNCMP,
19 CMP_RATIO=>lv_cmp_ratio,
20 COMPTYPE_STR=>lv_COMPTYPE_STR);
21 dbms_output.put_line('====================================================');
22 dbms_output.put_line('1. Compression Ratio :'||lv_cmp_ratio);
23 dbms_output.put_line('2. Block Count :'||lv_blkcnt_cmp);
24 dbms_output.put_line('3. Compression Type :'||lv_comptype_str);
25 dbms_output.put_line('4. Blk Count Compressed :'||lv_BLKCNT_CMP);
26 dbms_output.put_line('5. Blk Count Un-compressed:'||lv_BLKCNT_UNCMP);
27 dbms_output.put_line('6. Row Count Compressed :'||lv_row_cmp);
28 dbms_output.put_line('7. Row Count Un-Compressed:'||lv_row_uncmp);
29 dbms_output.put_line('====================================================');
30 end;
31 /

====================================================
1. Compression Ratio :2.38995215311004784688995215311004784689
2. Block Count :836
3. Compression Type :"Compress For OLTP"
4. Blk Count Compressed :836
5. Blk Count Un-compressed:1998
6. Row Count Compressed :256
7. Row Count Un-Compressed:107
====================================================

PL/SQL procedure successfully completed.

Based on the above information, looking at the compressed (836) and uncompressed (1998) block counts (items 4 and 5 above), we should see at least a 50% space saving!

3. Create a new tablespace with the COMPRESS option. Unless I am missing something obvious, I couldn't find the correct syntax in either the 11gR1 or 11gR2 documentation. BTW, when I executed the same SQL on 11gR1 I got: ORA-14464: Compression Type not specified

SQL> create tablespace USER_DATA_COMPRESSED default compress for OLTP datafile '/u01/oracle/oradata/DET01/user_data_compressed.dbf' size 10M autoextend on;

Tablespace created.

4. Create a copy of the T1 table (our example table) in the compressed tablespace:

SQL> create table chandra.t1_compressed tablespace USER_DATA_COMPRESSED
2 as Select * from chandra.t1;

Table created.

5. Let's see how much space the compressed table takes compared to the uncompressed/original table:

SQL> select segment_name, bytes, a.blocks, compression, compress_for from dba_segments a,
2 dba_tables b
3 where a.segment_name = b.table_name and a.segment_name in ('T1','T1_COMPRESSED');

SEGMENT_NAME BYTES BLOCKS COMPRESS COMPRESS_FOR
------------------------------ ---------- ---------- -------- ------------
T1_COMPRESSED 612368384 74752 ENABLED OLTP
T1 1476395008 180224 DISABLED


As the DBMS_COMPRESSION procedure indicated, the compressed table uses less than half the space of the original with OLTP compression - so far so good. How about performance? I performed some generic tests involving SELECTs and other DMLs, and here are the results:

For any activity involving a full table scan, performance was about 4 times faster on the compressed table than on the uncompressed table.
Inserts and updates took close to twice as long on the compressed table compared with the uncompressed table, and deletes took about 10% longer.

As for the other benefits of compression - lower memory utilization, reduced redo generation, etc. come to mind. Of course, all at the cost of higher CPU consumption.
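For completeness, OLTP compression can also be turned on for an existing table directly, instead of relying on a tablespace default as in step 3 above. A minimal sketch (the index name is hypothetical) - note that MOVE rebuilds the table and leaves its indexes unusable:

Alter table chandra.t1 move compress for OLTP;
-- existing indexes go UNUSABLE after the move and need a rebuild
Alter index chandra.t1_pk rebuild;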

Friday, September 11, 2009

Oracle11gR2 ASM New Feature - Intelligent Data Placement

Intelligent Data Placement, a new feature of Oracle11gR2 ASM, is aimed at optimizing data access. It gives us the ability to place data on those sectors of the hard drive where I/O performance is highest/most efficient - the outer sectors. Before going into the details of the ASM feature, let's briefly look at how and why the outer sectors' I/O performance is better compared to the inner sectors.

The above-mentioned efficiencies come from a technique used in hard drives called "Zoned Bit Recording". This technique increases capacity and data access speeds on hard disks by improving the utilization of the larger, outer tracks of the disk. Each outer track contains more sectors than the track inside it, which makes more efficient use of the larger tracks on the outside of the disk. The data transfer rate is higher when reading the outer cylinders than the inner ones, since the outer cylinders contain more data per revolution.

Coming back to the original topic - ASM now classifies reads/writes into two regions: COLD and HOT. As you guessed, HOT reads/writes are those going to blocks classified as HOT - that is, placed in the outer sectors. COLD is the default - data placed in the inner sectors. Data blocks of the most frequently accessed segments are candidates for placement in the HOT region. This is done at the datafile level within an ASM disk group.

You could use SQL*Plus or DBConsole to make these changes.




It's just a matter of picking the appropriate radio button, depending on your needs/access patterns, to switch between the HOT and COLD regions.

We know what the feature means - but I wanted to put it to the test to see the real benefit in terms of I/O rates/speeds, using some general test cases. I first measured I/O performance for a set of tests (a mix of reads and writes - deletes/inserts/updates) with the files in the default (COLD) region, then ran the same tests with the datafile placed in the HOT region. The test results follow. Note that when you move a file from one region to another, you have to initiate a rebalance to make the change effective.




With Intelligent Data Placement:

ALTER DISKGROUP ASMDISK01 MODIFY FILE '+ASMDISK01/BDBE01/DATAFILE/APP_DATA.256.697380955' ATTRIBUTES (HOT)
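As noted above, moving a file between regions only takes effect after a rebalance; a minimal sketch (the power level is arbitrary):

-- initiate a manual rebalance so the HOT placement takes effect
ALTER DISKGROUP ASMDISK01 REBALANCE POWER 4;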





I executed the above-mentioned test cases multiple times to make sure I saw consistent (and thus reliable and dependable) I/O performance. During my tests there was consistently an increase of about 10% to 15% in "Average Throughput" and about -3% to +5% in "Average Response Times". Of course, we may have to perform more realistic tests with application-specific I/O patterns.

I think we can say that with this new feature, Oracle (with ASM) is able to successfully exploit the Zoned Bit Recording technique of hard drives to optimize I/O performance. Then again, I am not sure this is really going to make a huge impact given today's large caches and disk sizes.

Wednesday, September 9, 2009

Oracle11g R2 - impressive and intelligent CRS installer (OUI)!

I was really impressed with the enhancements made to the installer, apart from the change in look and feel. Prior to Oracle11gR2, the installer would allow us to place the redundant copies of the OCR/Voting devices/files onto the same file system (even though that doesn't make any sense). With 11gR2, it prevents us from placing the mirrors onto the same file system. There is no IGNORE option either... you either have to place them on different file systems/locations or use a single copy. So the installer has "best practices" built into it.

Here is my first attempt - placing all three onto the same file system:


And here is the error message:



I ended up using a single copy for now:


BTW, the installer allows us to configure SSH, test SSH, and choose between ASM or a shared file system for the OCR/Voting files, etc. Best of all, it now provides a script to fix any of the fixable requirements, such as kernel parameters.

Tuesday, September 8, 2009

ORA-03001 - while trying to rename tables owned by others - workaround

There could be situations where you would like to rename tables owned by another user, but do not have access to that schema owner's password or don't want to change it (using "identified by values", etc.). You can perform almost all such activities by using "ALTER SESSION SET CURRENT_SCHEMA=<schema_owner>;", but renaming a table does not work that way and the following error message is encountered:

chandra@DOE01:11g> show user
USER is "CHANDRA"
chandra@DOE01:11g> alter session set current_schema=CP_ADMIN;

Session altered.

chandra@DOE01:11g> rename APP_DATA to APP_DATA_BAK;
rename APP_DATA to APP_DATA_BAK
*
ERROR at line 1:
ORA-03001: unimplemented feature


chandra@DOE01:11g>

To get around this, we could create a simple procedure similar to the following:

create or replace procedure rename_others_tab (orig_tab_name in varchar2, new_tab_name in varchar2)
as
  lv_sql_str varchar2(100);
  lv_exists  number(1);
begin
  -- user_tables is resolved as the procedure owner, i.e. the table owner
  select count(*) into lv_exists from user_tables where table_name = orig_tab_name;
  if lv_exists = 1 then
    lv_sql_str := 'rename '||orig_tab_name||' to '||new_tab_name;
    dbms_output.put_line (lv_sql_str);
    execute immediate lv_sql_str;
  else
    dbms_output.put_line('ERROR: '||orig_tab_name||' does not exist!');
  end if;
end;
/


Note that you would create the above procedure while your current_schema is set to the owner of the table.

Then:

chandra@DOE01:11g> exec rename_others_tab('APP_DATA','APP_DATA_BAK');

PL/SQL procedure successfully completed.

chandra@DOE01:11g>

The table would be renamed.... Of course, use the above procedure (workaround) with caution and only when ABSOLUTELY needed.

Thought this would be of use to others....

Friday, August 7, 2009

ORA-01620 while creating a RAC standby for non-RAC primary

Today I was trying to test something and had to create a RAC standby for a non-RAC (single-instance) primary database. I know, it doesn't make much sense to have a RAC standby for a single-instance primary, but that's not the point. The first instance mounted without any problems, but when I attempted to mount the second standby instance I got the following error message:

SQL> alter database mount standby database;
alter database mount standby database
*
ERROR at line 1:
ORA-01620: no public threads are available for mounting


I couldn't find quick notes on the web to get around this issue and therefore had to come up with my own for this unique situation (a RAC standby for a single-instance primary). Here is what I did on the primary:

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 group 13 size 5M;

Database altered.

SQL> ALTER DATABASE ADD LOGFILE THREAD 2 group 14 size 5M;

Database altered.

SQL> alter database enable public thread 2;

Database altered.


Then I created a new copy of the standby controlfile, shipped it over to the standby, and attempted to mount both instances - it worked fine! Of course, had the primary been RAC, we wouldn't have seen this in the first place :-) BTW, this is on Oracle11g 11.1.0.7.
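For reference, the fresh standby controlfile mentioned above can be created on the primary with something along these lines (the path is just illustrative):

SQL> alter database create standby controlfile as '/tmp/standby_control.ctl';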

Tuesday, August 4, 2009

Tablespace Point-in-time Recovery (TSPITR) using RMAN

Scenario: a logical corruption occurred due to a user error and we have to quickly get back to the state just before the time of the error. Possible options: database point-in-time recovery, but the downside is that the entire database has to be unavailable and all other transactions after the PITR would be lost. The other option is Tablespace Point-in-Time Recovery (TSPITR) - the obvious choice here. I had known that RMAN could be used to perform TSPITR, but I realized only today how powerful, simple, automated, error-free and fast TSPITR with RMAN can be - it's really amazing. The intent of this post is simply to highlight and appreciate the usefulness of RMAN for performing TSPITR.

Assuming we have a recent backup, just executing the following statement does everything needed to get the entire tablespace back to a point in time in the past. Believe me, you don't have to do anything other than run this statement!


recover tablespace APP_DATA until time
"to_date('2009-08-04 12:15:00','YYYY-MM-DD HH24:MI:SS')"
auxiliary destination '/opt/oracle/temp';

Once the above statement is executed, RMAN does the following for us:

  • Creates the auxiliary instance (including the pfile, etc.)
  • Mounts the auxiliary instance
  • Takes the candidate tablespace OFFLINE
  • Restores the base tablespaces (UNDO, SYSTEM, SYSAUX) and then the required tablespace
  • Applies archived logs (completes recovery)
  • Opens the database
  • Performs an export of the objects residing in the tablespace
  • Shuts down the auxiliary instance
  • Imports the objects into the target database
  • Removes the auxiliary instance and cleans up
That's it.... you now have all the objects back in the tablespace!

You can view the output of the RMAN session by downloading this file - it is quite self-explanatory and informative too!

Thursday, July 30, 2009

ORA-27090: Message 27090 not found on 10.2.0.4

Lately we have been noticing ORA-27090 on our 10.2.0.4 databases. Here is the complete error message, as it appeared in the alert log file. This has been observed on SUSE Linux, Red Hat and OEL 5.3:

===============================================================
Errors in file /tmp/test_dbw0_4400.trc:
ORA-27090: Message 27090 not found; product=RDBMS; facility=ORA
Additional information: 3
Additional information: 128
Additional information: 1100 #--This is the value for the parameter aio-max-nr
===============================================================
It looks like this error condition is only encountered/reported on databases running Oracle 10.2.0.4. On researching, I found that it has to do with the kernel setting fs.aio-max-nr, which sets the system-wide maximum number of AIO requests. Apparently, when the number of requests reaches this limit, the error is reported. The question then is: how can we determine where we currently are in relation to the maximum? This is answered by fs.aio-nr, which maintains a running total of the number of AIO events in use.

I was able to consistently reproduce the error condition by playing around with the settings for the aio-max-nr parameter. As such, no apparent problems were noticed on the database side, but I did notice that standby databases would fail to continue with the managed recovery process.

Here is how it looked before starting up a 10.2.0.4 instance:

#/sbin/sysctl -a |grep aio
fs.aio-max-nr = 1100 #---I reset it to this value from default.
fs.aio-nr = 0

When the instance was started, the values looked like this:
#/sbin/sysctl -a |grep aio
fs.aio-max-nr = 1100 #---I reset it to this value from default.
fs.aio-nr = 1060

As soon as it hit the ceiling, the above-mentioned error was reported in the alert log. When I raised the value, the error message disappeared.
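Raising the limit can be done on the fly with sysctl; the value below is only an example (1048576 is a commonly used setting), so adjust it to your environment and persist it in /etc/sysctl.conf:

# /sbin/sysctl -w fs.aio-max-nr=1048576
# echo "fs.aio-max-nr = 1048576" >> /etc/sysctl.conf   #--make it persistent across reboots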

As soon as the instances are shut down, the value of fs.aio-nr gets reset.

Hope this information would come in handy.

Wednesday, July 15, 2009

How to check if CRS is set to auto-start on reboots

Some time back, someone asked me a question: "How do you confirm that CRS is set to restart on reboot?"

CRS can be configured either to restart or not to restart on reboots. Disabling restarts is typically useful for troubleshooting CRS reboots or validating system-level changes.

To enable restarts (default):
$CRS_HOME/bin/crsctl enable crs

To disable restarts of crs on reboots:
$CRS_HOME/bin/crsctl disable crs

Note that the above commands require root access.

Whenever the above commands are run, they modify the following file (on Linux), which indicates whether or not CRS is set to auto-restart on reboots:

/etc/oracle/scls_scr/node_name/root/crsstart

It contains either the "enable" or the "disable" keyword.
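So a quick way to check the current setting, without running crsctl, is simply to look at that file (node name is illustrative):

# cat /etc/oracle/scls_scr/racnode01/root/crsstart   #--prints either "enable" or "disable"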

Saturday, June 20, 2009

ORA-29702 - Starting RAC instance in non-rac mode

Assume the following situation: due to some OS- or network-related problem you are not able to start up CRS (this does happen!), which impacts database availability and subsequently your business. Your primary goal now is to make the database available as soon as possible, irrespective of its mode (RAC or non-RAC), in order to reduce the impact on your business. If you attempt to start up the instance without CRS, you get the following error:

SQL> startup;
ORA-29702: error occurred in Cluster Group Service operation
SQL>

This error message indicates that the instance can't start since CRS is not available. Here is a quick workaround to start up the instance: remove the RAC option from your binaries and start the instance. Once you have resolved the CRS-related issues, you can turn the RAC option back on (of course, your instance needs to be down while turning the option on or off). Here is how we turn it off:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off ioracle

Set cluster_database=false
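Since the instance is down (and the spfile may not even be reachable without the CRS/ASM stack), the simplest way is to do this in a local pfile and start with it; a minimal sketch (path and SID are illustrative):

-- in a local pfile copy (e.g. /tmp/initRAC1.ora) set:
*.cluster_database=false

-- then start the instance with that pfile:
SQL> startup pfile='/tmp/initRAC1.ora';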

You should now be able to start up the instance without the CRS stack being up and running.

SQL> Select * from v$option where parameter like 'Real%';

PARAMETER VALUE
---------------------------------------- ----------
Real Application Clusters FALSE
Real Application Testing TRUE

Turn the RAC option back on with:

make -f ins_rdbms.mk rac_on ioracle

Alternatively, you could install non-RAC Oracle binaries into a separate home to quickly start the instance.


By the way, you can turn off other options in a similar way, such as Partitioning (part_on/part_off), Database Vault (dv_on/dv_off), etc.

Tuesday, June 16, 2009

Oracle11g - Killing session in RAC (in remote instance)

Prior to Oracle11g, whenever you wanted to kill a session connected to a non-local (remote) instance, you had to make an explicit connection to that particular instance and then kill it from there.

Example:

SQL> Select instance_number from v$instance;

INSTANCE_NUMBER
---------------
1 ---> I am connected to instance 1 (Local Instance)

1 row selected.

SQL> Select inst_id, sid, serial# from gv$session where username='CHANDRA';

INST_ID SID SERIAL#
---------- ---------- ----------
2 125 7526 ---> CHANDRA is connected to Inst# 2 (Non-local/remote Instance)

1 row selected.

With 11g, you can kill the session connected to instance 2 while you are connected to instance 1:

SQL> Alter system kill session '125,7526,@2' immediate;   -- @2 indicates the remote instance

System altered.

SQL> Select inst_id, sid, serial# from gv$session where username='CHANDRA';

no rows selected

This really makes life easy - especially if you are using SQL*Plus and not some other front-end tool.

Monday, May 25, 2009

Errors while installing Oracle9i in Redhat 5.3 (libstdc++-libc6.1-1.so.2)

Today I was trying to install Oracle9i on Red Hat 5.3 (2.6.18 kernel) and got the following error when runInstaller was invoked:

$ Initializing Java Virtual Machine from /tmp/OraInstall2009-05-09_06-44-18AM/jre/bin/java. Please wait...
/tmp/OraInstall2009-05-09_06-44-18AM/jre/bin/i386/native_threads/java: error while loading shared libraries: libstdc++-libc6.1-1.so.2: cannot open shared object file: No such file or directory

When I checked the libstdc++ libraries, I found the following to be installed by default on RH 5.3:

[root@alps tmp]# rpm -qa |grep libstdc++-
libstdc++-4.1.2-44.el5
libstdc++-devel-4.1.2-44.el5
compat-libstdc++-296-2.96-138
compat-libstdc++-33-3.2.3-61
[root@alps tmp]#

It looks like it is looking for libstdc++-libc6.1-1.

After I did the following, the initial error message disappeared:

# cd /usr/lib
# ln -s libstdc++-3-libc6.2-2-2.10.0.so libstdc++-libc6.1-1.so.2

Then I got the following error:

$ ./runInstaller
$ Initializing Java Virtual Machine from /tmp/OraInstall2009-05-09_06-50-11AM/jre/bin/java. Please wait...
Error occurred during initialization of VM
Unable to load native library: /tmp/OraInstall2009-05-09_06-50-11AM/jre/lib/i386/libjava.so: symbol __libc_wait, version GLIBC_2.0 not defined in file libc.so.6 with link time reference


Applying Oracle patch 3006854 fixed it:

[root@alps 3006854]# sh rhel3_pre_install.sh
Applying patch...
Ensuring permissions are correctly set...
Done.
Patch successfully applied
[root@alps 3006854]#

Tuesday, May 19, 2009

Adaptive/Intelligent Cursor Sharing in 11g - turning-off

Here is how cursor sharing works at a high level with bind variables, especially in conjunction with histograms (prior to Oracle 11g):
  • Optimizer peeks at the bind value on the initial parse
  • The initial bind value determines the plan
  • The same plan is used/shared regardless of future bind values
  • One plan is not always appropriate and optimal for all bind values, which results in the following problems or side effects:
- Performance problems
- Instability issues
- Unhappy end users

Adaptive Cursor Sharing addresses this problem: it shares execution plans ONLY when bind values are "equivalent" - that is, when sharing doesn't result in performance degradation. This is the default behavior in Oracle11g.

More information on Adaptive Cursor Sharing can be found at: http://optimizermagic.blogspot.com/2009/04/update-on-adaptive-cursor-sharing.html. The intention of this post is not to talk about the feature itself, but about how to turn off this default behavior. I can't really think of a reason why anyone would want to turn this feature off, but it is possible if we need it for some reason. This can be achieved using the NO_BIND_AWARE hint.

I believe this hint is meant as an alternative to using the hidden parameter _optim_peek_user_binds=false, which is used to turn off bind peeking.
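For comparison, the hidden-parameter approach looks like this (shown at the session level; being an underscore parameter it should really only be used at Oracle Support's suggestion):

SQL> alter session set "_optim_peek_user_binds" = false;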

Here is the example:

The default behavior:

SQL> variable lv_id number;

SQL> exec :lv_id :=100;

PL/SQL procedure successfully completed.

SQL> Select count(*) from sh.sales where prod_id = :lv_id;

COUNT(*)
----------
0

----if the bind values are peeked, we would see records in V$SQL_CS_STATISTICS:

SQL> Select address, child_number, peeked, executions from v$sql_cs_statistics;

ADDRESS CHILD_NUMBER P EXECUTIONS
-------- ------------ - ----------
34C54884 0 Y 1


Now, let's turn off bind peeking.
======================

SQL> alter system flush shared_pool;

System altered.

SQL> exec :lv_id :=1000;

PL/SQL procedure successfully completed.

SQL> Select /*+ NO_BIND_AWARE */ count(*) from sh.sales where prod_id = :lv_id;

COUNT(*)
----------
0
----With the above hint, the SQL is not bind aware; therefore we don't see any records in the V$SQL_CS_STATISTICS view.

SQL> Select address, child_number, peeked, executions from v$sql_cs_statistics;

no rows selected

SQL>

I guess this is a better and more elegant way to turn off bind peeking at the SQL level, instead of turning it off at the instance level using the hidden parameter _optim_peek_user_binds. Again, that parameter is typically used to prevent the side effects of bind peeking, and the need for it is now (with 11g) largely obviated by the introduction of the Adaptive Cursor Sharing feature.

BTW, this hint is available in 11.1.0.7.


Monday, May 18, 2009

Oracle11g CRS new command

I just found out that the functionality of the "crsctl" command has been expanded in Oracle11g. Prior to Oracle11g, we did not have the ability to check the status of the cluster on other (non-local) nodes in the cluster using "crsctl" - but Oracle11g now makes it possible:

[root@racnode01 bin]# ./crsctl check cluster
racnode01 ONLINE
racnode02 ONLINE
[root@racnode01 bin]#

Of course, we could very well use the "crs_stat" command to infer the status of CRS on non-local nodes - but that is not as straightforward and simple.

Wednesday, April 22, 2009

Oracle Support Bug Status Codes...

When you have an SR open with Oracle that needs a bug fix, most of the time the status is indicated in codes. Here is the list of codes and what they mean... thought it would come in handy:

10 - Description Phase: Development is requesting more information.
16 - Support bug screening: Bug is being reviewed by our Bug Diagnostics group.
11 - Code Bug (Response/Resolution): Bug is being worked by Development.
30 - Additional Information Requested: Bug is being worked by Support and/or more information was requested by Development.
37 - To Filer for Review/Merge Required: Bug has been fixed but the patch will be merged into the next patchset.
80 - Development to Q/A: Bug is being regression tested for future release.
81 - Q/A to Dev/Patch or Workaround Avble: Patch released via Metalink.
90 - Closed, Verified by Filer: Bug has been fixed and is closed.
91 - Closed, Could Not Reproduce: Bug is closed as not reproducible.
92 - Closed, Not a Bug: Bug is closed as not a bug (not reproducible or setup issue).
93 - Closed, Not Verified by Filer: Bug has been fixed and is closed.
95 - Closed, Vendor OS Problem: Bug is closed as an OS problem.
96 - Closed, Duplicate Bug: Bug is closed as a duplicate bug.

Friday, April 17, 2009

ORA-38856 - while opening a cloned RAC database

I was attempting to open a cloned RAC database and got the following error:

SYS@NETFRD1> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled

The fix was to use the undocumented parameter _no_recovery_through_resetlogs=TRUE to open the database. I guess this is one of those scenarios where Oracle suggests using an undocumented/unsupported parameter. Here is what I did to get past the problem:

1. Shutdown;
2. Startup mount (with the parameter _no_recovery_through_resetlogs=TRUE set in the init.ora - see the entry sketched below)
3. Open the database with RESETLOGS;
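The init.ora entry was a single line along these lines (an undocumented parameter - use it only when Oracle Support suggests it, and consider removing it once the database is open):

*._no_recovery_through_resetlogs=TRUE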

SYS@NETFRD1> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-38856: cannot mark instance UNNAMED_INSTANCE_2 (redo thread 2) as enabled


SYS@NETFRD1> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SYS@NETFRD1> !vi /tmp/init$ORACLE_SID.ora

SYS@NETFRD1> startup mount pfile=/tmp/init$ORACLE_SID.ora
ORACLE instance started.

Total System Global Area 1577058304 bytes
Fixed Size 2084232 bytes
Variable Size 385876600 bytes
Database Buffers 1174405120 bytes
Redo Buffers 14692352 bytes
Database mounted.
SYS@NETFRD1> alter database open resetlogs;

Database altered.

Monday, February 16, 2009

ORA-39006, ORA-39065,ORA-25306,ORA-39079 while exporting using EXPDP

I encountered the following error messages while attempting to export a database using the Data Pump utility with 10.2.0.4:

ORA-39006: internal error
ORA-39065: unexpected master process exception in DISPATCH
ORA-39079: unable to enqueue message DG,KUPC$S_3_20090213124234,MCP, ,1,Y
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPC$QUE_INT", line 924
ORA-25306: Cannot connect to buffered queue's owner instance

ORA-39097: Data Pump job encountered unexpected error -39079
ORA-39065: unexpected master process exception in KUPC$QUEUE_INT.PUT_STATUS
ORA-39079: unable to enqueue message DG,KUPC$S_3_20090213124234,MCP, ,1,Y
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPC$QUE_INT", line 924
ORA-25306: Cannot connect to buffered queue's owner instance

This seems to be a bug, but I couldn't find any published information on Metalink. Setting the parameter aq_tm_processes to 4 (from a value of 0) worked. There are several other databases with the default setting of 0, but they don't seem to have problems with expdp. Hopefully, this will be fixed in 10.2.0.5.
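For reference, the change was a one-liner (aq_tm_processes is dynamic; scope=both assumes an spfile is in use):

SQL> alter system set aq_tm_processes=4 scope=both;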

Hope this helps others who encounter similar errors.

Sunday, February 8, 2009

Bypassing/ignoring or skipping ORA-1555 errors

I was researching a corruption issue using various Metalink notes and stumbled upon a note/bug which provides a way to ignore or skip ORA-1555 errors for queries. Obviously, this will result in an inconsistent data/result set. As of now, I am not sure what types of situations would have a practical use for this feature. It is available in Oracle 11g 11.1.0.7 and is done using a hint - SCN_ASCENDING.

Example:

In Session 1:

chandra@AEDB01:11g> Select Distinct phone from acs_emp;

PHONE
--------------------
972-500-8999

1 row selected.

In Session 2:

chandra@AEDB01:11g> Select /*+ scn_ascending */ * from acs_emp;

While this query is still running, I updated the phone column:

Back in Session 1:

chandra@AEDB01:11g> update acs_emp set phone='888-888-8888';

100000 rows updated.

chandra@AEDB01:11g> commit;

Commit complete.

Back in Session 2, where our SELECT was in progress: you will notice that we got data which was NOT consistent with the data that existed at the point in time when the query was initiated. The SCN changed in the middle - meaning inconsistent/questionable data! Without the above-mentioned hint, it wouldn't have been possible to see such an inconsistent result set. Hopefully, this hint is a well-intentioned one.

650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
650831 972-500-8999
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888
652088 888-888-8888