Monday, April 12, 2010

Converting time values in trace files into wall-clock time

There may be cases where you would like to know the exact wall-clock time when a particular wait-event occurred or the SQL was executed by looking at the trace file.
Oracle captures the wall-clock time in the trace files and represents it in the following format:

WAIT #2: nam='SQL*Net message from client' ela= 16445143 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=1271092519750890

The value we are interested in is the last one (tim=1271092519750890).


The tim value can be converted into wall-clock time by using the following:

Take the first 10 digits (the seconds portion; the last 6 digits are microseconds), and then:

$date -d @1271092519 +"%m-%d-%Y %T"

Example:
tim=1271092519750890, first 10 digits: 1271092519

$date -d @1271092519 +"%m-%d-%Y %T"

04-12-2010 12:15:19
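
If you prefer to stay inside the database, here is a rough SQL equivalent - just a sketch, assuming the tim value is microseconds since the Unix epoch (the result is in UTC, so adjust for your server's time zone):

select to_char(timestamp '1970-01-01 00:00:00'
               + numtodsinterval(1271092519.750890, 'SECOND'),   -- 1271092519750890 / 1e6
               'MM-DD-YYYY HH24:MI:SS.FF6') wall_clock_utc
from dual;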


This could be useful in situations where you would like to find out when a user session completed an execution, went back to the client to do some other processing, and when it came back to the database to execute subsequent SQL statements.

BTW, there seems to be a difference in how this value is represented in 11g compared to previous versions. I could not get the accurate wall-clock time for a tim value from a 10g database.


Sunday, February 28, 2010

ORA-27504 while starting 11gR1 database with 11gR2 Grid Infrastructure

I have Oracle11gR2 (11.2.0.1) Grid Infrastructure on a two-node RAC. I installed 11.1.0.7 database binaries and, when attempting to create a database, encountered the following errors:

ORA-27504: IPC error creating OSD context
ORA-27300: OS system dependent operation:skgxnqtsz failed with status: 0
ORA-27301: OS failure message: Error 0
ORA-27302: failure occurred at: SKGXN not av
clsssinit ret = 21
interconnect information is not available from OCR

On researching, I found the following solution to fix those errors and was able to create the 11.1.0.7 database with the 11gR2 Grid Infrastructure:

As root:

[root@eletrac01 bin]# ./olsnodes -t -n

eletrac01 1 Unpinned
eletrac02 2 Unpinned


[root@eletrac01 bin]# ./crsctl pin css -n eletrac01 eletrac02
CRS-4664: Node eletrac01 successfully pinned.
CRS-4664: Node eletrac02 successfully pinned.
[root@eletrac01 bin]#

[root@eletrac01 bin]# ./olsnodes -t -n
eletrac01 1 Pinned
eletrac02 2 Pinned
[root@eletrac01 bin]#
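
For completeness - if you later want to revert this (for example, after removing the older home), the nodes can be unpinned the same way; a sketch, run as root from the Grid home's bin directory:

./crsctl unpin css -n eletrac01 eletrac02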

Saturday, January 16, 2010

Failed Switchover - Oracle11g

I recently had a situation where we had a failed switchover. I was able to switch the primary over to standby successfully, but encountered some issues while switching the standby over to primary - and therefore ended up with two standbys! The easiest way out of the situation would be to ACTIVATE one of the standbys and then rebuild the primary - but that's not always desirable (size, effort, an unprotected primary, etc.).

We even attempted the Oracle-provided command to cancel the switchover, but encountered the following (on the original primary):

SQL> ALTER DATABASE PREPARE TO SWITCHOVER CANCEL;
ALTER DATABASE PREPARE TO SWITCHOVER CANCEL
*
ERROR at line 1:
ORA-00600: internal error code, arguments: [adbdrv_switch], [384], [], [], [],
[], [], [], [], [], [], []

Here is what I ended up doing:

All of these steps were performed on the original primary:
1. Start the instance in NOMOUNT mode:
SQL> startup force nomount;
ORACLE instance started.

Total System Global Area 238530560 bytes
Fixed Size 1335724 bytes
Variable Size 150998612 bytes
Database Buffers 83886080 bytes
Redo Buffers

2. Recreate the controlfile using the trace obtained from the standby (see the note after step 4 on how such a trace can be generated):
SQL> @scripts/ccf.sql

Control file created.

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SQL> alter database open resetlogs;

Database altered.

SQL>
3. That's it…that did it!

SQL> Select open_mode, database_role, switchover_status from V$database;

OPEN_MODE            DATABASE_ROLE    SWITCHOVER_STATUS
-------------------- ---------------- --------------------
READ WRITE           PRIMARY          TO STANDBY

SQL>

4. Here are the entries from the standby's alert log - I didn't have to do anything there; it just continued through the RESETLOGS!

RFS[18]: New Archival REDO Branch(resetlogs_id): 708469623 Prior: 708454293
RFS[18]: Archival Activation ID: 0x6fece72a Current: 0x0
RFS[18]: Effect of primary database OPEN RESETLOGS
RFS[18]: Incarnation entry added for Branch(resetlogs_id): 708469623 (ETSI01)
Sat Jan 16 21:08:09 2010
Setting recovery target incarnation to 4
Sat Jan 16 21:08:09 2010
Archived Log entry 33 added for thread 1 sequence 1 ID 0x6fece72a dest 1:
Sat Jan 16 21:08:09 2010
RFS[19]: Assigned to RFS process 5022
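
A note on the ccf.sql script used in step 2: it was simply a controlfile creation script built from a controlfile trace taken on the standby. As a sketch (the output file name here is just an illustration), such a trace can be generated with:

alter database backup controlfile to trace as '/tmp/ccf.sql' resetlogs;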

Tuesday, December 15, 2009

Segment Comparison (structure) in Oracle11g

There are a bunch of third-party products on the market which do a pretty good job of finding the structural differences between objects or schemas. Toad comes to mind and does an outstanding job...but it costs $$$! With Oracle11g, it's quite easy to figure out the differences between two objects. Please note that use of this feature requires a Change Management Pack license. It is available with 11.1.0.7 but not documented in the 11gR1 docs; it does get a mention in the 11gR2 docs. The API is the DBMS_METADATA_DIFF package. Here is how it works:

Let's say we have two schemas in different databases and would like to compare the table structures:

SQL> Select dbms_metadata_diff.compare_alter('TABLE','APP_DATA','APP_DATA_TEMP', 'CHANDRA','CHANDRA',null,'BDBE01') DIFF from dual;

DIFF
--------------------------------------------------------------------------------
ALTER TABLE "CHANDRA"."APP_DATA" ADD ("CREATE_DATE" DATE)
ALTER TABLE "CHANDRA"."APP_DATA" ADD ("UPDATE_DATE" DATE)
ALTER TABLE "CHANDRA"."APP_DATA" RENAME TO "APP_DATA_TEMP"


Explanation of the arguments used in the function call: object_type, first table_name, second table_name, first schema_owner, second schema_owner, db_link for the first object (if null, then local), db_link for the second object. The output is the set of ALTERs that would change the first object into the second.
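
For the simplest case - both tables in the same local database - the two db link arguments are just left null; a sketch along the same lines as the calls above:

Select dbms_metadata_diff.compare_alter('TABLE','APP_DATA','APP_DATA_TEMP', 'CHANDRA','CHANDRA',null,null) DIFF from dual;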

If the order of the table names (source and target) is swapped:

SQL> Select dbms_metadata_diff.compare_alter('TABLE','APP_DATA_TEMP','APP_DATA', 'CHANDRA','CHANDRA','BDBE01',null) DIFF from dual;

DIFF
--------------------------------------------------------------------------------
ALTER TABLE "CHANDRA"."APP_DATA_TEMP" DROP ("CREATE_DATE")
ALTER TABLE "CHANDRA"."APP_DATA_TEMP" DROP ("UPDATE_DATE")
ALTER TABLE "CHANDRA"."APP_DATA_TEMP" RENAME TO "APP_DATA"

Interestingly, it even generates an ALTER to rename the table so that both match in all respects.

I don't see an option to include or exclude various segment attributes such as storage, tablespace, etc. - maybe because the intent of this API is different. Still, I thought this Oracle-provided utility would come in handy for quickly finding structural differences.

Monday, September 14, 2009

Oracle11gR2: Deferred Segment Creation and export behavior!

I guess by now most of us know what Oracle11gR2's Deferred Segment Creation feature really means. This post is about the impact this feature has when exporting a schema or tables using the traditional export. By default, any new table is created with the SEGMENT CREATION DEFERRED option (unless you are using CTAS). This default behavior can be turned off by setting DEFERRED_SEGMENT_CREATION to FALSE. The impact is this: if you export any tables which do not have any rows (meaning empty tables without segments), they will NOT be exported. This is only true with the legacy/old export, not with the Data Pump export utility (10g and above). Something to remember if we can't get rid of the old habit of using the old export! Of course, the old export is deprecated.
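
A couple of ways around this, sketched below (the table name is made up): either turn the default off or ask for an immediate segment on the tables that the old export must pick up.

-- turn off deferred segment creation for tables created in this session
alter session set deferred_segment_creation = false;

-- or request a segment explicitly for a specific table
create table chandra.empty_codes (code varchar2(10)) segment creation immediate;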

Oracle11gR2 Table / Tablespace Compression

As far as I remember, Oracle introduced the basic compression feature with Oracle9i, but what's new with 11g is the ability to also compress data generated as a result of OLTP transactions (almost all DMLs) - previously compression was only available for bulk operations. Another neat feature of 11g is the ability to evaluate whether or not a table is a good candidate for compression and, if so, what the compression benefits would be in terms of space utilization. This is done using DBMS_COMPRESSION (the Compression Advisor). I wanted to use this procedure to see how accurately it could estimate the space savings, and also to see what the impact on performance would be, in general.

I am using an 18-million-row table for this example.

1. Space usage for the uncompressed table:


SQL> select segment_name, bytes, a.blocks, compression, compress_for from
2 dba_segments a, dba_tables b
3 where a.segment_name = b.table_name and a.segment_name='T1';

SEGMENT_NAME                        BYTES     BLOCKS COMPRESS COMPRESS_FOR
------------------------------ ---------- ---------- -------- ------------
T1                             1476395008     180224 DISABLED



2. Execute the DBMS_COMPRESSION procedure to get information on space savings:

(Note that this example works only in 11gR2; the function names and arguments are different in 11gR1.)

SQL> declare
2 lv_cmp_ratio number;
3 lv_comptype_str varchar2(300);
4 lv_BLKCNT_CMP number;
5 lv_BLKCNT_UNCMP number;
6 lv_ROW_CMP number;
7 lv_ROW_UNCMP number;
8 begin
9 dbms_compression.GET_COMPRESSION_RATIO(
10 SCRATCHTBSNAME=>'USERS',
11 OWNNAME=>'CHANDRA',
12 TABNAME=>'T1',
13 PARTNAME =>null,
14 COMPTYPE =>2, ---2 means OLTP
15 BLKCNT_CMP=>lv_BLKCNT_CMP,
16 BLKCNT_UNCMP=>lv_BLKCNT_UNCMP,
17 ROW_CMP =>lv_ROW_CMP,
18 ROW_UNCMP =>lv_ROW_UNCMP,
19 CMP_RATIO=>lv_cmp_ratio,
20 COMPTYPE_STR=>lv_COMPTYPE_STR);
21 dbms_output.put_line('====================================================');
22 dbms_output.put_line('1. Compression Ratio :'||lv_cmp_ratio);
23 dbms_output.put_line('2. Block Count :'||lv_blkcnt_cmp);
24 dbms_output.put_line('3. Compression Type :'||lv_comptype_str);
25 dbms_output.put_line('4. Blk Count Compressed :'||lv_BLKCNT_CMP);
26 dbms_output.put_line('5. Blk Count Un-compressed:'||lv_BLKCNT_UNCMP);
27 dbms_output.put_line('6. Row Count Compressed :'||lv_row_cmp);
28 dbms_output.put_line('7. Row Count Un-Compressed:'||lv_row_uncmp);
29 dbms_output.put_line('====================================================');
30 end;
31 /

====================================================
1. Compression Ratio :2.38995215311004784688995215311004784689
2. Block Count :836
3. Compression Type :"Compress For OLTP"
4. Blk Count Compressed :836
5. Blk Count Un-compressed:1998
6. Row Count Compressed :256
7. Row Count Un-Compressed:107
====================================================

PL/SQL procedure successfully completed.

According to the above information, if we look at the compressed (836) and uncompressed (1998) block counts (items 4 and 5 above), we should see better than a 50% space saving (836/1998 is roughly 42% of the original size)!

3. Create a new tablespace with the COMPRESS option. Unless I am missing something obvious, I couldn't find the correct syntax in either the 11gR1 or the 11gR2 documentation. BTW, when I executed the same SQL on 11gR1, I got: ORA-14464: Compression Type not specified

SQL> create tablespace USER_DATA_COMPRESSED default compress for OLTP datafile '/u01/oracle/oradata/DET01/user_data_compressed.dbf' size 10M autoextend on;

Tablespace created.
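
A quick way to confirm the tablespace-level default (the columns below are from DBA_TABLESPACES in 11gR2, as far as I recall):

select tablespace_name, def_tab_compression, compress_for
from dba_tablespaces
where tablespace_name = 'USER_DATA_COMPRESSED';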

4. Create copy of T1 table (our example table) in COMPRESSED tablespace:

SQL> create table chandra.t1_compressed tablespace USER_DATA_COMPRESSED
2 as Select * from chandra.t1;

Table created.
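
(As an aside - compression does not have to come from the tablespace default; it can also be declared directly at the table level. A sketch, using made-up table names:)

-- create a compressed copy directly
create table chandra.t1_comp_copy compress for oltp as select * from chandra.t1;

-- or compress an existing table by rebuilding it
alter table chandra.some_existing_table move compress for oltp;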

5. Let's see how much space the compressed table takes compared to the uncompressed/original table:

SQL> select segment_name, bytes, a.blocks, compression, compress_for from dba_segments a,
2 dba_tables b
3 where a.segment_name = b.table_name and a.segment_name in ('T1','T1_COMPRESSED');

SEGMENT_NAME                        BYTES     BLOCKS COMPRESS COMPRESS_FOR
------------------------------ ---------- ---------- -------- ------------
T1_COMPRESSED                   612368384      74752 ENABLED  OLTP
T1                             1476395008     180224 DISABLED


As the DBMS_COMPRESSION procedure indicated, the compressed table does use less than 50% of the original space - so far so good. How about performance? I performed some generic tests involving SELECTs and other DMLs, and here are the results:

For any activity involving a full table scan, performance was about 4 times faster on the compressed table than on the uncompressed table.
Inserts and updates took close to twice as long on the compressed table as on the uncompressed table, and deletes about 10% longer.

As for other benefits of compression - lower memory utilization and reduced redo generation come to mind. Of course, all of this comes at the cost of higher CPU consumption.

Friday, September 11, 2009

Oracle11gR2 ASM New Feature - Intelligent Data Placement

Intelligent Data Placement, a new feature of Oracle11gR2 ASM, is aimed at optimizing data access. This is achieved by giving us the capability to place data on those sectors of the hard drive where I/O performance is highest - the outer sectors. Before going into the details of the ASM feature, let's briefly look at how and why the outer sectors' I/O performance is better than that of the inner sectors.

The above-mentioned efficiencies come from a technique used in hard drives called "Zoned Bit Recording". This technique increases capacity and data access speeds by improving the utilization of the larger, outer tracks of the disk: each outer track contains more sectors than the track inside it. Because the platter spins at a constant rate, the data transfer rate is higher when reading the outer cylinders than the inner ones, since the outer cylinders contain more data.

Coming back to the original topic - ASM now classifies reads/writes into two categories, COLD and HOT. As you guessed, HOT reads/writes are those coming from blocks classified as HOT - meaning placed in the outer sectors. COLD is the default - meaning data placed in the inner sectors. Data blocks of the most frequently accessed segments are candidates for placement in the HOT region. This is specified at the datafile level within an ASM disk group.

You could use SQL*Plus or DBConsole to make these changes.




It's just a matter of picking the appropriate radio button, depending on the needs/access patterns, to switch between the HOT and COLD regions.

We know what the feature means - but I wanted to put it to the test to really see the benefit in terms of I/O rates/speeds, using some general test cases. I first measured the I/O performance for a set of tests (comprising a mix of reads and writes - deletes/inserts/updates) with the files in the default (COLD) zone, then ran the same tests with the datafile placed in the HOT region. Here are the test results. Note that when you move a file from one region to another, you have to initiate a rebalance to make the change effective.




With Intelligent Data Placement:

ALTER DISKGROUP ASMDISK01 MODIFY FILE '+ASMDISK01/BDBE01/DATAFILE/APP_DATA.256.697380955' ATTRIBUTES (HOT)
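
Since the move between regions only takes effect after a rebalance (as noted above), I followed it with something along these lines - the power level is an arbitrary choice:

ALTER DISKGROUP ASMDISK01 REBALANCE POWER 4;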





I executed the above-mentioned test cases multiple times to make sure I was seeing consistent (and thus reliable) I/O numbers. During my tests there was consistently an increase of about 10% to 15% in "Average Throughput" and a change of about -3% to +5% in "Average Response Time". Of course, more realistic tests with application-specific I/O patterns may be needed.

I think we can say that with this new feature, Oracle (with ASM) will be able to successfully exploit the Zoned Bit Recording technique of hard drives to optimize I/O performance. Then again, I am not sure whether this is really going to make a huge impact given today's large caches and disk sizes.