Monday, September 14, 2009

Oracle11gR2: Deferred Segment Creation and export behavior!

I guess by now most of us know what Oracle11gR2's Deferred Segment Creation feature really means. This post is about the impact this feature is going to have when exporting a schema or tables using the traditional export. By default, any new table is created with the SEGMENT CREATION DEFERRED option (unless you are using CTAS). This default behavior can be turned off by setting DEFERRED_SEGMENT_CREATION to FALSE. The impact is this: if you happen to export any tables that do not have any rows (meaning empty tables without segments), they will NOT be exported. This is only true with the legacy/old export (exp), not with the Data Pump export utility (10g and above). Something to remember if we can't get rid of the old habit of using the old export! Of course, the old export is deprecated.
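
Here is a minimal sketch of the behavior, using hypothetical table names (SEGMENT_CREATED is the 11gR2 dictionary column that tells you whether the segment exists yet):

SQL> create table t_deferred (id number);   -- 11gR2 default: SEGMENT CREATION DEFERRED
SQL> create table t_immediate (id number) segment creation immediate;
SQL> select table_name, segment_created from user_tables
  2  where table_name in ('T_DEFERRED','T_IMMEDIATE');

T_DEFERRED shows SEGMENT_CREATED = NO until its first row is inserted, which is exactly why the old exp skips it. To restore the pre-11gR2 behavior for newly created tables:

SQL> alter system set deferred_segment_creation = false;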

Oracle11gR2 Table / Tablespace Compression

As far as I remember, Oracle introduced the basic compression feature with Oracle9i, but what's new with 11g is the ability to compress data generated by OLTP transactions (almost all DML) as well - previously compression was only available for bulk operations. Another neat feature of 11g is the ability to evaluate whether or not a table is a good candidate for compression and, if so, what the compression benefits would be in terms of space utilization. This is done using the DBMS_COMPRESSION package (the Compression Advisor). I wanted to use this procedure to see how accurately it estimates the space savings and also to see what the impact on performance would be, in general.

I am using an 18-million-row table for this example.

1. Space usage for the uncompressed table:


SQL> select segment_name, bytes, a.blocks, compression, compress_for from
2 dba_segments a, dba_tables b
3 where a.segment_name = b.table_name and a.segment_name='T1';

SEGMENT_NAME BYTES BLOCKS COMPRESS COMPRESS_FOR
------------------------------ ---------- ---------- -------- ------------
T1 1476395008 180224 DISABLED



2. Execute the DBMS_COMPRESSION procedure to get information on space savings:

(Note that this procedure example works only in 11gR2, the function names and the arguments are different in 11gR1.)

SQL> declare
2 lv_cmp_ratio number;
3 lv_comptype_str varchar2(300);
4 lv_BLKCNT_CMP number;
5 lv_BLKCNT_UNCMP number;
6 lv_ROW_CMP number;
7 lv_ROW_UNCMP number;
8 begin
9 dbms_compression.GET_COMPRESSION_RATIO(
10 SCRATCHTBSNAME=>'USERS',
11 OWNNAME=>'CHANDRA',
12 TABNAME=>'T1',
13 PARTNAME =>null,
14 COMPTYPE =>2, ---2 means OLTP
15 BLKCNT_CMP=>lv_BLKCNT_CMP,
16 BLKCNT_UNCMP=>lv_BLKCNT_UNCMP,
17 ROW_CMP =>lv_ROW_CMP,
18 ROW_UNCMP =>lv_ROW_UNCMP,
19 CMP_RATIO=>lv_cmp_ratio,
20 COMPTYPE_STR=>lv_COMPTYPE_STR);
21 dbms_output.put_line('====================================================');
22 dbms_output.put_line('1. Compression Ratio :'||lv_cmp_ratio);
23 dbms_output.put_line('2. Block Count :'||lv_blkcnt_cmp);
24 dbms_output.put_line('3. Compression Type :'||lv_comptype_str);
25 dbms_output.put_line('4. Blk Count Compressed :'||lv_BLKCNT_CMP);
26 dbms_output.put_line('5. Blk Count Un-compressed:'||lv_BLKCNT_UNCMP);
27 dbms_output.put_line('6. Row Count Compressed :'||lv_row_cmp);
28 dbms_output.put_line('7. Row Count Un-Compressed:'||lv_row_uncmp);
29 dbms_output.put_line('====================================================');
30 end;
31 /

====================================================
1. Compression Ratio :2.38995215311004784688995215311004784689
2. Block Count :836
3. Compression Type :"Compress For OLTP"
4. Blk Count Compressed :836
5. Blk Count Un-compressed:1998
6. Row Count Compressed :256
7. Row Count Un-Compressed:107
====================================================

PL/SQL procedure successfully completed.

According to the above information, if we look at the compressed (836) and uncompressed (1998) block counts (items 4 and 5 above), we should see better than a 50% space saving: 1998/836 is roughly 2.39, which matches the reported compression ratio (i.e., about a 58% reduction).

3. Create a new tablespace with the COMPRESS option. Unless I am missing something obvious, I couldn't find the exact syntax in either the 11gR1 or 11gR2 documentation. BTW, when I executed the same SQL on 11gR1 I got: ORA-14464: Compression Type not specified

SQL> create tablespace USER_DATA_COMPRESSED default compress for OLTP datafile '/u01/oracle/oradata/DET01/user_data_compressed.dbf' size 10M autoextend on;

Tablespace created.

4. Create copy of T1 table (our example table) in COMPRESSED tablespace:

SQL> create table chandra.t1_compressed tablespace USER_DATA_COMPRESSED
2 as Select * from chandra.t1;

Table created.

5. Let's see how much space the compressed table takes compared to the uncompressed/original table:

SQL> select segment_name, bytes, a.blocks, compression, compress_for from dba_segments a,
2 dba_tables b
3 where a.segment_name = b.table_name and a.segment_name in ('T1','T1_COMPRESSED');

SEGMENT_NAME BYTES BLOCKS COMPRESS COMPRESS_FOR
------------------------------ ---------- ---------- -------- ------------
T1_COMPRESSED 612368384 74752 ENABLED OLTP
T1 1476395008 180224 DISABLED


As the DBMS_COMPRESSION procedure indicated, the compressed copy uses less than half the space of the original (roughly 41%) with OLTP compression - so far so good. How about performance? I performed some generic tests involving SELECTs and other DML, and here are the results:

For any activity involving a full table scan, performance was about 4 times faster on the compressed table compared with the uncompressed table.
Inserts and updates took close to twice as long on the compressed table compared with the uncompressed table, and deletes took about 10% longer.
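
For reference, a minimal (hypothetical) way to eyeball the full-scan difference yourself with SQL*Plus timing - not the exact test cases I used:

SQL> set timing on
SQL> select /*+ full(t) */ count(*) from chandra.t1 t;
SQL> select /*+ full(t) */ count(*) from chandra.t1_compressed t;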

As for the other benefits of compression, lower memory utilization, reduced redo generation, etc., come to mind - of course, at the cost of higher CPU consumption.
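
As a side note, if you would rather compress an existing table in place instead of copying it with CTAS into a compressed tablespace, something along these lines should work (just a sketch, not part of the tests above; any indexes on the table become unusable after the move and need a rebuild):

SQL> alter table chandra.t1 move compress for oltp;
SQL> alter index <index_name> rebuild;   -- repeat for every index on the table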

Friday, September 11, 2009

Oracle11gR2 ASM New Feature - Intelligent Data Placement

Intelligent Data Placement, a new ASM feature in Oracle11gR2, is aimed at optimizing data access. It gives us the ability to place data on those sectors of the hard drive where I/O performance is highest - the outer sectors. Before going into the details of the ASM feature, let's briefly look at how and why the outer sectors' I/O performance is better than the inner sectors'.

The above-mentioned efficiencies come from a technique used in hard drives called "Zoned Bit Recording". This technique increases capacity and data access speed by improving the utilization of the larger, outer tracks of the disk: each outer track contains more sectors than the track inside it, which makes more efficient use of the larger tracks on the outside of the disk. The data transfer rate is therefore higher when reading the outer cylinders than the inner ones, since the outer cylinders contain more data.

Coming back to the original topic - ASM now classifies reads/writes into two regions: COLD and HOT. As you guessed, HOT reads/writes are those going to blocks classified as HOT - meaning placed in the outer sectors. COLD is the default - meaning data placed in the inner sectors. Data blocks of the most frequently accessed segments are candidates for placement in the HOT region. This is controlled at the datafile level within an ASM disk group.

You could use SQL*Plus or DBConsole to make these changes.




It's just a matter of picking the appropriate radio button, depending on the needs/access patterns, to switch between the HOT and COLD regions.

We know what the feature means - but I wanted to put it to the test to see the real benefit in terms of I/O rates using some general test cases. I first measured the I/O rates for a set of tests (a mix of reads and writes - deletes/inserts/updates) with the datafile in the default (COLD) region, then ran the same tests with the datafile placed in the HOT region. Here are the test results. Note that when you move a file from one region to another, you have to initiate a rebalance to make the change effective.




With Intelligent Data Placement:

ALTER DISKGROUP ASMDISK01 MODIFY FILE '+ASMDISK01/BDBE01/DATAFILE/APP_DATA.256.697380955' ATTRIBUTE (HOT)
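
The attribute change only takes effect once a rebalance runs; here is a sketch of how one could kick it off and verify the placement from the ASM instance (the V$ASM_FILE region/I/O columns shown are as I understand them in 11gR2):

SQL> alter diskgroup ASMDISK01 rebalance power 4;
SQL> select file_number, primary_region, hot_reads, hot_writes, cold_reads, cold_writes
  2  from v$asm_file
  3  where group_number = (select group_number from v$asm_diskgroup where name = 'ASMDISK01');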





I executed the above-mentioned test cases multiple times to make sure I saw consistent (and thus reliable) I/O performance. During my tests, I consistently saw an increase of about 10% to 15% in "Average Throughput" and anywhere from -3% to +5% change in "Average Response Times". Of course, we may have to perform more realistic tests with application-specific I/O patterns.

I think we can say that with this new feature, Oracle (with ASM) is able to exploit the Zoned Bit Recording technique of hard drives to optimize I/O performance. Then again, I am not sure this is really going to make a huge impact given today's large caches and disk sizes.

Wednesday, September 9, 2009

Oracle11g R2 - impressive and intelligent CRS installer (OUI)!

I was really impressed with the enhancements made to the installer, apart from the change in look and feel. Prior to Oracle11gR2, the installer would allow us to place the redundant copies of the OCR/voting devices/files onto the same file system (even though that doesn't make any sense). With 11gR2, it prevents us from placing the mirrors onto the same file system. There is no IGNORE option either: you either have to place them on different file systems/locations or use a single copy. So the installer has "best practices" built into it.

Here is my first attempt - placing all three onto the same file system:


And here is the error message:



I ended up using a single copy for now:


BTW, the installer also allows us to configure SSH, test SSH, and choose between ASM and a shared file system for the OCR/voting files. Best of all, it now provides a script to fix any of the fixable requirements, such as kernel parameters.

Tuesday, September 8, 2009

ORA-03001 - while trying to rename tables owned by others - workaround

There could be situations where you would like to rename tables owned by another user, but you do not have access to the schema owner's password or don't want to change it (using "identified by values" etc.). You can perform almost all such activities by using "ALTER SESSION SET CURRENT_SCHEMA=<schema>;", but RENAME does not work that way, and the following error message is encountered:

chandra@DOE01:11g> show user
USER is "CHANDRA"
chandra@DOE01:11g> alter session set current_schema=CP_ADMIN;

Session altered.

chandra@DOE01:11g> rename APP_DATA to APP_DATA_BAK;
rename APP_DATA to APP_DATA_BAK
*
ERROR at line 1:
ORA-03001: unimplemented feature


chandra@DOE01:11g>

To get around this, we could create a simple procedure similar to the following:

create or replace procedure rename_others_tab (orig_tab_name in varchar2, new_tab_name in varchar2)
as
  lv_sql_str varchar2(100);
  lv_exists  number(1);
begin
  -- This is a definer's rights procedure, so when it is created under the table
  -- owner's schema, both USER_TABLES and RENAME resolve to that owner's objects.
  select count(*) into lv_exists from user_tables where table_name = orig_tab_name;
  if lv_exists = 1 then
    -- RENAME only works on objects in the current schema, which inside this
    -- procedure is the procedure owner.
    lv_sql_str := 'rename '||orig_tab_name||' to '||new_tab_name;
    dbms_output.put_line (lv_sql_str);
    execute immediate lv_sql_str;
  else
    dbms_output.put_line('ERROR: '||orig_tab_name||' does not exist!');
  end if;
end;
/


Note that you would create the above procedure while your current_schema is set to the owner of the table, so the procedure ends up owned by that schema; because it is a definer's rights procedure, the dynamic RENAME executes in the owner's schema and succeeds.
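
For clarity, the procedure would be created like this (rename_others_tab.sql is just a hypothetical script file holding the code above; CP_ADMIN is the table owner from the example):

SQL> alter session set current_schema=CP_ADMIN;
SQL> @rename_others_tab.sql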

Then:

chandra@DOE01:11g> exec rename_others_tab('APP_DATA','APP_DATA_BAK');

PL/SQL procedure successfully completed.

chandra@DOE01:11g>

The table is renamed. Of course, use the above procedure (workaround) with caution and only when ABSOLUTELY needed.

Thought this would be of use to others.