Monday, August 13, 2007

Export/Import Data Pump - Copying Schemas

Let's say we have a schema in database A that needs to be copied to database B. With the traditional export/import, we have to pre-create the user account in database B before we can import the schema. But with Oracle 10g's Data Pump utility, it is just one simple step:

Here is an example. The schema CHANDRA exists in the source database, which we reach through the database link rhes01.myoracle.com. Running impdp against the target database pulls the schema straight over the network link and remaps it to CHANDRA3 - no intermediate dump file is written.

$impdp system/******* schemas=chandra network_link=rhes01.myoracle.com remap_schema=chandra:chandra3

Import: Release 10.2.0.3.0 - Production on Saturday, 05 May, 2007 18:54:21

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production

With the Partitioning, OLAP and Data Mining options

Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01": system/******** schemas=chandra network_link=rhes01.myoracle.com remap_schema=chandra:chandra3

Estimate in progress using BLOCKS method…

Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA

Total estimation using BLOCKS method: 0 KB

Processing object type SCHEMA_EXPORT/USER

Processing object type SCHEMA_EXPORT/SYSTEM_GRANT

Processing object type SCHEMA_EXPORT/ROLE_GRANT

Processing object type SCHEMA_EXPORT/DEFAULT_ROLE

Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA

Processing object type SCHEMA_EXPORT/TABLE/TABLE

Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" successfully completed at 18:54:51

******************************************************************************************
This also creates the user CHANDRA3 in the target database as part of the import - no pre-created account needed.
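One prerequisite worth noting: NETWORK_LINK expects a database link with that name to exist in the database you run impdp against. A minimal sketch, assuming the TNS alias rhes01 resolves to the source database (the password here is a placeholder):

SQL> -- run in the target database, as a user allowed to create database links
SQL> CREATE DATABASE LINK rhes01.myoracle.com CONNECT TO system IDENTIFIED BY mypassword USING 'rhes01';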

Don't forget to provide multiple file names when PARALLEL is used with Export Data Pump.

If you use the PARALLEL option with Export Data Pump but fail to provide file names matching the degree of parallelism, the export effectively runs in serial - all the workers end up writing to a single file.

For example, if you have something like this:

$ expdp dumpfile=expdat.dmp directory=dump_dir parallel=4

it would create just one file named expdat.dmp. Instead, you can specify something like:

$ expdp dumpfile=expdat.%U.dmp directory=dump_dir parallel=4

This creates four files in parallel, named expdat.01.dmp, expdat.02.dmp, expdat.03.dmp and expdat.04.dmp.
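The corresponding import can use the same wildcard to read all the pieces back in parallel, something like:

$ impdp system/******* dumpfile=expdat.%U.dmp directory=dump_dir parallel=4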

*********************************************************************************

Oracle 11g RAC Install on Red Hat 5.0 (Linux) using NFS

Here is the procedure I followed to install Oracle 11g RAC on RHEL 5.0; I thought I would share it with everyone interested in checking out Oracle 11g RAC on Linux. Most of the pre-installation steps should work for Oracle 10g as well. This configuration is not for production systems - it is good for testing purposes only.

I used my two existing Intel-based systems for the purpose, and in all it took about 8 hours to get the whole thing working (including the OS install - as a third bootable operating system, the first being Windows XP and the second RHEL 4.0 with 10gR2 RAC)... much better than my initial installation experiences with Oracle 10g RAC.

Please note that I am not using Firewire or a third server here for provisioning the shared storage, but NFS - the details follow.


Hardware details:

  • Node 1: pepsi.pinnacle.com (Intel Pentium 4 2.80Ghz, 2G RAM with 80GB and 160GB hard drives, 2 NICs)
  • Node 2: coke.pinnacle.com (Intel Pentium 4 2.80Ghz, 2G RAM with 160GB, 2 NICs)

RHEL 5.0 Installation:

I performed a complete/full installation - basically everything - so that all the RPMs are included. Kernel version used: 2.6.18-8.el5xen.

Installation Steps:

Here are high-level installation steps:

  1. Check OS/Packages requirements
  2. Configure public and private network.
  3. Create oracle user and dba group.
  4. Setup SSH for oracle user account between the two nodes.
  5. Set up shared storage using NFS (details follow)
  6. Oracle11g CRS Installation
  7. Oracle11g RDBMS Software
  8. Oracle11g RAC Database creation.
The details.......

1. Configure Operating System for Oracle11g RAC Install (Both nodes)

The following RPMs are required/recommended for Oracle11g RAC installation on RHEL 5.0.

  1. binutils-2.17.50.0.6-2.el5
  2. compat-libstdc++-33-3.2.3-61
  3. elfutils-libelf-0.97-5 – elfutils-libelf-devel-0.125-3.el5 was installed by default
  4. elfutils-libelf-devel-0.125
  5. glibc-2.5-12
  6. glibc-common-2.5-12
  7. glibc-devel-2.5-12
  8. gcc-4.1.1-52
  9. gcc-c++-4.1.1-52
  10. libaio-0.3.106
  11. libaio-devel-0.3.106 – not installed by default
  12. libgcc-4.1.1-52
  13. libstdc++-4.1.1
  14. libstdc++-devel-3.4.3-22.1 – libstdc++-devel-4.1.1-52.el5 was installed by default.
  15. make-3.81-1.1
  16. sysstat-7.0.0 – not installed.
  17. UnixODBC-2.2.11 – not installed.
  18. UnixODBC-devel-2.2.11 – not installed.
Apart from the four flagged above, all of the RPMs were installed and available by default (maybe because I did a complete install). I installed the four missing RPMs from the RHEL 5.0 CDs (using rpm -ivh):
  1. libaio-devel-0.3.106
  2. sysstat-7.0.0
  3. UnixODBC-2.2.11
  4. UnixODBC-devel-2.2.11
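To check which of the required packages are already present, you can query them all in one shot - anything missing is reported as "not installed" (note that the actual rpm package name for ODBC is unixODBC, with a lowercase u):

$ rpm -q binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel \
    glibc glibc-common glibc-devel gcc gcc-c++ libaio libaio-devel \
    libgcc libstdc++ libstdc++-devel make sysstat unixODBC unixODBC-devel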

Kernel Parameters:


  • Add the following lines to /etc/sysctl.conf

kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144

  • Run sysctl -p to load the values set in /etc/sysctl.conf.
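You can spot-check individual values afterwards, for example:

# sysctl kernel.shmmax
kernel.shmmax = 2147483648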
CVUQDISK Package:

  • Install the cvuqdisk package – located in the CRS software distribution (/clusterware/rpm):

[root@coke rpm]# export CVUQDISK_GRP=dba
[root@coke rpm]# rpm -ivh cvuqdisk-1.0.1-1.rpm
Preparing... ########################################### [100%]
1:cvuqdisk ########################################### [100%]
[root@coke rpm]# pwd
/share/Oracle-Linux-Software/11g/clusterware/rpm
[root@coke rpm]#

  • Here is the information from the /etc/hosts file. The contents of this file are the same on both nodes.

$ cat /etc/hosts
#----Entries for Node1
192.168.1.160 pepsi.pinnacle.com pepsi
192.168.1.151 pepsi-vip.pinnacle.com pepsi-vip
172.1.1.160 pepsi-priv.pinnacle.com pepsi-priv
#---Entries for Node2
192.168.1.150 coke.pinnacle.com coke
192.168.1.161 coke-vip.pinnacle.com coke-vip
172.1.1.150 coke-priv.pinnacle.com coke-priv

On coke:
[root@coke ~]# ifconfig -a eth0
eth0 Link encap:Ethernet HWaddr 00:16:76:4B:79:F1
inet addr:192.168.1.150 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:76ff:fe4b:79f1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:3507 errors:0 dropped:0 overruns:0 frame:0
TX packets:1675 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:665921 (650.3 KiB) TX bytes:188926 (184.4 KiB)
[root@coke ~]# ifconfig -a eth1
eth1 Link encap:Ethernet HWaddr 00:09:5B:E1:CD:DD
inet addr:172.1.1.150 Bcast:172.1.255.255 Mask:255.255.0.0
inet6 addr: fe80::209:5bff:fee1:cddd/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:310738 errors:0 dropped:0 overruns:0 frame:0
TX packets:254557 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:141179047 (134.6 MiB) TX bytes:345107330 (329.1 MiB)

On pepsi:
[root@pepsi ~]# ifconfig -a eth0
eth0 Link encap:Ethernet HWaddr 00:16:76:38:D8:C5
inet addr:192.168.1.160 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::216:76ff:fe38:d8c5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:419 errors:0 dropped:0 overruns:0 frame:0
TX packets:244 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:80282 (78.4 KiB) TX bytes:27401 (26.7 KiB)
[root@pepsi ~]# ifconfig -a eth1
eth1 Link encap:Ethernet HWaddr 00:09:5B:BB:D8:F1
inet addr:172.1.1.160 Bcast:172.1.255.255 Mask:255.255.0.0
inet6 addr: fe80::209:5bff:febb:d8f1/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:361934 errors:0 dropped:0 overruns:0 frame:0
TX packets:326328 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:366491792 (349.5 MiB) TX bytes:148740977 (141.8 MiB)
Interrupt:17 Base address:0xe000


2. Configure SSH between two nodes

While logged in as oracle, perform the following on each node of the RAC:

  1. cd $HOME
  2. mkdir ~/.ssh
  3. chmod 700 ~/.ssh
  4. /usr/bin/ssh-keygen -t rsa
  5. /usr/bin/ssh-keygen -t dsa

On Node 1:

  1. cd $HOME/.ssh
  2. cat id_rsa.pub >> authorized_keys
  3. cat id_dsa.pub >> authorized_keys
  4. Copy the authorized_keys file to node 2: scp authorized_keys node2:/opt/oracle/.ssh

On Node 2:

  1. cd $HOME/.ssh
  2. cat id_rsa.pub >> authorized_keys
  3. cat id_dsa.pub >> authorized_keys
  4. scp authorized_keys node1:/opt/oracle/.ssh

  • Now test ssh between all the nodes, including the node-priv hostnames. Check to make sure that ssh works without prompting for a password (on both nodes):
$ ssh coke.pinnacle.com date
Sun Aug 12 08:41:42 CDT 2007
$ ssh pepsi.pinnacle.com date
Sun Aug 12 08:42:23 CDT 2007
$ ssh coke-priv date
Sun Aug 12 08:42:45 CDT 2007
$ ssh pepsi-priv date
Sun Aug 12 08:43:22 CDT 2007


3. Configuring the Shared Storage

This is one of the critical components required for RAC. There are a couple of storage options available for setting up a test environment at home, such as Firewire, NAS and/or NFS. I have installed Oracle RAC with both Oracle9i and Oracle10g using Firewire, and this time around I wanted to see if I could use NFS instead. Please note that this is for testing and experimentation only and will not be suitable for any kind of production use. Usually we would need a separate server/node to host the NFS file systems; instead of using a third server, I configured one of the nodes to host them. Here is the procedure:

On coke (which will host the NFS exports):

1. Created three partitions on the local hard drive with a size of 10G each.

[root@coke ~]# fdisk /dev/sda
The number of cylinders for this disk is set to 9726.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): p
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2550 20482843+ c W95 FAT32 (LBA)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
e
Partition number (1-4): 2
First cylinder (2551-9726, default 2551):
Using default value 2551
Last cylinder or +size or +sizeM or +sizeK (2551-9726, default 9726):
Using default value 9726
Command (m for help): p
Disk /dev/sda: 80.0 GB, 80000000000 bytes
255 heads, 63 sectors/track, 9726 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2550 20482843+ c W95 FAT32 (LBA)
/dev/sda2 2551 9726 57641220 5 Extended
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (2551-9726, default 2551):
Using default value 2551
Last cylinder or +size or +sizeM or +sizeK (2551-9726, default 9726): +10G
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (3768-9726, default 3768):
Using default value 3768
Last cylinder or +size or +sizeM or +sizeK (3768-9726, default 9726): +10G
Command (m for help): n
Command action
l logical (5 or over)
p primary partition (1-4)
l
First cylinder (4985-9726, default 4985):
Using default value 4985
Last cylinder or +size or +sizeM or +sizeK (4985-9726, default 9726): +10G
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Device Boot Start End Blocks Id System
/dev/sda1 * 1 2550 20482843+ c W95 FAT32 (LBA)
/dev/sda2 2551 9726 57641220 5 Extended
/dev/sda5 2551 3767 9775521 83 Linux
/dev/sda6 3768 4984 9775521 83 Linux
/dev/sda7 4985 6201 9775521 83 Linux

[root@coke ~]# parted /dev/sda print
Model: ATA WDC WD800JD-75MS (scsi)
Disk /dev/sda: 80.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 32.3kB 21.0GB 21.0GB primary fat32 boot, lba
2 21.0GB 80.0GB 59.0GB extended
5 21.0GB 31.0GB 10.0GB logical
6 31.0GB 41.0GB 10.0GB logical
7 41.0GB 51.0GB 10.0GB logical
Information: Don't forget to update /etc/fstab, if necessary.

  • Create file systems
[root@coke ~]# mkfs.ext3 /dev/sda5
[root@coke ~]# mkfs.ext3 /dev/sda6
[root@coke ~]# mkfs.ext3 /dev/sda7

  • Create local mount points
[root@coke ~]# mkdir /nfs1 /nfs2 /nfs3

  • Label the partitions/file systems
[root@coke ~]# e2label /dev/sda5 nfs1
[root@coke ~]# e2label /dev/sda6 nfs2
[root@coke ~]# e2label /dev/sda7 nfs3

  • Add the following lines to the /etc/fstab file on coke:

LABEL=nfs1 /nfs1 ext3 defaults 1 1
LABEL=nfs2 /nfs2 ext3 defaults 1 1
LABEL=nfs3 /nfs3 ext3 defaults 1 1

  • Mount the file systems:

[root@coke ~]# mount /nfs1
[root@coke ~]# mount /nfs2
[root@coke ~]# mount /nfs3
[root@coke ~]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 47611656 2938484 42215632 7% /
tmpfs 966124 0 966124 0% /dev/shm
/dev/sda5 9621848 152688 8980384 2% /nfs1
/dev/sda6 9621848 152688 8980384 2% /nfs2
/dev/sda7 9621848 152688 8980384 2% /nfs3

  • Add the following entries in the /etc/exports file:
/nfs1 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/nfs2 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)
/nfs3 *(rw,sync,no_wdelay,insecure_locks,no_root_squash)

  • Configure the NFS services:
[root@coke ~]# chkconfig nfs on
[root@coke ~]# service nfs restart
Shutting down NFS mountd: [ OK ]
Shutting down NFS daemon: [ OK ]
Shutting down NFS quotas: [ OK ]
Shutting down NFS services: [ OK ]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]
[root@coke ~]#
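Before touching /etc/fstab on the other node, you can confirm from there that the exports are visible - the output should be along these lines:

[root@pepsi ~]# showmount -e coke-priv
Export list for coke-priv:
/nfs3 *
/nfs2 *
/nfs1 *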

  • With the above, the /nfs* file systems are available to both nodes. Perform the following steps on both nodes:

  1. Create mount points to mount the nfs file systems
# mkdir /u01 /u02 /u03     (on both nodes)

2. Add the following lines to the /etc/fstab file:

coke-priv:/nfs1 /u01 nfs
rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
coke-priv:/nfs2 /u02 nfs
rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0
coke-priv:/nfs3 /u03 nfs
rw,bg,hard,nointr,tcp,vers=3,timeo=300,rsize=32768,wsize=32768,actimeo=0 0 0

3. Mount the NFS file systems (on both nodes):

#mount /u01
#mount /u02
#mount /u03

4. Now all three file systems are shared between the two nodes and ready for use.

Here is how it looks on coke:

$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda3 47611656 7029084 38125032 16% /
tmpfs 966124 270188 695936 28% /dev/shm
/dev/sda5 9621848 1925276 7207796 22% /nfs1
/dev/sda6 9621848 152700 8980372 2% /nfs2
/dev/sda7 9621848 152696 8980376 2% /nfs3
172.1.1.150:/nfs1 9621856 1925280 7207808 22% /u01
172.1.1.150:/nfs2 9621856 152704 8980384 2% /u02
172.1.1.150:/nfs3 9621856 152704 8980384 2% /u03

On pepsi:

$ df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 44639012 7245820 35089072 18% /
tmpfs 477484 266084 211400 56% /dev/shm
coke-priv:/nfs1 9621856 1925280 7207808 22% /u01
coke-priv:/nfs2 9621856 152704 8980384 2% /u02
coke-priv:/nfs3 9621856 152704 8980384 2% /u03

  • Change the owner of the file systems /u01, /u02 and /u03 to oracle:dba, for example:
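Since the exports were created with no_root_squash, a single recursive chown as root over the mounted paths (from either node) takes care of both sides:

# chown -R oracle:dba /u01 /u02 /u03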

4. CRS Installation

  • Set up the environment variables for the oracle user (such as ORACLE_BASE, PATH, etc.) on both nodes, for example:
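A minimal sketch for the oracle user's profile - the CRS home below matches this install, while the database home name (db_1) is just an assumption for illustration:

export ORACLE_BASE=/opt/oracle
export ORA_CRS_HOME=$ORACLE_BASE/product/11.1.0/crs
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1   # assumed db home name
export PATH=$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:$PATH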

  • With this, we have all the basic pre-installation tasks taken care of and are ready to invoke the Oracle 11g OUI from the CRS software distribution. But before that, let's run the CVU:
$ ./runcluvfy.sh stage -pre crsinst -n coke,pepsi

  • Invoke the Installer:

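From the clusterware staging area that is simply (installer screenshots omitted):

$ /share/Oracle-Linux-Software/11g/clusterware/runInstaller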
Running orainstRoot.sh and root.sh scripts on both the nodes:

[root@coke ~]# /opt/oracle/oraInventory/orainstRoot.sh
Changing permissions of /opt/oracle/oraInventory to 770.
Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete
[root@coke ~]# /opt/oracle/product/11.1.0/crs/root.sh
WARNING: directory '/opt/oracle/product/11.1.0' is not owned by root
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/opt/oracle/product/11.1.0' is not owned by root. Changing owner to root
The directory '/opt/oracle/product' is not owned by root. Changing owner to root
The directory '/opt/oracle' is not owned by root. Changing owner to root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: coke coke-priv coke
node 2: pepsi pepsi-priv pepsi
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /u01/oradb/ocr/vote
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
coke
Cluster Synchronization Services is inactive on these nodes.
pepsi
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
[root@pepsi ~]# /opt/oracle/oraInventory/orainstRoot.sh
Changing permissions of /opt/oracle/oraInventory to 770.
Changing groupname of /opt/oracle/oraInventory to dba.
The execution of the script is complete
[root@pepsi ~]# /opt/oracle/product/11.1.0/crs/root.sh
WARNING: directory '/opt/oracle/product/11.1.0' is not owned by root
WARNING: directory '/opt/oracle/product' is not owned by root
WARNING: directory '/opt/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/opt/oracle/product/11.1.0' is not owned by root. Changing owner
to root
The directory '/opt/oracle/product' is not owned by root. Changing owner to root
The directory '/opt/oracle' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: coke coke-priv coke
node 2: pepsi pepsi-priv pepsi
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
coke
pepsi
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
[root@pepsi ~]#

$ crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
$
We have an up-and-running Oracle 11g CRS stack.
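To see the individual resources (the VIP, GSD and ONS applications that root.sh created), crs_stat gives a quick summary - everything should report ONLINE:

$ crs_stat -t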

Note that you always have to bring up coke first and then pepsi, since coke hosts the NFS file systems required by pepsi.

More to follow......

Thanks to Tim Hall (oracle-base) for sharing the ideas/thoughts on NFS share/mount options.

Convert a Non-Partitioned Table to a Partitioned Table - using the Online Redefinition Feature (10g)

Here is a working example of converting a non-partitioned table to a partitioned table. Along the way, we will not only see how to convert the table, but also get exposed to the usage of the Online Redefinition feature. There is one neat enhancement (among others) to online redefinition in 10g, which we are going to talk about.

  • We have a non-partitioned table T1 with about 2 million records
  • It has a primary key, grants to various roles and public synonyms (dependencies)
  • Our goal is to partition the table using the primary key column (col1 number(10)) - a reconstructed sketch of T1 follows
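The original DDL for T1 is not shown in this post; a table matching the description above would look something like this (reconstructed - the grants and synonym mirror the roles and dependencies that show up in the validation steps later):

create table t1
( col1 number(10) primary key,
col2 varchar2(30),
col3 date default sysdate,
col4 varchar2(40));

grant select on t1 to read_only;
grant select, insert, update, delete on t1 to read_write;
create public synonym t1 for chandra.t1;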

Some restrictions of Online Redefinition:

  • The table cannot belong to the SYS or SYSTEM schema.
  • The table to be redefined online should not use user-defined data types.
  • It should not be a clustered table.
  • It should not have materialized view (MV) logs or MVs defined on it.

You cannot move a table from one schema to another using the Online Redefinition feature.

1. Check to make sure that the table can use the online redefinition feature:

SQL> EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('CHANDRA','T1');

PL/SQL procedure successfully completed.

2. Create a partitioned table (add any new attributes you want to the new partitioned table)

Create table t1_part
( col1 number(10),
col2 varchar2(30),
col3 date default sysdate,
col4 varchar2(40))
partition by range (col1)
(Partition p1 values less than (100000),
Partition p2 values less than (200000),
Partition p3 values less than (300000),
Partition p4 values less than (400000),
Partition p5 values less than (500000),
Partition p6 values less than (600000),
Partition p7 values less than (700000),
Partition p8 values less than (800000),
Partition p9 values less than (900000),
Partition p10 values less than (1000000),
Partition p11 values less than (1100000),
Partition p12 values less than (1200000),
Partition p13 values less than (1300000),
Partition p14 values less than (1400000),
Partition p15 values less than (1500000),
Partition p16 values less than (1600000),
Partition p17 values less than (1700000),
Partition p18 values less than (1800000),
Partition p19 values less than (1900000),
Partition p20 values less than (maxvalue))
tablespace users;

3. Start the online redefinition process:

SQL> EXEC DBMS_REDEFINITION.START_REDEF_TABLE('CHANDRA','T1','T1_PART');

PL/SQL procedure successfully completed.

Elapsed: 00:00:38.53

4. Here is where the Oracle 10g enhancement comes in handy. In Oracle9i, we were required to create the constraints, indexes, grants, etc. on the staging table (T1_PART in our example) ourselves. In 10g, if triggers, constraints, indexes and grants are defined on the original table, they can be cloned onto the staging table by executing DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(). Once redefinition is complete, the referential constraints and triggers involving the staging table (created disabled) are enabled, all statistics are copied, and all cloned objects are renamed to their pre-redefinition names.

SQL> VARIABLE NUM_ERRORS NUMBER;
SQL> EXEC DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('CHANDRA','T1','T1_PART',1,TRUE,TRUE,TRUE,FALSE,:NUM_ERRORS,FALSE);

PL/SQL procedure successfully completed.

Elapsed: 00:00:25.08
SQL> PRINT NUM_ERRORS

NUM_ERRORS
----------
0
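Those positional arguments are cryptic; for readability, here is the equivalent call with named parameters (parameter names per the 10g DBMS_REDEFINITION documentation - the 1 above is the constant DBMS_REDEFINITION.CONS_ORIG_PARAMS passed for copy_indexes):

BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
    uname            => 'CHANDRA',
    orig_table       => 'T1',
    int_table        => 'T1_PART',
    copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
    copy_triggers    => TRUE,
    copy_constraints => TRUE,
    copy_privileges  => TRUE,
    ignore_errors    => FALSE,
    num_errors       => :NUM_ERRORS,
    copy_statistics  => FALSE);
END;
/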

5. Resync the tables

SQL> EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('CHANDRA','T1','T1_PART');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.23

6. Complete the online redefinition:

SQL> EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('CHANDRA','T1','T1_PART');

PL/SQL procedure successfully completed.

Elapsed: 00:00:02.18

7. Validations:

SQL> Select partition_name, high_value from user_tab_partitions where table_name='T1';

PARTITION_NAME HIGH_VALUE
------------------------------ ------------------------------
P1 100000
P10 1000000
P11 1100000
P12 1200000
P13 1300000
P14 1400000
P15 1500000
P16 1600000
P17 1700000
P18 1800000
P19 1900000
P2 200000
P20 MAXVALUE
P3 300000
P4 400000
P5 500000
P6 600000
P7 700000
P8 800000
P9 900000

20 rows selected.

The privileges granted are also preserved:

SQL> select privilege, grantee from dba_tab_privs where table_name='T1';

PRIVILEGE GRANTEE
---------------------------------------- ------------------------------
ALTER READ_WRITE
DELETE READ_WRITE
INSERT READ_WRITE
SELECT READ_WRITE
SELECT READ_ONLY
UPDATE READ_WRITE
ON COMMIT REFRESH READ_WRITE
QUERY REWRITE READ_WRITE
DEBUG READ_WRITE
FLASHBACK READ_WRITE

10 rows selected.

The only problem I see here is that the index is now global, and we may prefer a local prefixed index. For that, we have to drop the existing index and re-create it as a local index:

SQL> alter table t1 disable primary key;

Table altered.

Elapsed: 00:00:00.09
SQL> drop index t1_idx;

Index dropped.

Elapsed: 00:00:00.18
SQL> create index t1_idx on t1(col1) local tablespace users;

Index created.

Elapsed: 00:00:11.91
SQL> alter table t1 enable primary key;

Table altered.

Elapsed: 00:00:13.54
SQL> Select index_name, index_type, partitioned from user_indexes where table_name='T1';

INDEX_NAME INDEX_TYPE PAR
------------------------------ --------------------------- ---
T1_IDX NORMAL YES

Elapsed: 00:00:00.07