Tuesday, January 11, 2011

Create DB Link in Other Schema

I tried several approaches that failed, and finally succeeded using the following set of SQL (thanks to Ritesh Raj Singh):

SQL> grant create database link to schemauser ;

SQL> create or replace procedure schemauser.zcreatedblink as
begin

execute immediate 'create database link DBLINK_NAME connect to TARGET_USER identified by TARGET_PASSWORD using ''TARGET_TNS''';

end;
/

SQL> exec schemauser.zcreatedblink

SQL> revoke create database link from schemauser ;

SQL> drop procedure schemauser.zcreatedblink;

Please let me know if anybody has a better idea.

Thursday, December 23, 2010

Sql loader: Control Commit Interval

Analyzing the ADDM report we have found below:

Wait class "Commit" was consuming significant database time

It was a DB where an application continuously loads data using SQL*Loader (with default parameters). As a result, a huge number of commits was constantly occurring.

Now we want to control the number of rows between commits.

We changed the sqlldr script by appending ROWS=1000 and found below on sql loader log:

Space allocated for bind array: 495360 bytes(40 rows)

So a commit occurs after every 40 rows inserted, using a bind array of 495,360 bytes.

Now we have to calculate the appropriate bind array size for a 1000-row commit interval: (495360/40)*1000 = 12,384,000 bytes.

So we add a new parameter, BINDSIZE, to the old sqlldr command:

BINDSIZE=12384000 ROWS=1000

But we must keep BINDSIZE within the maximum permitted READSIZE (read buffer size) for the platform; for a 9i database it is 20 MB (20,971,520 bytes). [http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch04.htm#1004855]
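The arithmetic above can be sketched as follows (a minimal Python sketch; the 20 MB cap is the 9i READSIZE limit quoted above):

```python
# Derive the BINDSIZE needed for a desired commit interval,
# based on the bytes-per-row figure reported in the SQL*Loader log.
allocated_bytes = 495360   # "Space allocated for bind array" from the log
default_rows = 40          # rows per commit observed with default settings
desired_rows = 1000        # target commit interval

bytes_per_row = allocated_bytes // default_rows   # 12384 bytes per row
bindsize = bytes_per_row * desired_rows           # 12384000 bytes

# Keep BINDSIZE within the platform's maximum READSIZE (20 MB on 9i).
max_readsize = 20 * 1024 * 1024                   # 20971520 bytes
assert bindsize <= max_readsize

print(f"BINDSIZE={bindsize} ROWS={desired_rows}")
```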

Now our SQL*Loader log file shows:

Space allocated for bind array: 12384000 bytes(1000 rows)

Read buffer bytes:12384000

So, we are getting exactly 1000 rows interval for commits.

[Thanks to ssunda6: http://www.orafaq.com/forum/t/84903/2/]

Tuesday, December 21, 2010

Creating a User Defined Context Variable


SQL> conn / as sysdba;
SQL> CREATE OR REPLACE CONTEXT my_app_ctx using myuser.ctx_pkg;

Context created

SQL> select * from dba_context where namespace='MY_APP_CTX';

NAMESPACE   SCHEMA  PACKAGE  TYPE
----------- ------- -------- ----------------
MY_APP_CTX  MYUSER  CTX_PKG  ACCESSED LOCALLY

SQL> disc

SQL> conn myuser/password@DBNAME


SQL> CREATE OR REPLACE PACKAGE ctx_pkg IS
PROCEDURE set_ctx(p_name VARCHAR2, p_value VARCHAR2);
END;
/
Package created

SQL> CREATE OR REPLACE PACKAGE BODY ctx_pkg IS
PROCEDURE set_ctx(p_name VARCHAR2, p_value VARCHAR2) IS
BEGIN
dbms_session.set_context('my_app_ctx',p_name,p_value);
END set_ctx;
END;
/
Package body created

SQL> exec ctx_pkg.set_ctx(p_name => 'VAR1',p_value => 'VAL1');

PL/SQL procedure successfully completed

SQL> SELECT sys_context('my_app_ctx', 'VAR1') var1 FROM dual;

VAR1
------------
VAL1

Sunday, December 12, 2010

Configure RMAN for RAC (multiple instances)

On our Exadata machine we were planning an RMAN backup satisfying the following:

1. Parallelism 14
2. Use more than one instance (instances 3 & 4)
3. Exclude some of the tablespaces [due to limitation]

In order to satisfy points 1 & 2, the changes below have been made using RMAN:

CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 5 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 6 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 7 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 8 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 9 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 10 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 11 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 12 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 13 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 14 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 14;
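The repetitive channel configuration above can be generated with a short script. This is just an illustrative Python sketch that reproduces the alternating dw3/dw4 pattern; the service names and credentials are the ones from the example:

```python
# Generate the 14 RMAN channel configuration commands, alternating
# the connection between the two chosen instances (dw3 and dw4).
services = ["dw3", "dw4"]
commands = [
    f"CONFIGURE CHANNEL {n} DEVICE TYPE DISK CONNECT 'sys/password@{services[(n - 1) % 2]}';"
    for n in range(1, 15)
]
commands.append(
    "CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 14;"
)
for c in commands:
    print(c)
```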



Now that the default configuration satisfies points 1 & 2, we are ready to deal with point 3. We can easily do this inside the RMAN script. [The channel configuration above could also be placed inside the RMAN backup script, but then the sys or system password would be exposed in plain text.]


FULL LEVEL 0


configure exclude for tablespace 'USERS';
configure exclude for tablespace 'TBS1';
configure exclude for tablespace 'TBS2';
configure exclude for tablespace 'TBS3';
configure exclude for tablespace 'TBS4';

backup incremental level 0 database;

configure exclude for tablespace 'USERS' clear;
configure exclude for tablespace 'TBS1' clear;
configure exclude for tablespace 'TBS2' clear;
configure exclude for tablespace 'TBS3' clear;
configure exclude for tablespace 'TBS4' clear;

SQL 'alter system archive log current';

backup archivelog all DELETE INPUT;
backup current controlfile;

delete noprompt obsolete;

delete noprompt backup of archivelog until time 'sysdate-8';



LEVEL 1 DIFFERENTIAL


configure exclude for tablespace 'USERS';
configure exclude for tablespace 'TBS1';
configure exclude for tablespace 'TBS2';
configure exclude for tablespace 'TBS3';
configure exclude for tablespace 'TBS4';

backup incremental level 1 database;

configure exclude for tablespace 'USERS' clear;
configure exclude for tablespace 'TBS1' clear;
configure exclude for tablespace 'TBS2' clear;
configure exclude for tablespace 'TBS3' clear;
configure exclude for tablespace 'TBS4' clear;

SQL 'alter system archive log current';
backup archivelog all DELETE INPUT;
backup current controlfile;

delete noprompt obsolete;

delete noprompt backup of archivelog until time 'sysdate-8';



LEVEL 1 CUMULATIVE:

configure exclude for tablespace 'USERS';
configure exclude for tablespace 'TBS1';
configure exclude for tablespace 'TBS2';
configure exclude for tablespace 'TBS3';
configure exclude for tablespace 'TBS4';

backup incremental level 1 cumulative database;

configure exclude for tablespace 'USERS' clear;
configure exclude for tablespace 'TBS1' clear;
configure exclude for tablespace 'TBS2' clear;
configure exclude for tablespace 'TBS3' clear;
configure exclude for tablespace 'TBS4' clear;


SQL 'alter system archive log current';
backup archivelog all DELETE INPUT;
backup current controlfile;

delete noprompt obsolete;

delete noprompt backup of archivelog until time 'sysdate-8';


After configuring as above, the results summary is as below:

Total Data Size: 6 TB
Excluded Data Size: 4 TB
Total Backed-up Data Size (on DB): 2 TB [90% of this data uses EHCC, COMPRESS FOR QUERY HIGH]
Backupset Size of one level 0 : 1.25 TB
Backupset Size of one level 1 : 100 GB
Total time for level 0: 6 hours
Total time for level 1: 60 minutes

In order to find the backup piece sizes, we used the query below:

select ctime "Date",
       decode(backup_type, 'L', 'Archive Log', 'D', 'Full', 'Incremental') backup_type,
       bsize "Size GB"
  from (select trunc(bp.completion_time) ctime,
               backup_type,
               round(sum(bp.bytes / 1024 / 1024 / 1024), 2) bsize
          from v$backup_set bs, v$backup_piece bp
         where bs.set_stamp = bp.set_stamp
           and bs.set_count = bp.set_count
           and bp.status = 'A'
         group by trunc(bp.completion_time), backup_type)
 order by 1, 2;
[thanks to ebrian: http://forums.oracle.com/forums/thread.jspa?threadID=1094667]

Monday, November 22, 2010

Exadata : Shrink and Increase ASM Diskgroup

On our Exadata half-rack machine, two of our ASM disk groups were DATA2 at 3.5 TB and DATA1 at 11 TB.
We planned to reduce DATA2 to 2 TB and grow DATA1 to 12.5 TB.

I followed below steps to accomplish the task:

[As we are using normal redundancy, about 50% of the size specified here will be usable]
step 1> resize from ASM. Resize DATA2 to 2 TB [4 TB raw] (from 88 GB to 50 GB per GRIDDISK; for griddisks we must consider the raw size)
SQL> ALTER DISKGROUP DATA2 RESIZE ALL SIZE 50 G REBALANCE POWER 11;

step 2> resize the DATA2_* GRIDDISKs: on each cell, run the command below for each griddisk
CellCLI> ALTER GRIDDISK griddisk_name size=50G;
[
--create scripts using below query:
--as in exadata each cell is a failgroup use
--failgroup to generate script for a cell
select 'ALTER GRIDDISK ' || name || ' size=50G;'
from v$asm_disk
where name like 'DATA2_%' and failgroup='FAILGROUP';
]

step 3> check griddisk detail for DATA2 [size:]
$ cellcli -e "list griddisk where name like 'DATA2_.*' detail"|grep size:

step 4> check size of a DATA1 griddisk [size:]
$ cellcli -e "list griddisk where name like 'DATA1_.*' detail"|grep size:
-- let the size = CURR_GD [270G]

step 5> check free space on CELLDISKs [freeSpace:]
$ cellcli -e "list celldisk where name like 'CD_.*' detail"|grep freeSpace:
-- let free space = FREE_CD [38G]
-- we want to increase the size by FREE_CD
-- (here, FREE_CD has been freed by shrinking
-- each DATA2 griddisk from 88 GB to 50 GB)
-- so, new size of each griddisk of DATA1,
-- NEW_SIZE = CURR_GD+FREE_CD (in GB, raw size)
-- [308 G=270+38]
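The sizing arithmetic of steps 4 and 5 can be sketched as follows (illustrative Python; the numbers are the example values above):

```python
# New raw size for each DATA1 griddisk after reclaiming the space
# freed on each celldisk by shrinking the DATA2 griddisks.
curr_gd = 270   # current DATA1 griddisk size in GB (step 4)
free_cd = 38    # free space per celldisk in GB (step 5)

new_size = curr_gd + free_cd   # raw size in GB for each DATA1 griddisk
print(f"ALTER GRIDDISK griddisk_name size={new_size}G;")
```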

step 6> resize the DATA1_* GRIDDISKs: on each cell, run the command below for each griddisk
CellCLI> ALTER GRIDDISK griddisk_name size=NEW_SIZE G;
[
--create scripts using below query:
--as in exadata each cell is a failgroup use
--failgroup to generate script for a cell
select 'ALTER GRIDDISK ' || name || ' size=NEW_SIZE G;'
from v$asm_disk
where name like 'DATA1_%' and failgroup='FAILGROUP';
]

step 7> resize from ASM
SQL> ALTER DISKGROUP DATA1 RESIZE ALL REBALANCE POWER 11; --no size specified, to allocate the maximum available space on the related griddisks

step 8> check the status of all ASM disks using "v$asm_disk"

step 9> check DB alert logs

step 10> check cell alert history on each cell
CellCLI>list ALERTHISTORY


Thursday, September 2, 2010

BUFFER SORT and Parallel query

One of our developers complained that a query he executed failed with "ORA-1652: unable to extend temp segment by 128 in tablespace TEMP".
He also explained that there was no "sorting" or "group by" operation in that query.
Then I looked at the execution plan and found BUFFER SORT operations that consumed space in temp.

After some googling, I found that it is parallel query that is responsible for this.
http://www.orafaq.com/forum/t/125684/0/

When joining two tables where one has a parallel degree greater than 1 and the other has degree 1 (or noparallel), a BUFFER SORT will take place.
To check this I performed below tests:

Buffer Sort:

alter table bs1 parallel 4;
alter table bs2 parallel 1;

explain plan for
select * from bs1 a,bs2 b
where a.id=b.id;

select
substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,50 ) "Operation",
object_name "Object"
from
plan_table
start with id = 0
connect by prior id=parent_id;

Operation                                          Object
----------------------------------------------------------
SELECT STATEMENT ()
 PX COORDINATOR ()
  PX SEND (QC (RANDOM))                            :TQ10002
   HASH JOIN ()
    PX RECEIVE ()
     PX SEND (HASH)                                :TQ10001
      PX BLOCK (ITERATOR)
       TABLE ACCESS (STORAGE FULL)                 BS1
    BUFFER (SORT)
     PX RECEIVE ()
      PX SEND (HASH)                               :TQ10000
       TABLE ACCESS (STORAGE FULL)                 BS2



No Buffer Sort:


alter table bs1 parallel 4;
alter table bs2 parallel 2;

explain plan for
select * from bs1 a,bs2 b
where a.id=b.id;

select
substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,50 ) "Operation",
object_name "Object"
from
plan_table
start with id = 0
connect by prior id=parent_id;

Operation                                          Object
--------------------------------------------------------
SELECT STATEMENT ()
 PX COORDINATOR ()
  PX SEND (QC (RANDOM))                            :TQ10002
   HASH JOIN (BUFFERED)
    PX RECEIVE ()
     PX SEND (HASH)                                :TQ10000
      PX BLOCK (ITERATOR)
       TABLE ACCESS (STORAGE FULL)                 BS1
    PX RECEIVE ()
     PX SEND (HASH)                                :TQ10001
      PX BLOCK (ITERATOR)
       TABLE ACCESS (STORAGE FULL)                 BS2
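The rule that the two experiments demonstrate can be stated as a simple predicate. This is a hedged sketch of the behaviour observed above, not an official optimizer rule:

```python
def expect_buffer_sort(degree_a, degree_b):
    """Return True if the plan is expected to contain a BUFFER SORT:
    one table is accessed in parallel (degree > 1) while the other
    is serial (degree 1 or noparallel)."""
    return (degree_a > 1) != (degree_b > 1)

# First experiment: bs1 parallel 4, bs2 parallel 1 -> BUFFER SORT appears.
print(expect_buffer_sort(4, 1))
# Second experiment: bs1 parallel 4, bs2 parallel 2 -> no BUFFER SORT.
print(expect_buffer_sort(4, 2))
```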

Block Corruption

If the database can be started in open mode, perform the steps below keeping the DB open.
If the DB is in noarchivelog mode, or the DB cannot be opened, or the DB can be opened but after some time goes into recovery mode (possibly with frequent ORA-600 [17182] and ORA-600 [17114] errors in the alert log) and stops, then start up in mount mode and follow the steps below:

1. Set $ORACLE_SID and $ORACLE_HOME

2. $rman target / nocatalog

3. From the RMAN prompt, validate all datafiles (use "check logical", as RMAN normally checks only for physical corruption):

run {
allocate channel d1 type disk;
backup check logical validate database;
release channel d1;
}

Now the v$database_block_corruption view is populated with the corrupted blocks.

4. If you have RMAN backup and related archive logs then perform block corruption recovery using rman and go to step 9.


5. If the DB is in open mode and you can query the dictionary (dba_%) tables, go to the next step; otherwise proceed as follows:
- execute the query below to find the datafiles containing corrupted blocks

select a.FILE#, a.NAME, b.TS#, b.NAME
from v$datafile a, v$tablespace b
where a.FILE# in
(select FILE# from v$database_block_corruption group by FILE#)
and a.TS# = b.TS#;

- take the datafiles returned by the above query offline
- alter database open
- go to the next step (6)

6. Execute the script

create table segment_corrupted (owner varchar2(30), segment_name varchar2(50), segment_type varchar2(20));
truncate table segment_corrupted;

begin
  -- map each corrupted block to the segment that owns it
  for i in (select file#, block# from v$database_block_corruption) loop
    insert into segment_corrupted
      select owner, segment_name, segment_type
        from dba_extents
       where file_id = i.file#
         and i.block# between block_id and block_id + blocks - 1;
  end loop;
  commit;
end;
/
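The core of the script is the extent-containment test in the WHERE clause. In isolation, it is just the following check (illustrative Python mirroring the `between block_id and block_id + blocks - 1` condition):

```python
def block_in_extent(block_no, extent_start, extent_blocks):
    """True if a corrupted block number falls inside an extent that
    starts at block extent_start and spans extent_blocks blocks."""
    return extent_start <= block_no <= extent_start + extent_blocks - 1

# An extent starting at block 100 with 8 blocks covers blocks 100..107.
print(block_in_extent(105, 100, 8))   # inside the extent
print(block_in_extent(108, 100, 8))   # just past the extent
```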

7. Drop the objects found in the segment_corrupted table

8. Recover datafiles (from step 5) and make online as below:
recover datafile 'datafilename';
alter database datafile 'datafilename' online;

9. Bounce the DB and look in the alert log if there are any alerts

10. Take a full DB backup using rman