Thursday, December 23, 2010

SQL*Loader: Control Commit Interval

Analyzing the ADDM report, we found the below:

Wait class "Commit" was consuming significant database time

It was a DB where an application continuously loads data using SQL*Loader (with default parameters). As a result, a huge number of commits were going on all the time.

Now we want to control the number of rows between commits.

We changed the sqlldr script by appending ROWS=1000 and found the below in the SQL*Loader log:

Space allocated for bind array: 495360 bytes(40 rows)

So, a commit was still occurring after every 40 rows: the default bind array of 495360 bytes holds only 40 rows.

Now we had to calculate the appropriate bind array size. We calculated it as (495360/40) * 1000 = 12384000 bytes.

So we had to use a new parameter, BINDSIZE, appending the below to the old sqlldr command:

BINDSIZE=12384000 ROWS=1000

But we have to keep BINDSIZE within the maximum permitted READSIZE (read buffer size) for the platform; for a 9i database that is 20 MB (20971520 bytes) [http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch04.htm#1004855].
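The calculation above can be sketched as a small helper (a Python sketch of this post's arithmetic; the function name and the 20 MB cap default are illustrative):

```python
# Scale the observed bind-array size to the desired commit interval,
# capped at the platform's maximum READSIZE (20971520 bytes for 9i).
def bindsize_for_rows(observed_bytes, observed_rows, target_rows,
                      max_readsize=20971520):
    per_row = observed_bytes // observed_rows   # bytes per row in the bind array
    return min(per_row * target_rows, max_readsize)

# 495360 bytes held 40 rows, so 1000 rows need:
print(bindsize_for_rows(495360, 40, 1000))  # 12384000
```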

Now our SQL*Loader log file shows the below:

Space allocated for bind array: 12384000 bytes(1000 rows)

Read buffer bytes:12384000

So, we are getting commits at exactly 1000-row intervals.

[Thanks to ssunda6: http://www.orafaq.com/forum/t/84903/2/]

Tuesday, December 21, 2010

Creating a User Defined Context Variable


SQL> conn / as sysdba;
SQL> CREATE OR REPLACE CONTEXT my_app_ctx using myuser.ctx_pkg;

Context created

SQL> select * from dba_context where namespace='MY_APP_CTX';

NAMESPACE   SCHEMA  PACKAGE  TYPE
----------- ------- -------- ----------------
MY_APP_CTX  MYUSER  CTX_PKG  ACCESSED LOCALLY

SQL> disc

SQL> conn myuser/password@DBNAME


SQL> CREATE OR REPLACE PACKAGE ctx_pkg IS
PROCEDURE set_ctx(p_name VARCHAR2, p_value VARCHAR2);
END;
/
Package created

SQL> CREATE OR REPLACE PACKAGE BODY ctx_pkg IS
PROCEDURE set_ctx(p_name VARCHAR2, p_value VARCHAR2) IS
BEGIN
dbms_session.set_context('my_app_ctx',p_name,p_value);
END set_ctx;
END;
/
Package body created

SQL> exec ctx_pkg.set_ctx(p_name => 'VAR1',p_value => 'VAL1');

PL/SQL procedure successfully completed

SQL> SELECT sys_context('my_app_ctx', 'VAR1') var1 FROM dual;

VAR1
------------
VAL1

Sunday, December 12, 2010

Configure RMAN for RAC (multiple instances)

On our Exadata machine we were planning an RMAN backup satisfying the below:

1. Parallelism 14
2. Use more than 1 instance (instances 3 & 4)
3. Exclude some of the tablespaces [due to limitation]

In order to satisfy 1 & 2, the below changes have been made using RMAN:

CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 3 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 4 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 5 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 6 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 7 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 8 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 9 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 10 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 11 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 12 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE CHANNEL 13 DEVICE TYPE DISK CONNECT 'sys/password@dw3';
CONFIGURE CHANNEL 14 DEVICE TYPE DISK CONNECT 'sys/password@dw4';
CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO COMPRESSED BACKUPSET PARALLELISM 14;
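Since the channel list above is purely repetitive, it can be generated rather than typed. A hypothetical Python sketch (the helper name and the credential default are made up; the emitted commands mirror the ones above, alternating round-robin across the instance aliases):

```python
# Generate CONFIGURE CHANNEL commands alternating across TNS aliases.
def channel_commands(n_channels, tns_aliases, credentials="sys/password"):
    cmds = []
    for ch in range(1, n_channels + 1):
        alias = tns_aliases[(ch - 1) % len(tns_aliases)]  # round-robin
        cmds.append(f"CONFIGURE CHANNEL {ch} DEVICE TYPE DISK "
                    f"CONNECT '{credentials}@{alias}';")
    cmds.append(f"CONFIGURE DEVICE TYPE DISK BACKUP TYPE TO "
                f"COMPRESSED BACKUPSET PARALLELISM {n_channels};")
    return cmds

for line in channel_commands(14, ["dw3", "dw4"]):
    print(line)
```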



Now that our default configuration satisfies points 1 & 2, we are ready to deal with point 3. We can easily do this inside the RMAN script [the channel configuration above can also be done inside the RMAN backup script, but in that case the sys or system password would be exposed]:


FULL LEVEL 0


configure exclude for tablespace 'USERS';
configure exclude for tablespace 'TBS1';
configure exclude for tablespace 'TBS2';
configure exclude for tablespace 'TBS3';
configure exclude for tablespace 'TBS4';

backup incremental level 0 database;

configure exclude for tablespace 'USERS' clear;
configure exclude for tablespace 'TBS1' clear;
configure exclude for tablespace 'TBS2' clear;
configure exclude for tablespace 'TBS3' clear;
configure exclude for tablespace 'TBS4' clear;

SQL 'alter system archive log current';

backup archivelog all DELETE INPUT;
backup current controlfile;

delete noprompt obsolete;

delete noprompt backup of archivelog until time 'sysdate-8';



LEVEL 1 DIFFERENTIAL


configure exclude for tablespace 'USERS';
configure exclude for tablespace 'TBS1';
configure exclude for tablespace 'TBS2';
configure exclude for tablespace 'TBS3';
configure exclude for tablespace 'TBS4';

backup incremental level 1 database;

configure exclude for tablespace 'USERS' clear;
configure exclude for tablespace 'TBS1' clear;
configure exclude for tablespace 'TBS2' clear;
configure exclude for tablespace 'TBS3' clear;
configure exclude for tablespace 'TBS4' clear;

SQL 'alter system archive log current';
backup archivelog all DELETE INPUT;
backup current controlfile;

delete noprompt obsolete;

delete noprompt backup of archivelog until time 'sysdate-8';



LEVEL 1 CUMULATIVE:

configure exclude for tablespace 'USERS';
configure exclude for tablespace 'TBS1';
configure exclude for tablespace 'TBS2';
configure exclude for tablespace 'TBS3';
configure exclude for tablespace 'TBS4';

backup incremental level 1 cumulative database;

configure exclude for tablespace 'USERS' clear;
configure exclude for tablespace 'TBS1' clear;
configure exclude for tablespace 'TBS2' clear;
configure exclude for tablespace 'TBS3' clear;
configure exclude for tablespace 'TBS4' clear;


SQL 'alter system archive log current';
backup archivelog all DELETE INPUT;
backup current controlfile;

delete noprompt obsolete;

delete noprompt backup of archivelog until time 'sysdate-8';


After configuring as above, the result summary is as below:

Total Data Size: 6 TB
Excluded Data Size: 4 TB
Total Backed-up Data Size(on DB): 2 TB [90% data using EHCC or compressed for query high]
Backupset Size of one level 0 : 1.25 TB
Backupset Size of one level 1 : 100 GB
Total time for level 0: 6 hours
Total time for level 1: 60 minutes

In order to find the backup piece sizes, we used the below query:

select ctime "Date",
decode(backup_type, 'L', 'Archive Log', 'D', 'Full', 'Incremental') backup_type,
bsize "Size GB"
from (select trunc(bp.completion_time) ctime,
backup_type,
round(sum(bp.bytes / 1024 / 1024/1024), 2) bsize
from v$backup_set bs, v$backup_piece bp
where bs.set_stamp = bp.set_stamp
and bs.set_count = bp.set_count
and bp.status = 'A'
group by trunc(bp.completion_time), backup_type)
order by 1, 2;
[thanks to ebrian: http://forums.oracle.com/forums/thread.jspa?threadID=1094667]

Monday, November 22, 2010

Exadata : Shrink and Increase ASM Diskgroup

On our Exadata half-rack machine, two of our ASM diskgroups were DATA2 at 3.5 TB and DATA1 at 11 TB.
We were planning to reduce DATA2 to 2 TB and increase DATA1 to 12.5 TB.

I followed below steps to accomplish the task:

[As we are using normal redundancy, about 50% of the size specified here will be usable]
step 1> resize from ASM. Resize DATA2 to 2 TB [4 TB raw] (from 88 GB to 50 GB per GRIDDISK; for griddisks we should consider the raw size)
SQL> ALTER DISKGROUP DATA2 RESIZE ALL SIZE 50 G REBALANCE POWER 11;

step 2> resize the GRIDDISKs of DATA2_* on each cell; for each griddisk run the below command
CellCLI> ALTER GRIDDISK griddisk_name size=50G;
[
--create scripts using below query:
--as in exadata each cell is a failgroup use
--failgroup to generate script for a cell
select 'ALTER GRIDDISK ' || name || ' size=50G;'
from v$asm_disk
where name like 'DATA2_%' and failgroup='FAILGROUP';
]

step 3> check griddisk detail for DATA2 [size:]
$ cellcli -e "list griddisk where name like 'DATA2_.*' detail"|grep size:

step 4> check size of a DATA1 griddisk [size:]
$ cellcli -e "list griddisk where name like 'DATA1_.*' detail"|grep size:
-- let the size = CURR_GD [270G]

step 5> check free space on CELLDISKs [freeSpace:]
$ cellcli -e "list celldisk where name like 'CD_.*' detail"|grep freeSpace:
-- let free space = FREE_CD [38G]
-- we want to increase the size by FREE_CD
-- (here, FREE_CD has been freed by shrinking
-- each DATA2 griddisk from 88 GB to 50 GB)
-- so, new size of each griddisk of DATA1,
-- NEW_SIZE = CURR_GD+FREE_CD (in GB, raw size)
-- [308 G=270+38]
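The arithmetic in steps 4 and 5 can be sketched as (a Python sketch of this post's numbers; the function name is illustrative, and remember these are raw sizes — with normal redundancy roughly half is usable):

```python
# New raw size per DATA1 griddisk after absorbing the freed cell space.
def new_griddisk_size(current_gb, free_per_celldisk_gb):
    return current_gb + free_per_celldisk_gb

# CURR_GD = 270 GB per griddisk, FREE_CD = 38 GB freed per celldisk:
print(new_griddisk_size(270, 38))  # 308, as in step 5
```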

step 6> resize the GRIDDISKs of DATA1_* on each cell; for each griddisk run the below command
CellCLI> ALTER GRIDDISK griddisk_name size=NEW_SIZE G;
[
--create scripts using below query:
--as in exadata each cell is a failgroup use
--failgroup to generate script for a cell
select 'ALTER GRIDDISK ' || name || ' size=NEW_SIZE G;'
from v$asm_disk
where name like 'DATA1_%' and failgroup='FAILGROUP';
]

step 7> resize from ASM
SQL> ALTER DISKGROUP DATA1 RESIZE ALL REBALANCE POWER 11; --no size specified, to allocate the maximum available space on the related griddisks

step 8> check status of all asmdisks using "v$asm_disk"

step 9> check DB alert logs

step 10> check cell alert history on each cell
CellCLI>list ALERTHISTORY


Thursday, September 2, 2010

BUFFER SORT and Parallel query

One of our developers complained that a query he executed failed because of "ORA-1652: unable to extend temp segment by 128 in tablespace TEMP".
He also explained that there was no "sorting" or "group by" operation in that query.
Then I looked at the execution plan and found that there are some buffer sort in the execution plan which consumed space in temp.

After some googling I found that it is parallel query execution that is responsible for this.
http://www.orafaq.com/forum/t/125684/0/

If you join two tables where one has a parallel degree greater than 1 and the other has parallel 1 (or noparallel),
then a buffer sort will take place.
To check this I performed below tests:

Buffer Sort:

alter table bs1 parallel 4;
alter table bs2 parallel 1;

explain plan for
select * from bs1 a,bs2 b
where a.id=b.id;

select
substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,50 ) "Operation",
object_name "Object"
from
plan_table
start with id = 0
connect by prior id=parent_id;

Operation Object
----------------------------------------------------------
SELECT STATEMENT ()
PX COORDINATOR ()
PX SEND (QC (RANDOM)) :TQ10002
HASH JOIN ()
PX RECEIVE ()
PX SEND (HASH) :TQ10001
PX BLOCK (ITERATOR)
TABLE ACCESS (STORAGE FULL) BS1
BUFFER (SORT)
PX RECEIVE ()
PX SEND (HASH) :TQ10000
TABLE ACCESS (STORAGE FULL) BS2



No Buffer Sort:


alter table bs1 parallel 4;
alter table bs2 parallel 2;

explain plan for
select * from bs1 a,bs2 b
where a.id=b.id;

select
substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,50 ) "Operation",
object_name "Object"
from
plan_table
start with id = 0
connect by prior id=parent_id;

Operation Object
--------------------------------------------------------
SELECT STATEMENT ()
PX COORDINATOR ()
PX SEND (QC (RANDOM)) :TQ10002
HASH JOIN (BUFFERED)
PX RECEIVE ()
PX SEND (HASH) :TQ10000
PX BLOCK (ITERATOR)
TABLE ACCESS (STORAGE FULL) BS1
PX RECEIVE ()
PX SEND (HASH) :TQ10001
PX BLOCK (ITERATOR)
TABLE ACCESS (STORAGE FULL) BS2

Block Corruption

If the database can be started in open mode, do the below keeping the DB open.
If the DB is in noarchivelog mode, or the DB cannot be opened, or the DB opens but after some time goes into recovery mode (possibly with frequent ORA-600 [17182] & ORA-600 [17114] errors in the alert log) and stops, then start up in mount mode and follow the steps below:

1. Set $ORACLE_SID and $ORACLE_HOME

2. $rman target / nocatalog

3. From the RMAN prompt, validate all datafiles (use "check logical", as RMAN normally checks only physical corruption):

run {
allocate channel d1 type disk;
backup check logical validate database;
release channel d1;
}

Now the v$database_block_corruption view is populated with the corrupted blocks.

4. If you have RMAN backup and related archive logs then perform block corruption recovery using rman and go to step 9.


5. If the DB is in open mode and you can query the dictionary (dba_%) tables, go to the next step; otherwise proceed as below:
- execute the below query to find the datafiles with corrupted blocks

select a.FILE#, a.NAME, b.TS#, b.NAME
from v$datafile a, v$tablespace b
where a.FILE# in
(select FILE# from v$database_block_corruption group by FILE#)
and a.TS# = b.TS#;

- take the datafiles returned by the above query offline
- alter database open
- go to the next step (6)

6. Execute the script

create table segment_corrupted (owner varchar2(30), segment_name varchar2(50), segment_type varchar2(20));
truncate table segment_corrupted;

begin
for i in (select file#, block# from v$database_block_corruption) loop
-- record every segment that owns this corrupted block
insert into segment_corrupted
select owner, segment_name, segment_type
from dba_extents
where file_id = i.file#
and i.block# between block_id and block_id + blocks - 1;
end loop;
commit;
end;
/
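The core of the script above is an interval test: a corrupted (file#, block#) pair belongs to the extent whose block range contains it. A standalone Python sketch of that lookup (the extent tuples are made-up sample data):

```python
# Find which segment owns a corrupted block: the block must lie in
# [block_id, block_id + blocks - 1] of an extent in the same file.
def owning_segment(file_no, block_no, extents):
    """extents: list of (owner, segment_name, file_id, block_id, blocks)."""
    for owner, seg, file_id, block_id, blocks in extents:
        if file_id == file_no and block_id <= block_no <= block_id + blocks - 1:
            return owner, seg
    return None  # block not covered by any known extent

sample = [("MYUSER", "T1", 5, 128, 8), ("MYUSER", "I1", 5, 200, 16)]
print(owning_segment(5, 130, sample))  # ('MYUSER', 'T1')
```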

7. Drop the objects found in segment_corrupted table

8. Recover datafiles (from step 5) and make online as below:
recover datafile 'datafilename';
alter database datafile 'datafilename' online;

9. Bounce the DB and look in the alert log if there are any alerts

10. Take a full DB backup using rman

Changing spfile location of RAC DATABASE

Last week we made some changes to the configuration (SGA & PGA) of our RAC DB using a pfile. After opening each instance with the pfile containing the new configuration, we executed the below command:
create spfile from pfile='.....';

As a result now each instance has different spfile in their default location ($ORACLE_HOME/dbs).
So we were planning to shift back spfile to a common location in ASM.

To do this we followed steps bellow:

1. Take backup of pfile & spfile

2. log in to an instance (in my case instance "dw1" of DB "dw") as sysdba

3. SQL> create pfile='/home/oracle/pfileaug31aug2010.ora' from spfile;

4. SQL> create spfile='+DATA1/dw/spfiledw.ora' from pfile='/home/oracle/pfileaug31aug2010.ora';

5. Then create a pfile in the default location ($ORACLE_HOME/dbs/initSID.ora) containing only the spfile location:
echo "SPFILE='+DATA1/dw/spfiledw.ora'" > $ORACLE_HOME/dbs/initdw1.ora

6. delete the spfile in default location($ORACLE_HOME/dbs)

7. restart the current instance

8. Now repeat steps 5,6 & 7 for all other instances

9. Now when an instance starts:
- it will look for an spfile in the default location
- as no spfile is there, it will look for a pfile
- in the pfile it will find the location of the spfile and load the init params from it
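The search order in step 9 can be sketched as follows (a simplified Python sketch, not Oracle's exact algorithm; the function name and return strings are illustrative):

```python
# Simplified model of the startup parameter-file search order above.
def parameter_file_used(default_spfile_exists, default_pfile_contents):
    if default_spfile_exists:
        return "spfile in $ORACLE_HOME/dbs"
    if default_pfile_contents.strip().upper().startswith("SPFILE="):
        # the default pfile only points at the shared spfile in ASM
        return "spfile named inside the default pfile"
    return "default pfile itself"

print(parameter_file_used(False, "SPFILE='+DATA1/dw/spfiledw.ora'"))
```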

ORA-01548: active rollback segment '_SYSSMU39_115105166$' found

I tried to drop an undo tablespace that contained active rollback segments and got the ORA-01548 error.

To resolve this problem follow below steps:
1. SQL>Create PFILE from SPFILE;

2. Edit pfile and set undo management to manual.
undo_management = manual

3. List the rollback segment using the below (hidden) parameter:

_offline_rollback_segments=('_SYSSMU39_115105166$')

4. SQL>startup mount pfile='\....\pfile'

5. SQL>alter database datafile 'filename' offline drop; [if the datafile is corrupted]

6. SQL>alter Database Open;

7. SQL>drop Rollback Segment "_SYSSMU39_115105166$";

8. SQL>Drop Tablespace old_undo_tablespace_name Including Contents and Datafiles;

9. create new undo tablespace.

10. Shutdown Immediate;

11. Change the pfile parameters:
undo_management = AUTO
undo_tablespace = new_undo_tablespace_name
and remove the _offline_rollback_segments parameter

12. Startup the Database

http://www.oracle-base.com/forums/viewtopic.php?f=1&t=9775&p=20714

Sunday, July 4, 2010

Session status is KILLED but no cleanup

Today morning, as requested by our application admin, I killed some sessions that had been INACTIVE for a very long time (select * from v$session where LAST_CALL_ET/(60*60*24) > 1000).

Those sessions were then marked KILLED (the STATUS field of v$session). After 5 hours I checked them again and found they were still in v$session with status='KILLED'.

Then I used a query joining v$session (paddr) with v$process (addr) to find the OS process id (v$process.spid), but no rows were returned.
Then I checked the v$transaction view to see whether any rollback was going on:

select s.sid,
s.program,
t.status as transaction_status,
s.status as session_status,
s.lockwait,
s.pq_status,
t.used_ublk as undo_blocks_used,
decode(bitand(t.flag, 128), 0, 'NO', 'YES') rolling_back
from v$session s, v$transaction t
where s.taddr = t.addr;

But there was no rollback going on.
After that I used below query to find status of waiting session-

select *
from v$session_wait
where sid in (select sid
from v$session
where username = 'USERNAME'
and status = 'KILLED');

This time I found my sessions in a waiting state with wait_time=0; Oracle was waiting for instructions from the user/client, so I had to stop/kill the client program/process.
But there was no corresponding row for those killed sessions in v$process (the paddr in v$session had changed after the kill while the sessions were waiting).

So, I used the below query to find (and then kill) any unwanted processes that have no corresponding entry in v$session:

SELECT spid
FROM v$process a
WHERE NOT EXISTS (SELECT 1 FROM v$session b WHERE b.paddr = a.addr);

I knew I was using DEDICATED server mode, so killing one process would have no impact on others (no one else shares the session). I also knew those sessions had connected via TNS from a remote machine.
So, before killing each process, I checked it with `ps -ef|grep <spid>`, and only if I saw LOCAL=NO did I kill the process.
After killing each process I waited a while and checked v$session to confirm that one of the KILLED sessions was gone.
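The orphan-process query above is essentially a set difference: processes whose addr has no matching session paddr. A Python sketch with made-up addresses and spids:

```python
# Find process spids with no corresponding session (addr not in paddrs).
def orphan_spids(processes, sessions):
    """processes: {addr: spid}; sessions: set of paddr values."""
    return sorted(spid for addr, spid in processes.items()
                  if addr not in sessions)

procs = {"00A1": 1201, "00A2": 1345, "00A3": 1500}
active_paddrs = {"00A1", "00A3"}
print(orphan_spids(procs, active_paddrs))  # [1345]
```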

http://www.lazydba.com/oracle/0__4714.html
http://oracleunix.wordpress.com/2006/08/06/alter-system-kill-session-marked-for-killed-forever/
http://forums.oracle.com/forums/thread.jspa?threadID=695018

Tuesday, February 23, 2010

Using "sys_context" function

We were planning to restrict a DB user so that it can
a. only create session
b. have insert only into a table MYTABLE
c. insert into that table (MYTABLE) only from several selected servers

Points a & b are easy to resolve, but for point c we had to do something extra.
In order to implement point c, we created a trigger on table MYTABLE that uses the sys_context function to collect the user & IP from the session and uses this information for authentication. My trigger was as below:

create or replace trigger mytrg
before insert on MYTABLE
for each row
declare
var_ip varchar2(50);
var_count_ip number;
var_user varchar2(50);
begin
select sys_context('USERENV', 'IP_ADDRESS') into var_ip from dual;
select count(*) into var_count_ip from servers where ip = var_ip;
select user into var_user from dual;
-- reject the insert if the session IP is missing or not in SERVERS
if var_user = 'ALERTSENDER' and (var_ip is null or var_count_ip = 0) then
RAISE_APPLICATION_ERROR(-20001,
'ERROR:' || var_ip ||
':NOT AN AUTHENTICATED IP');
else
null; --OTHER NORMAL OPERATIONS
end if;
end;
/
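The trigger's authentication rule, restated outside PL/SQL, is just an allowlist check. A Python sketch (apart from the ALERTSENDER user name from the trigger, all values here are illustrative):

```python
# Allow the insert unless the restricted user connects from an
# unknown (or missing) IP address.
def insert_allowed(username, session_ip, allowed_ips,
                   restricted_user="ALERTSENDER"):
    if username != restricted_user:
        return True          # other users are not IP-restricted
    return session_ip is not None and session_ip in allowed_ips

allowed = {"10.0.0.5", "10.0.0.6"}
print(insert_allowed("ALERTSENDER", "10.0.0.9", allowed))  # False
print(insert_allowed("ALERTSENDER", "10.0.0.5", allowed))  # True
```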


Detailed List of Locked Object

In order to find details of all locked objects at any moment, we can use the below query:

select c.USERNAME,b.owner,b.object_name,b.object_type,
decode(a.LOCKED_MODE,
0, 'None',
1, 'Null',
2, 'Row-Share',
3, 'Row-Exclusive',
4, 'Share',
5, 'Share/Row-Exclusive',
6, 'Exclusive') LOCKED_MODE,
c.SID, c.SERIAL#,d.SPID, c.OSUSER,c.PROGRAM,c.MACHINE
from v$locked_object a, dba_objects b, v$session c, v$process d
where a.OBJECT_ID = b.object_id
and a.SESSION_ID = c.SID
and c.PADDR = d.ADDR;

Tuesday, February 16, 2010

Foreign Key Constraints Referencing to a Table

We had a table with a primary key to purge.
The problem was that we needed to back up the purged data, including data from all the child tables referencing that table with foreign keys.
So I had to find all the child tables and their foreign key columns. I used the below query to resolve this problem:

select a.owner,
a.constraint_name,
a.constraint_type,
a.table_name,
b.column_name,
a.r_owner,
a.r_constraint_name
from dba_constraints a, dba_cons_columns b
where a.owner = 'MYUSER'
and a.owner = b.owner
and a.constraint_name = b.constraint_name
and a.r_constraint_name in
(select constraint_name
from dba_constraints
where table_name = 'MYTABLE'
and owner = 'MYUSER');


http://bytes.com/topic/oracle/answers/644008-query-find-primary-foreign-keys
http://www.databasejournal.com/features/oracle/article.php/3665591/Finding-Foreign-Key-Constraints-in-Oracle.htm

Monday, February 15, 2010

Execution Plan & Estimate Temporary Tablespace Usages

One of our developers informed me that he needed extra space in the temporary tablespace, as one of his ad-hoc queries had been running for 2 hours (and might run for 10) and was consuming a huge amount of temp space.

Database was Oracle 11gR2 on RHEL5.
Now the problem was: how could I estimate how much space to add? I tried with the execution plan
as below:

sqlplus user/pass@DB

sql> delete from myuser.plan_table;

sql> commit;

sql> explain plan for
select * from .....


sql> select * from myuser.plan_table;


But I could not find any field describing temp usage.
Searching the web, I learned that older versions of the plan table (which I had by default) don't have that field. The current plan table must be created manually, and then the query explained again:

cd /ORAIN/oracle/product/11.2.0/dbhome_1/rdbms/admin/


sqlplus user/pass@DB

sql> drop table myuser.plan_table purge;

sql> @utlxplan.sql

Table created.

sql> delete from myuser.plan_table;

sql> commit;

sql> explain plan for
select * from .....


sql> select * from myuser.plan_table;

Here the TEMP_SPACE column shows the estimated space usage. I then added the extra temp space according to the value shown in TEMP_SPACE [adding more than the estimate, to be safe], and it worked.
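The "more than estimated, to be safe" step can be sketched as a small helper (a Python sketch; the 1.5 safety factor is a site-specific assumption, not an Oracle recommendation):

```python
# Pad the optimizer's TEMP_SPACE estimate (bytes) by a safety factor
# and express the result in GB.
def temp_to_add_gb(estimated_bytes, safety_factor=1.5):
    return round(estimated_bytes * safety_factor / 1024**3, 1)

print(temp_to_add_gb(40 * 1024**3))  # 60.0 GB for a 40 GB estimate
```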


Recent optimizer statistics should be gathered for the tables and indexes used by the query being explained, in order to get a more accurate estimate.

http://oradbatips.blogspot.com/2008/02/tip-69-estimate-temp-usage-without.html
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:12519780526076

Wednesday, January 20, 2010

LOCAL LISTENER & ORA-00119 & ORA-00132

Oracle's PMON by default checks for listeners on the default port 1521.
If a listener is running on port 1521, PMON will register the database with that listener. Otherwise it will try every possible port.

To help PMON find the listener (if it is not running on 1521), an initialization parameter needs to be configured: "local_listener".
If "local_listener" has a value, then an entry in tnsnames.ora is needed specifying that listener's port. Please see below:

if local_listener is set as below:

SQL> show parameter local_listener

NAME TYPE VALUE
-------------- ------- ---------------------
local_listener string LISTENER_MYDB

An entry in tnsnames.ora will be as below (PORT should be the listener port created to serve the target DB):

LISTENER_MYDB =
(ADDRESS = (PROTOCOL = TCP)(HOST = myservername_or_ip)(PORT = 1522))

Now, if the init parameter local_listener is set (not null) and there is no corresponding entry in tnsnames.ora, the below errors will be shown:

ORA-00119: invalid specification for system parameter LOCAL_LISTENER
ORA-00132: syntax error or unresolved network name 'LISTENER_MYDB'

http://www.orafaq.com/node/30
http://www.shutdownabort.com/errors/ORA-00119.php

Tuesday, January 19, 2010

WFMLRSVCApp.ear file missing 11g Installation

Last week I was installing oracle 11gR2.

There were two zip files:

linux_11gR2_database_1of2.zip
linux_11gR2_database_2of2.zip

The first time, they were extracted into two different folders, Disk1 and Disk2.
Then all the contents of one folder were copied into the other folder.
With this layout I started my installation.
After some time I got an error like


File not found .......... WFMLRSVCApp.ear


I stopped installation and tried as below:

Instead of unzipping into two different folders, I copied the two installation zip files into a single folder and unzipped them there, as below:

unzip linux_11gR2_database_1of2.zip
unzip linux_11gR2_database_2of2.zip

This time my installation ran perfectly.
thanks to link



sqlplus Permission denied!!

Last week I installed a new 11gR2 database on RHEL 5.4.
The installation completed, and I was about to run sqlplus for the first time.

On a terminal I typed sqlplus /nolog
but I got an error like the below:
error while loading shared libraries: /lib/lib.....so: cannot restore segment prot after reloc: Permission denied

The reason is that SELinux is running in enforcing mode.
To solve this problem, ask your system admin,
or if you have the root password, do the below:

# getenforce (to see the current mode)
Enforcing

# setenforce 0 (to change the mode to Permissive)

# getenforce (to see the current mode)
Permissive


You will find more info here

db_32K_cache_size

A few days ago I was asked to create a replica of a tablespace from the production database.
The production OS was HP-UX and my test environment was Red Hat 5.4 (x86).
I created a database there and collected the exact tablespace creation script.
I ran the below tablespace creation script:

create tablespace MYTBS datafile '/data01/oradata/MYTBS/mytbs01.dbf' size 50M blocksize 32k;

and got below error:

ORA-00382: 32768 not a valid block size, valid range [2048..16384]

I thought the above error occurred because I didn't have a 32K cache configured,

so I tried to configure one as below:

SQL> alter system set db_32K_cache_size=5M;
alter system set db_32K_cache_size=5M
*
ERROR at line 1: ORA-32017: failure in updating SPFILE
ORA-00382: 32768 not a valid block size, valid range [..]


Then I searched and found that not every cache size can be configured on every system;
it is operating-system dependent. On my RHEL 5 (x86) system it was not possible to set db_32K_cache_size.

Please read metalink/MOS doc id 138242.1 for details



Sunday, January 10, 2010

Who is locking an Object?

Today I had to update a column of a configuration table, so I executed the below query:

update admuser.cfg_control
set IMP_DIR='/system/cfg/'
where FUNCTION_ID = 121;

After waiting some time there was still no result; the query was still going. This was unusual, because there are only about 250 records in that table. That meant someone had locked the table. To find the session locking the object, I executed the below query:

select *
from v$session
where sid in
( select session_id
from v$locked_object
where object_id in
( select object_id
from dba_objects
where object_name = 'CFG_CONTROL'
and owner = 'ADMUSER')
);

After that I knew who was locking the table, and I could kill that session or tell the specific user to log out, and then execute the update.
The view v$locked_object has a column locked_mode that describes, as a number, which type of lock is held on the table.

locked_mode in view v$locked_object:
0 = None
1 = Null
2 = Row-Share
3 = Row-Exclusive
4 = Share
5 = Share/Row-Exclusive
6 = Exclusive