
Thursday, July 14, 2011

Install Oracle Workflow Server on 11gR2 Database

We have just installed Oracle Workflow Server on an 11gR2 database using the steps below:

1. Go to $ORACLE_HOME/owb/wf/install

2. Run the install script:
Linux: $ ./wfinstall.csh
Windows: wfinstall.bat

3. Enter values as below on the form displayed by the above command:
Install Option : Server Only
Workflow Account : owf_mgr
Workflow Password : Specify a password for owf_mgr
SYS Password : Enter the SYS password for the database on which we are installing Oracle Workflow.
TNS Connect Descriptor : TNS connect descriptor of the target database
[ (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = HOSTNAME/IP)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = MYDB)))]
LDAP Parameters : [If needed]
Mailer Parameters : [If needed]
Tablespace : Default tablespace for the workflow server

4. Click Submit to start the installation

5. After successful installation, a completion message will be displayed
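The TNS connect descriptor in step 3 is easy to mistype, so it can help to assemble it in the shell first and paste the result into the form. This is just a sketch; the HOST/PORT/SERVICE values below are placeholders, not values from our environment.

```shell
#!/bin/sh
# Assemble the TNS connect descriptor from step 3 so it can be pasted
# into the wfinstall form. HOST/PORT/SERVICE are placeholder values.
HOST=dbhost1
PORT=1521
SERVICE=MYDB
DESC="(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=$HOST)(PORT=$PORT))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=$SERVICE)))"
echo "$DESC"
```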

Thanks to the links below:
http://download.oracle.com/docs/cd/B28359_01/owb.111/b31280/install_opts05.htm#i1008852
http://forums.oracle.com/forums/thread.jspa?threadID=663891

Tuesday, March 29, 2011

Exadata: Implementing Database Machine Bundle Patch (8)

According to the readme and companion documents we have prepared below action plan:
(We need to monitor Oracle MOS Note 888828.1 for Exadata supported patch)

** This patch is RAC Rolling Installable in our case, as BP5 (9870547) is currently installed

-- -------------
-- Prerequisites
-- -------------

1. Install the latest version of OPatch and dismount all DBFS filesystems:

** Log on to each of the 4 DB machines and follow the steps below:

For GI HOME:
------------
$ echo $ORACLE_HOME
/u01/app/11.2.0/grid

$ mv /u01/app/11.2.0/grid/OPatch /u01/app/11.2.0/grid/OPatch_old
$ unzip p6880880_112000_Linux-x86-64.zip -d /u01/app/11.2.0/grid/
$ cd $ORACLE_HOME/OPatch
$ ./opatch version [to check that the version is 11.2.0.1.4]

For DB HOME:
------------
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1

$ mv /u01/app/oracle/product/11.2.0/dbhome_1/OPatch /u01/app/oracle/product/11.2.0/dbhome_1/OPatch_old
$ unzip p6880880_112000_Linux-x86-64.zip -d /u01/app/oracle/product/11.2.0/dbhome_1/
$ cd $ORACLE_HOME/OPatch
$ ./opatch version [to check that the version is 11.2.0.1.4]
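The two "./opatch version" checks above follow the same pattern, so a small helper can fail fast if the reported version is not the expected 11.2.0.1.4. The function name below is ours, not an Oracle tool, and the sample output string stands in for what opatch actually prints.

```shell
#!/bin/sh
# check_opatch_version is a hypothetical helper: compare an expected
# version string against the output of "$ORACLE_HOME/OPatch/opatch version".
check_opatch_version() {
  expected="$1"
  actual="$2"   # e.g. captured via: actual=$("$ORACLE_HOME/OPatch/opatch" version)
  case "$actual" in
    *"$expected"*) echo "OPatch $expected OK" ;;
    *) echo "unexpected OPatch version: $actual" >&2; return 1 ;;
  esac
}

check_opatch_version 11.2.0.1.4 "OPatch Version: 11.2.0.1.4"
```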


2. Verify the OUI Inventory:

** Log on to each of the 4 DB machines and follow the steps below:

For GI HOME:
------------
$ echo $ORACLE_HOME
/u01/app/11.2.0/grid

$ cd $ORACLE_HOME/OPatch
$ ./opatch lsinventory

For DB HOME:
------------
$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/dbhome_1

$ cd $ORACLE_HOME/OPatch
$ ./opatch lsinventory

[If the command errors out, contact Oracle Support]

3. Create a PATCH_TOP location

** Log on to all 4 DB machines and follow the steps below:

$ mkdir /u01/app/patch/p10389035_112010_Linux-x86-64
$ export PATCH_TOP=/u01/app/patch/p10389035_112010_Linux-x86-64
$ unzip -d $PATCH_TOP p10389035_112010_Linux-x86-64.zip

4. Determine whether any currently installed one-off patches conflict with DBM Bundle Patch 8 (10389035):
** For both the RDBMS and GI homes on all 4 nodes, check below:

$ $ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir $PATCH_TOP/10389035
- Conflicts with a patch already applied to the ORACLE_HOME - In this case, stop the patch installation and contact Oracle Support Services
- Conflicts with a subset patch already applied to the ORACLE_HOME - In this case, continue with the patch installation, because the new patch is a superset: OPatch rolls back the subset patch and applies the new one

-- ------------
-- Installation
-- ------------
** For all 4 nodes, repeat the steps below [steps 5 to 7]:
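The rolling flow can be sketched as a loop over the nodes, one at a time, so that the cluster stays available throughout. The node names and the commented-out per-node command are placeholders for the step 5 and step 6 sequences, not real tools.

```shell
#!/bin/sh
# Rolling sketch: patch one node at a time so the cluster stays up.
# Node names and the commented ssh step are placeholders, not real tools.
NODES="dbnode1 dbnode2 dbnode3 dbnode4"
for node in $NODES; do
  echo "== patching $node =="
  # ssh "$node" 'run step 5 (GI_HOME) then step 6 (DB_HOME) locally'
done
```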

5. Install Patch on GI_HOME
- As oracle run below:
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ $ORACLE_HOME/bin/srvctl stop instance -d db_unique_name -n node_name

- As root run below:
# /u01/app/11.2.0/grid/bin/crsctl stop crs
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -unlock

- As oracle run below:
$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ export PATH=$PATH:$ORACLE_HOME/OPatch
$ which opatch
/u01/app/11.2.0/grid/OPatch/opatch
$ export PATCH_TOP=/u01/app/patch/p10389035_112010_Linux-x86-64
$ cd $PATCH_TOP/10389035
$ opatch apply -local

- As root run below:
# /u01/app/11.2.0/grid/crs/install/rootcrs.pl -patch
# /u01/app/11.2.0/grid/bin/crsctl start crs

- As oracle run below:
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ $ORACLE_HOME/bin/srvctl start instance -d db_unique_name -n node_name

6. Install patch on DB_HOME [start a new session so the environment variables set in step 5 are cleared]
- As oracle run below:
$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
$ $ORACLE_HOME/bin/srvctl stop home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s STAT_FILE_LOCATION -n NODE_NAME
$ export PATH=$PATH:$ORACLE_HOME/OPatch
$ which opatch
/u01/app/oracle/product/11.2.0/dbhome_1/OPatch/opatch
$ export PATCH_TOP=/u01/app/patch/p10389035_112010_Linux-x86-64
$ cd $PATCH_TOP/10389035
$ opatch apply -local
$ $ORACLE_HOME/bin/srvctl start home -o /u01/app/oracle/product/11.2.0/dbhome_1 -s STAT_FILE_LOCATION -n NODE_NAME

-- ---------------------------------------------
-- Postinstallation (only on a single instance)
-- ---------------------------------------------

7. Reload the packages into the database:

$ sqlplus /nolog
SQL> connect / as sysdba
SQL> @?/rdbms/admin/catbundle.sql exa apply

8. Navigate to the $ORACLE_HOME/cfgtoollogs/catbundle directory
(if $ORACLE_BASE is defined, the logs will be created under $ORACLE_BASE/cfgtoollogs/catbundle),
and check the following log files for any errors, e.g. with "grep ^ORA <logfile> | sort -u".
If there are errors, refer to "Known Issues". Here, the format of the <TIMESTAMP> is YYYYMMMDD_HH_MM_SS.

catbundle_EXA_<TIMESTAMP>_APPLY.log
catbundle_EXA_<TIMESTAMP>_GENERATE.log
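The error scan can be sketched as below. The sample log file is fabricated here purely to demonstrate the grep; in practice run it in the catbundle log directory against the real catbundle_EXA logs.

```shell
#!/bin/sh
# Demonstrate the "grep ^ORA | sort -u" check on a throwaway log file;
# in practice point it at the real catbundle_EXA_*_APPLY*.log files
# under $ORACLE_BASE/cfgtoollogs/catbundle.
logdir=$(mktemp -d)
cat > "$logdir/catbundle_EXA_20110329_10_00_00_APPLY.log" <<'EOF'
ORA-06512: at line 1
PL/SQL procedure successfully completed.
ORA-06512: at line 1
EOF
errors=$(grep -h '^ORA' "$logdir"/catbundle_EXA_*APPLY*.log | sort -u)
echo "$errors"
```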

-- --------------
-- Deinstallation
-- --------------

9. ** For all 4 nodes, repeat steps 5 to 7, except use "opatch rollback -id 10389035 -local" instead of "opatch apply -local"

Thursday, September 2, 2010

Changing spfile location of RAC DATABASE

Last week we made some changes to the configuration (SGA & PGA) of our RAC DB using a pfile. After opening each instance with the pfile, we executed the command below to create an spfile with the new configuration:
create spfile from pfile='.....';

As a result, each instance now has a different spfile in its default location ($ORACLE_HOME/dbs).
So we planned to move the spfile back to a common location in ASM.

To do this we followed the steps below:

1. Take backup of pfile & spfile

2. Log in to an instance (in my case instance "dw1" of DB "dw") as sysdba

3. SQL> create pfile='/home/oracle/pfileaug31aug2010.ora' from spfile;

4. SQL> create spfile='+DATA1/dw/spfiledw.ora' from pfile='/home/oracle/pfileaug31aug2010.ora';

5. Then create a pfile in the default location ($ORACLE_HOME/dbs/initSID.ora) containing only the spfile location created in step 4:
echo "SPFILE='+DATA1/dw/spfiledw.ora'" > $ORACLE_HOME/dbs/initdw1.ora

6. Delete the spfile in the default location ($ORACLE_HOME/dbs)

7. Restart the current instance

8. Now repeat steps 5, 6 & 7 for all other instances

9. Now when an instance starts:
- it will look for an spfile in the default location
- as no spfile is there, it will look for a pfile
- in the pfile it will find the location of the spfile and load the init parameters from it
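The one-line pointer pfile from step 5 can be sketched on a throwaway directory standing in for $ORACLE_HOME/dbs; the ASM path is the one created in step 4.

```shell
#!/bin/sh
# Demonstrate the pointer pfile from step 5 in a throwaway directory
# standing in for $ORACLE_HOME/dbs; the ASM path comes from step 4.
dbs=$(mktemp -d)
echo "SPFILE='+DATA1/dw/spfiledw.ora'" > "$dbs/initdw1.ora"
cat "$dbs/initdw1.ora"
```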