While installing Oracle RAC around March last year, I prepared the steps below in Notepad to follow during the installation.
I am sharing them as-is and hope they will help others.
Oracle Version : 11.2.0.3
OS : RHEL 5.5 64 Bit
1. Network Check List:
- The SCAN addresses need to be on the same subnet as the VIP addresses for nodes in the cluster
- Each node must have at least two network interface cards (NIC), or network adapters
- Public interface names must be the same for all nodes. If the public interface on one node uses the network adapter eth0, then you must configure eth0 as the public interface on all nodes
- Private interface names should be the same for all nodes as well. If eth1 is the private interface name for the first node, then eth1 should be the private interface name for your second node
- The network adapter for the private interface must support the user datagram protocol (UDP) using high-speed network adapters and a network switch that supports TCP/IP (Gigabit Ethernet or better)
- The host name of each node must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed
2. IP Address Requirements:
- One public IP address for each node
- One virtual IP address for each node
- Three single client access name (SCAN) addresses for the cluster
- One private IP address for each node
3. Prepare the cluster nodes for Oracle RAC:
NOTE: We recommend different users for the installation of the Grid Infrastructure (GI) and the Oracle
RDBMS home. The GI will be installed in a separate Oracle base, owned by user 'grid'. After the grid installation
the GI home will be owned by root, and inaccessible to unauthorized users.
# configure NTP
----------------
# configure DNS in /etc/resolv.conf
------------------------------------
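A minimal sketch of these two steps on RHEL 5 (the nameserver IP below is an assumption, use your own DNS server):
# NTP: Oracle's 11gR2 prerequisite check expects ntpd to run with the slewing option (-x)
# vi /etc/sysconfig/ntpd   --> OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
# service ntpd restart
# chkconfig ntpd on
# DNS: point each node at the nameserver that hosts the cluster zone
# vi /etc/resolv.conf
search robi.com.bd
nameserver 10.101.5.1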
a. Create OS groups using the command below. Enter these commands as the 'root' user:
/usr/sbin/groupadd -g 503 oinstall
/usr/sbin/groupadd -g 504 dba
/usr/sbin/groupadd -g 505 asmadmin
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/groupadd -g 507 asmoper
b. Create the user that will own the Oracle software using the following command:
#/usr/sbin/useradd -u 503 -g oinstall -G dba,asmadmin,asmdba,asmoper oracle
c. Set the password for the oracle account using the following command. Replace password with your own password:
# passwd oracle
4. Networking
a. Determine your cluster name.
b. Determine the public host name for each node in the cluster. For the public host name, use the primary host name of each node.
In other words, use the name displayed by the hostname command for example: racnode1
c. Determine the public virtual hostname for each node in the cluster. The virtual host name is a public node
name that is used to reroute client requests sent to the node if the node is down. Oracle recommends that you
provide a name in the format <public hostname>-vip, for example: racnode1-vip. The virtual IP address must
meet the following requirements:
- The virtual IP address and the network name must not be currently in use.
- The virtual IP address must be on the same subnet as your public IP address.
- The virtual host name for each node should be registered with your DNS.
d. Determine the private hostname for each node in the cluster. This private hostname does not need to be
resolvable through DNS and should be entered in the /etc/hosts file. A common naming convention for the
private hostname is <public hostname>-pvt, for example: racnode1-pvt.
- The private IP should NOT be accessible to servers not participating in the local cluster.
- The private network should be on standalone dedicated switch(es).
- The private network should NOT be part of a larger overall network topology.
- The private network should be deployed on Gigabit Ethernet or better.
It is recommended that redundant NICs be configured with the Linux bonding driver.
Active/passive is the preferred bonding method due to its simpler configuration.
e. Define a SCAN DNS name for the cluster that resolves to three IP addresses (round-robin).
SCAN IPs must NOT be in the /etc/hosts file; the SCAN name must be resolved by DNS (see the sample zone entries after the hosts file below).
f. Sample /etc/hosts file will be as below:
127.0.0.1 localhost.localdomain localhost #don't add anything extra to this line
#eth0 - PUBLIC
10.101.5.25 oradb1.robi.com.bd oradb1
10.101.5.26 oradb2.robi.com.bd oradb2
#VIP
10.101.5.24 oradb1-vip.robi.com.bd oradb1-vip
10.101.5.31 oradb2-vip.robi.com.bd oradb2-vip
#eth1 - PRIVATE
172.16.5.25 oradb1-pvt
172.16.5.26 oradb2-pvt
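For step e, the DNS zone entries look roughly like this (a sketch only: the SCAN name 'oradb-scan' and its three IPs are assumptions, not values from this installation):
oradb-scan.robi.com.bd.    IN A    10.101.5.27
oradb-scan.robi.com.bd.    IN A    10.101.5.28
oradb-scan.robi.com.bd.    IN A    10.101.5.29
Verify from each node that the name resolves to all three addresses in rotating order:
# nslookup oradb-scan.robi.com.bd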
5. Configuring Kernel Parameters
#vi /etc/sysctl.conf
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 6553600
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
NOTE: The latest information on kernel parameter settings for Linux can be found in My Oracle Support Note 169706.1
#/sbin/sysctl -p
6. Set shell limits for the oracle user
a. Add the following lines to the /etc/security/limits.conf file:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
b. Add or edit the following line in the /etc/pam.d/login file, if it does not already exist:
session required pam_limits.so
c. Make the following changes to the default shell startup file, add the following lines to the /etc/profile file:
if [ "$USER" = "oracle" ]; then
  if [ "$SHELL" = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
  umask 022
fi
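A quick way to verify the limits took effect (log in again as oracle after the change):
# su - oracle
$ ulimit -u    # expect 16384 (max user processes)
$ ulimit -n    # expect 65536 (open files)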
7. Create the Oracle Inventory Directory
# mkdir -p /oracle/oraInventory
# chown -R oracle:oinstall /oracle/oraInventory
# chmod -R 775 /oracle/oraInventory
8. Creating the Oracle Grid Infrastructure Home Directory
# mkdir -p /oracle/11.2.0/grid
# chown -R oracle:oinstall /oracle/11.2.0/grid
# chmod -R 775 /oracle/11.2.0/grid
vi oracrs.sh
export ORACLE_SID=+ASM1   # +ASM1 on node 1, +ASM2 on node 2
export ORACLE_HOME=/oracle/11.2.0/grid   # the grid home created above
export PATH=$ORACLE_HOME/bin:$PATH
9. Creating the Oracle Base Directory
# mkdir -p /oracle/orabase/product/11.2.0/dbhome_1
# chown -R oracle:oinstall /oracle/orabase
# chmod -R 775 /oracle/orabase
vi oradb.sh
export ORACLE_SID=ORADB1   # ORADB1 on node 1, ORADB2 on node 2
export ORACLE_HOME=/oracle/orabase/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH
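These two files are plain environment profiles; source the one you need before working in the corresponding home (assuming they are kept in the oracle user's home directory):
$ . ~/oracrs.sh    # before crsctl, asmcmd, etc.
$ . ~/oradb.sh     # before sqlplus, srvctl, etc.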
10. Check OS Software Requirements
binutils-2.15.92.0.2
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.97
elfutils-libelf-devel-0.97
expat-1.95.7
gcc-3.4.6
gcc-c++-3.4.6
glibc-2.3.4-2.41
glibc-2.3.4-2.41 (32 bit)
glibc-common-2.3.4
glibc-devel-2.3.4
glibc-headers-2.3.4
libaio-0.3.105
libaio-0.3.105 (32 bit)
libaio-devel-0.3.105
libaio-devel-0.3.105 (32 bit)
libgcc-3.4.6
libgcc-3.4.6 (32 bit)
libstdc++-3.4.6
libstdc++-3.4.6 (32 bit)
libstdc++-devel-3.4.6
make-3.80
pdksh-5.2.14
sysstat-5.0.5
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)
The following command can be run on the system to list the currently installed packages:
rpm -q --qf '%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n' binutils \
compat-libstdc++-33 \
elfutils-libelf \
elfutils-libelf-devel \
gcc \
gcc-c++ \
glibc \
glibc-common \
glibc-devel \
glibc-headers \
ksh \
libaio \
libaio-devel \
libgcc \
libstdc++ \
libstdc++-devel \
make \
sysstat \
unixODBC \
unixODBC-devel
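If any package is missing, it can usually be installed from the RHEL 5 media or a configured yum repository, for example (add the .i386 variants for the 32-bit packages):
# yum install -y binutils compat-libstdc++-33 elfutils-libelf-devel gcc gcc-c++ \
  glibc-devel libaio-devel libstdc++-devel sysstat unixODBC-devel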
NOTE: Be sure to check on all nodes that the Linux firewall and SELinux are disabled.
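On RHEL 5 these checks typically look like this (a sketch; the SELinux config change needs a reboot to take full effect):
# service iptables stop
# chkconfig iptables off
# getenforce                 # should return Disabled (or at least Permissive)
# vi /etc/selinux/config     # set SELINUX=disabled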
11. Prepare the shared storage for Oracle RAC
- All of the devices in an Automatic Storage Management diskgroup should be the same size and have the same performance characteristics
- A diskgroup should not contain more than one partition on a single physical disk device.
- Using logical volumes as a device in an Automatic Storage Management diskgroup is not supported with Oracle RAC
- The user account with which you perform the installation (typically, 'oracle') must have write permissions to create the files in the path that you specify.
12. Shared Storage
Block Device    ASMLib Name    Size    Comments
                DATA01         200G
                SYSTEMDG       100G
                RECO           200G
13. Partition the Shared Disks
Once the LUNs have been presented from the SAN to ALL servers in the cluster, "partition the LUNs from one node only",
run fdisk to create a single whole-disk partition with exactly 1 MB offset on each LUN to be used as ASM Disk.
Tip: From the fdisk prompt, type "u" to switch the display unit from cylinders to sectors. Then create a single
primary partition starting on sector 2048 (a 1 MB offset, assuming 512-byte sectors). See the example below for /dev/dm-2:
fdisk /dev/dm-2
Command (m for help): u
Changing display/entry units to sectors
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (61-1048575, default 61): 2048
Last sector or +size or +sizeM or +sizeK (2048-1048575, default 1048575):
Using default value 1048575
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
#partprobe
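Run partprobe on each of the other nodes as well so they re-read the partition table, then confirm the new partition (sketch):
# fdisk -lu /dev/dm-2    # the partition should be listed starting at sector 2048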
14. Installing and Configuring ASMLib
Download the following packages from the ASMLib OTN page.
NOTE: The ASMLib kernel driver MUST match the kernel revision number, the kernel revision number of
your system can be identified by running the "uname -r" command. Also, be sure to download the set of
RPMs which pertain to your platform architecture, in our case this is x86_64.
oracleasm-support-2.1.3-1.el5.x86_64.rpm
oracleasmlib-2.0.4-1.el5.x86_64.rpm
oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm
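Install the three packages as root on ALL nodes:
# rpm -Uvh oracleasm-support-2.1.3-1.el5.x86_64.rpm \
      oracleasmlib-2.0.4-1.el5.x86_64.rpm \
      oracleasm-2.6.18-92.1.17.0.2.el5-2.0.5-1.el5.x86_64.rpm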
15. Configure ASMLib by running the following as the root user
a. NOTE: If using user and group separation for the installation (as documented here), the ASMLib driver
interface owner is 'grid' and the group that owns the driver interface is 'asmadmin'. These groups were created in
step 3 above. If a simpler installation using only the oracle user is performed, the owner will be 'oracle'
and the group owner will be 'dba'.
#/etc/init.d/oracleasm configure
Default user to own the driver interface []: oracle
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
b. Use ASMLib to mark the shared disks as candidate disks [*** run from a single node only]
#/usr/sbin/oracleasm createdisk disk_name device_partition_name
In this command, disk_name is the name you choose for the ASM disk. The name you choose must contain
only ASCII capital letters, numbers, or underscores, and the disk name must start with a letter, for example,
DISK1 or VOL1, or RAC_FILE1. The name of the disk partition to mark as an ASM disk is the
device_partition_name. For example:
# /usr/sbin/oracleasm createdisk DATA01_01 /dev/dm-2 # after partitioning in step 13 above, /dev/dm-2 became /dev/dm-2p1, but here we need to use /dev/dm-2
# /usr/sbin/oracleasm createdisk DATA01_02 /dev/dm-3
# /usr/sbin/oracleasm createdisk SYSTEMDG_01 /dev/dm-4
# /usr/sbin/oracleasm createdisk RECO_01 /dev/dm-5
# /usr/sbin/oracleasm createdisk RECO_02 /dev/dm-6
** Repeat the createdisk command for each disk that will be used by Oracle ASM
If you need to unmark a disk that was used in a createdisk command, you can use the following syntax as the root user:
# service oracleasm stop
# /usr/sbin/oracleasm deletedisk disk_name
c. *** On all the other nodes in the cluster, use the scandisks command as the root user to pick up the newly created ASM disks.
You do not need to create the ASM disks on each node, only on one node in the cluster.
# /usr/sbin/oracleasm scandisks [all nodes]
Scanning system for ASM disks [ OK ]
d. After scanning for ASM disks, display the available ASM disks on each node to verify their availability:
# /usr/sbin/oracleasm listdisks
DATA01_01
DATA01_02
SYSTEMDG_01
RECO_01
RECO_02
[root@PBORADB1 ~]# /usr/sbin/oracleasm querydisk RECO_02
Disk "RECO_02" is a valid ASM disk
[root@PBORADB1 ~]# /usr/sbin/oracleasm querydisk /dev/dm-6
Device "/dev/dm-6" is marked an ASM disk with the label "RECO_02"
[root@PBORADB1 ~]#
16. Oracle Grid Infrastructure Install
a. Unzip the grid infrastructure software zip file [on both nodes] [as the grid software owner]
cd grid/rpm
rpm -Uvh cvuqdisk*   # install the cvuqdisk package as root, on all nodes
b. As the grid user (Grid Infrastructure software owner) start the installer by running "runInstaller" from the staged installation media
c. Choose Skip Software Updates
d. Select radio button 'Install and Configure Grid Infrastructure for a Cluster' and click ' Next> '
e. Select radio button 'Advanced Installation' and click ' Next> '
f. Accept 'English' as the language and click ' Next> '
g. Specify your cluster name and the SCAN name you want to use and click Next>
h. Uncheck Configure GNS
i. Use the Edit and Add buttons to specify the node names and virtual IP addresses you configured previously in your /etc/hosts file.
Use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes.
j. Type in the OS password for the user 'oracle', press 'Setup' to configure and test the connectivity, then click ' Next> '
k. Click on 'Interface Type' next to the Interfaces you want to use for your cluster and select the correct values for 'Public', 'Private' and 'Do Not Use'
When finished click ' Next> ' [multiple NICs can be used as private or public]
l. Select radio button Automatic Storage Management (ASM) and click ' Next> '
m. Select the 'DiskGroup Name' specify the 'Redundancy' and tick the disks you want to use, when done click ' Next> '
[if no candidate disks are found, change the discovery path to /dev/oracleasm/disks* or ORCL:*]
NOTE: The number of voting disks that will be created depend on the redundancy level you specify:
EXTERNAL will create 1 voting disk, NORMAL will create 3 voting disks, HIGH will create 5 voting disks
n. Specify and confirm the password you want to use and click ' Next> '
o. Select NOT to use IPMI and click ' Next> '
p. Assign groups as below:
Oracle ASM DBA: asmdba
Oracle ASM Operator: asmoper
Oracle ASM Administrator: asmadmin
q. Specify the locations for your ORACLE_BASE and for the software location and click ' Next> '
r. Specify the location for your inventory directory and click ' Next> '
s. Check that the status of all checks is Succeeded and click ' Next> '
t. Wait for the OUI to complete its tasks
u. At this point you may need to run orainstRoot.sh and root.sh on all cluster nodes (orainstRoot.sh only if this is the first installation of an Oracle product on this system).
Run them on the first node first, then on the remaining nodes, then press OK.
NOTE: root.sh should be run on one node at a time.
If root.sh fails, resolve the problem and run it again.
v. Wait for the OUI to finish the cluster configuration.
Check the OCR, voting disks & OLR:
ocrcheck
crsctl query css votedisk
ls <grid_home>/cdata/   # the OLR is stored here
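On 11.2 the OLR can also be checked directly with ocrcheck's -local flag:
# ocrcheck -local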
17. RDBMS Software Install
a. #su - oracle
change into the directory where you staged the RDBMS software
./runInstaller
b. Provide your e-mail address, tick the check box and provide your Oracle Support Password
if you want to receive Security Updates from Oracle Support and click ' Next> '
c. Select the option 'Install Database software only' and click ' Next> '
d. Select 'Real Application Clusters database installation', and select all nodes.
Use the 'SSH Connectivity' button to configure/test the passwordless SSH connectivity between your nodes
e. Type in the OS password for the oracle user and click 'Setup'
f. To confirm English as the selected language click ' Next> '
g. Make sure radio button 'Enterprise Edition'/'Standard Edition' is ticked, click ' Next> '
h. Specify the path to your Oracle base and, below it, the location where you want to store the software (the Oracle home). Click ' Next> '
i. Specify groups:
Database Administrator: dba
Database Operator: oinstall
j. Oracle Universal Installer performs prerequisite checks.
k. Check that the status of all checks is 'Succeeded' and click ' Next> '
l. Log in to a terminal window as root user and run the root.sh script on the first node.
When finished do the same for all other nodes in your cluster as well. When finished click 'OK'
NOTE: root.sh should be run on one node at a time.
m. Click ' Close ' to finish the installation of the RDBMS Software.
18. Run ASMCA to create diskgroups
a. #su - grid
cd /oracle/11.2.0/grid/bin   # the grid home created earlier
./asmca
b. Click 'Create' to create a new diskgroup
c. Type in a name for the diskgroup, select the redundancy you want to provide and mark the tick box for the disks you want to assign to the new diskgroup
d. Click 'OK'
e. Click 'Create' to create the diskgroup for the flash recovery area
f. Type in a name for the diskgroup, select the redundancy you want to provide and mark the tick box for the disks you want to assign to the new diskgroup.
g. Click 'OK'
h. Click 'Exit'
19. It is Oracle's best practice to have an OCR mirror stored in a second diskgroup.
To follow this recommendation, add an OCR mirror. Note that you can have only one OCR per diskgroup.
Action:
a. To add OCR mirror to an Oracle ASM diskgroup, ensure that the Oracle Clusterware stack is running and
run the following command as root from the <Grid Infrastructure home>/bin directory:
b. # ocrconfig -add +ORADATA   # replace +ORADATA with the name of your second diskgroup
c. # ocrcheck
20. Run DBCA to create the database
a. #su - oracle
cd /oracle/orabase/product/11.2.0/dbhome_1/bin   # the RDBMS home installed above
./dbca
b. Select 'Oracle Real Application Clusters database' and click 'Next'
c. Choose the option 'Create a Database' and click 'Next'
d. Select the database template that you want to use for your database and click 'Next'
e. Type in the name you want to use for your database and select all nodes before you click 'Next'
f. Select the options you want to use to manage your database and click 'Next'
g. Type in the passwords you want to use and click 'Next'
h. Select the diskgroup you created for the database files and click 'Multiplex Redo Logs and Control Files'.
In the popup window define the diskgroup that should contain controlfiles and
redo logfiles and the diskgroup that should contain the mirrored files.
When all file destinations are correct click 'Next'
i. Specify the diskgroup that was created for the flash recovery area and define the size.
If the size is smaller than recommended, a warning will pop up.
j. Select if you want to have sample schemas created in your database and click 'Next'
k. Review and change the settings for memory allocation, characterset etc. according to your needs and click 'Next'
l. Review the database storage settings and click 'Next'
m. Make sure the tickbox 'Create Database' is ticked and click 'Finish'
n. Review the database configuration details again and click 'OK'
o. The database is now created. You can either change or unlock your passwords, or just click 'Exit' to finish the database creation.
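Once DBCA completes, the cluster database can be verified from either node ('ORADB' below is the assumed database name matching the ORADB1/ORADB2 instances above):
$ srvctl status database -d ORADB
$ crsctl stat res -t    # full clusterware resource status, run from the grid home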
Useful references:
http://www.oracleangels.com/2011/05/grid-infrasturuer-redundant.html
https://forums.oracle.com/forums/thread.jspa?threadID=2126077
http://logicalshift.blogspot.com/2010/05/linux-udev-and-multipath.html
MOS Doc: INS-20802 PRVF-9802 PRVF-5184 PRVF-5186 After Successful Upgrade to 11gR2 Grid Infrastructure [ID 974481.1]
MOS Doc: RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) [ID 810394.1]
https://forums.oracle.com/forums/thread.jspa?threadID=2156504
http://www.oracle.com/technetwork/topics/linux/asmlib/index-101839.html