Sunday, February 13, 2011

Installation of Oracle RAC 10g R2 (10.2.0.4) with OCFS2 as the cluster file system.

This article is intended as a brief guide to installing Oracle Database 10g (10.2.0.4) Real Application Clusters (RAC) on Red Hat Enterprise Linux x86_64.

Environment:

Each node requires at least two NICs: one for the public IP and one for the private interconnect.


Node1 : racnode1.ukatru.com
Public IP Address : 192.168.2.52(racnode1.ukatru.com)
Private IP Address : 192.168.1.52(racnode1-priv.ukatru.com)
Virtual IP Address : 192.168.2.54(racnode1-vip.ukatru.com)


Node2 : racnode2.ukatru.com

Public IP Address : 192.168.2.53(racnode2.ukatru.com)
Private IP Address : 192.168.1.53(racnode2-priv.ukatru.com)
Virtual IP Address : 192.168.2.55(racnode2-vip.ukatru.com)
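
All of these names must resolve identically on both nodes. A typical /etc/hosts fragment built from the addresses above (a sketch; adjust to your own DNS setup):

192.168.2.52    racnode1.ukatru.com        racnode1
192.168.1.52    racnode1-priv.ukatru.com   racnode1-priv
192.168.2.54    racnode1-vip.ukatru.com    racnode1-vip
192.168.2.53    racnode2.ukatru.com        racnode2
192.168.1.53    racnode2-priv.ukatru.com   racnode2-priv
192.168.2.55    racnode2-vip.ukatru.com    racnode2-vip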

We need a 15 GB disk on both nodes to install the Oracle Clusterware home, ASM home, and database home.

Cluster file system : OCFS2, used to store the OCR and voting disks for CRS.


Add the following text to the oracle user's .profile on both nodes:

export VISUAL=vi
export EDITOR=/usr/bin/vi
ENV=$HOME/.kshrc
export ENV
umask 022
stty erase ^?
export HOST=`hostname`
export PS1='$HOST:$PWD>'
export PS2="$HOST:`pwd`>>"
export PS3="$HOST:`pwd`=="
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.4/db_1
export ASM_HOME=/u01/app/oracle/product/10.2.0/asm
export CRS_HOME=/u01/app/root/product/10.2.0/crs
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
unalias ls

Execute the following commands on both nodes to create the directories on the /u01 file system.

mkdir -p /u01/app/oracle
mkdir -p /u01/app/root
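
The oracle user should own these directories before running the installers. A minimal sketch, assuming the oracle user and oinstall group have already been created:

chown -R oracle:oinstall /u01/app/oracle
chown -R oracle:oinstall /u01/app/root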


Create RSA keys on both nodes and set up passwordless SSH authentication between the two RAC nodes for the oracle user.
 racnode1.ukatru.com:/home/oracle>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
b6:12:04:58:c2:43:95:56:71:cc:60:73:8c:3d:08:91 oracle@racnode1.ukatru.com

racnode2.ukatru.com:/home/oracle>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory '/home/oracle/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
87:96:89:8e:82:c2:8f:6c:ae:d0:ab:13:56:55:9d:e2 oracle@racnode2.ukatru.com


racnode1.ukatru.com:/home/oracle/.ssh>cat id_rsa.pub > authorized_keys
racnode1.ukatru.com:/home/oracle/.ssh>chmod 600 authorized_keys


racnode2.ukatru.com:/home/oracle/.ssh>cat id_rsa.pub > authorized_keys
racnode2.ukatru.com:/home/oracle/.ssh>chmod 600 authorized_keys
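
The steps above only add each node's own key to its authorized_keys. For passwordless SSH to work in both directions, each node's public key must also be present in the other node's authorized_keys; one way to do that (a sketch, you may be prompted for the oracle password):

racnode1.ukatru.com:/home/oracle/.ssh>ssh racnode2 cat .ssh/id_rsa.pub >> authorized_keys
racnode2.ukatru.com:/home/oracle/.ssh>ssh racnode1 cat .ssh/id_rsa.pub >> authorized_keys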

racnode1.ukatru.com:/home/oracle/.ssh>ssh racnode2
The authenticity of host 'racnode2 (192.168.2.53)' can't be established.
RSA key fingerprint is fc:74:38:f0:d8:f1:97:62:e8:6b:05:69:3d:2c:9b:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode2,192.168.2.53' (RSA) to the list of known hosts.
Last login: Sat Feb 12 18:32:03 2011 from 192.168.2.128


racnode2.ukatru.com:/home/oracle>ssh racnode1
The authenticity of host 'racnode1 (192.168.2.52)' can't be established.
RSA key fingerprint is fc:74:38:f0:d8:f1:97:62:e8:6b:05:69:3d:2c:9b:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'racnode1,192.168.2.52' (RSA) to the list of known hosts.
Last login: Sat Feb 12 18:45:20 2011 from 192.168.2.128

############################


Set Kernel Parameters

Add the following lines to the /etc/sysctl.conf file on both nodes:

fs.file-max=327679
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4294967295
net.core.rmem_default = 262144
net.core.rmem_max=2097152
net.core.wmem_default = 262144
net.core.wmem_max=262144
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range=1024 65000

Run sysctl -p on both nodes to load the new settings:

[root@racnode1 ~]# sysctl -p

Add the following lines to the /etc/security/limits.conf file on both nodes:
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    3500000
oracle   hard   memlock    3500000
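
These limits are only enforced at login if the PAM limits module is loaded for login sessions. On RHEL this is normally already the case, but if it is not, the commonly recommended line to add to /etc/pam.d/login is:

session    required     pam_limits.so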

Disable SELinux on both nodes by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows:
SELINUX=disabled
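
The setting in /etc/selinux/config takes effect at the next reboot; to switch SELinux to permissive mode immediately without rebooting, you can also run:

setenforce 0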

We are using Openfiler as our NAS/SAN appliance for shared disks.

Configure the iSCSI (initiator) service on both nodes:
rpm -Uvh iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]


[root@racnode1 Server]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode1 Server]# chkconfig iscsid on
[root@racnode1 Server]# chkconfig iscsi on

[root@racnode2 Server]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode2 Server]# chkconfig iscsid on
[root@racnode2 Server]# chkconfig iscsi on

[root@racnode1 Server]# iscsiadm -m discovery -t sendtargets -p sanl001
192.168.2.11:3260,1 iqn.2006-01.com.openfiler:crs_racnode
[root@racnode2 Server]# iscsiadm -m discovery -t sendtargets -p sanl001
192.168.2.11:3260,1 iqn.2006-01.com.openfiler:crs_racnode

Manually log in to the iSCSI target(s):
[root@racnode2 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs_racnode -p 192.168.2.11 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]: successful

[root@racnode1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs_racnode -p 192.168.2.11 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]: successful

[root@racnode2 Server]# fdisk -l

Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes

Disk /dev/sdc doesn't contain a valid partition table


[root@racnode1 Server]# fdisk -l

Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes

Disk /dev/sdc doesn't contain a valid partition table

Installing and configuring OCFS2:

Install the following packages on both nodes:

    * ocfs2-tools
    * ocfs2
    * ocfs2console (optional)

Now node1 and node2 both have the shared iSCSI disk configured, which you can confirm by issuing fdisk -l. Let's configure the OCFS2 part.

Create the file /etc/ocfs2/cluster.conf on both nodes (the file must be identical on every node):
mkdir -p /etc/ocfs2
vi /etc/ocfs2/cluster.conf
Add the following text to the cluster.conf file:
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.52
        number = 1
        name = racnode1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.53
        number = 2
        name = racnode2
        cluster = ocfs2

If you execute /etc/init.d/o2cb configure you’ll get:
[root@racnode2 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Execute the same on the other node to configure O2CB there as well. Then check the status:

[root@racnode2 ~]# /etc/init.d/o2cb status
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

Create the OCFS2 file system on /dev/sdc1 (run this from one node only):
-N specifies the number of node slots, i.e. the maximum number of nodes that can mount the file system concurrently.
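
Note that /dev/sdc1 must exist before mkfs.ocfs2 is run; the fdisk -l output above showed the raw shared disk with no partition table. One way to create a single primary partition covering the whole disk, from one node only (a sketch, assuming the shared disk appears as /dev/sdc on both nodes):

[root@racnode1 ~]# fdisk /dev/sdc
(answer n, p, 1, accept the default first and last cylinders, then w to write the table)
[root@racnode2 ~]# partprobe /dev/sdc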

[root@racnode2 openfiler:crs_racnode]# mkfs.ocfs2 -N 6 /dev/sdc1
mkfs.ocfs2 1.4.2
Cluster stack: classic o2cb
Filesystem label=
Block size=2048 (bits=11)
Cluster size=4096 (bits=12)
Volume size=1073537024 (262094 clusters) (524188 blocks)
17 cluster groups (tail covers 8142 clusters, rest cover 15872 clusters)
Journal size=33554432
Initial number of node slots: 6
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)

Execute mkdir /crs on both nodes.

Add the following entry to /etc/fstab on both nodes to make sure the file system is mounted after reboots.

/dev/sdc1     /crs   ocfs2   _netdev,datavolume     0 0
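
Because iSCSI device names such as /dev/sdc can change between reboots, you may prefer to label the file system and mount it by label instead. A hedged sketch (the label name crs is arbitrary; verify that mounting by LABEL= works for OCFS2 in your environment before relying on it):

tunefs.ocfs2 -L crs /dev/sdc1
LABEL=crs     /crs   ocfs2   _netdev,datavolume     0 0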

[root@racnode2 openfiler:crs_racnode]# mount | grep ocfs2
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdc1 on /crs type ocfs2 (rw,_netdev,datavolume,heartbeat=local)

[root@racnode1 ~]#  mount | grep ocfs2
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdc1 on /crs type ocfs2 (rw,_netdev,datavolume,heartbeat=local)

chown -R oracle:oinstall /crs

The OCFS2 file system is now configured and mounted on both nodes.

Configure oracleasm (run this on both nodes):
[root@racnode1 tmp]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Create ASM disks (you need to do this on one node only).

[root@racnode1 ~]# oracleasm createdisk ASM1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk ASM2 /dev/sde1
Writing disk header: done
Instantiating disk: done

Log on to racnode2 and execute the following commands to scan for ASM disks.

[root@racnode2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racnode2 ~]# /etc/init.d/oracleasm listdisks
ASM1
ASM2
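
The ASMLib disk devices should now be visible on both nodes under /dev/oracleasm/disks; a quick check:

ls -l /dev/oracleasm/disks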

CRS Installation:

racnode1.ukatru.com:/u01/clusterware/cluvfy>./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -r 10gR2 -verbose

You can ignore the following error from the above command:

ERROR:
Could not find a suitable set of interfaces for VIPs.

Result: Node connectivity check failed.

Log in as the oracle user. If you are using X emulation, set the DISPLAY environment variable:
DISPLAY=<machine-name>:0.0; export DISPLAY
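
For example, if your X server is running on a workstation at 192.168.2.128 (a hypothetical address for illustration):

DISPLAY=192.168.2.128:0.0; export DISPLAY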
 
Start the Oracle Universal Installer (OUI) by issuing the following command in the ./clusterware directory:

./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
                                      Passed


All installer requirements met.

Preparing to launch Oracle Universal Installer from 
/tmp/OraInstall2011-02-12_08-49-59PM. Please wait ...
racnode1.ukatru.com:/u01>Oracle Universal Installer, 
Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved. 




Execute root.sh on both nodes, one after another. Here is the output from both nodes.
[root@racnode1 ~]# /u01/app/root/product/10.2.0/crs/root.sh
WARNING: directory '/u01/app/root/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/root/product' is not owned by root
WARNING: directory '/u01/app/root' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/root/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/root/product' is not owned by root
WARNING: directory '/u01/app/root' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /crs/voting_disk1
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
CSS is inactive on these nodes.
        racnode2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

***************************************************************
[root@racnode2 ~]# /u01/app/root/product/10.2.0/crs/root.sh
WARNING: directory '/u01/app/root/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/root/product' is not owned by root
WARNING: directory '/u01/app/root' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/root/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/root/product' is not owned by root
WARNING: directory '/u01/app/root' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
        racnode2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Apply 10.2.0.4 patch set to the newly installed CRS home:

racnode1.ukatru.com:/u01/Disk1>./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, SuSE-10, redhat-4, redhat-5, UnitedLinux-1.0, asianux-1, asianux-2 or asianux-3
                                      Passed


All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-12_09-16-49PM. Please wait ...racnode1.ukatru.com:/u01/Disk1>Oracle Universal Installer, Version 10.2.0.4.0 Production
Copyright (C) 1999, 2008, Oracle. All rights reserved
The installer has detected that your Cluster Ready Services (CRS) installation is distributed across the following nodes:

    racnode1
    racnode2

Because the software consists of local identical copies distributed across each of the nodes in the cluster, it is possible to patch your CRS installation in a rolling manner, one node at a time.

To complete the installation of this patchset, you must perform the following tasks on each node:

    1.    Log in as the root user.
    2.    As the root user, perform the following tasks:

        a.    Shutdown the CRS daemons by issuing the following command:
                /u01/app/root/product/10.2.0/crs/bin/crsctl stop crs
        b.    Run the shell script located at:
                /u01/app/root/product/10.2.0/crs/install/root102.sh
            This script will automatically start the CRS daemons on the
            patched node upon completion.

    3.    After completing this procedure, proceed to the next node and repeat.

[root@racnode1 ~]# /u01/app/root/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/root/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/root/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/root/product' is not owned by root
WARNING: directory '/u01/app/root' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: racnode1 racnode1-priv racnode1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
************************************************
[root@racnode2 ~]# /u01/app/root/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@racnode2 ~]# /u01/app/root/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/root/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory '/u01/app/root/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/root/product' is not owned by root
WARNING: directory '/u01/app/root' is not owned by root
WARNING: directory '/u01/app' is not owned by root
WARNING: directory '/u01' is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: racnode2 racnode2-priv racnode2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
*************************************************
Log in to racnode1 as root and run the vipca command.

[root@racnode1 bin]# pwd
/u01/app/root/product/10.2.0/crs/bin
[root@racnode1 bin]# ./vipca &
[1] 17871

racnode1.ukatru.com:/u01>cd /u01/app/root/product/10.2.0/crs/bin
racnode1.ukatru.com:/u01/app/root/product/10.2.0/crs/bin>./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2

ASM Home Installation :

Start the Oracle Universal Installer (OUI) by issuing the following command in the database software directory:
racnode1.ukatru.com:/u01/database>./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4,
UnitedLinux-1.0, asianux-1 or asianux-2
                                      Passed


All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-12_09-40-57PM.
Please wait ...racnode1.ukatru.com:/u01/database>Oracle Universal Installer,
Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved
[root@racnode1 bin]# /u01/app/oracle/product/10.2.0/asm/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/asm

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

[root@racnode2 ~]# /u01/app/oracle/product/10.2.0/asm/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/asm

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

********************************************
Apply 10.2.0.4 patch to the ASM home:

Create ASM Instance:

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0/asm>export ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>./dbca &
[1]     8643


[root@racnode1 bin]# ./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2



DB Home Installation :

racnode1.ukatru.com:/u01/database>./runInstaller


Apply 10.2.0.4 Patch to the newly installed Database home:

racnode1.ukatru.com:/u01/Disk1>./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Database Creation:

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0.4/db_1>export ORACLE_HOME=/u01/app/oracle/product/10.2.0.4/db_1
racnode1.ukatru.com:/u01/app/oracle/product/10.2.0.4/db_1>export ORACLE_SID=oradv1

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0.4/db_1/bin>./dbca &
[1]     32718


racnode1.ukatru.com:/u01/app/root/product/10.2.0/crs/bin>./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.oradv1.db  application    ONLINE    ONLINE    racnode2
ora....11.inst application    ONLINE    ONLINE    racnode1
ora....12.inst application    ONLINE    ONLINE    racnode2
ora....SM1.asm application    ONLINE    ONLINE    racnode1
ora....E1.lsnr application    ONLINE    ONLINE    racnode1
ora....de1.gsd application    ONLINE    ONLINE    racnode1
ora....de1.ons application    ONLINE    ONLINE    racnode1
ora....de1.vip application    ONLINE    ONLINE    racnode1
ora....SM2.asm application    ONLINE    ONLINE    racnode2
ora....E2.lsnr application    ONLINE    ONLINE    racnode2
ora....de2.gsd application    ONLINE    ONLINE    racnode2
ora....de2.ons application    ONLINE    ONLINE    racnode2
ora....de2.vip application    ONLINE    ONLINE    racnode2

racnode1.ukatru.com:/u01/app/root/product/10.2.0/crs/bin>./cemutlo -n
crsracnode
