Install Oracle RAC on Bare Metal Solution
This page shows how to install Oracle Real Application Clusters (RAC) on your Bare Metal Solution server.
Deployment
In this guide, we create a deployment that includes Oracle RAC 19c installed on Bare Metal Solution servers. We build a two-node RAC cluster with a database.
Before you begin
- Ensure that you provision the infrastructure required for RAC. This includes two nodes, shared storage, and dedicated volumes to host the Oracle home directory.
- In this guide, we use the GUI install process. To follow along with the GUI install, configure Virtual Network Computing (VNC) on one of the nodes by following your Oracle VNC setup guide.
- Create a local mount point /apps to host the Oracle home and Grid home directories.
Install RAC
Installing RAC on Bare Metal Solution involves the following steps:
- Prepare the nodes for RAC install.
- Configure Oracle ASM.
- Perform pre-installation checks.
- Install Oracle RAC.
- Perform post-installation steps.
Prepare the nodes for RAC install
Perform the following steps on both nodes, unless specified otherwise:
Update the /etc/hosts file with the following IP addresses for RAC:
- Public IP addresses
- Private IP addresses
- Virtual IP addresses
- SCAN virtual IP addresses

Following is an example of the modified /etc/hosts file:

10.*.*.* bms-jumphost

# public IP addresses for Oracle RAC
192.*.*.* at-2811641-svr001 at-2811641-svr001.localdomain
192.*.*.* at-2811641-svr002 at-2811641-svr002.localdomain

# private IP addresses for Oracle RAC
172.*.*.* at-2811641-svr001-priv at-2811641-svr001-priv.localdomain
172.*.*.* at-2811641-svr002-priv at-2811641-svr002-priv.localdomain

# virtual IP addresses for Oracle RAC
192.*.*.* at-2811641-svr001-vip at-2811641-svr001-vip.localdomain
192.*.*.* at-2811641-svr002-vip at-2811641-svr002-vip.localdomain

# SCAN virtual IP addresses for Oracle RAC
192.*.*.* psoracle-scan psoracle-scan.localdomain
192.*.*.* psoracle-scan psoracle-scan.localdomain
192.*.*.* psoracle-scan psoracle-scan.localdomain
Set up DNS resolution for the virtual IP addresses and SCAN virtual IP addresses.
We use Cloud DNS to resolve virtual IP addresses and SCAN virtual IP addresses. For instructions, see Configure SCAN with Cloud DNS.
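To confirm that name resolution works before the installation, you can query the names from either node. A minimal check, assuming the example SCAN and virtual hostnames shown above; the SCAN name should return its three addresses:

# verify SCAN and virtual IP name resolution
nslookup psoracle-scan.localdomain
nslookup at-2811641-svr001-vip.localdomain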
Update the /etc/sysctl.conf file with the kernel parameters.

vi /etc/sysctl.conf

# Added for Oracle
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Apply the kernel parameter updates.
/sbin/sysctl -p
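To spot-check that the new values are active, you can query a few of the parameters set above:

# confirm a few of the kernel parameter values
sysctl fs.file-max kernel.shmmax net.ipv4.ip_local_port_range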
In the /etc/security/limits.conf file, add the resource limits.

vi /etc/security/limits.conf

## Added for Oracle
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
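To confirm that the limits apply to a new session, a quick check as a sketch; the values reported are the soft limits for the oracle user:

# report open files, max user processes, and stack size soft limits
su - oracle -c 'ulimit -n -u -s'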
Add the necessary users and groups.

groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 oper
groupadd -g 54327 asmdba
groupadd -g 54328 asmoper
groupadd -g 54329 asmadmin
useradd -u 54321 -g oinstall -G dba,oper,asmdba,asmoper,asmadmin oracle
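To confirm the user and its group memberships on each node, a quick check:

# verify the oracle user and its group memberships
id oracle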
Install the Oracle preinstall package.
This guide assumes that you have set up the Oracle repository.
yum install oracle-database-preinstall-19c
The oracle-database-preinstall-19c package installs the necessary packages and creates the oracle user and an entry in the /etc/sysctl.conf file.

Reset the password for the oracle user.

passwd oracle
Install the following packages for Oracle ASM:
- oracleasm-support
- kmod-oracleasm

yum install oracleasm-support
yum install kmod-oracleasm
Disable SELinux by updating the /etc/selinux/config file.

vi /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected.
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted
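The setting in /etc/selinux/config takes effect at the next reboot. To check the current mode, and optionally switch the running system to permissive without a reboot, you can use:

# show the current SELinux mode
getenforce
# set the running system to permissive mode (does not persist across reboots)
setenforce 0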
Stop and disable the firewall.

systemctl stop firewalld.service
systemctl disable firewalld
Configure NTP.
Install the NTP package.
yum install ntp
Start the ntpd service.

systemctl start ntpd
Update the /etc/ntp.conf file.

In this case, we sync with our bastion host, which is 10.x.x.x. It can also be your internal NTP server. 192.x.x.x is the IP address of your Bare Metal Solution server.

vi /etc/ntp.conf

restrict 192.x.x.x mask 255.255.255.0 nomodify notrap
server 10.x.x.x prefer
Query the NTP server to verify that the node can sync with it.

ntpdate -qu SERVER_NAME

Replace SERVER_NAME with the name or IP address of your NTP server.
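After ntpd has been running for a few minutes, you can confirm that the node is tracking the upstream server; the peer marked with an asterisk is the currently selected time source:

# list NTP peers and their synchronization status
ntpq -p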
If avahi-daemon is running, stop and disable it.

systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2021-08-09 07:51:26 UTC; 23h ago
 Main PID: 2543 (avahi-daemon)
   Status: "avahi-daemon 0.6.31 starting up."
    Tasks: 2
   CGroup: /system.slice/avahi-daemon.service
           ├─2543 avahi-daemon: running [at-2811641-svr001.local]
           └─2546 avahi-daemon: chroot helper

systemctl stop avahi-daemon
Warning: Stopping avahi-daemon.service, but it can still be activated by:
  avahi-daemon.socket

systemctl disable avahi-daemon
Removed symlink /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed symlink /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.Avahi.service.

systemctl status avahi-daemon
● avahi-daemon.service - Avahi mDNS/DNS-SD Stack
   Loaded: loaded (/usr/lib/systemd/system/avahi-daemon.service; disabled; vendor preset: enabled)
   Active: inactive (dead) since Tue 2021-08-10 07:16:28 UTC; 36s ago
 Main PID: 2543 (code=exited, status=0/SUCCESS)
Create the Oracle home and Grid home directories.

mkdir -p /apps/grid/19.3.0/gridhome_1
mkdir -p /apps/grid/gridbase/
mkdir -p /apps/oracle/product/19.3.0/dbhome_1
chmod -R 775 /apps/
chown -R oracle:oinstall /apps/
On the first node, do the following:
- Download the Oracle Grid Infrastructure image files.
- As the oracle user, copy the downloaded files to the Grid home directory.
- As the oracle user, extract the Oracle Grid Infrastructure image files.

cp LINUX.X64_193000_grid_home.zip /apps/grid/19.3.0/gridhome_1/
cd /apps/grid/19.3.0/gridhome_1/
unzip LINUX.X64_193000_grid_home.zip
On either node, set up passwordless SSH for the oracle user.

cd /apps/grid/19.3.0/gridhome_1/deinstall
./sshUserSetup.sh -user oracle -hosts "at-2811641-svr001 at-2811641-svr002" -noPromptPassphrase -confirm -advanced
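Before continuing, you can confirm that passwordless SSH works in both directions; a quick check as the oracle user from the first node (repeat from the second node), which should print the remote hostname without prompting for a password:

# verify passwordless SSH to the other node
ssh at-2811641-svr002 hostname
ssh at-2811641-svr002.localdomain hostname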
On either node, as the root user, install the cvuqdisk RPM package from the Grid home directory.

rpm -qa cvuqdisk
cd /apps/grid/19.3.0/gridhome_1/cv/rpm/
rpm -iv cvuqdisk-1.0.10-1.rpm
Preparing packages...
Using default group oinstall to install package
cvuqdisk-1.0.10-1.x86_64
rpm -qa cvuqdisk
cvuqdisk-1.0.10-1.x86_64
Configure Oracle ASM
To configure Oracle ASM, perform the following steps as the root user:
On either node, identify all the shared LUNs that you are going to use for Oracle ASM.
multipath -ll
On either node, create partitions in all the shared devices using fdisk or gdisk.

fdisk /dev/mapper/DISK_WWID
Replace the following:
- DISK_WWID: WWID of the disk.
A sample output is as follows:
fdisk /dev/mapper/3600a098038314343753f4f723154594e
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xf7f8fd23.
The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-2147518463, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-2147518463, default 2147518463):
Using default value 2147518463
Partition 1 of type Linux and of size 1 TiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.

fdisk -l /dev/mapper/3600a098038314343753f4f723154594e

Disk /dev/mapper/3600a098038314343753f4f723154594e: 1099.5 GB, 1099529453568 bytes, 2147518464 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 65536 bytes
Disk label type: dos
Disk identifier: 0xf7f8fd23

                                       Device Boot      Start         End      Blocks   Id  System
/dev/mapper/3600a098038314343753f4f723154594e1            2048  2147518463  1073758208  83  Linux
Run partprobe to make all new partitions visible.

partprobe
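To confirm that the kernel now sees the new partitions, you can list the multipath device nodes; each partition appears as the device WWID with a trailing partition number, as in the fdisk example above:

# list multipath devices and their partitions
ls -l /dev/mapper/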
On both nodes, configure Oracle ASM.

oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
On both nodes, set the value of the ORACLEASM_SCANORDER parameter to dm and the value of ORACLEASM_SCANEXCLUDE to sd, because Bare Metal Solution servers use multipath devices.

vi /etc/sysconfig/oracleasm

ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
On both nodes, verify the Oracle ASM configuration.

oracleasm configure
ORACLEASM_ENABLED=true
ORACLEASM_UID=oracle
ORACLEASM_GID=oinstall
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER="dm"
ORACLEASM_SCANEXCLUDE="sd"
ORACLEASM_SCAN_DIRECTORIES=""
ORACLEASM_USE_LOGICAL_BLOCK_SIZE="false"
On both nodes, start oracleasm.

oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm
On both nodes, check the status of oracleasm.

oracleasm status
Checking if ASM is loaded: yes
Checking if /dev/oracleasm is mounted: yes
On either node, create Oracle ASM disks on the multipath partitioned devices.
oracleasm createdisk OCR /dev/mapper/DISK_WWID
Replace the following:
- DISK_WWID: the WWID of the partition that you created on the disk (the WWID followed by the partition number, as shown in the sample output).
A sample output is as follows:
oracleasm createdisk OCR /dev/mapper/3600a098038314343753f4f723154594e1
Writing disk header: done
Instantiating disk: done
On the first node, list the disk names.
oracleasm listdisks
A sample output is as follows:
DATA01
DATA02
DATA03
DATA04
DATA05
DATA06
FRA01
FRA02
FRA03
OCR
On the second node, list the disks that have been marked as ASMLIB disks on the first node.
oracleasm scandisks
A sample output is as follows:
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "DATA01"
Instantiating disk "DATA06"
Instantiating disk "DATA04"
Instantiating disk "OCR"
Instantiating disk "DATA05"
Instantiating disk "FRA01"
Instantiating disk "DATA03"
Instantiating disk "FRA03"
Instantiating disk "FRA02"
Instantiating disk "DATA02"
On the second node, list the disk names.
oracleasm listdisks
A sample output is as follows:
DATA01
DATA02
DATA03
DATA04
DATA05
DATA06
FRA01
FRA02
FRA03
OCR
If the disks don't appear on the second node after scanning, run the partprobe command to make the partitions visible.
Perform pre-installation checks
Check if all the prerequisites for the Oracle Grid Infrastructure installation are met. On the first node, run the runcluvfy.sh script from the Grid home directory.
./runcluvfy.sh stage -pre crsinst -n at-2811641-svr001,at-2811641-svr002 -verbose
If any of the prerequisites fail, fix them before proceeding with the installation.
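To keep the verification report for later review, you can capture the script output to a file while it runs; a minimal sketch, assuming /tmp/cluvfy_pre_crsinst.log as the log location:

# capture the cluster verification report while the checks run
./runcluvfy.sh stage -pre crsinst -n at-2811641-svr001,at-2811641-svr002 -verbose | tee /tmp/cluvfy_pre_crsinst.log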
Install Oracle RAC
On the first node, where you downloaded and extracted the Oracle Grid Infrastructure image files, perform the following steps:
- Connect to your Bare Metal Solution server through the VNC Viewer.
- Run the gridSetup.sh script.
- On the Select configuration option page, select Oracle Grid Infrastructure for a new cluster and then click Next.
- On the Select cluster configuration page, select Configure an Oracle standalone cluster and then click Next.
On the Grid plug and play information page, do the following:
- Select Create local SCAN.
- Enter a cluster name. Ensure that it is less than 15 characters.
- Enter a SCAN name.
- Enter a SCAN port.
- Click Next.
To add the second node, click Add and do the following:
- Enter the public hostname of the second node.
- Enter the virtual hostname of the second node.
- Click OK.
On the Specify network interface usage page, do the following:
- For each interface, in the Use for column, select the usage type. For the private interface, select ASM & Private.
- Click Next.
On the Storage option information page, select Oracle Flex ASM for storage to store the OCR files and voting disk files in Oracle ASM, and then click Next.
On the Create Grid Infrastructure Management Repository page, select Yes, and then click Next.
This step creates the Grid Infrastructure Management Repository (GIMR).
On the Grid Infrastructure Management Repository option page, select No because we don't want to create a separate disk group for the GIMR.
On the Create ASM disk group page, do the following:
- Enter the name of the disk group.
- Click Change discovery path and enter the correct discovery path to the Oracle ASM disks.
- Ensure that the devices are mapped to multipath so that your Oracle ASM disks stay available even if one path goes down.
On the Specify ASM password page, do the following:
- Select Use same passwords for these accounts.
- Enter and confirm the password.
- Click Next.
On the Failure isolation support page, select Do not use Intelligent Platform Management Interface (IPMI), and then click Next.
On the Specify management options page, click Next, as we are not configuring Oracle Grid Infrastructure with Oracle Enterprise Manager.
On the Privileged operating system groups page, do the following:
- For all Oracle ASM privileges, select oinstall. You can also use asmdba and asmoper.
- Click Next.
On the Specify installation location page, specify the Oracle base and click Next.
Execution of the root scripts can be done automatically if you have the root password or if the oracle user has sudo privileges. However, in this guide, we run the scripts manually. Therefore, on the Root script execution page, click Next.

On the Perform prerequisite checks page, do the following:
- Fix any failures that are shown. If you fixed all the failures found when you ran the runcluvfy.sh script, as described in Perform pre-installation checks, you can expect a clean installation.
- After you have fixed the failures, click Check again to update the status of the failures to Succeeded.
- Click Next.
Review the summary and click Install.
The installation prompt asks you to run the root scripts on both nodes. First, run the scripts on the first node, and then on the second node.

/apps/grid/oraInventory/orainstRoot.sh
/apps/grid/19.3.0/gridhome_1/root.sh
Once the Finish window appears, click Close.
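Before moving on, it's worth confirming that the clusterware is healthy on both nodes. A quick check from either node, assuming the Grid home path used in this guide:

# check clusterware health across all nodes
/apps/grid/19.3.0/gridhome_1/bin/crsctl check cluster -all
# list cluster resources and their states
/apps/grid/19.3.0/gridhome_1/bin/crsctl stat res -t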
Perform post-installation steps
After Oracle Grid Infrastructure installation is complete, follow these steps:
Install and configure the Oracle database software.
For RAC install, choose the Software only installation option.
For instructions to install the Oracle database software, see Blog: Step by step installing And configuring Oracle 19c Binaries for RAC database.
Create Oracle ASM disk groups.
Create an entry for the Oracle ASM instance in the /etc/oratab file.

vi /etc/oratab

+ASM1:/apps/grid/19.3.0/gridhome_1:N
Set the ORACLE_SID and ORACLE_HOME variables for the Oracle ASM instance and connect to it.

. oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /apps/grid/gridbase

sqlplus / as sysasm

SQL> select INSTANCE_NAME, STATUS from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
+ASM1            STARTED
Check that the disks to be added have a HEADER_STATUS of PROVISIONED.

SQL> select HEADER_STATUS,STATE,TOTAL_MB/1024,path from v$asm_disk;

HEADER_STATUS  STATE    PATH
-------------- -------- ---------------------------
PROVISIONED    NORMAL   /dev/oracleasm/disks/DATA01
PROVISIONED    NORMAL   /dev/oracleasm/disks/DATA02
MEMBER         NORMAL   /dev/oracleasm/disks/OCR
Create the disk group from one node.

SQL> create diskgroup DATA external redundancy disk '/dev/oracleasm/disks/DATA01','/dev/oracleasm/disks/DATA02';

Diskgroup created.

SQL> select name, state, type from v$asm_diskgroup;

NAME                           STATE       TYPE
------------------------------ ----------- ------
OCR                            MOUNTED     EXTERN
DATA                           MOUNTED     EXTERN
On the other node, log in to the Oracle ASM instance and mount the disk group.

SQL> select name, state, type from v$asm_diskgroup;

NAME                 STATE       TYPE
-------------------- ----------- ------
DATA                 DISMOUNTED
OCR                  MOUNTED     EXTERN

SQL> alter diskgroup DATA mount;

Diskgroup altered.

SQL> select name, state, type from v$asm_diskgroup;

NAME                           STATE       TYPE
------------------------------ ----------- ------
OCR                            MOUNTED     EXTERN
DATA                           MOUNTED     EXTERN
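If you provisioned disks for a fast recovery area (the FRA01 through FRA03 disks listed earlier by oracleasm listdisks), you can create that disk group the same way; a sketch, assuming external redundancy and the disk group name FRA:

-- create the FRA disk group from one node, then mount it on the other node as shown above
SQL> create diskgroup FRA external redundancy disk '/dev/oracleasm/disks/FRA01','/dev/oracleasm/disks/FRA02','/dev/oracleasm/disks/FRA03';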
Create a RAC database using Oracle Database Configuration Assistant (DBCA).
Ensure that you choose Database type as Real Application Cluster Database.
For instructions, see Oracle documentation.
The RAC database installation is complete, and the database is ready to use.
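After DBCA completes, you can confirm that the database is registered with the clusterware and that an instance runs on each node. A quick check, assuming a database name of ORCL (use the name you chose in DBCA):

# ORCL is a placeholder; replace it with your database name
srvctl config database -d ORCL
srvctl status database -d ORCL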
What's next
- Learn about Best practices for Oracle on Bare Metal Solution.