Monday 19 June 2017

ORACLE RAC 11gR2 INSTALLATION


ORACLE RAC 11gR2 INSTALLATION ON VMWARE WORKSTATION

********************************************************************************

For installing 11gR2 RAC on your laptop or desktop, a minimum of 8 GB of RAM and VMware Workstation are required.

Our aim is to configure RAC on Red Hat Enterprise Linux 5.4 with two working nodes, with DNS configured on the first node and shared storage provided by a NetApp simulator.

STEPS:-
*********

1)
Install Linux 5.4 Server on node1 and node2 respectively, with all packages.
In my case, Node1 and Node2 are present.
Node1 configuration :-
************************
  I) On node one, configure three Ethernet cards:
          eth0 :- public IP address
                     172.168.0.2 with subnet mask 255.255.0.0
          eth1 :- private IP address
                     192.168.0.3 with subnet mask 255.255.255.0
          eth2 :- DNS with public IP address
(This Ethernet card (eth2) is virtual, for the NetApp configuration, with IP address 172.168.0.100; this IP is assigned while installing the NetApp simulator.)
 II) On the second node, configure two Ethernet cards:
eth0 for public, with IP address 172.168.0.4 and subnet mask 255.255.0.0
eth1 for private, with IP address 192.168.0.5 and subnet mask 255.255.255.0
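On RHEL 5, each card above corresponds to an ifcfg file under /etc/sysconfig/network-scripts/. A minimal sketch for node1's eth0 follows, using the addresses listed above; the ifcfg format is plain VAR=value pairs, so here we write a sample copy to /tmp and source it just to check the values (the real file path is /etc/sysconfig/network-scripts/ifcfg-eth0):

```shell
# Sample ifcfg-eth0 for node1 (public interface), written to a scratch
# location. Copy the same layout to the real file as root, then restart
# networking with: service network restart
cat > /tmp/ifcfg-eth0 <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=172.168.0.2
NETMASK=255.255.0.0
ONBOOT=yes
EOF
# ifcfg files are valid shell, so we can source the sample and inspect it
. /tmp/ifcfg-eth0
echo "eth0 -> $IPADDR/$NETMASK"
```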
 
2) Configuring DNS on a Linux Host using chroot (172.168.0.1)
1) Install the BIND software
rpm -ivh bind-*.rpm


Check that we have a bind-chroot installed
rpm -q bind-chroot


2) Check the service status (it should be down/off/not running)
service named status
3) Configure DNS settings
 
A] Global DNS Settings
cat /var/named/chroot/etc/named.conf
options
{
directory "/var/named";
listen-on port 53 { any; };
// Forwarder: Anything this DNS can't resolve gets forwarded to my ISPs DNS.
//forwarders { 194.168.4.100; 194.168.8.100; };
};
zone "demo.com"
{
type master;
file "demo.com.fwd.zone";
};
zone "localhost"
{
type master;
file "localhost.fwd.zone";
};
zone "0.168.172.in-addr.arpa"
{
type master;
file "172.168.0.rev.zone";
};
zone "0.0.127.in-addr.arpa"
{
type master;
file "localhost.rev.zone";
};
zone "." in {
type hint;
file "/dev/null";
};


B] Domain Specific Settings for forward lookup zone


cat /var/named/chroot/var/named/demo.com.fwd.zone


$TTL 1D
@ IN SOA dns.demo.com. root.localhost. (
2011071500 ; serial
8H ; refresh
4H ; retry
1W ; expiry
1D ) ; minimum
@ IN NS dns.demo.com.
localhost IN A 127.0.0.1
dns IN A 172.168.0.1
netapp IN A 172.168.0.100
node1 IN A 172.168.0.2
node2 IN A 172.168.0.4
node3 IN A 172.168.0.6
node1-vip IN A 172.168.0.3
node2-vip IN A 172.168.0.5
node3-vip IN A 172.168.0.7
group00-scan IN A 172.168.0.10
             IN A 172.168.0.20
             IN A 172.168.0.30


$ORIGIN group00grid.demo.com.
@ IN NS group00-gns.group00grid.demo.com.
  IN NS dns.demo.com.
group00-gns IN A 172.168.0.40; glue record
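The serial in the SOA record above follows the conventional YYYYMMDDNN format (2011071500). Whenever you edit a zone file you must bump the serial, or named will keep serving the old data after a reload. A small sketch of that bump, run here against a scratch stand-in file rather than the live zone under /var/named/chroot/var/named/:

```shell
# Bump a BIND zone serial to today's date + sequence 00.
# Point $zone at the real zone file when doing this for real,
# then run: service named reload
zone=/tmp/demo.com.fwd.zone
printf '2011071500 ; serial\n' > "$zone"   # minimal stand-in for the zone file
new_serial="$(date +%Y%m%d)00"
sed -i "s/^[0-9]\{10\} ; serial/${new_serial} ; serial/" "$zone"
grep serial "$zone"
```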






C] Domain Specific Settings for reverse lookup zone


cat /var/named/chroot/var/named/172.168.0.rev.zone


$TTL 1D
@ IN SOA dns.demo.com. root.localhost. (
2011071500 ; serial
8H ; refresh
4H ; retry
1W ; expiry
1D ) ; minimum
@ IN NS dns.demo.com.
1 IN PTR dns.demo.com.
100 IN PTR netapp.demo.com.
2 IN PTR node1.demo.com.
4 IN PTR node2.demo.com.
6 IN PTR node3.demo.com.
3 IN PTR node1-vip.demo.com.
5 IN PTR node2-vip.demo.com.
7 IN PTR node3-vip.demo.com.
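Every PTR record above must mirror an A record in the forward zone, with the last octet of the address as the record name (the zone 0.168.172.in-addr.arpa covers 172.168.0.x). Rather than typing both by hand, a sketch that derives the PTR lines from "name IN A ip" lines with awk (shown here on a small inline sample, not the live zone file):

```shell
# Derive PTR records for 0.168.172.in-addr.arpa from forward A records.
# The last octet becomes the record name; .demo.com. is appended to the host.
awk '$2=="IN" && $3=="A" { split($4, o, "."); print o[4]" IN PTR "$1".demo.com." }' \
  > /tmp/ptr.out <<'EOF'
node1 IN A 172.168.0.2
node2 IN A 172.168.0.4
netapp IN A 172.168.0.100
EOF
cat /tmp/ptr.out
```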




D] Domain Specific Settings for forward lookup zone
cat /var/named/chroot/var/named/localhost.fwd.zone
$TTL 1D
@ IN SOA @ root (
2011071500 ; serial
8H ; refresh
4H ; retry
1W ; expiry
1D ) ; minimum
  IN NS @
  IN A 127.0.0.1




E] Domain Specific Settings for reverse lookup zone
cat /var/named/chroot/var/named/localhost.rev.zone
$TTL 1D
@ IN SOA localhost. root.localhost. (
2011071500 ; serial
8H ; refresh
4H ; retry
1W ; expiry
1D ) ; minimum
  IN NS localhost.
1 IN PTR localhost.
F] Configure Resolution for the DNS Server
cat /etc/resolv.conf
search demo.com
nameserver 172.168.0.1
options attempts:3
options timeout:3
G] Disable firewall and allow clients to connect to the DNS server
# Stop auto restart of Linux Firewall after reboot
chkconfig --levels 0123456 iptables off
# Stop the Linux Firewall if it is running.
service iptables stop
# Start the DNS Server now
service named start
# Ensure it auto starts after subsequent reboots.
chkconfig --levels 12345 named on




H] Perform some forward lookup tests
nslookup node1
nslookup node1.demo.com
nslookup node1-vip
nslookup node2
nslookup node2.demo.com
nslookup group00-scan
nslookup group00-scan.demo.com
I] Perform some reverse lookup tests
nslookup <IP-Address-Of-node1>
nslookup <IP-Address-Of-node2>
J] Take a DIG into the configuration
dig group00-scan.demo.com
***************************for Installing NetApp simulator visit*************

http://rafik-dba.blogspot.in/2017/06/installing-netapp-filer-simulator.html
******************************************************************************

--------------11gR2 Grid installation prerequisite---------------------------------------------------

*   Preparing RAC Nodes *
1. Hostname and IP Checks
hostname
ifconfig eth0 and ifconfig eth1
Use neat (the network configuration tool) to change the settings if required.
 
2. OS Version and Package Checks
cat /etc/redhat-release
uname -a
rpm -qa|grep libaio
 
3. Kernel Parameter, Limits, /etc/hosts and DNS Client Setup


Defer Kernel Parameter Checks - /etc/sysctl.conf
Defer OS Limits Checks - /etc/security/limits.conf
/etc/hosts
/etc/resolv.conf


4. Time Synchronization Checks
Either use NTP or some other Time Sync technique
On RAC1, RAC2 and RAC3, edit the file /etc/ntp.conf:
server 172.168.0.1 # This is our intranet NTP server
server 127.0.0.1 # It is already there
Comment out all other servers # You can't reach those servers anyway, as that requires internet access
Edit file : /etc/sysconfig/ntpd and it shows
# Drop root to id 'ntp:ntp' by default.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
start the ntpd service on RAC1 and RAC2 as root : service ntpd start
ps -ef|grep ntp and verify it shows "-x"
chkconfig ntpd on
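The "-x" flag is the part that matters to the clusterware: it makes ntpd slew the clock gradually instead of stepping it, which Oracle requires on RAC nodes. A quick check that the flag is present in the OPTIONS line, demonstrated here against a sample copy of /etc/sysconfig/ntpd (point the grep at the real file on each node):

```shell
# Verify ntpd is configured with -x (slewing). Sample file stands in
# for /etc/sysconfig/ntpd so the check can be shown anywhere.
cat > /tmp/ntpd.sample <<'EOF'
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
EOF
if grep -q '^OPTIONS=.*-x' /tmp/ntpd.sample; then
  echo "ntpd slewing enabled"
else
  echo "WARNING: add -x to OPTIONS and restart ntpd"
fi
```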
5. Group and User Checks - for all RAC1/RAC2/RAC3
groupadd oinstall ; groupadd dba ; useradd -g oinstall -G dba oracle ; passwd oracle
id oracle
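The oracle user and its groups must exist with identical numeric IDs on every node, or file ownership on shared storage will not line up. A dry-run sketch of the per-node setup: it only prints the commands it would run (drop the leading echo to execute over ssh for real). The explicit IDs 500/501 are an assumption for illustration; the original commands above let the OS pick them:

```shell
# Dry run: print identical user/group setup commands for each node.
# -g/-u pin the numeric IDs so they match cluster-wide (500/501 assumed here).
for node in node1 node2 node3; do
  echo ssh root@$node "groupadd -g 500 oinstall; groupadd -g 501 dba; useradd -u 500 -g oinstall -G dba oracle"
done > /tmp/usersetup.out
cat /tmp/usersetup.out
```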
 =============================
6. SSH User Equivalence Checks
ON NODE-1
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
ON NODE-2
/usr/bin/ssh-keygen -t rsa
/usr/bin/ssh-keygen -t dsa
ON NODE-1
ssh node1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh node1 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp /home/oracle/.ssh/authorized_keys oracle@node2:~/.ssh/
ON NODE-2
ssh node2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh node2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp /home/oracle/.ssh/authorized_keys oracle@node1:~/.ssh/


--Confirm ssh on both Nodes.
node1 $ssh node2 date
node1 $ssh node1 date
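It is worth confirming equivalence with both the short names and the fully qualified names, since cluvfy checks both. A dry-run loop over the combinations (the leading echo makes it print rather than connect; BatchMode=yes is an addition that makes ssh fail immediately instead of prompting for a password, which is exactly what you want to detect):

```shell
# Dry run of the ssh-equivalence check matrix. Drop 'echo' to execute
# for real from each node in turn.
for host in node1 node2 node1.demo.com node2.demo.com; do
  echo ssh -o BatchMode=yes $host date
done > /tmp/sshcheck.out
cat /tmp/sshcheck.out
```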
7. iSCSI and SAN Access Checks
a. Using the NetApp URL http://172.168.0.100/na_admin, check which iSCSI initiator IQNs the LUNs are mapped to.
   You can also use the NetApp command "igroup show" to find out the IQNs of the initiators.
b. The initiator IQNs in this setup are:
iqn.2017-01.com.demo:node1
iqn.2017-01.com.demo:node2
iqn.2017-01.com.demo:node3



c. Edit /etc/iscsi/initiatorname.iscsi and add the correct IQN in this file on each node.
-- Clear previous entries, since your machine is cloned from my machine. Do this on node1, node2 and node3:
# service iscsi stop
# iscsiadm -m node -T iqn.1992-08.com.netapp:sn.99927627 -o delete
# rm -fr /var/lib/iscsi/send_targets/172.168.209.100,3260/*
d. Restart the iscsi daemon on node1 and node2 (service iscsi restart)
e. Discover the iscsi Target's IQN (from node1/node2/node3)
# iscsiadm --mode discovery --type sendtargets --portal 172.168.0.100
f. Login to the discovered target IQN (from node 1/ node 2/node3)
# iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.99927627 --portal 172.168.0.100:3260 --login
g. Verify the LUNs are visible using (fdisk -l and lsscsi)
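Steps e and f have to be repeated on every node, so it helps to see the discovery-then-login sequence as one unit. A dry-run sketch that prints the exact commands from above (drop the echoes to run them as root on each node):

```shell
# Dry run of iSCSI discovery + login against the filer portal.
portal=172.168.0.100
target=iqn.1992-08.com.netapp:sn.99927627
{
  echo iscsiadm --mode discovery --type sendtargets --portal $portal
  echo iscsiadm --mode node --targetname $target --portal $portal:3260 --login
} > /tmp/iscsi.out
cat /tmp/iscsi.out
```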
8. ASMLib Checks (only applicable for Linux)
ASMLib is OS-version specific (kernel-version specific) - red flag: make sure the ASMLib packages match your running kernel.
a. Copy and Install the software on node 1 and node 2 (cd /root/ASMLib; rpm -ivh *.rpm)
rpm -ivh *
b. After the ASMLib software is installed you need to run one-time configuration (/etc/init.d/oracleasm configure) as root on node 1/ node 2
Your responses during the configure phase are recorded in file : /etc/sysconfig/oracleasm
cd /etc/sysconfig/
cp oracleasm-_dev_oracleasm oracleasm (this is not required for fresh install)
/etc/init.d/oracleasm restart
lsmod |grep oracleasm


c. Check if the Oracle ASMLib software is started : /etc/init.d/oracleasm status and lsmod|grep oracleasm
d. Now provision the ASMLib Disks using the NetApp Luns discovered earlier. (STAMP the LUNS)
a2. fdisk /dev/sdb (p, n, p, 1, default, default, p, w)
a3. partprobe /dev/sd[b-e]
a4. ls -l /dev/sd[b-e]*
a5. Now stamp the partitioned device using ASMLib
/etc/init.d/oracleasm createdisk ASMDISK1 /dev/sdb1
/etc/init.d/oracleasm createdisk ASMDISK2 /dev/sdc1
/etc/init.d/oracleasm createdisk ASMDISK3 /dev/sdd1
/etc/init.d/oracleasm createdisk ASMDISK4 /dev/sde1
a6. /etc/init.d/oracleasm listdisks or ls -l /dev/oracleasm/disks
a7. /etc/init.d/oracleasm scandisks from node 2 and every subsequent node from your cluster
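The four createdisk calls in a5 follow an obvious pattern, so a loop is less error-prone than typing them out. A dry-run version that prints the commands (remove the echo to stamp the disks for real, as root, on node 1 only; the other nodes just scandisks):

```shell
# Dry run: stamp one ASMLib disk per partitioned LUN.
i=1
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1; do
  echo /etc/init.d/oracleasm createdisk ASMDISK$i $dev
  i=$((i+1))
done > /tmp/asmstamp.out
cat /tmp/asmstamp.out
```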


9. File-system and Ownership Checks
ORACLE_BASE=/u002/app/oracle
GRID_HOME=/u002/grid
ORACLE_HOME=/u002/app/oracle/product/11.2.0
mkdir -p /u002/grid /u002/app ; chown -R oracle:oinstall /u002
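Once the directories exist, the oracle user's environment should carry the same paths; a sketch of what could go in oracle's ~/.bash_profile, using the locations from step 9 (the PATH addition is an assumption, added for convenience):

```shell
# Environment for the oracle user (append to ~/.bash_profile on each node).
export ORACLE_BASE=/u002/app/oracle
export GRID_HOME=/u002/grid
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0
export PATH=$ORACLE_HOME/bin:$GRID_HOME/bin:$PATH
echo "ORACLE_HOME=$ORACLE_HOME" | tee /tmp/oraenv.out
```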


10. Cluster Pre-verify Checks as oracle@node1
mkdir -p /u002/app/stage
cd /u002/app/stage/grid
./runcluvfy.sh stage -pre crsinst -n node1,node2 -r 11gR2 -verbose > /tmp/prechecks.log


11. Shutdown Unwanted Daemons




12. Using vncserver, run ./runInstaller.




 


INSTALLING THE NETAPP FILER SIMULATOR



INSTALLING THE NETAPP FILER SIMULATOR ON LINUX MACHINE (on the dns machine)


A] PHASE I : Installing the Netapp Filer Simulator Software
1. Copy the software (7.3.5-sim-cdrom-image-v22.iso) from the Windows terminal server to the "DNS Server" using WinSCP.
2. Mount the ISO image on the DNS Server
mkdir -p /virtual-cd
mount -t iso9660 -o loop /root/7.3.5-sim-cdrom-image-v22.iso /virtual-cd
3. Run the installer /virtual-cd/setup.sh and install the software under /u001/netapp
Your responses:
Where to install to? [/sim]: /u001/netapp/
Would you like to install as a cluster? [no]:
Would you like full HTML/PDF FilerView documentation to be installed [yes]:
Continue with installation? [no]: yes
Your simulator has 3 disk(s). How many more would you like to add? [0]: 25
What disk size would you like to use? [a]: f
Disk adapter to put disks on? [0]:
Use DHCP on first boot? [yes]: no
Ask for floppy boot? [no]: no
Which network interface should the simulator use? [default]: eth2
How much memory would you like the simulator to use? [512]:
Create a new log for each session? [no]:
Overwrite the single log each time? [yes]:
------------INSTALLATION ENDS HERE------------------
B] PHASE II : First-time configuration of the Netapp Filer Simulator
1. Start the simulator using /u001/netapp/runsim.sh
Your response :
Please enter the new hostname : netapp
Do you want to enable IPV6? : n
Do you want to configure the virtual network interfaces? : n
Please enter the IP address for the Network Interface ns0 : 172.168.0.100
Please enter the netmask for the Network Interface ns0 : 255.255.0.0
Please enter media type : auto
Please enter the IP address for the Network Interface ns1 : <ENTER>
Would you like to continue the setup using the web interface : y
It shows this URL for web setup : http://172.168.X.100/api
Name or IP of IPV4 default gateway : <ENTER>
Name or IP of IPV4 administration host : <ENTER>
Please enter the timezone : GMT
Where is the filer located : Pune, India
What language would be used for multi protocol files : en_US
Do you want to run DNS resolver : n
Do you want to run NIS client : n
Do you want to use Alternate Shelf Management feature :n
Password :********
confirm :********
Is it visible to WINS : n
Multi-protocol or NTFS only : 1 which means multi-protocol
Password : ********
confirm :********
Would you like to change the name netapp01 : n
Method of authentication is via /etc/passwd file : 4
Which workgroup : WORKGROUP
Launch using the local browser the URL : http://172.168.X.100/api(This won't work from your machine)
Login as root/welcome1
Change the timezone, SMTP mail host to localhost and provide email address
------------First-time configuration ENDS HERE------------------
Go to the Web Admin Interface using browser to URL : 172.168.0.100/na_admin (For Web-based Administration)


C] PHASE III : Configuring Aggregates, Volumes and LUNs


Multiple physical disks connected to the SAN are grouped together in a RAID-X group.
From the RAID group you allocate space for LUNs. These LUNs can then be accessed from the masked hosts.
Masking is the process of making a LUN visible to a particular host, either by:
a. using the WWN number of the HBA, in the case of FC-SAN
b. using the IQN number of the iSCSI HBA, in the case of IP-SAN
c. using the IQN number of the iSCSI initiator software, in the case of IP-SAN with no HBAs installed on the host (this is our model)


Aggregate : a collection of physical disks (we have 25 spare) with some RAID level
Volume : a separate disk space created inside an aggregate. A volume can have its own RAID level.
LUN : Logical Unit Number - a logical disk space presented to a given host; the host treats it as a hard disk.
LUNs are created inside a VOLUME.
Depending on the SAN implementation, a LUN can be an FC-LUN or an iSCSI-LUN.
FC-SAN uses : Fibre Channel Protocol (FCP) - Real Life
FC-SAN =======> SAN-SWITCH =======> HOST (FC-HBA has WWN number)
IP-SAN uses : iSCSI Protocol (Lab Setup)
IP-SAN =======> NETWORK-SWITCH/FCoE Switch =======> HOST (NIC has MAC address)
NetApp Filer Simulator ===> Virtual Switch/Software Switch ===> Host-VM (vNIC has MAC address)
1. Create Aggregate
aggr create racaggr -t raid4 25
-- only 2 RAID levels are supported by the Simulator - RAID-4 and RAID-DP
2. Create a Volume
vol create oravol racaggr 17g
3. Create LUNs
lun setup
Your response :
Do you want to create LUN? y
Multiprotocol type of LUN : linux
Enter LUN path : /vol/oravol/lun0
Do you want to create a space reserved LUN : y
Enter the LUN size : 3g
Enter comment :
Enter the name of the initiator group : RACGRP
Type of initiator group : iSCSI
Enter comma separated node names : iqn.2017-01.com.demo:node1
iqn.2017-01.com.demo:node2
iqn.2017-01.com.demo:node3
<ENTER>
OS type for the initiator group RACGRP : linux
Create 4 LUNs of 3 GB each (run lun setup four times).
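Since the LUNs are space-reserved, their full size is claimed from the volume up front, so it is worth sanity-checking the arithmetic before running lun setup four times: four 3 GB LUNs need 12 GB out of the 17 GB volume, leaving headroom for snapshot reserve and metadata:

```shell
# Sanity-check that the space-reserved LUNs fit inside the volume.
lun_gb=3; lun_count=4; vol_gb=17
total=$((lun_gb * lun_count))
echo "LUN total: ${total}g of ${vol_gb}g volume"
[ "$total" -le "$vol_gb" ] && echo "fits"
```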
4. Verify the luns : lun show (rw/online/mapped)
5. Start the iSCSI service and verify the status
iscsi status ===> Down/Not Running
iscsi start ===> Start the iSCSI service on the Netapp Filer Simulator
iscsi status ===> Up/Running.