Thursday, 7 February 2019

ORACLE RAC STARTUP SEQUENCE


                                                
 Now we are going to see the startup sequence of Oracle RAC and how it invokes the RAC daemons internally. Let's start with it. Below is the diagram of how each and every process gets started and how they depend on each other.

[Diagram: Oracle RAC startup sequence, showing process dependencies from INIT through OHASD to the clusterware daemons]

Now we will see one by one
1)      INIT:-
          The init daemon is the very first daemon started at operating system boot; it is the system and service manager for Oracle Linux or any other Linux flavor. Its services are started and stopped through scripts stored in the /etc/init.d directory.
2)      OHASD:-
The Oracle High Availability Services Daemon, introduced in Oracle 11g, manages all clusterware resources and is responsible for starting and monitoring all local Oracle Clusterware daemons. We can say it is the anchor for all the processes.
3)      OHASD orarootagent:-
                                      This process is responsible for starting the following processes:
1)      Osysmond:- The system monitor service (osysmond) is the monitoring and operating-system metric collection service. It runs on every node in the cluster and sends all metric data to the cluster logger service (ologgerd).
2)      ologgerd:- The cluster logger service receives the OS metric information collected by osysmond on all the nodes and keeps it in the Cluster Health Monitor (CHM) repository. Only one cluster logger service (ologgerd) runs in the entire cluster; another node is selected as standby for the master logger service. The master process manages the OS metrics data in the CHM repository and maintains a replica of the metrics on the standby.
3)      Crsd:- Cluster Ready Services (crsd) is the primary program for managing high-availability operations in the clusterware. The CRS daemon manages cluster resources based on the configuration information stored in the OCR for each resource. It is responsible for the start, stop, monitoring and failover of resources; it maintains the OCR and restarts resources when failures occur. This applies to RAC systems; for a standalone (Oracle Restart) configuration, OHASD is used instead.
4)      Diskmon:- The disk monitor daemon runs continuously once ocssd starts. It monitors and performs I/O fencing for Exadata storage servers (a storage server is called a cell in Exadata). This process runs as long as ocssd is running because an Exadata cell can be added to any cluster at any time.
5)      Ctssd:- The Cluster Time Synchronization Service process provides time synchronization for the cluster in the absence of ntpd. If ntpd is configured, ctssd runs in observer mode.
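The lower stack described above can be inspected with crsctl. Below is a minimal sketch, assuming Grid Infrastructure is installed; the GRID_HOME path is only an illustrative default, and the script degrades gracefully on a machine that is not a cluster node:

```shell
#!/bin/sh
# Sketch: inspect the OHASD-managed (lower) clusterware stack.
# GRID_HOME below is an illustrative default; adjust to your installation.
GRID_HOME=${GRID_HOME:-/u01/app/19.0.0/grid}
CRSCTL="$GRID_HOME/bin/crsctl"

if [ -x "$CRSCTL" ]; then
    "$CRSCTL" check has          # is OHASD itself healthy?
    "$CRSCTL" stat res -init -t  # daemons OHASD starts: cssd, crsd, ctssd, evmd, ...
    "$CRSCTL" check ctss         # active or observer mode for time sync
else
    echo "crsctl not found under $GRID_HOME; not a Grid Infrastructure node"
fi
```

crsctl stat res -init -t lists exactly the per-node daemons discussed above, so it is a quick way to match this startup sequence against a live node.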
4)      OHASD oraagent:-
              The oraagent process is started by OHASD and is responsible for starting the following processes.
1)      Gipcd:-
             The Grid Interprocess Communication (GIPC) daemon is a support process that enables Redundant Interconnect Usage, which provides failover and load balancing across multiple private networks (up to four). At the OS level it starts as grid/bin/gipcd.bin.
2)      Mdnsd:-
             Used by Grid Plug and Play to locate profiles in the cluster, as well as by GNS to perform name resolution. The mDNS process runs as a background process on Linux, UNIX and Windows.
3)      Evmd:- The Event Management daemon distributes and communicates cluster events to all of the cluster members so that they are aware of cluster changes. It starts evmlogger.bin, which reads the configuration files, determines which events to subscribe to from EVMD, and runs user-defined actions for those events.
4)      ASM:-
       Provides disk management for Oracle Clusterware and Oracle Database.
5)      gpnpd:-
         Provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.

5)      CRSD orarootagent:-
         The crsd process starts another orarootagent process and another oraagent process; the new orarootagent process is responsible for starting the following processes.
1)      Node VIP:-
        The node VIP is a node application (nodeapp) responsible for eliminating response delays (TCP timeouts) for client programs requesting connections to the database. Each node VIP is assigned an unused IP address, normally via DHCP, but it can be assigned manually. There is one node VIP per cluster node; when a cluster node becomes unreachable, its node VIP fails over to a surviving node so that connections are redirected from the unreachable node to a surviving one.
2)      SCAN VIP:-
      Single Client Access Name (SCAN) is a feature introduced in Oracle Real Application Clusters (RAC) 11g Release 2 that provides a single name for clients to use when connecting to an Oracle Database running in a cluster. SCAN eliminates the dependency on static cluster node names: clients using SCAN do not need to change their configuration when nodes are added to or removed from the cluster. Three SCAN VIPs are started on the member nodes using IP addresses assigned by the DHCP server (or static addresses registered in DNS).
3)      Network resource:-
                  Whatever network resources the cluster requires are started.
4)      GNS VIP:-
      The GNS virtual IP address is a static IP address configured in the DNS. GNS is used to resolve client requests for the cluster: the DNS delegates queries to the GNS virtual IP address, and the GNS daemon responds to incoming name resolution requests at that address. Within the subdomain, GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map host names and IP addresses dynamically as nodes are added to and removed from the cluster, without requiring additional host configuration in the DNS.
6)      CRSD oraagent:-
      The new oraagent process is responsible for starting the following resources.
1)      ONS:-
               The Oracle Notification Service publishes notifications for Fast Application Notification (FAN) events.
2)      ASM Instance:-
               One ASM instance is started on each cluster node to provide disk management for the cluster.
3)      SCAN Listener:- Listens on the SCAN VIP for incoming client connections.
4)      Node Listener:- The local listener running on the node VIP.
5)      Database Instance:- The RAC database instances themselves.
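The CRSD-managed resources listed above (node VIPs, SCAN, listeners, ONS) can be checked with srvctl. A sketch under the same assumptions as before (illustrative GRID_HOME, graceful fallback on a non-cluster machine):

```shell
#!/bin/sh
# Sketch: check the CRSD-managed (upper stack) resources.
GRID_HOME=${GRID_HOME:-/u01/app/19.0.0/grid}
SRVCTL="$GRID_HOME/bin/srvctl"

if [ -x "$SRVCTL" ]; then
    "$SRVCTL" status nodeapps       # node VIPs, network and ONS status
    "$SRVCTL" config scan           # SCAN name and its three VIPs
    "$SRVCTL" status scan_listener  # SCAN listeners
else
    echo "srvctl not found under $GRID_HOME"
fi
```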
What is GPNP profile?
                 The GPnP profile is a small XML file located in GRID_HOME/gpnp/<hostname>/profiles/peer under the name profile.xml. It is used to establish the correct global personality of a node. Each node maintains a local copy of the GPnP profile, which is maintained by the GPnP daemon (gpnpd). The GPnP profile stores information required for the startup of Oracle Clusterware, such as the SPFILE location and the ASM disk string, and contains various attributes defining the node personality:
Cluster name
Network classifications (Public/Private)
Storage to be used for ASM: SPFILE location, ASM DiskString etc.
Digital signature information: the profile is security sensitive. It might identify the storage to be used as the root partition of a machine; hence, it contains digital signature information of the provisioning authority.
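To make those attributes concrete, here is a hand-made fragment in the style of profile.xml (the values are invented, not taken from a real cluster) and a quick way to pull attributes out of it with sed. On a live node the real file sits under GRID_HOME/gpnp/<hostname>/profiles/peer/profile.xml and can also be dumped with gpnptool get:

```shell
#!/bin/sh
# Illustrative GPnP-style profile fragment; attribute names follow the real
# profile.xml layout but all values here are made up.
cat > /tmp/sample_profile.xml <<'EOF'
<gpnp:GPnP-Profile ClusterName="testcluster" PALocation="">
  <gpnp:Network-Profile>
    <gpnp:Network id="net1" Adapter="eth0" Use="public"/>
    <gpnp:Network id="net2" Adapter="eth1" Use="cluster_interconnect"/>
  </gpnp:Network-Profile>
  <orcl:ASM-Profile DiscoveryString="/dev/asmdisks/*" SPFile="+DATA/spfileASM.ora"/>
</gpnp:GPnP-Profile>
EOF

# Extract the cluster name and the ASM discovery string from the profile.
cluster=$(sed -n 's/.*ClusterName="\([^"]*\)".*/\1/p' /tmp/sample_profile.xml)
disks=$(sed -n 's/.*DiscoveryString="\([^"]*\)".*/\1/p' /tmp/sample_profile.xml)
echo "cluster=$cluster disks=$disks"
```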

Sunday, 25 March 2018

Schema Refresh Using DataPump


Schema refresh is a routine task in a DBA's life. To achieve this I am going to use Data Pump (expdp/impdp), a technology introduced in the Oracle 10g release that is faster than the traditional export/import. Using expdp/impdp requires a user that has the EXP_FULL_DATABASE role assigned, or we can do it with the SYS user.

Below explained the steps to perform schema refresh from SOURCE DATABASE to TARGET DATABASE.

Source Database:-

1) Check whether the schema you want to export is present or not.

SQL> select USERNAME,ACCOUNT_STATUS,CREATED from dba_users where username='<YOUR_USERNAME>';

2) Check the size of the schema

SQL> SELECT sum(bytes)/1024/1024/1024 AS total_size_GB FROM  dba_segments WHERE owner='<YOUR_SCHEMA_NAME>';


3) Take a count of the objects present in the database schema.

SQL> select object_type,status,count(*)  from dba_objects where owner='<YOUR_SCHEMA_NAME>' group by object_type,status;


4) Create an OS-level as well as a database-level directory where your data is going to be stored.

OS LEVEL:-
mkdir -p /u01/<Directory_name>

DB Level:-

SQL> create or replace directory DATAPUMP as '/u01/<Directory_name>';


5) Check the space on the mount point where your export data is going to be stored, and make space accordingly.

To check the space on the mount point:

on a Linux operating system: df -h <mount_point>

on an AIX operating system: df -gt <mount_point>
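Steps 2 and 5 can be combined into a small check: compare the free space on the dump mount point against the schema size reported by the dba_segments query. A sketch using portable df -P output; the mount point and the 4 GB schema size are made-up example values:

```shell
#!/bin/sh
# Compare free space on the dump destination against the schema size.
MOUNT_POINT=${MOUNT_POINT:-/tmp}   # where the dump directory lives (example)
SCHEMA_GB=${SCHEMA_GB:-4}          # size from the dba_segments query (example)

# df -P prints 1K blocks in column 4; convert the free space to whole GB.
avail_kb=$(df -P "$MOUNT_POINT" | awk 'NR==2 {print $4}')
avail_gb=$((avail_kb / 1024 / 1024))

if [ "$avail_gb" -ge "$SCHEMA_GB" ]; then
    echo "OK: ${avail_gb} GB free on $MOUNT_POINT for a ${SCHEMA_GB} GB schema"
else
    echo "WARNING: only ${avail_gb} GB free; need at least ${SCHEMA_GB} GB"
fi
```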

6) Take export of the schema

nohup expdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &

The export will run in the background. To check its status:

ps -ef | grep expdp

or check the status in the log file.

After completion of the export, copy the dump file to the target database server (if the target database is on the same server there is no need to copy). To copy from the source server to the target server, use the scp OS command or the WinSCP tool.

Below is the scp command:

scp -p <dumpfile> username@servername:<location_where_to_copy> (for servername, give either the IP address or the name of the server)

This will prompt for the target server password: enter the target server password.

That's it on source database.



TARGET DATABASE:-


1) Check the mount point size on the target database server; the free space should be more than the schema size.

2) Create OS-level as well as database-level directories, the same as on the source database.

OS LEVEL:- mkdir -p /u01/<Directory_name>

DB Level:-

SQL> create or replace directory DATAPUMP as '/u01/<Directory_name>';

Note: before importing the data, take an export of the target database schema as a backup:

nohup expdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &


3) Check object count in target database.

SQL> select object_type,status,count(*)  from dba_objects where owner='<YOUR_SCHEMA_NAME>' group by object_type,status;

4) Now drop the schema / objects.

SQL> drop user <SCHEMA_NAME> cascade;


NOTE:- It is better to drop the objects individually instead of dropping the whole schema.

set lines 200
set pages 500

Drop:- Tables, Indexes
SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('TABLE');

Drop:-

Sequences,Views, Packages,Functions & Procedures,Synonyms, & Triggers

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');

Drop:- Types

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('TYPE');

5) Check the object count again; if it is zero then proceed with the import.

Check the count with the query below:

select owner,object_type,count(*)
 from dba_objects
 where owner in ('<SCHEMA_NAME>')
 group by owner,object_type;


6) Now import the dump file into the target database.

nohup impdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &

Monitor the import job in the log file, or at the OS level using:

ps -ef | grep impdp

7) After the import completes, check whether any invalid objects are present.

select owner, object_type, status, count(*)
from dba_objects
where status = 'INVALID'
group by owner, object_type, status;

If you find any invalid objects, recompile them:

SQL> @?/rdbms/admin/utlrp.sql

Hope this helps you.







Tuesday, 8 August 2017

Analyze AWR Report



To analyze database performance we need to check the following sections of the AWR report.

1) Session Information:-

We can find the session information at the top of the AWR report.

[Screenshot: AWR report header with snapshot information]

Check the Begin Snap and End Snap times. If there is a high number of sessions, we need to check the shadow processes for those sessions, or check whether shadow processes are being created. The most likely cause is an application start-up issue, which is spawning all those sessions.

2) Load Profile:-
It shows per-second and per-transaction statistics, along with database load indicators such as hard parses and the number of transactions.

[Screenshot: AWR Load Profile section]

While looking at the Load Profile, the first important figure is "DB CPU" per second. We need to understand what "DB CPU" means: suppose we have 8 cores in the system, then we have 8 CPU seconds available per wall-clock second. If the report shows DB CPU per second greater than the number of cores in the host configuration, the environment is CPU bound, so we either need to add CPUs or investigate further whether this happens all the time or only for a fraction of it.
Next we need to look at Parses and Hard Parses. If the ratio of hard parses to parses is high, the database is performing excessive hard parsing. In that case we need to check parameters like cursor_sharing and the use of bind variables at the application level.

3) Instance Efficiency Percentages:-
This section shows several hit ratios as well as the Execute to Parse percentage and the latch hit percentage. The Execute to Parse percentage should be very high in a well-running instance.
A low value for the "% SQL with executions>1" statistic means the instance is not re-using shared SQL statements, usually because the SQL does not use bind variables.


[Screenshot: AWR Instance Efficiency Percentages section]

Now let us check the meaning of each particular ratio:-

a) Buffer Nowait %:- Shows the percentage of times data buffers were accessed directly without any wait.

b) Buffer Hit Ratio:- Calculates the percentage of times the required block was found in memory rather than having to execute an expensive read operation on disk to get the block.

c) Library Hit %:- Shows how often a SQL statement or PL/SQL package was found in the shared pool memory cache.

d) Execute to Parse %:- Shows how frequently SQL statements are reused without re-parsing.

e) Parse CPU to Parse Elapsd %:- Gives the ratio of CPU time spent parsing SQL statements to the total elapsed time spent parsing.
f) Redo NoWait %:- Shows whether the redo log buffer has sufficient size.
g) In-memory Sort %:- Shows the percentage of sorting performed in memory instead of using the temporary tablespace.
h) Soft Parse %:- Shows how often a SQL statement issued by a session was already present in the shared pool, so the existing cursor could be reused.
I) Latch Hit %:- Shows how often latches were acquired without having to wait.

J) % Non-Parse CPU:- Shows the percentage of CPU resources spent on actual SQL execution rather than parsing.


In these statistics we need to look at:

Soft Parse %:- 100% shows that SQL statements are actively reused.

% Non-Parse CPU:- 100% means CPU resources are used for operations other than parsing, which is good for the database.

Parse CPU to Parse Elapsed:- if this is very low, it represents that Oracle is waiting for resources during the parsing of SQL statements; if so, it needs to be investigated further.
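These percentages are simple ratios over the snapshot interval. A sketch of how they are derived, using invented counter values (the formulas follow the AWR definitions):

```shell
#!/bin/sh
# Derive the parse-related efficiency percentages from raw counters.
# The counter values below are invented for illustration.
LC_ALL=C; export LC_ALL
parses=1000        # total parse calls in the interval
hard_parses=50     # parse calls that required a hard parse
executes=8000      # execute calls
parse_cpu=2        # CPU seconds spent parsing
parse_elapsed=4    # elapsed seconds spent parsing

soft_parse_pct=$(awk -v p="$parses" -v h="$hard_parses" 'BEGIN {printf "%.1f", (p-h)/p*100}')
exec_to_parse=$(awk -v e="$executes" -v p="$parses" 'BEGIN {printf "%.1f", (e-p)/e*100}')
cpu_to_elapsed=$(awk -v c="$parse_cpu" -v e="$parse_elapsed" 'BEGIN {printf "%.1f", c/e*100}')

echo "Soft Parse %: $soft_parse_pct"
echo "Execute to Parse %: $exec_to_parse"
echo "Parse CPU to Parse Elapsd %: $cpu_to_elapsed"
```

With these example numbers, a 95% soft parse rate is healthy, while a 50% Parse CPU to Parse Elapsed ratio would suggest parse-time waits worth investigating.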

4) TOP FIVE FOREGROUND EVENTS

This section describes the events that were responsible for the most waits during the snapshot interval. It is very important because these events point to the system bottleneck; the wait events are ordered by total wait time.


[Screenshot: AWR Top 5 Timed Foreground Events section]

a) DB File Sequential Read:-
This event signifies that the user process is reading a buffer into the SGA buffer cache and is waiting for a physical I/O call to return. A sequential read is a single-block read.

Single block I/Os are usually the result of using indexes. Rarely, full table scan calls could get truncated to a single block call due to extent boundaries, or buffers already present in the buffer cache. These waits would also show up as 'db file sequential read'.

b) DB CPU:-

DB CPU is Oracle server/foreground/shadow process CPU consumption.Each Oracle server process gathers its own CPU consumption using the time and/or getrusage C function system call. So unless there is a major screw-up by either the operating system or the Oracle kernel developers, the time will be good... very good. The name DB CPU is taken from the actual statistic name, which is found in both v$sess_time_model and v$sys_time_model.
If you look at any AWR or Statspack report in the "Time Model" section, you will see DB CPU.
c) DB FILE SCATTERED READ :-

This event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return. A db file scattered read issues a scattered read to read the data into multiple discontinuous memory locations of buffer cache.A scattered read is usually a multi block read. It can occur for a fast full scan (of an index) in addition to a full table scan.

The db file scattered read wait event identifies that a full scan is occurring. When performing a full scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other. Such reads are called scattered read calls, because the blocks are scattered throughout memory. This is why the corresponding wait event is called 'db file scattered read'. multiblock (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) reads due to full scans into the buffer cache show up as waits for 'db file scattered read'



d) Direct Path Read:-

The session is waiting for a direct read to complete. A direct read is a physical I/O from a data file that bypasses the buffer cache and reads the data block directly into process-private memory.

If asynchronous I/O is supported (and in use), then Oracle can submit I/O requests and continue processing. Oracle can then pick up the results of the I/O request later and wait on "direct path read" until the required I/O completes.

If asynchronous I/O is not being used, then the I/O requests block until completed but these do not show as waits at the time the I/O is issued. The session returns later to pick up the completed I/O data but can then show a wait on "direct path read" even though this wait will return immediately. Hence this wait event is very misleading because: • The total number of waits does not reflect the number of I/O requests • The total time spent in "direct path read" does not always reflect the true

e) Control file Sequential Read:-
The control file sequential read metric indicates that the process is waiting for a block to be read from a control file.
This happens in many cases. For example:-

i) Making a backup of the controlfile.
ii) Sharing information (between instances) from the control file.
iii) Reading other blocks from the controlfile.
iv) Reading the header block.

Control file sequential read waits can occur when the control files are placed on the same physical disk, causing read-write head contention at the disk level.
5) TIME MODEL STATISTICS

This section gives details about how the database spends its time. Most of the processing time should be spent on actual SQL execution, not on parsing.

6) Operating System Statistics:-

This shows the operating system load status.





REF:- Oracle12c-Performance-Tuning-Recipes