Tuesday, 25 August 2020

ORA-10562: Error occurred while applying redo to data block

 

Issue:- ORA-10562: Error occurred while applying redo to data block


While recovering the standby database, the following error was found in the alert log:

Sun Aug 09 03:21:37 2020 

Errors in file /u01/app/testdb/oracle/diag/rdbms/testdb/testdb/trace/testdb_mrp0_15081.trc: 

ORA-10562: Error occurred while applying redo to data block (file# 4, block# 7173) 

ORA-10564: tablespace TOOLS 

ORA-01110: data file 4: '/u01/app/testdb/oradata02/testdb/tools_01.dbf' 

ORA-10561: block type 'TRANSACTION MANAGED DATA BLOCK', data object# 304906 

ORA-00607: Internal error occurred while making a change to a data block 

ORA-00602: internal programming exception 

ORA-07445: exception encountered: core dump [kdr9ir2rst0()+886] [SIGSEGV] 

[ADDR:0x7FFC05A5AFE0] [PC:0xB589B86] [Address not mapped t 

Sun Aug 09 03:21:37 2020 

MRP0: Background Media Recovery process shutdown (testdb)


To resolve the issue, I implemented the steps below.

1) Take a backup of the impacted data file (the backup needs to be taken from the primary database).

 In my case data file 4 is impacted.

Connect to RMAN on the primary:

rman target / 

 Recovery Manager: Release 12.1.0.2.0 - Production on Sun Aug 9 03:54:26 2020 

 Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved. 

 connected to target database: prod (DBID=3642994164)


RMAN> backup datafile 4 format '/u01/app/prod/backup/Datafilefile_4.bk';

2) Transfer the backup piece to the standby database

   You can use any tool to move the file from primary to standby (e.g. scp, ftp).
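As a sketch, the copy could look like the following; the standby host name and target directory here are assumptions, adjust them to your environment:

```shell
# Copy the RMAN backup piece from the primary host to the standby host
# (standby01 and the target directory are placeholders)
scp /u01/app/prod/backup/Datafilefile_4.bk oracle@standby01:/u01/app/backup/
```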

3) Catalog the backup piece on the standby side

 RMAN> catalog backuppiece '/u01/app/backup/Datafilefile_4.bk';

4) Check and confirm the backup piece

RMAN> list backuppiece '/u01/app/backup/Datafilefile_4.bk';

RMAN> list backup of datafile 4;


Once the backup is confirmed:

5) Check whether MRP is running; if it is, stop it.

alter database recover managed standby database cancel;
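To confirm whether MRP is running before cancelling (and that it is gone afterwards), a query like this can be used on the standby:

```sql
-- An MRP0 row means managed recovery is active; no rows means it is stopped
select process, status, sequence#
  from v$managed_standby
 where process like 'MRP%';
```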

6) Once MRP is down, restore the backup

RMAN> Restore datafile 4;


channel ORA_DISK_1: restore complete, elapsed time: 00:02:20

Finished restore at 09-Aug-20

7) Start the MRP on the standby database

SQL> alter database recover managed standby database disconnect from session;

Check the alert log; it will show the redo being applied.

Hope this helps to resolve the issue.

Thank you.


 



Saturday, 30 May 2020

Manual Switchover to Standby Database

We will see how to switch the database role from primary to standby and vice versa.

A switchover is the process of allowing the primary and standby databases to exchange roles with their respective standby/primary. Because there is no data loss during the role change, it is known as a switchover.

There are two main database roles:

              1)    Primary database
              2)    Standby database

Primary Database:-

           The database that services all active user connections from end users.

Standby Database:-

      The database connected to the primary; any work that happens on the primary database is reflected on the standby in the form of redo data. It is sometimes used for reporting purposes in read-only mode.

Before performing the switchover, the following things need to be considered:

       1)      The standby database needs to be in sync with the primary.
       2)      The network connection between primary and standby needs to be active.
       3)      Standby redo logs (SRLs) need to be configured on the primary side.
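As a sketch, checks 1 and 3 above can be verified with queries like the following (run the first on both sites to compare sequences):

```sql
-- 1) Sync check: compare the highest archived sequence on the primary
--    with the highest applied sequence on the standby
select thread#, max(sequence#) as last_seq
  from v$archived_log
 group by thread#;

select thread#, max(sequence#) as last_applied
  from v$archived_log
 where applied = 'YES'
 group by thread#;

-- 3) Standby redo logs configured on the primary
select group#, thread#, bytes/1024/1024 as size_mb, status
  from v$standby_log;
```

The network check (2) is usually done at the OS level, e.g. with tnsping against the standby service.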

Starting Manual Switchover

      1)      On Primary Database
      
          1.1) check switchover status

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
-----------------------------
TO STANDBY

Note:- The output will be either TO STANDBY or SESSIONS ACTIVE.

1.2)  Change the switchover status

SQL> Alter database commit to switchover to physical standby with session shutdown;
Database altered

SQL> shutdown immediate

SQL> startup nomount

SQL>alter database mount standby database;

       2)    On Standby Database
          
            2.1) check the switchover status

      SQL> select switchover_status from v$database;

      SWITCHOVER_STATUS
       --------------------
      TO PRIMARY

      2.2) Stop the Managed Recovery Process (MRP)

      SQL> alter database recover managed standby database cancel;

       2.3) Execute the below command to switch over the standby database to primary

       SQL> Alter database commit to switchover to primary;

       SQL> shut immediate

       SQL> startup

       2.4) Now check the database role; it should now be PRIMARY.

       SQL> select name,open_mode,database_role from v$database;

       NAME      OPEN_MODE   DATABASE_ROLE
       --------- ----------- ----------------
       TESTDB    READ WRITE  PRIMARY

     3)  Now, on the new standby database (the old primary), start recovery (MRP).

       SQL>alter database recover managed standby database disconnect from session;

        Media recovery completed







Thursday, 28 November 2019

MRP process getting terminated with error ORA-10485

ORA-10485: Real-Time Query cannot be enabled while applying migration redo.

Issue:- In a Data Guard environment, we applied the PSU + OJVM patch on the database; after this, we observed the below error on the physical standby database.

Sun Nov 24 12:14:55 2019
Media Recovery Log /u01/app/stdby1/FRA/stdby1/archivelog/2019_11_19/o1_mf_1_4682_gx77fcxz_.arc
MRP0: Background Media Recovery terminated with error 10485
Sun Nov 24 12:14:55 2019
Errors in file /u01/app/stdby1/oracle/diag/rdbms/stdby1/stdby1/trace/stdby1_pr00_59541.trc:
ORA-10485: Real-Time Query cannot be enabled while applying migration redo.
Managed Standby Recovery not using Real Time Apply
Sun Nov 24 12:14:56 2019
Completed: alter database recover managed standby database using current logfile disconnect
Recovery interrupted!
Block change tracking service stopping.
Stopping background process CTWR
Sun Nov 24 12:14:58 2019
MRP0: Background Media Recovery process shutdown (stdby1)

If you have a Data Guard environment and you apply the PSU + OJVM patch on the primary database, your physical standby database will throw the above error.

Cause:-

ORA-10485: Real-Time Query cannot be enabled while applying migration redo.
This is because your database is “open”, i.e. Active Data Guard (a licensed option), and the Managed Recovery Process is trying to apply the redo generated by datapatch, which it cannot do while the database is “open”.
The Real-Time Query feature was enabled when an attempt was made to recover through migration redo generated during primary upgrades or downgrades.

Solution:-

Start the database in mount mode and start applying the logs; once all the redo is applied, stop the redo apply and open the database with real-time apply.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 1.6034E+10 bytes
Fixed Size            2269072 bytes
Variable Size         7784628336 bytes
Database Buffers     8220835840 bytes
Redo Buffers           26480640 bytes

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

Database altered.

SQL> alter database recover managed standby database nodelay disconnect ;
Database altered.

Check that all logs get applied; there should be no gap between the primary and standby databases.

SQL> select process,client_process,status,thread#,sequence#,block#,blocks from v$managed_standby;

PROCESS   CLIENT_P   STATUS         THREAD#   SEQUENCE#   BLOCK#   BLOCKS
--------- ---------- -------------- --------- ----------- -------- --------
ARCH      ARCH       CLOSING        1         314027      1        53
ARCH      ARCH       CLOSING        1         314025      1        1237
ARCH      ARCH       CONNECTED      0         0           0        0
ARCH      ARCH       CLOSING        1         314026      2048     1880
RFS       UNKNOWN    IDLE           0         0           0        0
RFS       ARCH       IDLE           0         0           0        0
RFS       UNKNOWN    IDLE           0         0           0        0
RFS       LGWR       IDLE           1         314028      92       1
MRP0      N/A        WAIT_FOR_LOG   1         314028      0        0

   THREAD#   LAST SEQUENCE RECEIVED   LAST SEQUENCE APPLIED   DIFFERENCE
---------- ------------------------ ----------------------- ----------
         1                   314027                  314027           0
       
SQL> alter database recover managed standby database cancel;
Media recovery complete.

SQL> alter database open read only;

Database altered.
alert log:-
alter database recover managed standby database cancel
Sun Nov 24 12:45:31 2019
MRP0: Background Media Recovery cancelled with status 16037


SQL> alter database recover managed standby database using current logfile disconnect;
Database altered.


alert log:-

Sun Nov 24 12:45:43 2019
Attempt to start background Managed Standby Recovery process (stdby1)
Starting background process MRP0
Sun Nov 24 12:45:43 2019
MRP0 started with pid=31, OS id=68196
Sun Nov 24 12:45:43 2019
MRP0: Background Managed Standby Recovery process started (stdby1)
Sun Nov 24 12:45:48 2019
 Started logmerger process
Sun Nov 24 12:45:49 2019
Managed Standby Recovery starting Real Time Apply
Sun Nov 24 12:45:49 2019
Parallel Media Recovery started with 8 slaves

Thursday, 7 February 2019

ORACLE RAC STARTUP SEQUENCE


                                                
Now we are going to see the startup sequence of Oracle RAC and how it invokes the RAC daemons internally; let's start with it. We will look at how each and every process gets started and how they depend on each other.

Now we will see them one by one:
1)      INIT:-  
          The init daemon is the very first daemon to start at operating-system boot; it is the system and service manager for Oracle Linux or any other Linux flavor. These services are started and stopped through scripts stored in the /etc/init.d directory.
2)      OHASD:-
The Oracle High Availability Services daemon, introduced in Oracle 11g, manages all clusterware resources and is responsible for starting and monitoring all local Oracle Clusterware daemons. We can say it is the anchor for all the processes.
3)       OHASD orarootagent:-
                                      This process is responsible for starting the following processes:
1)      osysmond:- The system monitor service (osysmond) is the monitoring and operating-system metric collection service that sends data to the cluster logger service. The service runs on all nodes in the cluster and sends all metric data to the cluster logger service (ologgerd).
2)      ologgerd:- The cluster logger service receives the OS metric information collected by osysmond on all the nodes and keeps it in the Cluster Health Monitor (CHM) repository. There is only one cluster logger service (ologgerd) running in the entire cluster; another node is selected as standby for the master logger service. The master process manages the OS metric data in the CHM repository and maintains a replica of the metric information on the standby.
3)      crsd:- The Cluster Ready Services daemon (crsd) is the primary program for managing high-availability operations in the clusterware. It manages cluster resources based on the configuration information stored in the OCR for each resource. It is responsible for the start, stop, monitoring, and failover of resources; it maintains the OCR and also restarts resources when failures occur. This applies to RAC systems; for standalone configurations, OHASD is used instead.
4)      diskmon:- The disk monitor daemon runs continuously once ocssd starts. It monitors and performs I/O fencing for Exadata storage servers (a server is called a cell in Exadata). This process runs from the moment ocssd starts, because an Exadata cell can be added to any cluster at any time.
5)      ctssd:- The Cluster Time Synchronization Service process provides time synchronization for the cluster in the absence of ntpd. If ntpd is configured, ctssd runs in observer mode.
4)      OHASD oraagent:-
              The oraagent process is started by OHASD and is responsible for starting the following processes:
1)      gipcd:-
             The Grid Inter-Process Communication (GIPC) daemon is a support process that enables Redundant Interconnect Usage, which provides load balancing and failover across multiple private networks (up to 4). At the OS level it starts as grid/bin/gipcd.bin.
2)      mdnsd:-  
             Used by Grid Plug and Play to locate profiles in the cluster, as well as by GNS to perform name resolution. The mDNS process is a background process on Linux, UNIX, and Windows.
3)      evmd:- This process distributes and communicates cluster events to all of the cluster members so that they are aware of cluster changes. It starts evmlogger.bin, which reads the configuration files, determines what events to subscribe to from EVMD, and runs user-defined actions for those events.
4)      ASM:-
       Provides disk management for Oracle Clusterware and Oracle Database.
5)      gpnpd:-
         Provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.

5)      CRSD orarootagent:-
         The crsd process starts another orarootagent process and another oraagent process. The new orarootagent process is responsible for starting the following resources:
1)      Node VIP:-
        The node VIP is a node application (nodeapp) responsible for eliminating response delays (TCP timeouts) for client programs requesting connections to the database. Each node VIP is assigned an unused IP address, usually via DHCP, but it can also be assigned manually. There is one VIP per cluster node. When a cluster node becomes unreachable, its VIP fails over to a surviving node, and connections are redirected from the unreachable node to the surviving node.
2)      SCAN VIP:-
      Single Client Access Name (SCAN) is a feature introduced in Oracle Real Application Clusters (RAC) 11g Release 2 that provides a single name for clients to access an Oracle Database running in a cluster. SCAN eliminates the dependency on static cluster node names: clients using SCAN do not need to change their configuration if you add or remove nodes in the cluster. Three SCAN VIPs are started on the member nodes, using IP addresses assigned by the DHCP server.
3)      Network resource:-
                  Whatever network resources are required by the cluster are started.
4)      GNS VIP:-
      The GNS virtual IP address is a static IP address configured in the DNS. GNS is used to resolve client requests for the cluster: the DNS delegates queries to the GNS virtual IP address, and the GNS daemon responds to incoming name-resolution requests at that address. Within the subdomain, GNS uses multicast Domain Name Service (mDNS), included with Oracle Clusterware, to enable the cluster to map host names and IP addresses dynamically as nodes are added and removed, without requiring additional host configuration in the DNS.
6)      CRSD oraagent:-
      The new oraagent process is responsible for starting the following resources:
1)      ONS:-
               The Oracle Notification Service publishes notifications for Fast Application Notification (FAN) events.
2)      ASM Instance:-
               An ASM instance is started to provide disk management for the cluster.
3)      SCAN Listener:-
4)      Node Listener:-
               5)   Database Instance
What is the GPnP profile?
                 The GPnP profile is a small XML file located in GRID_HOME/gpnp/<hostname>/profiles/peer under the name profile.xml. It is used to establish the correct global personality of a node. Each node maintains a local copy of the GPnP profile, which is maintained by the GPnP daemon (gpnpd). The GPnP profile stores information required for the startup of Oracle Clusterware, such as the SPFILE location and the ASM discovery string, and contains various attributes defining the node personality:
Cluster name
Network classifications (public/private)
Storage to be used for ASM: SPFILE location, ASM DiskString, etc.
Digital signature information: the profile is security sensitive. It might identify the storage to be used as the root partition of a machine; hence, it contains digital signature information of the provisioning authority.
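On a running cluster, the startup pieces described above can be inspected with the standard clusterware tools; a quick sketch (run as the Grid Infrastructure owner with the grid home binaries in PATH):

```shell
# Resources started by OHASD itself (the lower stack: cssd, ctssd, asm, crsd, ...)
crsctl stat res -t -init

# Cluster resources managed by CRSD (VIPs, SCAN, listeners, databases)
crsctl stat res -t

# Dump the local GPnP profile (profile.xml)
gpnptool get
```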

Sunday, 25 March 2018

Schema Refresh Using DataPump


Schema refresh is a routine task in a DBA's life. To achieve this I am going to use Data Pump (expdp/impdp), a technology introduced in the Oracle 10g release that is faster than the traditional export/import. Using expdp/impdp requires a user that has the EXP_FULL_DATABASE role assigned, or it can be done with the SYS user.

Below explained the steps to perform schema refresh from SOURCE DATABASE to TARGET DATABASE.

Source Database:-

1) Check that the schema you want to export exists.

SQL> select USERNAME,ACCOUNT_STATUS,CREATED from dba_users where username='<YOUR_USERNAME>';

2) Check the size of the schema

SQL> SELECT sum(bytes)/1024/1024/1024 AS total_size_GB FROM  dba_segments WHERE owner='<YOUR_SCHEMA_NAME>';


3) Take a count of the objects present in the database schema.

SQL> select object_type,status,count(*)  from dba_objects where owner='<YOUR_SCHEMA_NAME>' group by object_type,status;


4) Create an OS-level as well as a database-level directory where your data is going to be stored.

OS LEVEL:-
mkdir -p /u01/<Directory_name>

DB Level:-

SQL> create or replace directory DATAPUMP as '/u01/<Directory_name>';


5) Check the space on the mount point where your export data is going to be stored, and make space according to that.

To check space on the mount point:

on a Linux operating system use df -h <mount_point>

on an AIX operating system use df -gt <mount_point>

6) Take export of the schema

nohup expdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &

The export will run in the background. To check its status:

ps -ef | grep expdp

or check the logs for status in log file.

After completion of the export, copy the dump file to the target database server (if the target database is on the same server, there is no need to copy). To copy from the source server to the target server, use the scp OS command or the WinSCP tool.

Below is the scp command:

scp -p <file_to_copy> username@servername:<location_where_to_copy>   (for servername, give either the IP address or the name of the server)

This will ask for the target server password; enter the target server password.

That's it on source database.



TARGET DATABASE:-


1) Check the mount point size on the target database; it should be more than the schema size.

2) Create OS-level as well as database-level directories, the same as on the source database.

OS LEVEL:- mkdir -p /u01/<Directory_name>

DB Level:-

SQL> create or replace directory DATAPUMP as '/u01/<Directory_name>';

Note: before importing the data, take an export of the target database schema as a backup.

nohup expdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &


3) Check object count in target database.

SQL> select object_type,status,count(*)  from dba_objects where owner='<YOUR_SCHEMA_NAME>' group by object_type,status;

4) Now drop the schema / objects.

SQL> drop user <SCHEMA_NAME> cascade;


NOTE:- It is better to drop the objects instead of dropping the whole schema.

set lines 200
set pages 500

Drop:- Tables, Indexes
SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('TABLE');

Drop:-

Sequences,Views, Packages,Functions & Procedures,Synonyms, & Triggers

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');

Drop:- Types

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '<SCHEMA-NAME>'
AND OBJECT_TYPE IN ('TYPE');

5) Check the object count again; if it's zero then proceed with the import.

Check the count with the below query:

select owner,object_type,count(*)
 from dba_objects
 where owner in ('<SCHEMA_NAME>')
 group by owner,object_type;


6) Now import the dump file into the target database.

nohup impdp \'/ as sysdba\' dumpfile=<SCHEMANAME_DBNAME>.dmp logfile=<SCHEMANAME_DBNAME>.log directory=DATAPUMP Schemas=<SCHEMA_NAME> &

Monitor the import job in the log file, or at the OS level using

ps -ef | grep impdp

7) After the import has completed, check whether any invalid objects are present.

select owner, object_type, status, count(*)
from dba_objects
where status = 'INVALID'
group by owner, object_type, status;

If you find any invalid objects, recompile them:

SQL> @?/rdbms/admin/utlrp.sql

Hope this will help you.







Tuesday, 8 August 2017

Analyze AWR Report




To analyze a performance problem in the database, we need to check the following sections of the AWR report.

1) Session Information:-

We can find the session information at the top of the AWR report.

Check the Begin Snap and End Snap times. If there is a high number of sessions, we need to check the shadow processes of the sessions, or check whether shadow processes are being created. The most likely cause is an application start-up issue, which spawns all those sessions.

2) Load profile:-
It shows per-second and per-transaction statistics, and database load indicators such as hard parses and the number of transactions.

While looking at the Load Profile, the first important thing to look at is “DB CPU” per second. But we need to understand what “DB CPU” is: suppose we have 8 cores in the system, then we have 8 CPU seconds available per second of wall-clock time. If the report shows DB CPU per second greater than the number of cores in the host configuration, our environment is CPU bound, so we need to add CPU to the environment, or check further whether this happens all the time or just a fraction of it.
Next we need to look at the parses and hard parses. If the ratio of hard parses to parses is high, it means the database is performing more hard parsing. In that case we need to check parameters like cursor_sharing and the use of bind variables at the application level.
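The same parse statistics can be checked live, outside AWR; as a sketch:

```sql
-- Hard parses as a percentage of total parses since instance startup;
-- a high value points at missing bind variables / cursor_sharing issues
select round(100 * hard.value / total.value, 2) as hard_parse_pct
  from v$sysstat hard,
       v$sysstat total
 where hard.name  = 'parse count (hard)'
   and total.name = 'parse count (total)';
```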

3) Instance Efficiency Percentages:-
This section shows several hit ratios as well as the Execute to Parse percentage and latch hit percentage. The Execute to Parse percentage should be very high in a well-running instance.
A low value for the “% SQL with executions>1” statistic means the instance is not re-using shared SQL statements, because the SQL is not using bind variables.

Now we will check the meaning of each particular hit ratio:-

a) Buffer Nowait %:- shows the percentage of times data buffers were accessed directly without any wait time.

b) Buffer Hit %:- the percentage of times a required block was found in memory rather than having to execute an expensive read operation on disk to get the block.

c) Library Hit %:- shows how many times a SQL statement or PL/SQL package was found in the shared pool memory cache.

d) Execute to Parse %:- how frequently SQL statements are reused without re-parsing.

e) Parse CPU to Parse Elapsd %:- gives the ratio of CPU time spent parsing SQL statements to elapsed parse time.
f) Redo NoWait %:- shows whether the redo log buffer has sufficient size.
g) In-memory Sort %:- shows the percentage of sorts performed in memory instead of using the temporary tablespace.
h) Soft Parse %:- shows how often a SQL statement issued by a session was already in the shared pool, so an existing cursor could be re-used.
i) Latch Hit %:- shows how often latches were acquired without having to wait.

j) % Non-Parse CPU:- shows what percentage of CPU resources was spent on actual SQL execution.
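For reference, the Buffer Hit ratio above is derived from v$sysstat counters roughly as follows; this is a sketch computing the since-startup value, whereas AWR uses the delta between the two snapshots:

```sql
-- Buffer cache hit ratio: logical reads satisfied without a physical read
select round(100 * (1 - phy.value / (db.value + con.value)), 2) as buffer_hit_pct
  from v$sysstat phy,
       v$sysstat db,
       v$sysstat con
 where phy.name = 'physical reads cache'
   and db.name  = 'db block gets from cache'
   and con.name = 'consistent gets from cache';
```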


In these statistics we need to look at:

Soft Parse %:- 100% shows that SQL statements are actively re-used.

% Non-Parse CPU:- 100% means CPU resources are used in operations other than parsing, which is good for the database.

Parse CPU to Parse Elapsed %:- if this is very low, it indicates that Oracle is waiting for resources during the parsing of SQL statements; if so, it needs to be investigated further.

4) TOP FIVE FOREGROUND EVENTS

This section describes the events that were responsible for the most waits during the snapshot interval. This part of the report is very critical, because these events point to system bottlenecks; the wait events are ordered by total wait time.

a) DB File Sequential Read:-
This event signifies that the user process is reading a buffer into the SGA buffer cache and is waiting for a physical I/O call to return. A sequential read is a single-block read.

Single block I/Os are usually the result of using indexes. Rarely, full table scan calls could get truncated to a single block call due to extent boundaries, or buffers already present in the buffer cache. These waits would also show up as 'db file sequential read'.

b) DB CPU:-

DB CPU is Oracle server/foreground/shadow process CPU consumption. Each Oracle server process gathers its own CPU consumption using the time and/or getrusage C function system call. So unless there is a major screw-up by either the operating system or the Oracle kernel developers, the time will be good... very good. The name DB CPU is taken from the actual statistic name, which is found in both v$sess_time_model and v$sys_time_model.
If you look at any AWR or Statspack report in the "Time Model" section, you will see DB CPU.
c) DB FILE SCATTERED READ :-

This event signifies that the user process is reading buffers into the SGA buffer cache and is waiting for a physical I/O call to return. A db file scattered read issues a scattered read to read the data into multiple discontiguous memory locations in the buffer cache. A scattered read is usually a multiblock read. It can occur for a fast full scan (of an index) in addition to a full table scan.

The db file scattered read wait event identifies that a full scan is occurring. When performing a full scan into the buffer cache, the blocks read are read into memory locations that are not physically adjacent to each other. Such reads are called scattered read calls, because the blocks are scattered throughout memory; this is why the corresponding wait event is called 'db file scattered read'. Multiblock reads (up to DB_FILE_MULTIBLOCK_READ_COUNT blocks) due to full scans into the buffer cache show up as waits for 'db file scattered read'.



d) Direct Path Read:-

The session is waiting for a direct read to complete. A direct read is a physical I/O from a data file that bypasses the buffer cache and reads the data block directly into process-private memory.

If asynchronous I/O is supported (and in use), then Oracle can submit I/O requests and continue processing. Oracle can then pick up the results of the I/O request later and wait on "direct path read" until the required I/O completes.

If asynchronous I/O is not being used, then the I/O requests block until completed, but these do not show as waits at the time the I/O is issued. The session returns later to pick up the completed I/O data but can then show a wait on "direct path read" even though this wait will return immediately. Hence this wait event is very misleading because:
• The total number of waits does not reflect the number of I/O requests.
• The total time spent in "direct path read" does not always reflect the true wait time.

e) Control File Sequential Read:-
The control file sequential read metric indicates that the process is waiting for a block to be read from a control file.
This happens in many cases, for example:-

i) Making a backup of the controlfile.
ii) Sharing information (between instances) from the controlfile.
iii) Reading other blocks from the controlfile.
iv) Reading the header block.

Control file sequential read waits occur when control files are placed on the same physical disk, causing read-write head contention at the disk level.
5) TIME MODEL STATISTICS

This section gives details about how the database spends its time. Ideally, most of the processing time should be spent on actual SQL execution, not on parsing.
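The same breakdown is available live from the time model view; as a sketch:

```sql
-- Where the database spends its time, biggest consumers first;
-- 'DB CPU' and 'sql execute elapsed time' should dominate a healthy system
select stat_name, round(value / 1e6, 2) as seconds
  from v$sys_time_model
 order by value desc;
```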

6) Operating System Statistics:-

This shows the operating system load status.

REF:- Oracle12c-Performance-Tuning-Recipes