Tuesday, October 30, 2012

Apps DBA Scripts

1. Given a concurrent program name, list all the concurrent request sets that include that program:

SELECT DISTINCT user_request_set_name
  FROM FND_REQUEST_SETS_TL
 WHERE request_set_id IN
          (SELECT request_set_id
             FROM FND_REQUEST_SET_PROGRAMS
            WHERE concurrent_program_id =
                     (SELECT CONCURRENT_PROGRAM_ID
                        FROM fnd_concurrent_programs_tl
                       WHERE upper(USER_CONCURRENT_PROGRAM_NAME) = upper( '&Enter_Prog_name')));

2. Given a concurrent request set name, list all the concurrent programs it contains:


SELECT USER_CONCURRENT_PROGRAM_NAME
  FROM fnd_concurrent_programs_tl
 WHERE CONCURRENT_PROGRAM_ID IN
          (SELECT concurrent_program_id
             FROM FND_REQUEST_SET_PROGRAMS
            WHERE request_set_id =
                     (SELECT request_set_id
                        FROM FND_REQUEST_SETS_TL
                       WHERE upper(user_request_set_name) = upper('&Request_Set_Name')));

3. Find inactive Forms sessions older than X hours:



set lines 200 pages 200
col event for a30

select a.sid,a.serial#,a.action,a.status,a.event,round(a.last_call_Et/60/60) LSTCALL_Hrs,
       round(a.seconds_in_wait/60/60) SCNDINWAIT_Hrs,
       a.process, b.spid
  from v$session a, v$process b
 where a.action like 'FRM%'
   and a.paddr = b.addr
   and a.last_call_Et/60/60 > &&No_Of_Hours_Old order by 6,5;

4. List the application names (products) registered in Oracle Apps:


set lines 200
set pagesize 300
col APPLICATION_NAME for a70
select a.APPLICATION_NAME, b.APPLICATION_SHORT_NAME
  from apps.fnd_application b, apps.fnd_application_tl a
 where a.APPLICATION_ID = b.APPLICATION_ID;

5. List which concurrent programs are assigned to which concurrent managers:


set lines 200 pages 300

col USER_CONCURRENT_QUEUE_NAME for a50
col CONCURRENT_PROGRAM_NAME for a50

break on USER_CONCURRENT_QUEUE_NAME skip 1;
SELECT C.USER_CONCURRENT_QUEUE_NAME,B.CONCURRENT_PROGRAM_NAME,A.INCLUDE_FLAG
FROM APPLSYS.FND_CONCURRENT_QUEUE_CONTENT A, APPLSYS.FND_CONCURRENT_PROGRAMS B, APPS.FND_CONCURRENT_QUEUES_VL C
WHERE type_id = b.concurrent_program_id and c.concurrent_queue_id = a.concurrent_queue_id order by C.USER_CONCURRENT_QUEUE_NAME;

6. Find the session details of a concurrent request:


set lines 200 pages 300
col USER_CONCURRENT_PROGRAM_NAME for a40

select USER_CONCURRENT_PROGRAM_NAME,c.request_id,s.sid,p.spid,s.process,s.osuser,s.username,s.program,s.status,logon_time,last_call_et
from v$session s, v$process p, apps.fnd_concurrent_requests c,apps.fnd_concurrent_programs_tl ct
where oracle_process_id=p.spid
and s.paddr=p.addr and
ct.concurrent_program_id=c.concurrent_program_id
and request_id=&creq_id
/

7. List all concurrent queues/managers with their minimum and maximum processes:

set lines 200 pages 300
select a.concurrent_queue_name, b.min_processes, b.max_processes
  from apps.fnd_concurrent_queues a, apps.fnd_concurrent_queue_size b
 where a.concurrent_queue_id = b.concurrent_queue_id;

8. Find the responsibilities from which a concurrent program can be run:

set lines 200 pages 300
SELECT DISTINCT
  FCPL.USER_CONCURRENT_PROGRAM_NAME
, FCP.CONCURRENT_PROGRAM_NAME
, FAPP.APPLICATION_NAME
, FRG.REQUEST_GROUP_NAME
, FNRTL.RESPONSIBILITY_NAME
FROM
  APPS.FND_REQUEST_GROUPS FRG
, APPS.FND_APPLICATION_TL FAPP
, APPS.FND_REQUEST_GROUP_UNITS FRGU
, APPS.FND_CONCURRENT_PROGRAMS FCP
, APPS.FND_CONCURRENT_PROGRAMS_TL FCPL
, APPS.FND_RESPONSIBILITY FNR
, APPS.FND_RESPONSIBILITY_TL FNRTL
WHERE FRG.APPLICATION_ID=fapp.APPLICATION_ID
AND FRG.APPLICATION_ID = FRGU.APPLICATION_ID
AND FRG.REQUEST_GROUP_ID = FRGU.REQUEST_GROUP_ID
AND FRG.REQUEST_GROUP_ID = FNR.REQUEST_GROUP_ID
AND FRG.APPLICATION_ID = FNR.APPLICATION_ID
AND FNR.RESPONSIBILITY_ID = FNRTL.RESPONSIBILITY_ID
AND FRGU.REQUEST_UNIT_ID = FCP.CONCURRENT_PROGRAM_ID
AND FRGU.UNIT_APPLICATION_ID = FCP.APPLICATION_ID
AND FCP.CONCURRENT_PROGRAM_ID = FCPL.CONCURRENT_PROGRAM_ID
AND FCPL.USER_CONCURRENT_PROGRAM_NAME LIKE '%&Conc_Prog_name%'
AND FNRTL.LANGUAGE = 'US'
AND FAPP.LANGUAGE = 'US';

9. Find currently running concurrent requests with their OS PID, SID, serial#, etc.:

SELECT a.request_id, d.sid, d.serial# , c.SPID
    FROM apps.fnd_concurrent_requests a,
    apps.fnd_concurrent_processes b,
    v$process c,
    v$session d
    WHERE a.controlling_manager = b.concurrent_process_id
    AND c.pid = b.oracle_process_id
    AND b.session_id=d.audsid
    AND a.phase_code = 'R' and status_code='R';

Sunday, October 07, 2012

Simple PCP configuration


Simple PCP configuration in a non-RAC environment

1. This assumes your application tier file system is shared across all internal middle tiers.
2. This assumes you have two concurrent manager (CM) nodes.
3. Stop the concurrent managers on both CM nodes.
4. Apply patch 9074947 on the application tier.
5. Set APPLDCP to ON in the context file on both CM nodes.
6. Run AutoConfig on both CM nodes.
7. Verify that tnsnames.ora on both CM nodes has FNDFS entries for both nodes.
8. Ensure the Internal Monitor on each CM node is defined properly and has workshifts assigned to it.
9. Make sure the Internal Monitor managers of both nodes are activated. If not, activate them from Concurrent -> Manager -> Administer.
10. Use the following query to check your concurrent manager details.

SELECT distinct C.USER_CONCURRENT_QUEUE_NAME, C.MAX_PROCESSES, C.RUNNING_PROCESSES, C.TARGET_PROCESSES, C.NODE_NAME, C.NODE_NAME2
FROM APPLSYS.FND_CONCURRENT_QUEUE_CONTENT A, APPLSYS.FND_CONCURRENT_PROGRAMS B, APPS.FND_CONCURRENT_QUEUES_VL C
WHERE type_id = b.concurrent_program_id and c.concurrent_queue_id = a.concurrent_queue_id;

11. Now define primary and secondary nodes for each concurrent manager from Concurrent -> Manager -> Define, as per your configuration.
12. Bring up the concurrent managers on both CM nodes.
13. Test the concurrent managers by submitting two unique concurrent programs that should run on managers on each of the two nodes.
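To verify the setup, the following sketch of a query (not from the referenced note) shows which active manager processes are running on which node; the 'A' status code is assumed here to mean an active process:

-- Sketch: list active concurrent manager processes per node
SELECT q.concurrent_queue_name,
       p.node_name,
       p.oracle_process_id,
       p.process_status_code
  FROM apps.fnd_concurrent_queues q, apps.fnd_concurrent_processes p
 WHERE p.concurrent_queue_id  = q.concurrent_queue_id
   AND p.queue_application_id = q.application_id
   AND p.process_status_code  = 'A'   -- 'A' assumed to mean 'Active'
 ORDER BY q.concurrent_queue_name, p.node_name;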

Failover Testing

Test Case #1

1. Bring down the Apps listener on CM node 2 using kill -9.
2. Kill all FNDLIBR processes on CM node 2 using kill -9.
3. Start the Apps listener on CM node 2 after 5 minutes.
4. Monitor CM node 2; all of its managers should come back up automatically.

Test Case #2

1. Bring down the Apps listener on CM node 2 using kill -9.
2. Start the managers on CM1; the managers of CM2 should start on CM node 1.
3. Bring up the Apps listener on CM2.
4. The managers of CM2 should fall back from CM1 to CM2.

Test Case #3

1. Shut down the host of CM node 2 completely.
2. All the managers of CM2 should fail over to CM1.
3. Bring up the host of CM node 2.
4. Bring up the Apps listener on CM node 2.
5. All the managers of CM2 should fall back from CM1 to CM2.
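While running these test cases, a simple way to watch the failover and fallback is to poll the node assignments of the queues. The query below is a sketch; it assumes the TARGET_NODE column reflects the node a manager is currently targeted to run on:

-- Sketch: watch primary/secondary node assignment and current target during failover
SELECT concurrent_queue_name,
       node_name, node_name2, target_node,
       max_processes, running_processes
  FROM apps.fnd_concurrent_queues
 ORDER BY concurrent_queue_name;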

Oracle Note id : How to Setup and Test Failover of PCP on Non-RAC Environments. [ID 743716.1]

Thursday, February 23, 2012

sys.utl_recomp.recomp_parallel(20); is failing with "ORA-00955: name is already used by an existing object"

Issue
The utl_recomp package errors out trying to compile invalids


SQL> exec sys.utl_recomp.recomp_parallel(20);


BEGIN SYS.UTL_RECOMP.RECOMP_PARALLEL;END;
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.UTL_RECOMP", line 662
ORA-06512: at "SYS.UTL_RECOMP", line 715
ORA-06512: at line 1


Cause
This is Bug 9881143, where an index is left behind by a previous failed execution of UTL_RECOMP. The following commands can be used to generate a trace file to verify:

alter session set events '955 trace name errorstack level 3';
exec sys.utl_recomp.recomp_parallel(20);

Looking in the trace file will show the problem index.

*** 2010-12-16 12:42:24.026
ksedmp: internal or fatal error
ORA-00955: name is already used by an existing object
Current SQL statement for this session:
CREATE INDEX utl_recomp_comp_idx1 ON utl_recomp_compiled(obj#) NOPARALLEL

Solution

Drop the index utl_recomp_comp_idx1 left by utl_recomp and then rerun the package.

sqlplus / as sysdba
drop index utl_recomp_comp_idx1;
exec sys.utl_recomp.recomp_parallel(20);
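To confirm the recompilation completed, the remaining invalid objects can be counted afterwards; this is a generic sketch, not part of the referenced note:

-- Sketch: count remaining invalid objects per owner and type
SELECT owner, object_type, COUNT(*) AS invalid_count
  FROM dba_objects
 WHERE status = 'INVALID'
 GROUP BY owner, object_type
 ORDER BY owner, object_type;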

Bug:9881143 is fixed in 11.1.0.6.


Ref:Error Running Utl_Recomp.Recomp_Serial [ID 1274504.1]

Friday, February 03, 2012

How to Create The Service Manager ‘FNDSM’ on Oracle Applications


Run the command below to check whether the node is registered for FNDSM:

select CONCURRENT_QUEUE_NAME from FND_CONCURRENT_QUEUES where CONCURRENT_QUEUE_NAME like 'FNDSM%';

If not, then use Note 218893.1: How to Create The Service Manager 'FNDSM' on Oracle Applications.

Solution

From the Application tier:

1. Log in as applmgr.
2. cd to $FND_TOP/patch/115/sql.
3. Run the script afdcm037.sql.
4. This script creates libraries for FNDSM and creates managers for pre-existing nodes.
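After running the script, a verification query such as the sketch below (not part of the note) can confirm that an FNDSM queue now exists for each node:

-- Sketch: confirm a service manager queue exists per node
SELECT concurrent_queue_name, target_node, enabled_flag
  FROM apps.fnd_concurrent_queues
 WHERE concurrent_queue_name LIKE 'FNDSM%';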

Note: The Service Manager "FNDSM" cannot be created from the form
Concurrent > Manager > Define under the Sysadmin responsibility.

Reference <bug:6085070 FNDSM TRIGGER CAUSES SERVICE MANAGER NOT TO BE CREATED AFTER CLONING SINGLE NODE>

Wednesday, February 01, 2012

RAC Wait events

Important RAC wait events

SQL> select event from v$system_event  where event like '%global%' order by event;

EVENT
----------------------------------------------------------------
buffer busy global CR
buffer busy global cache
ges global resource directory to be frozen - no
ges global resource directory to be unfrozen - no
global cache busy
global cache cr request
global cache domain validation - no
global cache null to s
global cache null to x
global cache open s
global cache open x
global cache s to x

buffer busy global cache

This wait event falls under the umbrella of ‘global buffer busy events’. This wait event occurs when a user is waiting for a block that is currently held by another session on the same instance and the blocking session is itself waiting on a global cache transfer.

buffer busy global CR

This wait event falls under the umbrella of ‘global buffer busy events’. This wait event occurs when multiple CR requests for the same block are submitted from the same instance before the first request completes; subsequent users queue up behind it.

global cache busy

This wait event falls under the umbrella of ‘global buffer busy events’. This wait event means that a user on the local instance attempts to acquire a block globally and a pending acquisition or release is already in progress.

global cache cr request

This wait event falls under the umbrella of ‘global cache events’. This wait event indicates that an instance has requested a consistent-read version of a block from another instance and is waiting for the block to arrive.

global cache null to s and global cache null to x

This wait event falls under the umbrella of ‘global cache events’. These events are waited for when a block was used by an instance, transferred to another instance, and then requested back again.

global cache open s and global cache open x

This wait event falls under the umbrella of ‘global cache events’. These events are used when an instance has to read a block from disk into cache because the block does not exist in any instance's cache. High values on these waits may be indicative of a small buffer cache, so you may see a low buffer cache hit ratio at the same time as these wait events.

global cache s to x

This wait event falls under the umbrella of ‘global cache events’. This event occurs when a session converts a block from shared to exclusive mode.
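To see which sessions are currently stuck on these events, a quick sketch against gv$session_wait can be used (for global cache events, P1 and P2 are typically the file# and block#):

-- Sketch: sessions currently waiting on global cache events, longest waits first
SELECT inst_id, sid, event, p1 AS file#, p2 AS block#, seconds_in_wait
  FROM gv$session_wait
 WHERE event LIKE 'global cache%'
 ORDER BY seconds_in_wait DESC;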


To find locks in RAC
SELECT inst_id,DECODE(request,0,'Holder: ','Waiter: ')||sid sess,
id1, id2, lmode, request, type
FROM GV$LOCK
WHERE (id1, id2, type) IN
(SELECT id1, id2, type FROM gV$LOCK WHERE request>0)
ORDER BY id1, request
;

Some more about RAC in detail:

Cluster commands in 10g and 11g

1. CRSCTL: The Cluster Control utility performs various administrative operations on Oracle Clusterware. It is located in $ORA_CRS_HOME/bin and must be executed by the "root" user.

a. To check the current state of all Oracle Clusterware daemons:

[root@PROD1 bin]# ./crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy

b. You can also check the state of an individual Oracle Clusterware daemon:

[root@PROD1 bin]# ./crsctl check cssd

CSS appears healthy

[root@PROD1 bin]# ./crsctl check crsd

CRS appears healthy

[root@PROD1 bin]# ./crsctl check evmd
EVM appears healthy

c. To start oracle clusterware

[root@PROD1 bin]# ./crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly

d. To stop oracle clusterware

[root@PROD1 bin]# ./crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.

e. To disable oracle clusterware:

[root@PROD1 bin]# ./crsctl disable crs

f. To enable oracle clusterware:

[root@PROD1 bin]# ./crsctl enable crs

g. To get current value of CSS parameter

[root@PROD1 bin]# ./crsctl get css

for example: to get value of misscount parameter

[root@PROD1 bin]# ./crsctl get css misscount
60

h. To set a new value of CSS parameter

[root@PROD1 bin]# ./crsctl set css

for example: to set value of misscount parameter

[root@PROD1 bin]# ./crsctl set css misscount 120
Configuration parameter misscount is now set to 120.

i. To unset CSS parameter value

[root@PROD1 bin]# ./crsctl unset css

for example: to unset value of misscount parameter

[root@PROD1 bin]# ./crsctl unset css misscount
Configuration parameter misscount is now undefined.

j. To list the module for debugging in CSS

[root@PROD2 bin]# ./crsctl lsmodules css
The following are the CSS modules ::
CSSD
COMMCRS
COMMNS
2. CRS_STAT: It reports the current state of resources configured in the OCR.

[oracle@rac1 bin]$ ./crs_stat -t
Name Type Target State Host
-----------------------------------------------------------------------
ora….C1.inst application ONLINE ONLINE PROD1
ora….C2.inst application ONLINE ONLINE PROD2
ora….AC1.srv application ONLINE ONLINE PROD1
ora.RAC.abc.cs application ONLINE ONLINE PROD1
ora.RAC.db application ONLINE ONLINE PROD2
ora….AC1.srv application ONLINE ONLINE PROD1
ora….ice2.cs application ONLINE ONLINE PROD1
ora….AC1.srv application ONLINE ONLINE PROD1


3. OCRDUMP : It dumps the contents of OCR into a text file.

[root@PROD1 bin]# ./ocrdump /home/oracle/ocr.dmp

4. OCRCHECK : It verifies the integrity of the OCR.

[root@PROD2 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 5237072
Used space (kbytes) : 9360
Available space (kbytes) : 5227712
ID : 794527192
Device/File Name : /u01/oracle/oradata/ocr
Device/File integrity check succeeded

Cluster registry integrity check succeeded

5. OCRCONFIG: It performs various administrative operations on the OCR.

Interconnect setup in RAC

The interconnect is a very important part of the cluster environment; it is one of the arteries of a cluster. The interconnect is used as the physical layer between the cluster nodes for heartbeats, and Cache Fusion uses it as well. The interconnect must be a private connection; crossover cables are not supported.

In day-to-day operation, experience shows that a correctly configured interconnect will not be the bottleneck when performance issues arise. The rest of this article focuses on how to validate that the interconnect is really being used. A DBA must be able to validate the interconnect settings in case of performance problems. The physical attachment of the interconnect is out of scope.

Although you should treat performance issues in a cluster environment the same way you would in a non-cluster environment, here are some areas you can focus on. Normally the average interconnect latency over Gigabit Ethernet should be < 5 ms; a latency of around 2 ms is normal.
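One way to estimate the average interconnect latency from inside the database is sketched below. It assumes the 10g/11g statistic names ('gc cr blocks received' / 'gc cr block receive time', recorded in centiseconds), so the multiplier of 10 converts the result to milliseconds:

-- Sketch: approximate average CR block receive time (ms) per instance
SELECT t.inst_id,
       ROUND(10 * t.value / NULLIF(b.value, 0), 2) AS avg_cr_receive_ms
  FROM gv$sysstat t, gv$sysstat b
 WHERE t.name = 'gc cr block receive time'
   AND b.name = 'gc cr blocks received'
   AND t.inst_id = b.inst_id
 ORDER BY t.inst_id;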

10g and 11g
set linesize 120
col name for a22
col ip_address for a15

select inst_id,name,ip_address,is_public from gv$configured_interconnects order by 1,2;


INST_ID NAME IP_ADDRESS IS_PUBLIC
---------- ---------------------- --------------- ---------
1 en11 145.72.220.10 YES
1 en12 145.72.220.83 NO
2 en11 145.72.220.20 YES
2 en12 145.72.220.84 NO


Which interface is used for the interconnect?
The query above lists the available interfaces, but which interface is actually used for the Cache Fusion traffic? And how do you make sure the interconnect is using the correct one?
set linesize 120
col name for a22
col ip_address for a15
select inst_id,name,ip_address,is_public from gv$cluster_interconnects order by 1,2;

INST_ID NAME IP_ADDRESS IS_PUBLIC
---------- ---------------------- --------------- ---------
1 en12 145.72.220.83 NO
2 en12 145.72.220.84 NO


9i, 10g and 11g: Oradebug
Before the above queries were available (10g and 11g) you needed to use oradebug to validate whether the correct interface was being used. This was the way to validate interconnect usage in Oracle 9i, but it is still possible in 10g and 11g as well.

SQL> connect / as sysdba
SQL> alter session set tracefile_identifier=oradebug_interc;
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> exit

Now open the resulting trace file in the bdump/udump location; the output of the oradebug ipc command contains the IP address used for the interconnect.

INSTANCE FAILURE AND RECOVERY (RAC)

Oracle uses a simple heartbeat mechanism to detect instance failure of a node in a RAC cluster. The cluster manager software detects when it can no longer receive the heartbeat of another instance, and if so the recovery process begins. From what I can find, this heartbeat is provided by the udlm package on Sun and by MC/ServiceGuard on HP.

Fundamentally, as all instances in a cluster can read all threads of redo, another surviving member performs recovery on behalf of the failed instance. This comprises two main steps. First, the Global Resource Directory must be remastered so that all resources that were mastered on the failed node are remastered to the surviving nodes; this remastering is performed by LMON. Second, the redo thread of the failed node is read so that changes to blocks on the failed node that were not written to disk (dirty blocks) can be applied to the datafiles.

The RAC Concepts Guide outlines the following steps that are followed after node failure has occurred:

1. During the first phase of recovery, which is the GES reconfiguration, Oracle first reconfigures the GES enqueues. Then Oracle reconfigures the GCS resources. During this time, all GCS resource requests and write requests are temporarily suspended. However, processes and transactions can continue to modify data blocks as long as these processes and transactions have already acquired the necessary enqueues.

2. After the reconfiguration of enqueues that the GES controlled, a log read and the remastering of GCS resources occur in parallel. At the end of this step the block resources that need to be recovered have been identified. Only the redo of the dead instances must be read.

3. Buffer space for recovery is allocated and the resources that were identified in the previous reading of the log are claimed as recovery resources. Then, assuming that there are PIs of blocks to be recovered in other caches in the cluster database, resource buffers are requested from other instances. The resource buffers are the starting point of recovery for a particular block.

4. All resources and enqueues required for subsequent processing have been acquired and the Global Resource Directory is now unfrozen. Any data blocks that are not in recovery can now be accessed. Note that the system is already partially available.

5. The cache layer recovers and writes each block identified in step 2, releasing the recovery resources immediately after block recovery so that more blocks become available as cache recovery proceeds.

6. After all blocks have been recovered and the recovery resources have been released, the system is again fully available. Recovered blocks are available after recovery completes.

If all instances in a cluster fail then crash recovery is performed by the first instance in the cluster to start, just as in single instance.

CONCEPTS AND FUNCTIONAL OVERVIEW 

RAC is the ability to have two or more instances connect to the same database. These instances reside on separate servers, thereby allowing each instance to make full use of the processing ability of each server. As these instances access the same database, they need to be able to communicate with each other; this is done through the physical interconnects that join the servers together. All the servers in the configuration make up the cluster.

All instances in a cluster share access to common database resources, and this access needs to be coordinated between the instances in order to maintain the overall integrity of the database. To coordinate this access, RAC databases have a Global Resource Directory. This Global Resource Directory is RAC specific and is not required on single-instance systems.

Global Resource Directory(GRD)
The Global Resource Directory provides an extra layer of control that allows all instances in a cluster to share access to shared database resources. The Global Resource Directory resides in the SGA of each instance in the cluster, with each instance maintaining its own portion. The main role of the Global Resource Directory is to ensure that access and changes to common resources are controlled between the instances in order to maintain the integrity of the database.

The Global Enqueue Service (GES) and the Global Cache Service (GCS) maintain the information in the Global Resource Directory. Although the Global Resource Directory is split amongst all instances in the cluster, the GCS and GES nominate one instance to manage all information about a particular database resource. This instance is called the resource master. This ownership is periodically checked and changed between instances, which is done to reduce interconnect traffic and resource acquisition time.

Global Cache Service(GCS)
Global Cache Service is responsible for cache fusion i.e. transmitting data blocks between the instances. The main features of this processing are:

-The lms processes are the Global Cache Service(GCS) background processes in the instance
-Blocks can exist in more than one instance at a time.
-If an instance is requested to transfer a dirty block (a dirty block is a block that has been modified but not yet written to disk) to another instance in exclusive mode, it keeps a past image (PI) of the block. This past image is basically a copy of the block as the holding instance last knew it before it transferred it to the requesting instance. This past image is used for recovery. Once the most recent copy (the master copy) of the dirty block is written to disk by dbwr, the past images can and will be discarded. Note that PIs can be used for a consistent read of a block, as this saves having to build a copy from the rollback segment. The important thing to note is that an instance will always create a PI version of a dirty block before sending it to another instance if the requesting instance wants it in exclusive mode. If an instance requests the master block for read (consistent or current) there is no need for the holding instance to keep a PI, as the requesting instance is not going to change the block.
-The most recent copy (the master copy or current block) of a block contains all changes made to it by transactions, regardless of which instance the change occurred on and whether the transaction(s) has committed or not
-A block is assigned a role and mode and a status (clean or dirty)
-The block is held in a local role if it is only held in one SGA, it is held in a global role if it is held in more than one SGA
-The block can be held in null, shared or exclusive mode. Null mode means that the instance has no access rights on the block, shared mode means that the instance can read the block, and exclusive mode means that the instance can write to the block. Many instances can have the same block in null or shared mode, but only one can have it in exclusive mode (as exclusive mode implies the instance wants to modify the block). To view the current mode of a block in an instance, query v$bh.status.
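As a quick illustration, the sketch below shows how many buffers are in each state on the local instance (for example 'xcur', 'scur', 'cr' and 'pi' for past images):

-- Sketch: distribution of buffer states in the local buffer cache
SELECT status, COUNT(*) AS buffers
  FROM v$bh
 GROUP BY status
 ORDER BY status;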

Global Enqueue Service(GES)
Global Enqueue Service(GES) manages all non-cache fusion resource requests and tracks the status of all enqueuing mechanisms. The GES only does this for resources that are accessed by more than one instance. The primary resources that the GES controls are dictionary cache locks and library cache locks. The GES manages the interinstance communication that occurs between the instances for these resources. These resources exist in single instance, the difference being that in RAC these must be coordinated between all instances in the cluster.

Dictionary cache locks – The data dictionary must be consistent across all nodes, if a table definition is changed on one instance the Global Enqueue Service ensures that the definition is propagated to the dictionary cache on all the other instances.

Library cache locks – These locks are held by sessions whilst they parse or execute SQL or PLSQL statements. These locks prevent other sessions from modifying the definitions of objects in the data dictionary that are being referenced by the statement that is currently parsing or executing.

BACKGROUND PROCESSES IN RAC


Each instance in a RAC configuration has the same background processes as a single instance database, however there are extra background processes evident in a RAC enabled instance, these are mainly related to the Global Cache Service(GCS) and the Global Enqueue Service(GES):

LMS processes – These are the processes for the Global Cache Service. These processes are responsible for transferring blocks between instances and maintaining the Global Resource Directory to reflect the modes and roles the blocks are held in on each instance. A block can exist in more than one instance at a time, but the Global Cache Service controls who has what version of the block thereby ensuring that the most up to date block (the master copy) is always the one that is updated. All other versions of the block will be past images or read consistent versions of the block.

LMON – Global Enqueue Service Monitor
This process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations. LMON also handles recovery associated with global resources and can detect instance failure of other nodes

LMD – Global Enqueue Service Daemon
This process manages global enqueue and global resource access. Within each instance, the LMD process manages incoming remote resource requests

LCK – This is the lock process and makes up part of the Global Enqueue Service. It manages non-Cache Fusion resource requests such as library and dictionary cache requests

DIAG – This is the diagnostic daemon. It captures diagnostic information related to instance and process failures.

To view all the background processes (RAC and non-RAC) evident in an instance:

SQL> select * from v$bgprocess where paddr <> '00';

STORAGE SYSTEM FOR RAC 
In a RAC environment all instances need to be able to write to the same datafiles simultaneously. There are two ways to do this: use raw devices or use a cluster filesystem.

Datafiles and tempfiles
All datafiles and tempfiles must reside on shared disks. The first instance to start will verify that it can read all datafiles identified in the controlfile. This must be done so that the first instance to start can determine whether instance or media recovery is required; this behaviour is no different from single instance. However, instances that join the cluster later can operate even if they cannot access all the files; they will simply raise an error when an attempt is made to access such a file.

Control files
The control files must be on shared disks and must be accessible by all instances at startup time as determined in the parameter file.

Redo log files and archived logs
On RAC each instance must write to its own set of redo logs. This set is called a thread of redo. All threads of redo must reside on shared disks. The instance gets its thread of redo at startup time as determined by the thread parameter. If an instance cannot get its thread of redo it will fail to open. Each redo group will still be uniquely numbered at the database level and will be multiplexed or mirrored, just as in single instance. The only difference is that in RAC each redo group belongs to a thread, and only the instance specifying that thread number at startup time will write to the redo groups in that thread.

Each instance can, however, read all threads of redo. This is to facilitate instance recovery, i.e. if instance A fails then instance B will read instance A's thread of redo to recover the failure. This must happen so that the consistency and integrity of the database is maintained if one instance fails. In order to facilitate instance recovery, all redo files must reside on shared disks.

Archived log files are generated by each thread of redo and are uniquely identified by the thread number, which we include in log_archive_format, and the sequence number, which is unique for each thread. Archived logs can be on a local filesystem or a shared filesystem.
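A sketch of how to see the redo threads and their groups from any instance (every instance can see all threads):

-- Sketch: one row per redo thread
SELECT thread#, status, enabled, instance
  FROM v$thread;

-- Sketch: redo log groups per thread
SELECT thread#, group#, sequence#, archived, status
  FROM v$log
 ORDER BY thread#, group#;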

Sequences
Sequences are held on disk. Even in single instance, many DBAs cache sequence numbers to avoid contention for the sequence. We cache most sequences in RAC to avoid contention on the sequence. If you have a high cumulative wait time in v$enqueue_stat on the SQ enqueue (the sequence number enqueue) then you should consider caching the affected sequences. RAC does support the CACHE and ORDER options for sequences.
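A minimal sketch of both points: creating a cached, non-ordered sequence (my_app_seq is just an illustrative name) and checking SQ enqueue activity across instances:

-- Sketch: cached, non-ordered sequence (name is hypothetical)
CREATE SEQUENCE my_app_seq CACHE 1000 NOORDER;

-- Sketch: SQ (sequence) enqueue requests, waits and cumulative wait time per instance
SELECT inst_id, eq_type, total_req#, total_wait#, cum_wait_time
  FROM gv$enqueue_stat
 WHERE eq_type = 'SQ';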

Undo management
Undo/rollback datafiles must reside on shared storage. If you use MANUAL undo then each instance must specify unique rollback segments in the instance-specific parameter file. If you use AUTOMATIC undo then each instance must specify a separate undo tablespace; this tablespace must be available and of type UNDO. All instances in a RAC cluster must run in the same undo mode, i.e. you can't have one running AUTOMATIC undo and another running MANUAL undo. If you are using AUTOMATIC undo, monitor v$undostat for statistics.

To see if the oracle home is RAC enabled issue the following SQL:

select * from dba_registry where comp_id = 'RAC';

To relink an oracle home with RAC disabled or enabled:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off install   OR
make -f ins_rdbms.mk rac_on install

Some important points

1. Cache Fusion: Cache Fusion is a parallel database architecture for exploiting clustered computers to achieve scalability for all types of applications. Cache Fusion is a shared cache architecture that uses the high-speed, low-latency interconnects available today on clustered systems to maintain database cache coherency. Database blocks are shipped across the interconnect to the node where access to the data is needed. This is accomplished transparently to the application and users of the system. As Cache Fusion uses at most a 3-point protocol, it easily scales to clusters with a large number of nodes.


2. The LMD and LMS processes are critical RAC processes that should not be starved of CPU by queuing up behind other scheduled processes.

3. The v$ges_statistics view returns various statistics on the Global Enqueue Service.

4. The gv$lock view shows all the locks held across all instances.

Source: http://appsoracle.blogspot.in/search/label/RAC

Tuesday, January 24, 2012

How to map shared memory/semaphore IDs in Unix to an Oracle instance and clean them up

How to map a shared memory ID in Unix to an Oracle instance:


eg: ps -ef | grep pmon | grep -v grep

oracle   23398     1  0  2011 ?        00:06:05 ora_pmon_DB1
oracle   24488     1  0 Jan10 ?        00:01:51 ora_pmon_DB2
oracle   31682     1  0 Jan23 ?        00:00:23 ora_pmon_DB3


ipcs -map

------ Shared Memory Creator/Last-op --------
shmid      owner      cpid       lpid    

80642052   oracle     31653      32595  
79790085   oracle     23371      29972     - marked  in trace file
80510982   oracle     24226      30222  

Each shmid maps to a database instance; now we need to find out which shmid belongs to which database.

1. Login to DB1
2. sqlplus / as sysdba

        SQL> oradebug setmypid
        Statement processed.
        SQL> oradebug ipc
        Information written to trace file.

3. Go to the dump file location and open the most recently generated trace file

Dump file /oracle/admin/DB1/udump/db1_ora_15433.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, Oracle Label Security, OLAP,
Data Mining Scoring Engine and Real Application Testing options
ORACLE_HOME = /oracle/product/10.2.0.5/DB1
System name:    Linux
Node name:      db1-linux
Release:        2.6.18-238.9.1.el5
Version:        #1 SMP Fri Mar 18 12:42:39 EDT 2011
Machine:        x86_64
Instance name: DB1
Redo thread mounted by this instance: 1
Oracle process number: 33
Unix process pid: 15433, image: oracle@db1-linux (TNS V1-V3)

*** 2012-01-24 12:51:31.038
*** ACTION NAME:() 2012-01-24 12:51:31.038
*** MODULE NAME:(sqlplus@ldb1-linux (TNS V1-V3)) 2012-01-24 12:51:31.038
*** SERVICE NAME:(SYS$USERS) 2012-01-24 12:51:31.038
*** SESSION ID:(2769.41080) 2012-01-24 12:51:31.038
Dump of unix-generic skgm context
areaflags            000000e7
realmflags           0000000f
mapsize              00000800
protectsize          00001000
lcmsize              00001000
seglen               00200000
largestsize  0000001f7d2fb800
smallestsize 0000000001000000
stacklimit     0x7fffd5de24c0
stackdir                   -1
mode                      640
magic                acc01ade
Handle:            0x1965f470 `/oracle/product/10.2.0.5/DB1'
Dump of unix-generic realm handle `/oracle/product/10.2.0.5/DB1', flags = 00000000
 Area #0 `Fixed Size' containing Subareas 0-0
  Total size 0000000000209ab8 Minimum Subarea size 00000000
   Area  Subarea    Shmid      Stable Addr      Actual Addr
      0        0         79790085 0x00000060000000 0x00000060000000
                              Subarea size     Segment size
                          000000000020a000 0000000540200000
 Area #1 `Variable Size' containing Subareas 2-2

4. You can match this shmid against the IDs you got in the ipcs output above.

Clean up the shared memory segment using the ipcrm command:


ipcrm -m 79790085 

Sunday, January 22, 2012

More about RAC(Real Application Cluster)


RAC(Real Application Cluster)

1. When was RAC introduced?

Ans: It was introduced in Oracle 9i.

2. How do you identify a RAC instance?

Ans: Use show parameter cluster or the DBMS_UTILITY.IS_CLUSTER_DATABASE function.

3. RAC advantages/features?

A:  1. High availability
     2. Failover
     3. Reliability
     4. Scalability
     5. Manageability
     6. Recoverability
     7. Transparency
     8. Row locking
     9. Error detection
     10. Buffer cache management
     11. Continuous Operations
     12. Load balancing/sharing

4. Components in RAC?


SGA - Each instance has its own SGA
Background processes - Each instance has its own set of background processes
Datafiles - Shared by all instances, so must be placed in shared storage
Control files - Shared by all instances, so must be placed in shared storage
Online redo logfiles - Only one instance can write to its thread, but other instances can read it during recovery and archiving. If an instance is shut down, log switches by other instances can force the idle instance's redo logs to be archived.
Archived redologs - Private to the instance, but other instances will need access to all required archived logs during media recovery.
Flash recovery area - Shared by all the instances, so must be placed in shared storage.
Alert log & trace files - Private to each instance; other instances never read or write to these files.
ORACLE_HOME - It can be private to each instance or can be on shared file system.

5. Network/IPs

1. Public/Physical IP - Used to communicate with the server.
2. Private IP - Used for inter-instance communication by the cluster and dedicated to the server nodes of the cluster.
3. Virtual IP - Used in the listener configuration for load balancing/failover.

6. What is shared and What is not shared?

Shared:
1. Disk access
2. Resources that manages data.
3. All instances have common data and control files.
Not shared:
Each node has its own dedicated:
1. System memory
2. OS
3. Database instance
4. application software
5. Each instance has individual Log files and Rollback segments

7. RAC background processes

  1. LMSn (Global Cache Service Processes) -
      a. LMSn handles block transfers between the holding instance's buffer cache and the requesting foreground process on the requesting instance.
    b. LMS maintains read consistency by rolling back any uncommitted transactions for blocks that are being requested by any remote instance.
    c. The value of 'n' (0-9) varies depending on the amount of messaging traffic among the nodes in the cluster; by default there is one LMS process per pair of CPUs.
2. LMON (Global Enqueue Service Monitor) -
        It constantly handles reconfiguration of locks and global resources when a node joins or leaves the cluster. Its services are also known as Cluster Group Services (CGS).
3. LMD  (Global Enqueue Service Daemon)  -
       It manages lock manager service requests for GCS resources and sends them to a service queue to be handled by the LMSn process. The LMD  process also handles global deadlock detection and remote resource requests (remote resource requests are requests originating from another   instance).
4. LCK (Lock Process) -
 LCK manages non-cache fusion resource requests such as library and row cache requests and lock requests that are local to the server. Because the LMS process handles the primary function of lock management, only a single LCK process exists in each instance.
5. DIAG (Diagnosability Daemon) -
 This background process monitors the health of the instance and captures diagnostic data about process failures within instances. The operation of this daemon is automated and updates an alert log file to record the activity that it performs.
6. GSD (Global service Daemon) -
 This is a component in RAC that receives requests from the SRVCTL control utility to execute administrative tasks like startup or shutdown. The command is executed locally on each node and the results are returned to SRVCTL. The GSD is installed on the nodes by default.

8. INTERNAL STRUCTURES AND SERVICES


 1. Global Resource Directory (GRD)

  1. Records current state and owner of each resource
  2. Contains convert and write queues
  3. Distributed across all instances in cluster
  4. Maintained by GCS and GES

2. Global Cache Services (GCS)

  1. Implements cache coherency for database
  2. Coordinates access to database blocks for instances
3. Global Enqueue Services (GES)

  1. Controls access to other resources (locks) including library cache and dictionary cache
  2. Performs deadlock detection


SRVCTL Utility commands

stop/start

1. srvctl start database -d <DB Name> [to start all instances of database with listeners ]
2. srvctl stop database –d <DB Name>
3. srvctl stop database -d <DB Name> -o immediate
4. srvctl start database -d <DB Name> -o force
5. srvctl stop instance -d <DB Name> -i <Instance name>       [ individual instance]
6. srvctl stop service -d <database> [-s <service>,<service>] [-i <instance>,<instance>]
7. srvctl stop nodeapps -n <node>
8. srvctl stop asm -n <node>
9. srvctl start service -d <database> -s <service>,<service> -i <instance>,<instance>
10. srvctl start nodeapps -n <node>
11. srvctl start asm -n <node>

status

srvctl status database -d <database>
srvctl status instance -d <database> -i <instance>
srvctl status nodeapps -n <node>
srvctl status service -d <database>
srvctl status asm -n <node>

adding/removing

srvctl add database -d <database> -o <oracle_home>
srvctl add instance -d <database> -i <instance> -n <node>
srvctl add service -d <database> -s <service> -r <preferred_list>
srvctl add nodeapps -n <node> -o <oracle_home> -A <name|ip>/network
srvctl add asm -n <node> -i <asm_instance> -o <oracle_home>
srvctl remove database -d <database> -o <oracle_home>
srvctl remove instance -d <database> -i <instance> -n <node>
srvctl remove service -d <database> -s <service> -r <preferred_list>
srvctl remove nodeapps -n <node> -o <oracle_home> -A <name|ip>/network
srvctl remove asm -n <node>

nodeapps

1. VIP
2. ONS
3. GSD
4. Listener

Clusterware Components

OPROCd - (Process Monitor Daemon)
Provides basic cluster integrity services. Failure of this process causes a node restart. It runs as root.
CRSd - (Cluster Ready Services Daemon)
Performs resource monitoring, failover and node recovery, and maintains the OCR. If this process fails, the daemon is restarted automatically. It runs as root.
EVMd - (Event Management Daemon)
Spawns a child process event logger and generates callouts. If this process fails, the daemon is restarted automatically with no node restart. It runs as oracle.
OCSSd - (Cluster Synchronization Services Daemon)
Provides basic node membership, group services and basic locking. Failure of this process causes a node restart. It runs as oracle.

How to check CRS version
crsctl query crs activeversion
crsctl query crs softwareversion

Clusterware Files
Oracle Clusterware requires two files that must be located on shared storage for its operation.

1. Oracle Cluster Registry (OCR)
2. Voting Disk

Oracle Cluster Registry (OCR)

Located on shared storage; in Oracle 10.2 and above it can be mirrored, with a maximum of two copies.

The OCR defines cluster resources, including:
1. Databases and instances (RDBMS and ASM)
2. Services and node applications (VIP, ONS, GSD)
3. Listener processes

Voting Disk (Quorum Disk / File in Oracle 9i)

1. Used to determine RAC instance membership; it is located on shared storage accessible to all instances.
2. Used to determine which instance takes control of the cluster in case of node failure, to avoid split brain.
3. In Oracle 10.2 and above it can be mirrored, but only to an odd number of copies (1, 3, 5, etc.).

crsctl commands

/oracle/product/grid_home/bin/crsctl check crs
/oracle/product/grid_home/bin/crsctl stat res -t
/oracle/product/grid_home/bin/ocrcheck
/oracle/product/grid_home/bin/crsctl query css votedisk
/oracle/product/grid_home/bin/cluvfy stage -post crsinst -n all -verbose
/oracle/product/grid_home/bin/srvctl status scan_listener

More about VIP

1. To make applications highly available and to eliminate a single point of failure (SPOF), Oracle 10g introduced a new feature called cluster VIPs, i.e. a virtual IP address, different from the set of in-cluster IP addresses, that is used by the outside world to connect to the database.

2. A VIP name and address must be registered in DNS along with the standard static IP information. Listeners are configured to listen on the VIPs instead of the public IP.
3. When a node is down, its VIP is automatically failed over to one of the other nodes. The node that gets the VIP will "re-ARP" to the world, indicating the new MAC address of the VIP. Clients are sent an error message immediately rather than waiting for the TCP timeout value.

ASM (Automatic Storage Management)

In summary ASM provides the following functionality:

1. Manages groups of disks, called disk groups.
2. Manages disk redundancy within a disk group.
3. Provides near-optimal I/O balancing without any manual tuning.
4. Enables management of database objects without specifying mount points and filenames.
5. Supports large files.

Initialization Parameters and ASM Instance Creation

The initialization parameters that are of specific interest for an ASM instance are:

1. INSTANCE_TYPE - Set to ASM or RDBMS depending on the instance type. The default is RDBMS.
2. DB_UNIQUE_NAME - Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.
3. ASM_POWER_LIMIT - The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit, the more resources are allocated, resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.
4. ASM_DISKGROUPS - The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.
5. ASM_DISKSTRING - Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL, allowing all suitable disks to be considered.
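A quick sketch to review these parameters on a running ASM instance:

-- Sketch: current values of the ASM-specific parameters
SELECT name, value
  FROM v$parameter
 WHERE name IN ('instance_type', 'asm_power_limit', 'asm_diskgroups', 'asm_diskstring');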

Startup and Shutdown of ASM Instances

ASM instances are started and stopped in a similar way to normal database instances. The options for the STARTUP command are:

FORCE - Performs a SHUTDOWN ABORT before restarting the ASM instance.
MOUNT - Starts the ASM instance and mounts the disk groups specified by the ASM_DISKGROUPS parameter.
NOMOUNT - Starts the ASM instance without mounting any disk groups.
OPEN - This is not a valid option for an ASM instance.
The options for the SHUTDOWN command are:

NORMAL - The ASM instance waits for all connected ASM instances and SQL sessions to exit then shuts down.
IMMEDIATE - The ASM instance waits for any SQL transactions to complete then shuts down. It doesn't wait for sessions to exit.
TRANSACTIONAL - Same as IMMEDIATE.
ABORT - The ASM instance shuts down instantly.

Disk Groups

Disk groups are created using the CREATE DISKGROUP statement. This statement allows you to specify the level of redundancy:

1. NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
2. HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
3. EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware mirroring or RAID. If you have hardware RAID it should be used in preference to ASM redundancy, so this will be the standard option for most installations.

Disk group creation

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;

Drop Disk group

DROP DISKGROUP disk_group_1 INCLUDING CONTENTS;

Add Disks to disk group

ALTER DISKGROUP disk_group_1 ADD DISK
  '/devices/disk*3',
  '/devices/disk*4';

Delete Disks from Disk group

ALTER DISKGROUP disk_group_1 DROP DISK diska2;

Resize Disks

-- Resize a specific disk.
ALTER DISKGROUP disk_group_1
  RESIZE DISK diska1 SIZE 100G;

-- Resize all disks in a failure group.
ALTER DISKGROUP disk_group_1
  RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 100G;

-- Resize all disks in a disk group.
ALTER DISKGROUP disk_group_1
  RESIZE ALL SIZE 100G;

Undrop diskgroup

ALTER DISKGROUP disk_group_1 UNDROP DISKS;

Rebalancing Diskgroup

ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;
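The progress of a rebalance can be monitored from the ASM instance; a sketch:

-- Sketch: monitor long-running ASM operations such as a rebalance
SELECT group_number, operation, state, power, sofar, est_work, est_rate, est_minutes
  FROM v$asm_operation;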

Disk group mount/unmount

ALTER DISKGROUP ALL DISMOUNT;
ALTER DISKGROUP ALL MOUNT;
ALTER DISKGROUP disk_group_1 DISMOUNT;
ALTER DISKGROUP disk_group_1 MOUNT;

Directories

-- Create a directory.
ALTER DISKGROUP disk_group_1 ADD DIRECTORY '+disk_group_1/my_dir';

-- Rename a directory.
ALTER DISKGROUP disk_group_1 RENAME DIRECTORY '+disk_group_1/my_dir' TO '+disk_group_1/my_dir_2';

-- Delete a directory and all its contents.
ALTER DISKGROUP disk_group_1 DROP DIRECTORY '+disk_group_1/my_dir_2' FORCE;

Aliases

-- Create an alias using the fully qualified filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
  FOR '+disk_group_1/mydb/datafile/my_ts.342.3';

-- Create an alias using the numeric form filename.
ALTER DISKGROUP disk_group_1 ADD ALIAS '+disk_group_1/my_dir/my_file.dbf'
  FOR '+disk_group_1.342.3';

-- Rename an alias.
ALTER DISKGROUP disk_group_1 RENAME ALIAS '+disk_group_1/my_dir/my_file.dbf'
  TO '+disk_group_1/my_dir/my_file2.dbf';

-- Delete an alias.
ALTER DISKGROUP disk_group_1 DELETE ALIAS '+disk_group_1/my_dir/my_file.dbf';

Files

-- Drop file using an alias.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/my_dir/my_file.dbf';

-- Drop file using a numeric form filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1.342.3';

-- Drop file using a fully qualified filename.
ALTER DISKGROUP disk_group_1 DROP FILE '+disk_group_1/mydb/datafile/my_ts.342.3';

Checking metadata

-- Check metadata for a specific file.
ALTER DISKGROUP disk_group_1 CHECK FILE '+disk_group_1/my_dir/my_file.dbf';

-- Check metadata for a specific failure group in the disk group.
ALTER DISKGROUP disk_group_1 CHECK FAILGROUP failure_group_1;

-- Check metadata for a specific disk in the disk group.
ALTER DISKGROUP disk_group_1 CHECK DISK diska1;

-- Check metadata for all disks in the disk group.
ALTER DISKGROUP disk_group_1 CHECK ALL;

Change the compatible attribute for a disk group

alter diskgroup DG_ASM1POC_DT_01 set attribute 'compatible.rdbms'='10.2.0.5';

Note: this attribute cannot be lowered once it has been raised.
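The current attribute values can be checked from the ASM instance; a sketch (COMPATIBILITY and DATABASE_COMPATIBILITY are available from 10.2 onwards):

-- Sketch: check compatible.asm and compatible.rdbms per disk group
SELECT name, compatibility, database_compatibility
  FROM v$asm_diskgroup;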

Start/stop diskgroups

srvctl start diskgroup -g DG_ASM1POC_DT_01 -n lnxdb-svl-02,lnxdb-svl-03,lnxdb-svl-04

srvctl stop diskgroup -g DG_ASM1POC_DT_01 -n lnxdb-svl-02,lnxdb-svl-03,lnxdb-svl-04

How to list the disk groups in an ASM instance?

sqlplus / as sysasm
select name, state from v$asm_diskgroup;
or
asmcmd lsdg
or
asmcmd ls

ASM Views

V$ASM_ALIAS
  ASM instance: Displays a row for each alias present in every disk group mounted by the ASM instance.
  DB instance:  Returns no rows.
V$ASM_CLIENT
  ASM instance: Displays a row for each database instance using a disk group managed by the ASM instance.
  DB instance:  Displays a row for the ASM instance if the database has open ASM files.
V$ASM_DISK
  ASM instance: Displays a row for each disk discovered by the ASM instance, including disks which are not part of any disk group.
  DB instance:  Displays a row for each disk in disk groups in use by the database instance.
V$ASM_DISKGROUP
  ASM instance: Displays a row for each disk group discovered by the ASM instance.
  DB instance:  Displays a row for each disk group mounted by the local ASM instance.
V$ASM_FILE
  ASM instance: Displays a row for each file in each disk group mounted by the ASM instance.
  DB instance:  Displays no rows.
V$ASM_OPERATION
  ASM instance: Displays a row for each long-running operation executing in the ASM instance.
  DB instance:  Displays no rows.
V$ASM_TEMPLATE
  ASM instance: Displays a row for each template present in each disk group mounted by the ASM instance.
  DB instance:  Displays a row for each template present in each disk group mounted by the ASM instance with which the database instance communicates.

ASM Background processes

1. ARBn - ASM Rebalance Process , Rebalances data extents within an ASM disk group
2. ASMB - ASM Background Process. Communicates with the ASM instance, managing storage and providing statistics
3. GMON - ASM Disk Group Monitor Process. Monitors all mounted ASM disk groups
4. ABMR - Auto BMR Background Process. Coordinates execution of tasks such as filtering duplicate block media recovery requests and performing flood control.
5. ACFS - ASM Cluster File System CSS Process. Tracks the cluster membership in CSS and informs the file system driver of membership changes.
6. Bnnn - ASM Blocking Slave Process for GMON. Performs maintenance actions on ASM disk groups 
7. CTWR - Change Tracking Writer Process. Writes the block change tracking information used by RMAN for fast incremental backups.
8. LMON - Global Enqueue Service Monitor Process. Monitors an Oracle RAC cluster to manage global resources 
9. LMSn - Global Cache Service Process. Manages resources and provides resource control among Oracle RAC instances 
10. RMSn - Oracle RAC Management Process. Performs manageability tasks for Oracle RAC

Reference : oracle-base.com
