
Interactive Report Filter – URL Parameters
EQ = Equals
LT = Less than
LTE = Less than or equal to
GT = Greater Than
GTE = Greater than or equal to
LIKE = SQL Like operator
N = Null
NN = Not Null
Example:
http://example.com:8080/apex/f?p=100:2:439263181862::NO:CIR:IREQ_PRODUCT_ID:1
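The same pattern works with the other operators; for example, a hypothetical filter restricting the report to rows where SALARY is greater than or equal to 5000 (the column name is assumed for illustration):
http://example.com:8080/apex/f?p=100:2:439263181862::NO:CIR:IRGTE_SALARY:5000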
Oracle Apex
Install APEX 4.2
ASH and AWR Performance Tuning Scripts
Top Recent Wait Events
col event format a60
select * from (
  select active_session_history.event,
         sum(active_session_history.wait_time +
             active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history
  where active_session_history.event is not null
  group by active_session_history.event
  order by 2 desc)
where rownum < 6
/
Top Wait Events Since Instance Startup
col event format a60
select event, total_waits, time_waited
from v$system_event e, v$event_name n
where n.event_id = e.event_id
  and n.wait_class != 'Idle'
  and n.wait_class = (select wait_class from v$session_wait_class
                      where wait_class != 'Idle'
                      group by wait_class
                      having sum(time_waited) = (select max(sum(time_waited))
                                                 from v$session_wait_class
                                                 where wait_class != 'Idle'
                                                 group by (wait_class)))
order by 3;
List Of Users Currently Waiting
col username format a12
col sid format 9999
col state format a15
col event format a50
col wait_time format 99999999
set pagesize 100
set linesize 120
select s.sid, s.username, se.event, se.state, se.wait_time
from v$session s, v$session_wait se
where s.sid = se.sid
  and se.event not like 'SQL*Net%'
  and se.event not like '%rdbms%'
  and s.username is not null
order by se.wait_time;
Find The Main Database Wait Events In A Particular Time Interval
First determine the snapshot id values for the period in question.
In this example we need to find the SNAP_ID for the period 10 PM to 11 PM on the 14th of November, 2012.
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY') = '14-NOV-2012'
  and extract(hour from begin_interval_time) between 22 and 23;
set verify off
select * from (
select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from dba_hist_active_sess_history active_session_history
where event is not null
and SNAP_ID between &ssnapid and &esnapid
group by active_session_history.event
order by 2 desc)
where rownum < 6;
Top CPU Consuming SQL During A Certain Time Period
Note – in this case we are finding the Top 5 CPU-intensive SQL statements executed between 9:00 AM and 11:00 AM
select * from (
  select SQL_ID, sum(CPU_TIME_DELTA), sum(DISK_READS_DELTA), count(*)
  from DBA_HIST_SQLSTAT a, dba_hist_snapshot s
  where s.snap_id = a.snap_id
    and s.begin_interval_time > sysdate - 1
    and extract(hour from s.end_interval_time) between 9 and 11
  group by SQL_ID
  order by sum(CPU_TIME_DELTA) desc)
where rownum < 6;
Which Database Objects Experienced the Most Waits in the Past One Hour
set linesize 120
col event format a40
col object_name format a40
select * from (
  select dba_objects.object_name,
         dba_objects.object_type,
         active_session_history.event,
         sum(active_session_history.wait_time +
             active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history, dba_objects
  where active_session_history.sample_time between sysdate - 1/24 and sysdate
    and active_session_history.current_obj# = dba_objects.object_id
  group by dba_objects.object_name, dba_objects.object_type, active_session_history.event
  order by 4 desc)
where rownum < 6;
Top Segments ordered by Physical Reads
col segment_name format a20
col owner format a10
select segment_name, object_type, total_physical_reads
from (
  select owner || '.' || object_name as segment_name,
         object_type,
         value as total_physical_reads
  from v$segment_statistics
  where statistic_name in ('physical reads')
  order by total_physical_reads desc)
where rownum < 6;
Top 5 SQL statements in the past one hour
select * from (
  select active_session_history.sql_id,
         dba_users.username,
         sqlarea.sql_text,
         sum(active_session_history.wait_time +
             active_session_history.time_waited) ttl_wait_time
  from v$active_session_history active_session_history, v$sqlarea sqlarea, dba_users
  where active_session_history.sample_time between sysdate - 1/24 and sysdate
    and active_session_history.sql_id = sqlarea.sql_id
    and active_session_history.user_id = dba_users.user_id
  group by active_session_history.sql_id, sqlarea.sql_text, dba_users.username
  order by 4 desc)
where rownum < 6;
SQL with the highest I/O in the past one day
select * from (
  SELECT /*+ LEADING(x h) USE_NL(h) */
         h.sql_id,
         SUM(10) ash_secs
  FROM dba_hist_snapshot x, dba_hist_active_sess_history h
  WHERE x.begin_interval_time > sysdate - 1
    AND h.snap_id = x.snap_id
    AND h.dbid = x.dbid
    AND h.instance_number = x.instance_number
    AND h.event in ('db file sequential read','db file scattered read')
  GROUP BY h.sql_id
  ORDER BY ash_secs desc)
where rownum < 6;
Top CPU-consuming queries over the past one day
select * from (
  select SQL_ID, sum(CPU_TIME_DELTA), sum(DISK_READS_DELTA), count(*)
  from DBA_HIST_SQLSTAT a, dba_hist_snapshot s
  where s.snap_id = a.snap_id
    and s.begin_interval_time > sysdate - 1
  group by SQL_ID
  order by sum(CPU_TIME_DELTA) desc)
where rownum < 6;
Find what the top SQL was at a particular reported time of day
First determine the snapshot id values for the period in question.
In this example we need to find the SNAP_ID for the period 10 PM to 11 PM on the 14th of November, 2012.
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY') = '14-NOV-2012'
  and extract(hour from begin_interval_time) between 22 and 23;

select * from (
  select sql.sql_id c1,
         sql.buffer_gets_delta c2,
         sql.disk_reads_delta c3,
         sql.iowait_delta c4
  from dba_hist_sqlstat sql, dba_hist_snapshot s
  where s.snap_id = sql.snap_id
    and s.snap_id = &snapid
  order by c3 desc)
where rownum < 6
/
Analyse a particular SQL ID and see the trends for the past day
select s.snap_id,
       to_char(s.begin_interval_time,'HH24:MI') c1,
       sql.executions_delta c2,
       sql.buffer_gets_delta c3,
       sql.disk_reads_delta c4,
       sql.iowait_delta c5,
       sql.cpu_time_delta c6,
       sql.elapsed_time_delta c7
from dba_hist_sqlstat sql, dba_hist_snapshot s
where s.snap_id = sql.snap_id
  and s.begin_interval_time > sysdate - 1
  and sql.sql_id = '&sqlid'
order by c7
/
Do we have multiple plan hash values for the same SQL ID – if so, a changed plan may be causing the bad performance
select SQL_ID,
       PLAN_HASH_VALUE,
       sum(EXECUTIONS_DELTA) EXECUTIONS,
       sum(ROWS_PROCESSED_DELTA) CROWS,
       trunc(sum(CPU_TIME_DELTA)/1000000/60) CPU_MINS,
       trunc(sum(ELAPSED_TIME_DELTA)/1000000/60) ELA_MINS
from DBA_HIST_SQLSTAT
where SQL_ID in ('&sqlid')
group by SQL_ID, PLAN_HASH_VALUE
order by SQL_ID, CPU_MINS;
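If the query returns more than one plan hash value, a quick way to compare the plans themselves is DBMS_XPLAN.DISPLAY_AWR (a sketch; it assumes the plans are still retained in AWR):
select * from table(dbms_xplan.display_awr('&sqlid'));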
Top 5 Queries for past week based on ADDM recommendations
/* Top SQL_IDs for the last 7 days as identified by ADDM
   from DBA_ADVISOR_RECOMMENDATIONS and dba_advisor_log */
col SQL_ID form a16
col Benefit form 9999999999999
select * from (
  select b.ATTR1 as SQL_ID, max(a.BENEFIT) as "Benefit"
  from DBA_ADVISOR_RECOMMENDATIONS a, DBA_ADVISOR_OBJECTS b
  where a.REC_ID = b.OBJECT_ID
    and a.TASK_ID = b.TASK_ID
    and a.TASK_ID in (select distinct b.task_id
                      from dba_hist_snapshot a, dba_advisor_tasks b, dba_advisor_log l
                      where a.begin_interval_time > sysdate - 7
                        and a.dbid = (select dbid from v$database)
                        and a.INSTANCE_NUMBER = (select INSTANCE_NUMBER from v$instance)
                        and to_char(a.begin_interval_time,'yyyymmddHH24') = to_char(b.created,'yyyymmddHH24')
                        and b.advisor_name = 'ADDM'
                        and b.task_id = l.task_id
                        and l.status = 'COMPLETED')
    and length(b.ATTR4) > 1
  group by b.ATTR1
  order by max(a.BENEFIT) desc)
where rownum < 6;
Source
Recover an Oracle Database with Missing Archived Logs
Scenario: recovering an Oracle database from a backup when some archived logs are missing. When we try to recover from such a state, the recovery process stops at some point and asks for archived logs that cannot be supplied.
The assumption here is that we have exhausted all possible locations to find another good and valid copy or backup of the archivelog that we are looking for.
If the archivelog is not found in any of the locations, then the approach and strategy on how to recover and open the database depend on the SCN (System Change Number) of the datafiles, as well as whether the log sequence# required for the recovery is still available in the online redo logs.
For the SCN of the datafiles, it is important to know the mode of the database when the datafiles were backed up, that is, whether the database was open, mounted, or cleanly shut down when the backup was taken.
If the datafiles are restored from an online or hot backup, which means that the database was open when the backup was taken, then we must apply at least the archivelog(s) or redolog(s) whose log sequence# were generated from the beginning until the completion of the said backup that was used to restore the datafiles.
However, if the datafiles are restored from an offline or cold backup, and the database was cleanly shut down before the backup was taken – that is, the database was not open, but in nomount mode or mounted, when the backup was taken – then the datafiles are already synchronized in terms of their SCN. In this situation, we can immediately open the database without applying any archivelogs, because the datafiles are already in a consistent state, unless there is a requirement to roll the database forward to a point in time after the said backup was taken.
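In that consistent (cold backup) case, opening the database is straightforward; a minimal sketch:
SQL> startup mount;
SQL> alter database open;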
The critical thing here is to ensure that all of the online datafiles are synchronized in terms of their SCN before we can normally open the database. So, run the following SQL statement to determine whether the datafiles are synchronized or not. Take note that we query V$DATAFILE_HEADER, because we want to know the SCN recorded in the header of the physical datafile, and not V$DATAFILE, which derives the information from the controlfile.
select status, checkpoint_change#, to_char(checkpoint_time,
'DD-MON-YYYY HH24:MI:SS') as checkpoint_time, count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;
The results of the above query must return one and only one row for the online datafiles, which means that they are already synchronized in terms of their SCN. Otherwise, if the query returns more than one row for the online datafiles, then the datafiles are not yet synchronized. In this case, we need to apply archivelog(s) or redolog(s) to synchronize all of the online datafiles. By the way, take note of the CHECKPOINT_TIME in V$DATAFILE_HEADER, which indicates the date and time up to which the datafiles have been recovered.
The results of the query above may also return some offline datafiles. So, ensure that all of the required datafiles are online, because we may not be able to recover an offline datafile later, once we open the database with resetlogs. Even though we can recover the database beyond resetlogs in Oracle 10g and later versions, due to the introduction of the format "%R" in the LOG_ARCHIVE_FORMAT, it is recommended that you bring the required datafiles online now rather than after the database is opened with resetlogs, to avoid any possible problems. However, in some cases, we intentionally keep datafile(s) offline, because we are doing a partial database restore, or perhaps we don't need the contents of the said datafile.
You may run the following query to determine the offline datafiles:
select file#, name
from v$datafile
where file# in (select file#
                from v$datafile_header
                where status = 'OFFLINE');
You may issue the following SQL statement to change the status of the required datafile(s) from “OFFLINE” to “ONLINE”:
alter database datafile <file#> online;
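For example, assuming the query above returned file# 5 (the number is hypothetical):
SQL> alter database datafile 5 online;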
If we are lucky and the required log sequence# is still available in the online redologs, and the corresponding redolog member still physically exists on disk, then we may apply those instead of the archivelog. To confirm, issue the following query to determine the redolog member(s) that you can apply to recover the database:
set echo on feedback on pagesize 100 numwidth 16
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

select LF.member, L.group#, L.thread#, L.sequence#, L.status,
       L.first_change#, L.first_time, DF.min_checkpoint_change#
from v$log L, v$logfile LF,
     (select min(checkpoint_change#) min_checkpoint_change#
      from v$datafile_header
      where status = 'ONLINE') DF
where LF.group# = L.group#
  and L.first_change# >= DF.min_checkpoint_change#;
If the above query returns no rows, because V$DATABASE.CONTROLFILE_TYPE has a value of "BACKUP", then try to apply each of the redolog members one at a time during the recovery. You may run the following query to determine the redolog members:
select * from v$logfile;
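A minimal sketch of feeding an online redolog member to the recovery prompt (the member path is hypothetical; use one of the paths returned by the queries above):
SQL> recover database using backup controlfile until cancel;
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oradata/MYDB/redo01.log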
If you have tried to apply all of the online redolog members instead of an archivelog during the recovery, but you always receive the ORA-00310 error, as shown in the example below, then the log sequence# required for recovery is no longer available in the online redolog.
ORA-00279: change 189189555 generated at 11/03/2007 09:27:46 needed for thread 1
ORA-00289: suggestion : +BACKUP
ORA-00280: change 189189555 for thread 1 is in sequence #428
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
+BACKUP/prmy/onlinelog/group_2.258.603422107
ORA-00310: archived log contains sequence 503; sequence 428 required
ORA-00334: archived log: '+BACKUP/prmy/onlinelog/group_2.258.603422107'
After trying all of the possible solutions mentioned above, if you still cannot open the database because the archivelog required for recovery is missing, lost, or corrupted, or the corresponding log sequence# is no longer available in the online redolog (it has already been overwritten during the redolog switches), then we cannot open the database normally, since the datafiles are in an inconsistent state. The following are the 3 options available to allow you to open the database:
Option#1:
Force open the database by setting some hidden parameters in the init.ora. Note that you can only do this under the guidance of Oracle Support with a service request. As per Oracle Metalink, there is no 100% guarantee that this will open the database. However, once the database is opened, then we must immediately rebuild the database.
Database rebuild means doing the following, namely:
(1) perform a full-database export
(2) create a brand new and separate database
(3) import the recent export dump.
This option can be tedious and time consuming, but once we successfully open the new database, we expect minimal or perhaps no data loss at all. Before you try this option, ensure that you have a good and valid backup of the current database.
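A minimal Data Pump sketch of such a rebuild (credentials, directory object, and dump file name are placeholders; the new database has to be created separately, for example with DBCA):
$ expdp system/<password> full=y directory=DATA_PUMP_DIR dumpfile=full_db.dmp logfile=full_exp.log
(create the brand new database, then run the import against it)
$ impdp system/<password> full=y directory=DATA_PUMP_DIR dumpfile=full_db.dmp logfile=full_imp.log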
When the recovery process is initiated using a backup controlfile, it will report that recovery succeeded, but in order to open the database the datafiles must be in a consistent state.
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 9867098396261 generated at 03/21/2008 13:37:44 needed for
thread 1
ORA-00289: suggestion : /arcredo/XSCLFY/log1_648355446_2093.arc
ORA-00280: change 9867098396261 for thread 1 is in sequence #2093
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
ORA-01112: media recovery not started
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
Now, the hidden parameter _ALLOW_RESETLOGS_CORRUPTION=TRUE will allow us to open the database even though it is not properly recovered.
Force open the database by setting _ALLOW_RESETLOGS_CORRUPTION = TRUE. It allows us to open the database, but the instance may crash immediately due to undo tablespace corruption; check the alert log file for details of the issue. To resolve the undo corruption issue, change undo_management to "MANUAL" in init.ora. Now the database will open successfully. Once the database is up and running, create a new undo tablespace and drop the old corrupted undo tablespace. Then change undo_management back to "AUTO" and set undo_tablespace to the new undo tablespace in init.ora. Note that _ALLOW_RESETLOGS_CORRUPTION=TRUE allows the database to open without consistency checks; this may result in a corrupted database, so the database should be recreated.
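A sketch of the undo rebuild described above (the tablespace names and datafile path are hypothetical):
SQL> create undo tablespace UNDOTBS2 datafile '/u100/oradata/XSCLFY/undotbs02.dbf' size 500m;
(edit init.ora: undo_management='AUTO', undo_tablespace='UNDOTBS2', then restart the instance)
SQL> shutdown immediate;
SQL> startup;
SQL> drop tablespace UNDOTBS1 including contents and datafiles;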
Option#2:
If you have a good and valid backup of the database, then restore the database from the said backup and recover it by applying up to the last available archivelog. In this option, we only recover the database up to the last archivelog that is applied, and any data after that is lost. If no archivelogs are applied at all, then we can only recover the database to the state of the restored backup. However, if we restored from an online or hot backup, then we may not be able to open the database, because we still need to apply the archivelogs generated during the said backup in order to synchronize the SCN of the datafiles before we can normally open the database.
Option#3:
Manually extract the data using Oracle's Data Unloader (DUL), which is performed by Oracle Field Support at the customer site on the next business day and for an extra charge. If the customer wants to pursue this approach, we need the complete name, phone number and email address of the person who has the authority to sign the work order on behalf of the customer.
Source / Reference:
How to recover and open the database if the archive log required for recovery is missing.
Links:
Recovering an Oracle Database with Missing Archived Logs
Resolving missing archive log gap at Standby Database
Session State
#1 Get information on the sessions waiting and working
Query for displaying sessions, session state, and events:
select sid,
       decode(state, 'WAITING', 'Waiting',
              'Working') state,
       decode(state,
              'WAITING',
              'So far '||seconds_in_wait,
              'Last waited '||
              wait_time/100)||
       ' secs for '||event
       "Description"
from v$session
where username = 'TEST';

SID  STATE    DESCRIPTION
556 Waiting So far 610498 secs for SQL*Net message from client
#2 Sessions from a specific user
select SID, osuser, machine, terminal, service_name,
logon_time, last_call_et
from v$session
where username = 'TEST';
#3 Sessions from a specific machine
select sid, username, program,
decode(state, 'WAITING', 'Waiting',
'Working') state,
last_call_et, seconds_in_wait, event
from v$session
where machine = 'an23';
#4 Get the SQL
Get the SQL statement a session is executing, which provides more insight into what the session is doing:
select sql_id
from v$session
where sid = 3089;
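To see the statement text behind that sql_id, one option is to look it up in v$sql (a sketch; substitute the sql_id returned above):
select sql_text
from v$sql
where sql_id = '&sqlid';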
Standby Database
Data Guard promises:
- Disaster recovery
- High availability
- Data protection
- Flexible balancing between data protection and performance requirements
- Simple management through the Data Guard broker
- Gap detection
Advantages:
1. Very low failure rate
All system components are duplicated. The primary and standby instances can run on different hosts. They can also have separate locations depending on the safety requirements.
2. Very short downtime
If an error occurs in the primary database system and you have to recover the database, you can perform the recovery very quickly on the standby host. You can avoid the time-consuming datafile restore, since these files are already located on the standby host.
The only thing you need to do is to import the last entries from the redo log files. Therefore, the standby instance can take over the tasks of the primary instance very quickly.
3. Significant decrease of the load on the production host
The database backup requires considerable resources and time for large databases. Since the backup can run on the standby host, the load on the primary instance is reduced significantly.
Therefore, the resources on the production host are fully available for production operation, and you do not need to interrupt or restrict database operation for a backup.
4. Consistency
Applying redo log files to the standby database immediately verifies their consistency. No other tool can achieve this level of verification.
Disadvantages:
1. High costs
For a standby database scenario, all system components need to be available in duplicate. In particular, duplicate hardware resources (CPU, hard disks, and so on) are expensive.
2. High system administration expense
You need to set up the standby host. If structural changes are made on the primary database system, you must make sure the required resources are available on the standby host. Once the standby instance has taken over production operation – a “takeover” – you must set up a replacement standby database.
3. High requirements for switchover software
So that the standby instance can take over production operation, the appropriate switchover software is required. The user and the suppliers of the hardware and system software are responsible for selecting this software and making sure that it functions correctly.
Step-by-step instructions on how to create a Physical Standby Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard Environment:
(Article by Hailie Jiao)
In this example the database version is 10.2.0.3. The Primary database and Standby database are located on different machines at different sites. The Primary database is called PRIM and the Standby database is called STAN. I use a Flash Recovery Area and OMF.
a) Setup the environment
1. Make sure the operating system and platform architecture on the primary and standby systems are the same.
2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.
3. Test the Standby Database creation on a test environment first before working on the Production database.
b) On the Primary Database Side
1. Enable forced logging on your primary database:
SQL> ALTER DATABASE FORCE LOGGING;
2. Create a password file if it doesn’t exist.
1) To check if a password file already exists, run the following command:
SQL> select * from v$pwfile_users;
2) If it doesn’t exist, use the following command to create one:
On Windows:
$ cd %ORACLE_HOME%\database
$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with the password for the SYS user.)
On UNIX:
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
(Note: Replace xxxxxxxx with your actual password for the SYS user.)
3. Configure a Standby Redo log.
1) The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:
SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800
2) Use the following command to determine your current log file groups:
SQL> select group#, member from v$logfile;
3) Create standby Redo log groups.
The Primary database originally had 3 log file groups, so 3 standby redo log groups are now created using the following commands:
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;
4) To verify the results of the standby redo log groups creation, run the following query:
SQL>select * from v$standby_log;
4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;
5. Set Primary Database Initialization Parameters
Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.
1) Create pfile from spfile for the primary database:
On Windows:
SQL> create pfile='\database\pfilePRIM.ora' from spfile;
(Note: specify your Oracle home path to replace ''.)
On UNIX:
SQL> create pfile='/dbs/pfilePRIM.ora' from spfile;
(Note: specify your Oracle home path to replace ''.)
2) Edit pfilePRIM.ora to add the new primary and standby role parameters (here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly):
db_name=PRIM
db_unique_name=PRIM
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\PRIM\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_2='SERVICE=STAN LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the standby DB datafiles followed by the primary location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\DATAFILE','E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE'
# Specify the location of the standby DB online redo log files followed by the primary location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG'
6. Create spfile from pfile, and restart primary database using the new spfile.
Data Guard must use SPFILE. Create the SPFILE and restart database.
On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='\database\pfilePRIM.ora';
SQL> create spfile from pfile='\database\pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;
(Note: specify your Oracle home path to replace ''.)
On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='/dbs/pfilePRIM.ora';
SQL> create spfile from pfile='/dbs/pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;
c) On the Standby Database Site:
1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:
SQL>shutdown immediate;
On Standby Server (While the Primary database is shut down):
1) Create directory for data files, for example, on windows,
E:\oracle\product\10.2.0\oradata\STAN\DATAFILE.
On UNIX, create the directory accordingly.
2) Copy the data files and temp files over.
3) Create directory (multiplexing) for online logs, for example, on Windows,
E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG and
F:\Oracle\flash_recovery_area\STAN\ONLINELOG.
On UNIX, create the directories accordingly.
4) Copy the online logs over.
2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:
SQL> startup mount;
SQL> alter database create standby controlfile as 'STAN.ctl';
SQL> alter database open;
3. Copy the Primary DB pfile to Standby server and rename/edit the file.
1) Copy pfilePRIM.ora from Primary server to Standby server, to database folder on Windows or dbs folder on UNIX under the Oracle home path.
2) Rename it to pfileSTAN.ora, and modify the file as follows (here the file paths are from a Windows system; for a UNIX system, specify the paths accordingly):
*.audit_file_dest='E:\oracle\product\10.2.0\admin\STAN\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STAN\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STAN\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STAN\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STAN\CONTROLFILE\STAN.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STAN\CONTROLFILE\STAN.CTL'
db_name='PRIM'
db_unique_name=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1='LOCATION=F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2='SERVICE=PRIM LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIM
FAL_CLIENT=STAN
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the primary DB datafiles followed by the standby location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE','E:\oracle\product\10.2.0\oradata\STAN\DATAFILE'
# Specify the location of the primary DB online redo log files followed by the standby location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG','E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG','F:\Oracle\flash_recovery_area\PRIM\ONLINELOG','F:\Oracle\flash_recovery_area\STAN\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO
(Note: Not all the parameter entries are listed here.)
4. On Standby server, create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.
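A hypothetical UNIX sketch of creating them (adjust the base paths to your environment; on Windows, use md with the corresponding E:\oracle\product\10.2.0\admin\STAN\... paths):
$ mkdir -p /u01/app/oracle/admin/STAN/adump
$ mkdir -p /u01/app/oracle/admin/STAN/bdump
$ mkdir -p /u01/app/oracle/admin/STAN/cdump
$ mkdir -p /u01/app/oracle/admin/STAN/udump
$ mkdir -p /u01/app/oracle/flash_recovery_area/STAN/ARCHIVELOG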
5. Copy the standby control file ‘STAN.ctl’ from primary to standby destinations
6. Copy the Primary password file to standby and rename it to pwdSTAN.ora.
On Windows copy it to database folder, and on UNIX copy it to /dbs directory. And then rename the password file.
7. For Windows, create a Windows-based services (optional):
$ oradim -NEW -SID STAN -STARTMODE manual
8. Configure listeners for the primary and standby databases.
1) On Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$ lsnrctl stop
$ lsnrctl start
2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.
$ lsnrctl stop
$ lsnrctl start
9. Create Oracle Net service names.
1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$ tnsping PRIM
$ tnsping STAN
2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN
10. On Standby server, setup the environment variables to point to the Standby database.
Set up ORACLE_HOME and ORACLE_SID.
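For example, on UNIX (the Oracle home path is hypothetical; on Windows use set instead of export):
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STAN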
11. Start up nomount the standby database and generate a spfile.
On Windows:
SQL> startup nomount pfile='\database\pfileSTAN.ora';
SQL> create spfile from pfile='\database\pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup mount;
On UNIX:
SQL> startup nomount pfile='/dbs/pfileSTAN.ora';
SQL> create spfile from pfile='/dbs/pfileSTAN.ora';
-- Restart the Standby database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup mount;
(Note: specify your Oracle home path to replace ''.)
12. Start Redo apply
1) On the standby database, to start redo apply:
SQL>alter database recover managed standby database disconnect from session;
If you ever need to stop log apply services:
SQL> alter database recover managed standby database cancel;
13. Verify the standby database is performing properly:
1) On Standby perform a query:
SQL>select sequence#, first_time, next_time from v$archived_log;
2) On Primary, force a logfile switch:
SQL>alter system switch logfile;
3) On Standby, verify the archived redo log files were applied:
SQL>select sequence#, applied from v$archived_log order by sequence#;
14. If you want the redo data to be applied as it is received without waiting for the current standby redo log file to be archived, enable the real-time apply.
To start real-time apply:
SQL> alter database recover managed standby database using current logfile disconnect;
15. To create multiple standby databases, repeat this procedure.
d) Maintenance:
1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.
2. Cleanup the archive logs on Primary and Standby servers.
I scheduled a weekly hot whole-database backup against my primary database that also backs up and deletes the archived logs on the Primary.
For the standby database, I run RMAN to back up and delete the archived logs once per week.
$ rman target /@STAN
RMAN> backup archivelog all delete input;
To delete the archivelog backup files on the standby server, I run the following once a month:
RMAN>delete backupset;
3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server.
Refer to section b) 2, step 2 to update/recreate password file for the Standby database.
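For example, to recreate the standby password file on UNIX (a sketch mirroring the orapwd command in section b; replace xxxxxxxx with the new SYS password):
$ cd $ORACLE_HOME/dbs
$ orapwd file=pwdSTAN.ora password=xxxxxxxx force=y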
Source / Reference Links:
Database and Web Server on the same machine?
- Why is it not advisable to have the database and web server on the same machine? – Stack Overflow
- Should SQL Server be on the same machine as your IIS installation?
My catch:
Single Server
Pros: Performance gain, less latency time
Cons: Weaker security isolation (a compromised web server also exposes the database)
Separate servers
Pros: Security advantage, scalability
Cons: Expense on extra hardware
Active Connections to Oracle Database
V$SESSION displays session information for each current session.
select
  substr(a.spid,1,9) pid,
  substr(b.sid,1,5) sid,
  substr(b.serial#,1,5) ser#,
  substr(b.machine,1,6) box,
  substr(b.username,1,10) username,
  -- b.server,
  substr(b.osuser,1,8) os_user,
  substr(b.program,1,30) program
from v$session b, v$process a
where b.paddr = a.addr
  and type = 'USER'
order by spid;