Recover an Oracle Database with Missing Archived Logs

Scenario: Recovering an Oracle database from a backup when some archived logs are missing. When we try to recover from such a state, the recovery process will stop at some point and ask for the missing archive logs.

The assumption here is that we have exhausted all possible locations and still cannot find a good, valid copy or backup of the archivelog we are looking for.

If the archivelog cannot be found in any location, then the approach and strategy for recovering and opening the database depend on the SCN (System Change Number) of the datafiles, as well as on whether the log sequence# required for the recovery is still available in the online redo logs.

For the SCN of the datafiles, it is important to know the mode of the database when the datafiles were backed up, that is, whether the database was open, mounted or shut down (normally) when the backup was taken.

If the datafiles are restored from an online or hot backup, which means that the database was open when the backup was taken, then we must apply at least the archivelog(s) or redolog(s) whose log sequence#s were generated from the beginning until the completion of the backup that was used to restore the datafiles.

However, if the datafiles are restored from an offline or cold backup, and the database was cleanly shut down before the backup was taken (that is, the database was not open, but in nomount mode or mounted), then the datafiles are already synchronized in terms of their SCN. In this situation, we can immediately open the database without applying any archivelogs, because the datafiles are already in a consistent state, unless there is a requirement to roll the database forward to a point in time after the backup was taken.

The critical thing here is to ensure that all of the online datafiles are synchronized in terms of their SCN before we can normally open the database. Run the following SQL statement to determine whether the datafiles are synchronized or not. Note that we query V$DATAFILE_HEADER, because we want to know the SCN recorded in the header of the physical datafile, and not V$DATAFILE, which derives its information from the controlfile.

select status, checkpoint_change#,
       to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
       count(*)
from v$datafile_header
group by status, checkpoint_change#, checkpoint_time
order by status, checkpoint_change#, checkpoint_time;

The above query must return one and only one row for the online datafiles, which means that they are already synchronized in terms of their SCN. If it returns more than one row for the online datafiles, then the datafiles are not yet synchronized, and we need to apply archivelog(s) or redolog(s) to synchronize them. Also take note of the CHECKPOINT_TIME in V$DATAFILE_HEADER, which indicates the date and time up to which the datafiles have been recovered.
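
As a minimal sketch, this synchronization is driven by media recovery from SQL*Plus; one common form (when recovering with a backup controlfile, as in the examples later in this section) is:

SQL> recover database using backup controlfile until cancel;
-- At each "Specify log" prompt, supply the requested archivelog or an
-- online redolog member; enter CANCEL when no more logs are available.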

The query above may also return some offline datafiles. Ensure that all of the required datafiles are online, because we may not be able to recover an offline datafile once we open the database with resetlogs. Even though we can recover the database beyond resetlogs in Oracle 10g and later versions, due to the introduction of the format "%R" in the LOG_ARCHIVE_FORMAT, it is recommended that you online the required datafiles now rather than after the database is opened with resetlogs, to avoid any possible problems. However, in some cases we intentionally offline the datafile(s), because we are doing a partial database restore, or perhaps we don't need the contents of those datafiles.

You may run the following query to determine the offline datafiles:

select file#, name
from v$datafile
where file# in (select file#
                from v$datafile_header
                where status = 'OFFLINE');

You may issue the following SQL statement to change the status of the required datafile(s) from “OFFLINE” to “ONLINE”:

alter database datafile <file#> online;
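
For example, if the query above returned file# 4 (a hypothetical value):

alter database datafile 4 online;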

If we are lucky and the required log sequence# is still available in the online redologs, and the corresponding redolog member still physically exists on disk, then we may apply it instead of the archivelog. To confirm, issue the following query, which determines the redolog member(s) that you can apply to recover the database:

set echo on feedback on pagesize 100 numwidth 16
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';

select LF.member, L.group#, L.thread#, L.sequence#, L.status,
       L.first_change#, L.first_time, DF.min_checkpoint_change#
from v$log L, v$logfile LF,
     (select min(checkpoint_change#) min_checkpoint_change#
      from v$datafile_header
      where status = 'ONLINE') DF
where LF.group# = L.group#
and L.first_change# >= DF.min_checkpoint_change#;

If the above query returns no rows because V$DATABASE.CONTROLFILE_TYPE has a value of "BACKUP", then try to apply each of the redolog members one at a time during the recovery. You may run the following query to determine the redolog members:

select * from v$logfile;
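
For illustration, a redolog member is supplied at the recovery prompt in place of the suggested archivelog; the member path below is hypothetical:

SQL> recover database until cancel using backup controlfile;
ORA-00279: change ... needed for thread 1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oradata/PROD/redo01.log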

If you have tried to apply all of the online redolog members instead of an archivelog during the recovery, but you always receive the ORA-00310 error, as shown in the example below, then the log sequence# required for recovery is no longer available in the online redologs.

ORA-00279: change 189189555 generated at 11/03/2007 09:27:46 needed for thread 1

ORA-00289: suggestion : +BACKUP

ORA-00280: change 189189555 for thread 1 is in sequence #428

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}

+BACKUP/prmy/onlinelog/group_2.258.603422107

ORA-00310: archived log contains sequence 503; sequence 428 required

ORA-00334: archived log: '+BACKUP/prmy/onlinelog/group_2.258.603422107'

If, after trying all of the possible solutions mentioned above, you still cannot open the database, because the archivelog required for recovery is missing, lost or corrupted, or the corresponding log sequence# is no longer available in the online redologs (having been overwritten during redolog switches), then we cannot normally open the database, since the datafiles are in an inconsistent state. The following are the 3 options available to allow you to open the database:

Option#1:

Force open the database by setting some hidden parameters in the init.ora. Note that you should only do this under the guidance of Oracle Support with a service request. As per Oracle Metalink, there is no 100% guarantee that this will open the database. However, once the database is opened, we must immediately rebuild it.

Database rebuild means doing the following, namely:

(1) perform a full-database export

(2) create a brand new and separate database

(3) import the recent export dump.

This option can be tedious and time consuming, but once we successfully open the new database, we expect minimal or perhaps no data loss at all. Before you try this option, ensure that you have a good and valid backup of the current database.

When the recovery process is initiated using a backup controlfile, it will report that recovery succeeded, but in order to open the database the datafiles must be in a consistent state.

SQL> recover database until cancel using backup controlfile;
ORA-00279: change 9867098396261 generated at 03/21/2008 13:37:44 needed for
thread 1
ORA-00289: suggestion : /arcredo/XSCLFY/log1_648355446_2093.arc
ORA-00280: change 9867098396261 for thread 1 is in sequence #2093
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'
ORA-01112: media recovery not started

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01195: online backup of file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u100/oradata/XSCLFY/SYSTEM01_SCLFY.dbf'

Now, a hidden parameter, _ALLOW_RESETLOGS_CORRUPTION=TRUE, will allow us to open the database even though it is not properly recovered.

Force open the database by setting _ALLOW_RESETLOGS_CORRUPTION=TRUE. This allows us to open the database, but the instance may crash immediately due to undo tablespace corruption; check the alert log file for details of the issue. To resolve the undo corruption issue, change undo_management to "MANUAL" in the init.ora; the database will then open successfully. Once the database is up and running, create a new undo tablespace and drop the old, corrupted undo tablespace. Then change undo_management back to "AUTO" and point undo_tablespace to the new undo tablespace in the init.ora. Remember that _ALLOW_RESETLOGS_CORRUPTION=TRUE allows the database to open without consistency checks, which may result in a corrupted database, so the database should be recreated.
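
A minimal sketch of that sequence, assuming the old undo tablespace is named UNDOTBS1 and the datafile path is hypothetical:

-- init.ora (temporary settings, under Oracle Support guidance only)
_allow_resetlogs_corruption=TRUE
undo_management=MANUAL

-- after the database opens:
SQL> create undo tablespace UNDOTBS2 datafile '/u100/oradata/XSCLFY/undotbs02.dbf' size 500m;
SQL> drop tablespace UNDOTBS1 including contents and datafiles;

-- init.ora (final settings)
undo_management=AUTO
undo_tablespace=UNDOTBS2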

 

Option#2:

If you have a good and valid backup of the database, then restore the database from that backup and recover it by applying archivelogs up to the last one available. In this option, we recover the database only up to the last archivelog that is applied, and any data after that is lost. If no archivelogs are applied at all, then we can only recover the database to the state of the restored backup. However, if we restored from an online or hot backup, then we may not be able to open the database, because we still need to apply the archivelogs generated during that backup in order to synchronize the SCN of the datafiles before we can normally open the database.
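
A minimal RMAN sketch of this option, assuming the backups are cataloged; the sequence number is hypothetical (taken from the ORA-00310 example above) and marks the first missing log:

RMAN> run {
  set until sequence 428 thread 1;  -- stop just before the first missing log
  restore database;
  recover database;
}
RMAN> alter database open resetlogs;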

Option#3:

Manually extract the data using Oracle's Data Unloader (DUL), which is performed by Oracle Field Support at the customer site on the next business day and for an extra charge. If the customer wants to pursue this approach, we need the complete name, phone# and email address of the person who has the authority to sign the work order on behalf of the customer.

Source / Reference:

How to recover and open the database if the archivelog required for recovery is either missing, lost or corrupted?

How to recover and open the database if the archive log required for recovery is missing.

Links:

Recovering an Oracle Database with Missing Archived Logs

Resolving missing archive log gap at Standby Database


Standby Database

Data Guard promises:

  • Disaster recovery
  • High availability
  • Data protection
  • Flexible balancing between data protection and performance requirements
  • Simple management through the Data Guard broker.
  • Gap detection

Advantages:

1. Very low failure rate
All system components are duplicated. The primary and standby instances can run on different hosts. They can also have separate locations depending on the safety requirements.

2. Very short downtime

If an error occurs in the primary database system and you have to recover the database, you can perform the recovery very quickly on the standby host. You can avoid the time-consuming datafile restore, since these files are already located on the standby host.

The only thing you need to do is to import the last entries from the redo log files. Therefore, the standby instance can take over the tasks of the primary instance very quickly.

3. Significant decrease of the load on the production host

The database backup requires considerable resources and time for large databases. Since the backup can run on the standby host, the load on the primary instance is reduced significantly.
Therefore, the resources on the production host are fully available for production operation, and you do not need to interrupt or restrict database operation for a backup.

4. Consistency

Applying redo log files to the standby database immediately verifies their consistency. No other tool can achieve this level of verification.

Disadvantages:

1. High costs

For a standby database scenario, all system components need to be available in duplicate. In particular, duplicate hardware resources (CPU, hard disks, and so on) are expensive.

2. High system administration expense

You need to set up the standby host. If structural changes are made on the primary database system, you must make sure the required resources are available on the standby host. When the standby instance has taken over production operation (a "takeover"), you must set up a replacement standby database.

3. High requirements for switchover software

So that the standby instance can take over production operation, the appropriate switchover software is required. The user and the suppliers of the hardware and system software are responsible for selecting this software and making sure that it functions correctly.

Step-by-step instructions on how to create a Physical Standby Database on Windows and UNIX servers, and maintenance tips on the databases in a Data Guard Environment:

(Article by Hailie Jiao)

In this example the database version is 10.2.0.3. The Primary database and Standby database are located on different machines at different sites. The Primary database is called PRIM and the Standby database is called STAN. I use a Flash Recovery Area and OMF.

a) Setup the environment

1. Make sure the operating system and platform architecture on the primary and standby systems are the same.
2. Install Oracle database software without the starter database on the standby server and patch it if necessary. Make sure the same Oracle software release is used on the Primary and Standby databases, and Oracle home paths are identical.
3. Test the Standby Database creation on a test environment first before working on the Production database.

b) On the Primary Database Side

1. Enable forced logging on your primary database:

SQL> ALTER DATABASE FORCE LOGGING;

2. Create a password file if it doesn’t exist.
1) To check if a password file already exists, run the following command:

SQL> select * from v$pwfile_users;

2) If it doesn’t exist, use the following command to create one:

On Windows:
 $cd %ORACLE_HOME%\database
 $orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
 (Note: Replace xxxxxxxx with the password for the SYS user.)
On UNIX:
 $cd $ORACLE_HOME/dbs
 $orapwd file=pwdPRIM.ora password=xxxxxxxx force=y
 (Note: Replace xxxxxxxx with the password for the SYS user.)

3. Configure a Standby Redo log.

1)  The size of the standby redo log files should match the size of the current Primary database online redo log files. To find out the size of your online redo log files:

SQL> select bytes from v$log;

BYTES
----------
52428800
52428800
52428800

2) Use the following command to determine your current log file groups:

SQL> select group#, member from v$logfile;

3) Create standby Redo log groups.
The Primary database originally had 3 log file groups, so 3 standby redo log groups are now created using the following commands:

SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 SIZE 50M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 SIZE 50M;

4) To verify the results of the standby redo log groups creation, run the following query:

SQL>select * from v$standby_log;

4. Enable Archiving on Primary.
If your primary database is not already in Archive Log mode, enable the archive log mode:

SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database archivelog;
SQL>alter database open;
SQL>archive log list;

5. Set Primary Database Initialization Parameters

Create a text initialization parameter file (PFILE) from the server parameter file (SPFILE), to add the new primary role parameters.

1) Create pfile from spfile for the primary database:

On Windows:
SQL> create pfile='<ORACLE_HOME>\database\pfilePRIM.ora' from spfile;
(Note: replace <ORACLE_HOME> with your actual Oracle home path.)
On UNIX:
SQL> create pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora' from spfile;
(Note: replace <ORACLE_HOME> with your actual Oracle home path.)

2) Edit pfilePRIM.ora to add the new primary and standby role parameters. (The file paths here are from a Windows system; for UNIX, specify the paths accordingly.)

db_name=PRIM
db_unique_name=PRIM
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1=
'LOCATION=F:\Oracle\flash_recovery_area\PRIM\ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_2=
'SERVICE=STAN LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
remote_login_passwordfile='EXCLUSIVE'
FAL_SERVER=STAN
FAL_CLIENT=PRIM
STANDBY_FILE_MANAGEMENT=AUTO
# Specify the location of the standby DB datafiles followed by the primary location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\DATAFILE',
'E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE'
# Specify the location of the standby DB online redo log files followed by the primary location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG',
'E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG',
'F:\Oracle\flash_recovery_area\STAN\ONLINELOG',
'F:\Oracle\flash_recovery_area\PRIM\ONLINELOG'

6. Create spfile from pfile, and restart primary database using the new spfile.

Data Guard must use SPFILE. Create the SPFILE and restart database.

On Windows:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
SQL> create spfile from pfile='<ORACLE_HOME>\database\pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;
(Note: replace <ORACLE_HOME> with your actual Oracle home path.)
On UNIX:
SQL> shutdown immediate;
SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfilePRIM.ora';
-- Restart the Primary database using the newly created SPFILE.
SQL> shutdown immediate;
SQL> startup;

c) On the Standby Database Site:

1. Create a copy of Primary database data files on the Standby Server:
On Primary DB:

SQL>shutdown immediate;

On Standby Server (While the Primary database is shut down):
1) Create the directory for data files; for example, on Windows:
E:\oracle\product\10.2.0\oradata\STAN\DATAFILE
On UNIX, create the directory accordingly.

2) Copy the data files and temp files over.

3) Create the directories (multiplexing) for online logs; for example, on Windows:
E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG and
F:\Oracle\flash_recovery_area\STAN\ONLINELOG
On UNIX, create the directories accordingly.

4) Copy the online logs over.

2. Create a Control File for the standby database:
On Primary DB, create a control file for the standby to use:

SQL>startup mount;
SQL>alter database create standby controlfile as 'STAN.ctl';
SQL>ALTER DATABASE OPEN;

3. Copy the Primary DB pfile to Standby server and rename/edit the file.

1) Copy pfilePRIM.ora from the Primary server to the Standby server, into the database folder on Windows or the dbs folder on UNIX under the Oracle home path.
2) Rename it to pfileSTAN.ora and modify the file as follows. (The file paths here are from a Windows system; for UNIX, adjust the paths accordingly.)

*.audit_file_dest='E:\oracle\product\10.2.0\admin\STAN\adump'
*.background_dump_dest='E:\oracle\product\10.2.0\admin\STAN\bdump'
*.core_dump_dest='E:\oracle\product\10.2.0\admin\STAN\cdump'
*.user_dump_dest='E:\oracle\product\10.2.0\admin\STAN\udump'
*.compatible='10.2.0.3.0'
control_files='E:\ORACLE\PRODUCT\10.2.0\ORADATA\STAN\CONTROLFILE\STAN.CTL','F:\ORACLE\FLASH_RECOVERY_AREA\STAN\CONTROLFILE\STAN.CTL'
db_name='PRIM'
db_unique_name=STAN
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PRIM,STAN)'
LOG_ARCHIVE_DEST_1=
'LOCATION=F:\Oracle\flash_recovery_area\STAN\ARCHIVELOG
VALID_FOR=(ALL_LOGFILES,ALL_ROLES)
DB_UNIQUE_NAME=STAN'
LOG_ARCHIVE_DEST_2=
'SERVICE=PRIM LGWR ASYNC
VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
DB_UNIQUE_NAME=PRIM'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=30
FAL_SERVER=PRIM
FAL_CLIENT=STAN
remote_login_passwordfile='EXCLUSIVE'
# Specify the location of the primary DB datafiles followed by the standby location
DB_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\DATAFILE',
'E:\oracle\product\10.2.0\oradata\STAN\DATAFILE'
# Specify the location of the primary DB online redo log files followed by the standby location
LOG_FILE_NAME_CONVERT='E:\oracle\product\10.2.0\oradata\PRIM\ONLINELOG',
'E:\oracle\product\10.2.0\oradata\STAN\ONLINELOG',
'F:\Oracle\flash_recovery_area\PRIM\ONLINELOG',
'F:\Oracle\flash_recovery_area\STAN\ONLINELOG'
STANDBY_FILE_MANAGEMENT=AUTO
(Note: Not all the parameter entries are listed here.)

4. On Standby server, create all required directories for dump and archived log destination:
Create directories adump, bdump, cdump, udump, and archived log destinations for the standby database.

5. Copy the standby control file 'STAN.ctl' from the primary server to the standby control file destinations.

6. Copy the Primary password file to standby and rename it to pwdSTAN.ora.
On Windows copy it to the database folder, and on UNIX copy it to the dbs directory under the Oracle home; then rename the password file.
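
For example, on UNIX (a minimal sketch; pwdPRIM.ora has already been transferred from the primary):

$ cd $ORACLE_HOME/dbs
$ cp pwdPRIM.ora pwdSTAN.ora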

7. For Windows, create a Windows-based services (optional):
$oradim -NEW -SID STAN -STARTMODE manual

8. Configure listeners for the primary and standby databases.

1) On Primary system: use Oracle Net Manager to configure a listener for PRIM and STAN. Then restart the listener.

$lsnrctl stop
$lsnrctl start

2) On Standby server: use Net Manager to configure a listener for PRIM and STAN. Then restart the listener.

$lsnrctl stop
$lsnrctl start

9. Create Oracle Net service names.
1) On Primary system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:

$tnsping PRIM
$tnsping STAN

2) On Standby system: use Oracle Net Manager to create network service names for PRIM and STAN. Check tnsping to both services:
$tnsping PRIM
$tnsping STAN

10. On Standby server, setup the environment variables to point to the Standby database.

Set up ORACLE_HOME and ORACLE_SID.
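
For example (the UNIX Oracle home path below is hypothetical):

On UNIX:
$ export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
$ export ORACLE_SID=STAN
On Windows:
C:\> set ORACLE_SID=STAN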

11. Start up nomount the standby database and generate a spfile.

On Windows:
 SQL> startup nomount pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
 SQL> create spfile from pfile='<ORACLE_HOME>\database\pfileSTAN.ora';
 -- Restart the Standby database using the newly created SPFILE.
 SQL> shutdown immediate;
 SQL> startup mount;
On UNIX:
 SQL> startup nomount pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
 SQL> create spfile from pfile='<ORACLE_HOME>/dbs/pfileSTAN.ora';
 -- Restart the Standby database using the newly created SPFILE.
 SQL> shutdown immediate;
 SQL> startup mount;
 (Note: replace <ORACLE_HOME> with your actual Oracle home path.)

12. Start Redo apply

1) On the standby database, to start redo apply:

SQL>alter database recover managed standby database disconnect from session;

If you ever need to stop log apply services:

SQL> alter database recover managed standby database cancel;

13. Verify the standby database is performing properly:

1) On Standby perform a query:

SQL>select sequence#, first_time, next_time from v$archived_log;

2) On Primary, force a logfile switch:

SQL>alter system switch logfile;

3) On Standby, verify the archived redo log files were applied:

SQL>select sequence#, applied from v$archived_log order by sequence#;

14. If you want the redo data to be applied as it is received, without waiting for the current standby redo log file to be archived, enable real-time apply.

To start real-time apply:

SQL> alter database recover managed standby database using current logfile disconnect;

15. To create multiple standby databases, repeat this procedure.

d) Maintenance:

1. Check the alert log files of Primary and Standby databases frequently to monitor the database operations in a Data Guard environment.

2. Cleanup the archive logs on Primary and Standby servers.

I scheduled a weekly hot whole-database backup against my primary database, which also backs up and deletes the archived logs on the Primary.

For the standby database, I run RMAN to backup and delete the archive logs once per week.

$rman target /@STAN
RMAN> backup archivelog all delete input;

To delete the archivelog backup files on the standby server, I run the following once a month:

RMAN>delete backupset;

3. Password management
The password for the SYS user must be identical on every system for the redo data transmission to succeed. If you change the password for SYS on Primary database, you will have to update the password file for Standby database accordingly, otherwise the logs won’t be shipped to the standby server.

Refer to section b) 2, step 2 to update/recreate password file for the Standby database.


Database Schema Documentation

Tools that generate database schema documentation are found all over the web, but you may end up with a paid version of the tool in the end.

Oracle SQL Developer itself has an option to generate an HTML document of a database schema.

  • Open Oracle SQL Developer.
  • Create a connection to the desired database.
  • Right-click on the connection and find the 'Generate DB Doc' option.
  • Select the option and specify the target directory.


Network Exports/Imports using Network_Link

The NETWORK_LINK parameter identifies a database link to be used as the source for a network export/import. The following database link will be used to demonstrate its use.

CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;
CONN test/test
CREATE DATABASE LINK remote_scott
                    CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';

In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.

expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT
        directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log

For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.

impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT
     directory=TEST_DIR logfile=impdpSCOTT.log
    remap_schema=SCOTT:TEST
  • Ensure that the exporting user at the source database has the EXP_FULL_DATABASE role. This user must be specified when you create the database link.
  • Ensure that the importing user at the destination database has the IMP_FULL_DATABASE role.
  • Create and test a database link between the source and destination databases.
  • Run the following command, where import_user is the username for the importing user, and db_link is the name of the database link owned by the exporting user:
    impdp import_user/password NETWORK_LINK=db_link FULL=Y
  • A log file for the import operation is written to the DATA_PUMP_DIR directory. You can discover the location of this directory by running the following command:
    SQL> select * from dba_directories where DIRECTORY_NAME like 'DATA_PUMP_DIR';


Oracle Database Running Out of Space ?

Are you running out of space in the file system where database files (data files) are installed?

Try out this tip in order to move some of your data files from one drive to another and update the datafile location in your database.

The operating system: Oracle Enterprise Linux
The Database: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

In our environment the database files (data files) are installed in the /oracle/oradata/mydb location, and that drive is at 99% utilization. We will move system01.dbf from that location to a new location, /oracle/hdb1/oradata/mydb. /oracle/hdb1 is mounted from /dev/hdb1; that drive is nearly empty, which is why I chose it.
Now for the real moving part, we will perform the following steps:

1. Log in to SQL*Plus and shut down the database.
2. Log out of SQL*Plus and move the files from the source directory to the destination directory.
3. Log in to SQL*Plus as /nolog.
4. Connect as SYSDBA to the idle instance.
5. Issue the STARTUP MOUNT command to bring up the database.
6. Issue the ALTER DATABASE RENAME FILE command to rename the data files from the source directory to the destination directory.
7. Finally, open the database.

These are the brief steps by which we can achieve our goal. The following commands demonstrate them as a live example:

Step 1
oracle@astrn10: sqlplus /nolog
SQL> conn /as sysdba
SQL> shutdown immediate;
Troubleshooting: If you are not able to get the SQL prompt, check your ORACLE_HOME, and ORACLE_SID.
Step 2
SQL> exit;
oracle@astrn10: mv /oracle/oradata/mydb/system01.dbf /oracle/hdb1/oradata/mydb/
Now check whether the file has been moved by issuing the following command:
oracle@astrn10: ls -l /oracle/hdb1/oradata/mydb/
total 429924352
-rw-r----- 1 oracle oinstall 429924352 Feb 12 11:00 system01.dbf
Now we are ready for the next step.
Step 3
oracle@astrn10: sqlplus /nolog
SQL>
Step 4
SQL> conn /as sysdba
Connected to idle instance.
Step 5
SQL> startup mount;
Database mounted.
Step 6
SQL> alter database rename file '/oracle/oradata/mydb/system01.dbf' to '/oracle/hdb1/oradata/mydb/system01.dbf';
Database altered.
Step 7
SQL> alter database open;
Database opened.

That's all. We are done with moving data files from one drive to another. If this were Windows or any other operating system, copy the files using your operating system's commands/interface in Step 2.
To move more files (in this demonstration we moved only one), repeat Step 2 and Step 6 for each file.
#End of tip

Source:

Running out of space? Want to move Oracle Datafiles? – Ask Anantha


Oracle Locks


Query to identify the owner, object, object type, sid, serial number, status, OS user and machine to track locks:

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine, a.session_id
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

More Queries:

#1
select
   s1.username || '@' || s1.machine
   || ' ( SID,S#=' || s1.sid || ',' || s1.serial# || ' ) is blocking '
   || s2.username || '@' || s2.machine
   || ' ( SID,S#=' || s2.sid || ',' || s2.serial# || ' )'
   as blocking_status
from
   v$lock l1,
   v$session s1,
   v$lock l2,
   v$session s2
where
   s1.sid = l1.sid
and s2.sid = l2.sid
and l1.block = 1
and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;
#2
select
   a.oracle_username,
   a.os_user_name,
   a.locked_mode,
   b.object_name,
   b.object_type
from
   v$locked_object a, dba_objects b
where
   a.object_id = b.object_id;

Kill the session causing lock:

alter system kill session 'SID,SERIAL#';
E.g.: alter system kill session '323,1609';


Automatic Shared Memory Management – ASMM

Oracle Automatic Shared Memory Management (ASMM) is a feature that automatically readjusts the sizes of the main pools (db_cache_size, shared_pool_size, large_pool_size, java_pool_size) based on existing workloads for optimal performance.

About Automatic Shared Memory Management:

Automatic Shared Memory Management simplifies SGA memory management. You specify the total amount of SGA memory available to an instance using the SGA_TARGET initialization parameter, and Oracle Database automatically distributes this memory among the various SGA components to ensure the most effective memory utilization. When automatic shared memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of a workload without requiring any additional configuration. The database automatically distributes the available memory among the various components as required, allowing the system to maximize the use of all available SGA memory. Oracle Database remembers the sizes of the automatically tuned components across instance shutdowns if you are using a server parameter file (SPFILE). As a result, the system does not need to learn the characteristics of the workload again each time an instance is started. It can begin with information from the past instance and continue evaluating the workload where it left off at the last shutdown.

The benefits of ASMM are:

  • Reduces the chance of running out of shared pool memory
  • Uses available memory optimally
  • Improves database performance by constantly matching memory allocations and instance memory needs

Based on workload information, automatic shared memory tuning:

  • Captures statistics periodically in the background
  • Uses the different memory advisories
  • Performs "what-if" analyses to determine the best distribution of memory
  • Moves memory to where it is most needed
  • Has no need to configure parameters for the worst-case scenario
  • Resurrects component sizes from last shutdown if SPFILE is used

Oracle Automatic Shared Memory Management is enabled by setting the following (a minimal example is shown after this list):

  • an SPFILE is used to specify init.ora values
  • the sga_target parameter is set to a non-zero value
  • the statistics_level parameter is set to TYPICAL (the default) or ALL (the other value of this parameter is BASIC, which will not allow changing the memory pools automatically)
  • the auto-tuned pool parameters (such as shared_pool_size), if set to non-zero values, act as minimum sizes for their components
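
A minimal sketch of enabling ASMM from SQL*Plus (the target size is illustrative and must not exceed sga_max_size):

SQL> alter system set statistics_level = TYPICAL scope=both;
SQL> alter system set sga_target = 1000M scope=both;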

Oracle 10g has introduced special double-underscore hidden parameters to control ASMM:

  • __db_cache_size
  • __shared_pool_size
  • __large_pool_size

Once enabled, Oracle ASMM will morph the pool areas within the confines of sga_max_size.
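
To observe how ASMM has distributed memory at any point, you can query the standard V$SGA_DYNAMIC_COMPONENTS view (output will vary per instance):

select component, current_size, min_size, max_size
from v$sga_dynamic_components;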

When ASMM is enabled, the following memory pools are automatically sized:

1. Buffer cache (DB_CACHE_SIZE)
2. Shared pool (SHARED_POOL_SIZE)
3. Large pool (LARGE_POOL_SIZE)
4. Java pool (JAVA_POOL_SIZE)

The following pools are manually sized components and are not affected by ASMM:

1. DB_KEEP_CACHE_SIZE
2. DB_RECYCLE_CACHE_SIZE
3. DB_nK_CACHE_SIZE (non-default block size)
4. STREAMS_POOL_SIZE
5. LOG_BUFFER


Oracle Cloning

About Cloning
(From Oracle.com)
Cloning is the process of copying an existing Oracle installation to a different location and updating the copied bits to work in the new environment. The changes made by applying one-off patches on the source Oracle home are also present after the clone operation. The source and the destination path (host to be cloned) need not be the same. During cloning, OUI replays the actions that were run to install the home. Cloning is similar to installation, except that OUI runs the actions in a special mode referred to as clone mode. Some situations in which cloning is useful are:

Creating an installation that is a copy of a production, test, or development installation. Cloning enables you to create a new installation with all patches applied to it in a single step. This contrasts with going through the installation process by performing separate steps to install, configure, and patch the installation.

Rapidly deploying an instance and the applications that it hosts.

Preparing an Oracle home and deploying it to many hosts.

The cloned installation behaves the same as the source installation. For example, the cloned Oracle home can be removed using OUI or patched using OPatch. You can also use the cloned Oracle home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most usage cases. However, you can also customize various aspects of cloning, for example, to specify custom port assignments, or to preserve custom settings.

The cloning process copies all of the files from the source Oracle home to the destination Oracle home. Thus, any files used by the source instance located outside the source Oracle home’s directory structure are not copied to the destination location.

The size of the binaries at the source and the destination may differ because these are relinked as part of the clone operation, and the operating system patch levels may also differ between these two locations. Additionally, the number of files in the cloned home would increase because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.

OUI cloning is more beneficial than using the tarball approach, because cloning configures the Central Inventory and the Oracle home inventory in the cloned home. Cloning also makes the home manageable and allows the paths in the cloned home and the target home to be different.

1. Sign in as SYSDBA on the source system through SQL Plus and issue:

ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

The CREATE DATABASE syntax will now be placed in a trace file. The TRACE keyword tells Oracle to generate a script containing a CREATE CONTROLFILE command and store it in the trace directory identified by the user_dump_dest parameter of the init.ora file.
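
To locate that directory, you can check the parameter from SQL*Plus:

SQL> show parameter user_dump_dest

The generated trace file will look something like this: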

Trace file:

-- The following are current System-scope REDO Log Archival related
-- parameters and can be included in the database initialization file.

-- LOG_ARCHIVE_DEST=''
-- LOG_ARCHIVE_DUPLEX_DEST=''

-- LOG_ARCHIVE_FORMAT=ARC%S_%R.%T

-- DB_UNIQUE_NAME="TEST"

-- LOG_ARCHIVE_CONFIG='SEND, RECEIVE, NODG_CONFIG'
-- LOG_ARCHIVE_MAX_PROCESSES=2
-- STANDBY_FILE_MANAGEMENT=MANUAL
-- STANDBY_ARCHIVE_DEST=%ORACLE_HOME%\RDBMS
-- FAL_CLIENT=''
-- FAL_SERVER=''

-- LOG_ARCHIVE_DEST_10='LOCATION=USE_DB_RECOVERY_FILE_DEST'
-- LOG_ARCHIVE_DEST_10='OPTIONAL REOPEN=300 NODELAY'
-- LOG_ARCHIVE_DEST_10='ARCH NOAFFIRM NOEXPEDITE NOVERIFY SYNC'
-- LOG_ARCHIVE_DEST_10='REGISTER NOALTERNATE NODEPENDENCY'
-- LOG_ARCHIVE_DEST_10='NOMAX_FAILURE NOQUOTA_SIZE NOQUOTA_USED NODB_UNIQUE_NAME'
-- LOG_ARCHIVE_DEST_10='VALID_FOR=(PRIMARY_ROLE,ONLINE_LOGFILES)'
-- LOG_ARCHIVE_DEST_STATE_10=ENABLE


-- Below are two sets of SQL statements, each of which creates a new
-- control file and uses it to open the database. The first set opens
-- the database with the NORESETLOGS option and should be used only if
-- the current versions of all online logs are available. The second
-- set opens the database with the RESETLOGS option and should be used
-- if online logs are unavailable.
-- The appropriate set of statements can be copied from the trace into
-- a script file, edited as necessary, and executed when there is a
-- need to re-create the control file.

-- Set #1. NORESETLOGS case

-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- Additional logs may be required for media recovery of offline
-- datafiles. Use this only if the current versions of all online logs
-- are available.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "TEST" NORESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO01.LOG' SIZE 50M,
GROUP 2 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO02.LOG' SIZE 50M,
GROUP 3 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE

DATAFILE
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSTEM01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\UNDOTBS01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSAUX01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\EXAMPLE01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\TEST.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS02.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\EXAMPLE2.DBF'
CHARACTER SET WE8MSWIN1252
;

-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2011_10_17\O1_MF_1_1_%U_.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2011_10_17\O1_MF_1_1_%U_.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE

-- All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;

-- Database can now be opened normally.
ALTER DATABASE OPEN;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
-- End of tempfile additions.

-- Set #2. RESETLOGS case

-- The following commands will create a new control file and use it
-- to open the database.
-- Data used by Recovery Manager will be lost.
-- The contents of online logs will be lost and all backups will
-- be invalidated. Use this only if online logs are damaged.

-- After mounting the created controlfile, the following SQL
-- statement will place the database in the appropriate
-- protection mode:
-- ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "TEST" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO01.LOG' SIZE 50M,
GROUP 2 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO02.LOG' SIZE 50M,
GROUP 3 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO03.LOG' SIZE 50M
-- STANDBY LOGFILE

DATAFILE
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSTEM01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\UNDOTBS01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSAUX01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\EXAMPLE01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\TEST.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS02.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\EXAMPLE2.DBF'
CHARACTER SET WE8MSWIN1252
;

-- Commands to re-create incarnation table
-- Below log names MUST be changed to existing filenames on
-- disk. Any one log file from each branch can be used to
-- re-create incarnation records.
-- ALTER DATABASE REGISTER LOGFILE 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2011_10_17\O1_MF_1_1_%U_.ARC';
-- ALTER DATABASE REGISTER LOGFILE 'C:\ORACLE\PRODUCT\10.2.0\FLASH_RECOVERY_AREA\TEST\ARCHIVELOG\2011_10_17\O1_MF_1_1_%U_.ARC';
-- Recovery is required if any of the datafiles are restored backups,
-- or if the last shutdown was not normal or immediate.
RECOVER DATABASE USING BACKUP CONTROLFILE

-- Database can now be opened zeroing the online logs.
ALTER DATABASE OPEN RESETLOGS;

-- Commands to add tempfiles to temporary tablespaces.
-- Online tempfiles have complete space information.
-- Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;
-- End of tempfile additions.

2. Shutdown source database.

3. Install Oracle server on new machine. Create directories ( bdump, udump and cdump and other directories as in source database system ) for the copy database on target system.

4. Copy all data files into the new directories on the new server. You may change the file names if you want, but then you must edit the controlfile script to reflect the new data file names on the new server.

5. Add database name to tnsnames.ora, listener.ora on target system.
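
For illustration, a minimal tnsnames.ora entry for the clone (the host name is hypothetical; add a matching entry for the SID to listener.ora as well):

TEST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = new-server)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = TEST))
  )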

6. Create database service with ORADIM ( If OS is Windows )

oradim -new -sid TEST -INTPWD mypassword -STARTMODE AUTO

7. On target system, set SID and use sqlplus to connect as SYSDBA
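
For example, from a Windows command prompt:

C:\> set ORACLE_SID=TEST
C:\> sqlplus / as sysdba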

8. Using OS commands, delete any control files that were already copied over.

9. Rename the trace file to 'DB_CREATE_CONTROLFILE.SQL' and edit it so that its contents are as follows:

STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "TEST" RESETLOGS ARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 3
MAXDATAFILES 100
MAXINSTANCES 8
MAXLOGHISTORY 292
LOGFILE
GROUP 1 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO01.LOG' SIZE 50M,
GROUP 2 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO02.LOG' SIZE 50M,
GROUP 3 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\REDO03.LOG' SIZE 50M

DATAFILE
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSTEM01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\UNDOTBS01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\SYSAUX01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\EXAMPLE01.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\TEST.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\USERS02.DBF',
'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\EXAMPLE2.DBF'
CHARACTER SET WE8MSWIN1252
;

RECOVER DATABASE USING BACKUP CONTROLFILE

ALTER DATABASE OPEN RESETLOGS;

ALTER TABLESPACE TEMP ADD TEMPFILE 'C:\ORACLE\PRODUCT\10.2.0\ORADATA\TEST\TEMP01.DBF'
SIZE 20971520 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

10. Run the create control file script issuing:

@DB_CREATE_CONTROLFILE.SQL

If everything went OK and you see no errors, then execute:

SHUTDOWN IMMEDIATE;

11. Startup new database and verify it work fine.

Start Oracle service on Windows using ORADIM utility

To start a new service on Windows, Oracle provides a command-line utility, ORADIM. It can also be used to remove, edit and stop a service.

Create:

C:\oracle\product\10.1.0\db_2>oradim -new -sid %ORACLE_SID% -intpwd password -startmode A
Instance created.

Startup:

C:\oracle\product\10.1.0\db_2>oradim -startup -sid %ORACLE_SID%

Shutdown:

C:\oracle\product\10.1.0\db_2>oradim -shutdown -sid %ORACLE_SID%

Delete:

C:\oracle\product\10.1.0\db_2>oradim -delete -sid %ORACLE_SID%

  • startup – Indicates that the specified instance should be started.
  • shutdown – Indicates that the specified instance should be shutdown.
  • sid – The SID of the database to start.
  • intpwd – The password for the SYS user.
  • startmode – It can be A(utomatic) or M(anual); this refers to whether or not starting the service automatically opens the database.

To verify whether the service is created or not, check Windows services by typing services.msc into the console. A service named OracleServiceDBName (DBName = %ORACLE_SID%) will be found.

Oracle also created a password file under %ORACLE_HOME%\database.

C:\oracle\product\10.1.0\db_2\database\PWDDBName.ORA

Registry Settings:

Check registry editor by typing regedit into the console.

HKLM\Software\Oracle\Key_<Oracle Home Name>\ORA<SID>_AUTOSTART
Start the database when the service starts

HKLM\Software\Oracle\Key_<Oracle Home Name>\ORA<SID>_PFILE
Location of the configuration file

HKLM\Software\Oracle\Key_<Oracle Home Name>\ORA<SID>_SHUTDOWN
Shut down the database when the service stops

HKLM\Software\Oracle\Key_<Oracle Home Name>\ORA<SID>_SHUTDOWNTYPE
Type of shutdown to perform when stopping the service

RMAN – Disaster Recovery using RMAN

Oracle Backup and Recovery on Windows, Part III – Disaster Recovery using RMAN:

(Article extracted from Orafusion.com)

1. Introduction:

This is the final article of a three part series, introducing Oracle’s RMAN (Recovery MANager) backup and recovery tool to novice DBAs. The first article dealt with taking RMAN backups and the second one covered some recovery scenarios. This article discusses disaster recovery – i.e. a situation in which your database server has been destroyed and has taken all your database files (control files, logs and data files) with it. Obviously, recovery from a disaster of this nature is dependent on what you have in terms of backups and hardware resources. We assume you have the following available after the disaster:

  • A server with the same disk layout as the original.
  • The last full hot backup on tape.

With the above items at hand, it is possible to recover all data up to the last full backup. One can do better if subsequent archive logs (after the last backup) are available. In our case these aren’t available, since our only archive destination was on the destroyed server (see Part I ). Oracle provides methods to achieve better data protection. We will discuss some of these towards the end of the article.

Now on with the task at hand. The high-level steps involved in disaster recovery are:

  • Build replacement server.
  • Restore backup from tape.
  • Install database software.
  • Create Oracle service.
  • Restore and recover database.

It sounds quite straightforward and it is reasonably so. However, there are some details that are often missed in books and documentation. The devil, as always, is in these details. Here we hope to provide you with enough detail to plan and practice disaster recovery in your test environment.

2. Build the server

You need a server to host the database, so the first step is to acquire or build the new machine. This is not strictly a DBA task, so we won’t delve into details here. The main point to keep in mind is that the replacement server should, as far as possible, be identical to the old one. In particular, pay attention to the following areas:

Disk layout and capacity: Ideally the server should have the same number of disks as the original. This avoids messy renaming of files during recovery. Obviously, the new disks should also have enough space to hold all software and data that was on the original server.
Operating system, service pack and patches: The operating system environment should be the same as the original, right up to service pack and patch level.
Memory: The new server must have enough memory to cater to Oracle and operating system / other software requirements. Oracle memory structures (Shared pool, db buffer caches etc) will be sized identically to the original database instance. Use of the backup server parameter file will ensure this.

Although you probably won't build and configure the machine yourself, it is important to work with your systems people so that the above items are built to the recovery server specs.

3. Restore backup from tape

The next step is to get your backup from tape onto disk. In our example from Part I, the directory to be restored is e:\backup. The details of this depend on the backup product used, so we can't go into it any further. This task would normally be performed by your local sysadmin.

4. Install Oracle Software

Now we get to the meat of the database recovery process. The next step is to install Oracle software on the machine. The following points should be kept in mind when installing the software:

  • Install the same version of Oracle as was on the destroyed server. The version number should match right down to the patch level, so this may be a multi-step process involving installation followed by the application of one or more patchsets and patches.
  • Do not create a new database at this stage.
  • Create a listener using the Network Configuration Assistant. Ensure that it has the same name and listening ports as the original listener. Relevant listener configuration information can be found in the backed up listener.ora file.

5. Create directory structure for database files

After software installation is completed, create all directories required for datafiles, (online and archived) logs, control files and backups. All directory paths should match those on the original server. This, though not mandatory, saves additional steps associated with renaming files during recovery.
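
For example, on Windows, using directory names that appear in the transcripts later in this article (adjust to match your original server):

C:\> mkdir D:\oracle_data\datafiles\ORCL
C:\> mkdir D:\oracle_data\controlfile\ORCL
C:\> mkdir E:\backup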

Don’t worry if you do not know where the database files should be located. You can obtain the required information from the backup spfile and control file at a later stage. Continue reading – we’ll come back to this later.

6. Create Oracle service

As described in section 2 of Part II, an Oracle service must exist before a database can be created. The service is created using the oradim utility, which must be run from the command line. The following commands show how to create and modify a service:

--create a new service with manual startup

C:\>oradim -new -sid ORCL -startmode m

--modify service to startup automatically

C:\>oradim -edit -sid ORCL -startmode a

Unfortunately oradim does not give any feedback, but you can check that the service exists via the Services administrative panel. The service has been configured to start automatically when the computer is powered up. Note that oradim also offers options to delete, start up and shut down a service; see the documentation for details.

7. Restore and recover database

Now it is time to get down to the nuts and bolts of database recovery. There are several steps, so we’ll list them in order:

  • Copy password and tnsnames files from backup: The backed-up password file and tnsnames.ora should be copied from the backup directory (e:\backup, in our example) to the proper locations. The default locations for the password and tnsnames files are ORACLE_HOME\database and ORACLE_HOME\network\admin respectively.
  • Set ORACLE_SID environment variable: ORACLE_SID should be set to the proper SID name (ORCL in our case). This can be set either in the registry (registry key: HKLM\Software\Oracle\HOME<n>\ORACLE_SID) or from the System applet in the control panel.
  • Invoke RMAN and set the DBID: We invoke rman and connect to the target database as usual. No login credentials are required since we connect from an OS account belonging to ORA_DBA. Note that RMAN accepts a connection to the database although the database is yet to be recovered. RMAN doesn’t as yet “know” which database we intend to connect to. We therefore need to identify the (to be restored) database to RMAN. This is done through the database identifier (DBID). The DBID can be figured out from the name of the controlfile backup. Example: if you use the controlfile backup format suggested in Part I, your controlfile backup name will be something like “CTL_SP_BAK_C-1507972899-20050228-00”. In this case the DBID is 1507972899. Here’s a transcript illustrating the process of setting the DBID:
C:\>rman
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
RMAN> connect target /
connected to target database (not started)
RMAN> set dbid 1507972899
executing command: SET DBID
RMAN>
  • Restore spfile from backup: To restore the spfile, you first need to start up the database in the nomount state. This starts up the database using a dummy parameter file. After that you can restore the spfile from the backup (which was restored from tape in Section 3). Finally you restart the database in nomount state. The restart is required in order to start the instance using the restored parameter file. Here is an example RMAN transcript for the foregoing procedure. Note the difference in SGA size and components between the two startups:
RMAN> startup nomount
startup failed: ORA-01078: failure in processing system parameters
 LRM-00109: could not open parameter file 'C:\ORACLE\ORA92\DATABASE\INITORCL.ORA'
trying to start the Oracle instance without parameter files ...
 Oracle instance started
Total System Global Area 97590928 bytes
Fixed Size 454288 bytes
 Variable Size 46137344 bytes
 Database Buffers 50331648 bytes
 Redo Buffers 667648 bytes
RMAN> restore spfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: sid=9 devtype=DISK
 channel ORA_DISK_1: autobackup found: e:\backup\CTL_SP_BAK_C-1507972899-20050228-00
 channel ORA_DISK_1: SPFILE restore from autobackup complete
 Finished restore at 01/MAR/05
RMAN> startup force nomount
Oracle instance started
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
 Variable Size 763363328 bytes
 Database Buffers 754974720 bytes
 Redo Buffers 2142208 bytes
RMAN>
The instance is now started up with the correct initialisation parameters.

We are now in a position to determine the locations of control file and archive destination, as this information sits in the spfile. This is done via SQL Plus as follows:

C:\>sqlplus /nolog
....output not shown
SQL>connect / as sysdba
 Connected.
 SQL> show parameter control_file
....output not shown
SQL> show parameter log_archive_dest
 ....output not shown

The directories listed in the CONTROL_FILES and LOG_ARCHIVE_DEST_N parameters should be created at this stage if they haven’t been created earlier.

  • Restore control file from backup: The instance now “knows” where the control files should be restored, as this is listed in the CONTROL_FILES initialisation parameter. Therefore, the next step is to restore these files from backup. Once the control files are restored, the instance should be restarted in mount mode. A restart is required because the instance must read the initialisation parameter file in order to determine the control file locations. At the end of this step RMAN also has its proper configuration parameters, as these are stored in the control file.

Here is a RMAN session transcript showing the steps detailed here:

RMAN> restore controlfile from 'e:\backup\CTL_SP_BAK_C-1507972899-20050228-00';
Starting restore at 01/MAR/05
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=13 devtype=DISK
 channel ORA_DISK_1: restoring controlfile
 channel ORA_DISK_1: restore complete
 replicating controlfile
input filename=D:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL01.CTL
output filename=E:\ORACLE_DATA\CONTROLFILE\ORCL\CONTROL02.CTL
output filename=C:\ORACLE_DUP_DEST\CONTROLFILE\ORCL\CONTROL03.CTL
 Finished restore at 01/MAR/05
RMAN> shutdown
Oracle instance shut down
RMAN> exit
Recovery Manager complete.
C:>rman target /
Recovery Manager: Release 9.2.0.4.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database (not started)
RMAN> startup mount;
Oracle instance started
 database mounted
Total System Global Area 1520937712 bytes
Fixed Size 457456 bytes
 Variable Size 763363328 bytes
 Database Buffers 754974720 bytes
 Redo Buffers 2142208 bytes
RMAN> show all;
using target database controlfile instead of recovery catalog
 RMAN configuration parameters are:
 CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
 CONFIGURE BACKUP OPTIMIZATION OFF; # default
 CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
 CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO 'e:\backup\ctl_sp_bak_%F';
 CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
 CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
 CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT 'e:\backup\%U.bak' MAXPIECESIZE 4G;
 CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'C:\ORACLE\ORA92\DATABASE\SNCFORCL.ORA'; # default
RMAN>
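Incidentally, the control file, like the spfile, can be restored with FROM AUTOBACKUP (combined with the same SET CONTROLFILE AUTOBACKUP FORMAT command shown earlier) instead of naming the backup piece explicitly:

RMAN> restore controlfile from autobackup;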

At this stage we can determine the locations of data files and redo logs if we don’t know where they should go. This is done from SQL Plus as follows:

C:>sqlplus /nolog
...output not shown
SQL> connect / as sysdba
 Connected.
 SQL> select name from v$datafile;
...output not shown
SQL> select member from v$logfile;
...output not shown
SQL>

The directories shown in the output should be created manually if this hasn’t been done earlier.
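Before kicking off the datafile restore, it can be reassuring to confirm that the freshly restored control file actually knows about the datafile backups. A quick check from RMAN:

RMAN> list backup of database;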

  • Restore all datafiles: This is easy. Simply issue a "restore database" command from RMAN, and it will do all the rest for you (a quick post-restore sanity check is shown after the transcript):
RMAN> restore database;
Starting restore at 01/MAR/05
using target database controlfile instead of recovery catalog
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: sid=11 devtype=DISK
 allocated channel: ORA_DISK_2
 channel ORA_DISK_2: sid=8 devtype=DISK
 channel ORA_DISK_1: starting datafile backupset restore
 channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to D:\ORACLE_DATA\DATAFILES\ORCL\SYSTEM01.DBF
restoring datafile 00003 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS01.DBF
restoring datafile 00004 to D:\ORACLE_DATA\DATAFILES\ORCL\USERS02.DBF
channel ORA_DISK_2: starting datafile backupset restore
channel ORA_DISK_2: specifying datafile(s) to restore from backup set
restoring datafile 00002 to D:\ORACLE_DATA\DATAFILES\ORCL\UNDOTBS01.DBF
restoring datafile 00005 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS01.DBF
restoring datafile 00006 to D:\ORACLE_DATA\DATAFILES\ORCL\TOOLS02.DBF
channel ORA_DISK_2: restored backup piece 1
piece handle=E:\BACKUP\80G6E1TT_1_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 1
piece handle=E:\BACKUP\81G6E1TU_1_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_2: restored backup piece 2
piece handle=E:\BACKUP\80G6E1TT_2_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 2
piece handle=E:\BACKUP\81G6E1TU_2_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restored backup piece 3
piece handle=E:\BACKUP\81G6E1TU_3_1.BAK tag=TAG20041130T222501 params=NULL
channel ORA_DISK_1: restore complete
channel ORA_DISK_2: restored backup piece 3
piece handle=E:\BACKUP\80G6E1TT_3_1.BAK tag=TAG20041130T222501 params=NULL
 channel ORA_DISK_2: restore complete
 Finished restore at 01/MAR/05
RMAN>
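As a quick sanity check at this point, REPORT SCHEMA lists the datafiles the control file expects, which can be compared against what has just been restored:

RMAN> report schema;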
  • Recover database: The final step is to recover the database. Recovery is, of course, limited by the available archived (and online) redo logs. Since we have lost our database server and have no remote archive destination, we can recover only up to the time of the backup. Further, since this is an incomplete recovery, we will have to open the database with resetlogs. Here's a sample RMAN session illustrating this:
RMAN> recover database;
Starting recover at 01/MAR/05
 using channel ORA_DISK_1
 using channel ORA_DISK_2
starting media recovery
unable to find archive log
archive log thread=1 sequence=1388
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 04/01/2005 14:14:43
RMAN-06054: media recovery requesting unknown log: thread 1 scn 32230460
RMAN> alter database open resetlogs;
database opened
RMAN>

Note that RMAN automatically applies all available archive logs: it first applies the backed-up logs and then searches the archive destination for subsequent ones. This opens the door for further recovery if the necessary logs are available; in our case, however, there is no more redo, so we open the database with resetlogs. The error message above simply indicates that RMAN searched, unsuccessfully, for subsequent logs.
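For completeness: had further logs been available (say, everything up to sequence 1400), the recovery could have been bounded explicitly rather than left to fail on the first missing log. A sketch, with the sequence number purely illustrative (note that UNTIL SEQUENCE stops short of the sequence named, so applying logs through 1400 means specifying 1401):

RMAN> run {
set until sequence 1401 thread 1;
recover database;
}
RMAN> alter database open resetlogs;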

That's it. The database has been recovered, from scratch, to the last available backup. Having done this, it is worth spending some time discussing how one could do better, i.e. recover to a point beyond the backup. We do this in the next section.

7. Options for better recovery

The above recovery leaves one with a sense of dissatisfaction: one could have done much better had the necessary logs been available. Clearly, one would have to copy the logs to a remote machine in order to guarantee access in a disaster situation. A couple of ways to do this include:

Copy archive logs to a remote destination using OS scripts: This is achieved simply by a script scheduled to run every hour or so, which copies to a remote network computer all the archive logs generated since its last run. This achieves better recoverability than before, though not up-to-the-minute recovery. Further, one has to ensure that a log switch is performed before the logs are copied, so that redo associated with recent transactions (within the last hour) makes it to the remote destination. The log switch can be performed using the "alter system archive log current" command. A minimal sketch of such a script is shown below.
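Here is that sketch, for Windows (the paths, share name and log file extension are illustrative only):

REM Force a log switch so the most recent redo is archived
echo alter system archive log current; | sqlplus -s "/ as sysdba"
REM Copy only those logs newer than their copies on the remote share
xcopy e:\oraarch\*.arc \\backupsrv\oraarch /D /Y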

Configure the Oracle archiver (ARCn) process to copy logs to the remote destination: This is done by defining a secondary archive destination via one of the LOG_ARCHIVE_DEST_N initialisation parameters. This method is not recommended because it is somewhat fragile; see, for example, the discussion on Tom Kyte's AskTom site. If you do go down this path, be sure to use a mapped network drive as the archive destination: Oracle 9i will not recognise archive log destinations specified using UNC (Universal Naming Convention) paths. An illustrative example follows.
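For illustration, a secondary destination on a mapped drive might be set as follows (the drive letter and path are hypothetical, and the mapping must be visible to the account under which the Oracle service runs):

SQL> alter system set log_archive_dest_2='LOCATION=Z:\oraarch OPTIONAL' scope=both;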

Finally, any article on disaster recovery should mention Oracle Data Guard, Oracle Corporation's recommended disaster recovery solution for mission-critical systems. This is essentially a standby database that is kept synchronised with the primary through the continuous application of redo. There are different levels of synchronisation depending on the required availability, performance and (most importantly!) the acceptable data loss. There are two types of standby database, depending on how redo is applied: 1) physical standby, an identical, block-for-block copy of the primary; and 2) logical standby, which is kept synchronised by applying SQL mined from the redo logs. Interested readers are referred to the Oracle documentation on Data Guard for further details, as it is a vast subject, more appropriate for a book than an article of this size. A good starting point is the Oracle Data Guard Concepts and Administration guide.

8. Concluding Remarks

This brings me to the end of the three-part series on RMAN. I hope the material covered helps you plan your backup and recovery strategy. Remember this: your backup strategy should be dictated by business requirements rather than by the latest and greatest technology. It is not technically hard to implement up-to-the-minute recovery, given the appropriate hardware and network bandwidth. However, unless you work for a bank or similar business, your end-users will likely accept some data loss once they hear the costs involved in up-to-the-minute recovery. If your requirements do allow for some data loss, it should be possible to implement a robust recovery strategy using the concepts and procedures discussed in this series of articles.
