Determine OS Block Size

1. Windows Machine
If you use the NTFS file system, you can run fsutil fsinfo ntfsinfo drivename: to get block size information.

Microsoft Windows XP [Version 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\Administrator>fsutil fsinfo ntfsinfo D:
NTFS Volume Serial Number : 0xc054aec854aec090
Version : 3.1
Number Sectors : 0x000000000c34f28c
Total Clusters : 0x0000000001869e51
Free Clusters : 0x00000000015f0419
Total Reserved : 0x0000000000000000
Bytes Per Sector : 512
Bytes Per Cluster : 4096
Bytes Per FileRecord Segment : 1024
Clusters Per FileRecord Segment : 0
Mft Valid Data Length : 0x0000000003528000
Mft Start Lcn : 0x00000000000c0000
Mft2 Start Lcn : 0x0000000000c34f28
Mft Zone Start : 0x00000000000c3520
Mft Zone End : 0x00000000003cd3e0
C:\Documents and Settings\Administrator>

Here "Bytes Per Cluster" (4096) is the allocation unit, i.e. the file system block size.

Plan Oracle Backup Strategy

This article focuses on general guidelines that can help you decide:

  • when to perform database backups
  • which parts of a database you should back up
  • what tools Oracle provides for those backups
  • how to configure your database to improve its robustness and make backup and recovery easier

The specifics of the strategy must balance the needs of your restore strategy with questions of cost, resources, personnel and other factors.

1. Protecting the redundancy set

  •  The redundancy set should contain:
    1. The last backup of the control file and all the datafiles
    2. All archived redo logs generated after the last backup was taken
    3. Duplicates of the current control file and online redo log files, generated by Oracle database multiplexing, operating system mirroring, or both
    4. Copies of configuration files such as the server parameter file, tnsnames.ora, and listener.ora
  • Do not store the redundancy set on the same disk that contains the datafiles, online logs and control files of the database.
  • Make use of a flash recovery area on a separate device from the working set files; it is the ideal location for storing backup files and mirrored files.
  • Multiplex the online redo log files and current control file at the database level, placing the multiplexed files on different disks mounted under different disk controllers. Also mirror the online redo logs and current control file at the operating system or hardware level, although this is not a substitute for multiplexing at the database level.
  • Archive the redo logs to multiple locations, ideally on different disks, if the database is running in ARCHIVELOG mode (see the example after this list).
  • Use operating system or hardware mirroring for the control file.
  • Use operating system or hardware mirroring for the primary datafiles if possible, to avoid having to perform media recovery for simple disk failures.
  • Keep at least one copy of the entire redundancy set including the most recent backup on disk.
  • If the target database is stored on a RAID device, then store the redundancy set on a set of disks that are not in the same RAID device.
  • If you store the redundancy set on tape, then maintain at least two copies of the data to protect against the risk of tape failure.
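As a minimal sketch, multiplexing an online log group and archiving to two local destinations might look like this (the group number and paths are illustrative):

SQL> alter database add logfile member '/u02/oradata/db/redo01b.log' to group 1;
SQL> alter system set log_archive_dest_1='LOCATION=/u01/arch' scope=both;
SQL> alter system set log_archive_dest_2='LOCATION=/u02/arch' scope=both;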

2. Flash Recovery Area

  • It is recommended that you take advantage of the flash recovery area to store as many backup- and recovery-related files as possible, including disk backups and archived redo logs.
  • Files that are obsolete (no longer needed to meet recoverability goals) or that have already been backed up to tape or other media become eligible for deletion and are deleted when space is needed.

3. ARCHIVELOG and NOARCHIVELOG Mode

  • Running the database in ARCHIVELOG mode is preferable to NOARCHIVELOG mode because it provides more flexible recovery options after a data loss, such as point-in-time recovery of the database or of individual tablespaces.
  • In NOARCHIVELOG mode you cannot perform online backups of the database and have to shut the database down cleanly before a backup is taken. Any data recovery techniques that require the archived redo logs also cannot be used.
  • Manage the stored archived redo logs so that they fit within the limited disk space allotted to them, deleting older logs that are no longer needed to meet recoverability goals (see the sketch below).
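A minimal RMAN sketch of such housekeeping (the seven-day window is illustrative; align it with your retention policy):

RMAN> DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-7';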

4. Oracle Flashback Features and Restore Points

  • Using the flashback features of Oracle improves the availability of the database when repairing the effects of unwanted database changes. The logical-level flashback features allow unaffected objects to remain available, and Flashback Database can rewind the entire database faster than point-in-time recovery.
  • Incorporating Flashback Database or guaranteed restore points into the strategy requires enabling a flash recovery area, as well as creating guaranteed restore points at the right points in time or configuring flashback logging.
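As a sketch, a guaranteed restore point could be created before a risky change like this (the restore point name is illustrative):

SQL> create restore point before_upgrade guarantee flashback database;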

5. Backup Retention Policy

  • RMAN automates the implementation of a backup retention policy using the following commands (a short example follows this list):
    1. CONFIGURE RETENTION POLICY lets you set the retention policy that applies to all of your database files by default.
    2. REPORT OBSOLETE lists the backups currently on disk that are obsolete under the retention policy.
    3. DELETE OBSOLETE deletes the files that REPORT OBSOLETE lists as obsolete.
    4. CHANGE… KEEP lets you set a separate retention policy for specific backups, such as long-term backups kept for archival purposes.
  • Redundancy-based retention policy
    1. In a redundancy-based retention policy, you specify a number n such that you always keep at least n distinct backups of each file in the database.
    2. A redundancy-based backup retention policy determines whether a backup is obsolete based on how many backups of a file are currently on disk.
  • Recovery window-based retention policy
    1. In a recovery window-based retention policy, you specify a time interval in the past and keep all backups required to perform point-in-time recovery to any point during that window.
    2. A recovery window-based retention policy lets you guarantee that you can perform point-in-time recovery to any point in the past, up to a number of days that you specify.
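For example, a minimal RMAN sketch of both policy types and of reporting and deleting obsolete backups (the values 2 and 7 are illustrative, and the two CONFIGURE commands are alternatives – the second would replace the first):

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
RMAN> REPORT OBSOLETE;
RMAN> DELETE OBSOLETE;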

6. Archiving Older Backups

  • An older backup of datafiles and archived logs is necessary for performing point-in-time recovery to a time before your most recent backup.
  • If your most recent backup is corrupt, you can still recover your database using an older backup and the complete set of archived logs since that older backup.
  • Keep a copy of the database for archival purposes.

7. Frequency of backups

  • Base the frequency of backups on the rate or frequency of database changes such as:
    1. Addition and deletion of tables
    2. Insertions and deletions of rows in existing tables
    3. Updates to data within tables
  • The more frequently your database is updated, the more often you should perform database backups.
  • If database updates are relatively infrequent, then you can make whole database backups infrequently and supplement them with incremental backups.
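A minimal RMAN sketch of such a schedule (the levels and cadence are illustrative):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;              # e.g. a weekly base backup
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;   # e.g. daily incrementals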

8. Backups before and after structural changes

  • If you make any of the following structural changes, then perform a backup of the appropriate portion of your database immediately before and after completing the change:
    1. Create or drop a tablespace.
    2. Add or rename a datafile in an existing tablespace.
    3. Add, rename, or drop an online redo log group or member.
  • If you are in NOARCHIVELOG mode, then you must shut down the database and perform a consistent whole database backup after any such change.
  • If you are running in ARCHIVELOG mode, then you must make a control file backup after any such change, using either RMAN’s BACKUP CURRENT CONTROLFILE command or the SQL ALTER DATABASE BACKUP CONTROLFILE statement, as sketched below.
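Both variants, as a minimal sketch (the backup destination path is illustrative):

RMAN> BACKUP CURRENT CONTROLFILE;
SQL> ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control_after_change.bkp';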

9. Scheduling Backups for Frequently Updated Tablespaces

  • If you run in ARCHIVELOG mode, then you can back up an individual tablespace or even a single datafile. You might want to do this for one or more tablespaces that are updated much more often than the rest of your database, as is sometimes the case for the SYSTEM tablespace and automatic undo tablespaces.
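For example, a minimal RMAN sketch (the tablespace name and datafile number are illustrative):

RMAN> BACKUP TABLESPACE system;
RMAN> BACKUP DATAFILE 3;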

10. Backing Up after NOLOGGING Operations

  • When a direct path load is performed to populate a database, no redo data is logged for those database changes. You cannot recover these changes after a restore from backup using conventional media recovery.
  • Likewise, when tables and indexes are created as NOLOGGING, the database does not log redo data for these objects, which means that you cannot recover these objects from existing backups. Therefore, you should back up your datafiles after operations for which no redo data is logged.
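RMAN can report the datafiles that can no longer be recovered from existing backups because of NOLOGGING operations; a minimal sketch (USERS is an illustrative tablespace):

RMAN> REPORT UNRECOVERABLE;
RMAN> BACKUP TABLESPACE users;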

11. Exporting Data for Added Protection and Flexibility

  • The Oracle database export and import utilities are used to export database objects (tables, stored procedures, and so forth) from a database into files, and to re-import objects from those files.
  • An export provides a logical-level snapshot of the exported objects at the time of the export, as a binary file that can be imported back into the source database or some other database.
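A minimal Data Pump sketch of such a logical export (the credentials, schema, and file names are illustrative; DATA_PUMP_DIR must exist as a directory object):

expdp scott/tiger schemas=SCOTT directory=DATA_PUMP_DIR dumpfile=scott.dmp logfile=scott_exp.log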

12. Preventing the Backup of Online Redo Logs

  • Online redo logs, unlike archived logs, should never be backed up. The chief danger of having backups of online redo logs is that you may accidentally restore those backups and corrupt your database.
  • The best method for protecting the online logs against media failure is to multiplex them, with multiple log members in each group, on different disks attached to different disk controllers.

13. Keeping Records of the Hardware and Software Configuration of the Server

  • You should have the following documentation about the hardware configuration:
    1. The name, make, and model of the machine that hosts the database
    2. The version and patch of the operating system
    3. The number of disks and disk controllers
    4. The disk capacity and free space
    5. The names of all datafiles
    6. The name and version of the media management software (if you use a third-party media manager)
  • You should also keep the following documentation about the software configuration:
    1. The name of the database instance (SID)
    2. The database identifier (DBID)
    3. The version and patch release of the Oracle database server
    4. The version and patch release of the networking software
    5. The method (RMAN or user-managed) and frequency of database backups
    6. The method of restore and recovery (RMAN or user-managed)

Article based on the Oracle Doc – Planning Backup Strategy

Network Exports/Imports using Network_Link

The NETWORK_LINK parameter identifies a database link to be used as the source for a network export/import. The following database link will be used to demonstrate its use.

CONN / AS SYSDBA
GRANT CREATE DATABASE LINK TO test;
CONN test/test
CREATE DATABASE LINK remote_scott
  CONNECT TO scott IDENTIFIED BY tiger USING 'DEV';

In the case of exports, the NETWORK_LINK parameter identifies the database link pointing to the source server. The objects are exported from the source server in the normal manner, but written to a directory object on the local server, rather than one on the source server. Both the local and remote users require the EXP_FULL_DATABASE role granted to them.

expdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT \
      directory=TEST_DIR dumpfile=EMP.dmp logfile=expdpEMP.log

For imports, the NETWORK_LINK parameter also identifies the database link pointing to the source server. The difference here is the objects are imported directly from the source into the local server without being written to a dump file. Although there is no need for a DUMPFILE parameter, a directory object is still required for the logs associated with the operation. Both the local and remote users require the IMP_FULL_DATABASE role granted to them.

impdp test/test@db10g tables=SCOTT.EMP network_link=REMOTE_SCOTT \
      directory=TEST_DIR logfile=impdpSCOTT.log \
      remap_schema=SCOTT:TEST

  • Ensure that the exporting user at the source database has the EXP_FULL_DATABASE role. This user must be specified when you create the database link.
  • Ensure that the importing user at the destination database has the IMP_FULL_DATABASE role.
  • Create and test a database link between the source and destination databases.
  • Run the following command, where import_user is the username for the importing user, and db_link is the name of the database link owned by the exporting user:
    impdp import_user/password NETWORK_LINK=db_link FULL=Y
  • A log file for the import operation is written to the DATA_PUMP_DIR directory. You can discover the location of this directory by running the following command:
    SQL> select * from dba_directories where DIRECTORY_NAME like 'DATA_PUMP_DIR';


Switch to ASMM in 10g

Oracle Automatic Shared Memory Management is enabled by setting:

  1. An SPFILE to specify init.ora values
  2. The SGA_TARGET parameter to a non-zero value
  3. The STATISTICS_LEVEL parameter to TYPICAL (the default) or ALL
  4. Optionally, SHARED_POOL_SIZE and the other pool parameters to non-zero values, which then act as minimum sizes for those pools

Check the different component sizes from v$parameter:

select name, value
  from v$parameter
 where name in ('shared_pool_size','large_pool_size',
                'java_pool_size','db_cache_size','sga_target');

Switch to ASMM:

$ sqlplus / as sysdba
SQL> alter system set sga_max_size=1G scope=spfile;
System altered.

SQL> alter system set sga_target=500m scope=both;
System altered.
SQL> ALTER SYSTEM SET SHARED_POOL_SIZE = 0;
System altered.
SQL> ALTER SYSTEM SET LARGE_POOL_SIZE = 0;
System altered.
SQL> ALTER SYSTEM SET JAVA_POOL_SIZE = 0;
System altered.
SQL> ALTER SYSTEM SET DB_CACHE_SIZE = 0;
System altered.
SQL> ALTER SYSTEM SET STREAMS_POOL_SIZE = 0;
System altered.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1223540 bytes
Variable Size 738198668 bytes
Database Buffers 327155712 bytes
Redo Buffers 7163904 bytes
Database mounted.
Database opened.

SQL> show parameter sga_target
NAME          TYPE         VALUE
------------- ------------ ------
sga_target    big integer  500M

SQL> show parameter sga_max_size
NAME          TYPE         VALUE
------------- ------------ ------
sga_max_size  big integer  1G

Disable ASMM:

ASMM can be disabled by setting the SGA_TARGET parameter to zero. But when we disable ASMM, we need to set the following parameters with proper sizing (a sketch follows the command below):

1. shared_pool_size
2. large_pool_size
3. java_pool_size
4. db_cache_size

SQL> alter system set sga_target=0 scope=both;
System altered.
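For example, after disabling ASMM the manual pools can be sized explicitly (a minimal sketch; the values are illustrative and must be tuned to the workload):

SQL> alter system set shared_pool_size=200m scope=both;
SQL> alter system set db_cache_size=400m scope=both;
SQL> alter system set large_pool_size=50m scope=both;
SQL> alter system set java_pool_size=50m scope=both;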

We may need to disable ASMM if a significant amount of memory-resizing activity is causing performance problems in the database. Such frequent memory resizing can happen in a mixed DSS and OLTP environment.


Manage Files On Cloud

A search for Amazon S3 clients turned up two file management solutions that stand out from other such apps.

1. S3 Fox – Mozilla Addon – S3Fox Organizer helps you organize/manage/store your files on Amazon S3. It is easy to install and use as it is integrated into the browser.

2. CloudBerry S3 Explorer – CloudBerry Explorer makes managing files in Amazon S3 storage easy. By providing a user interface to Amazon S3 accounts, files, and buckets, CloudBerry lets you manage your files on cloud just as you would on your own local computer.


Point in Time Recovery Using RMAN

Point-in-time recovery using RMAN – until a log sequence number

1. Recovery Objective

SQL> conn scott/tiger
 Connected.
 SQL> select count(*) from myobjects;
COUNT(*)
 ----------
 249410

2. Switch a logfile

SQL> conn / as sysdba
 Connected.
 SQL> alter system switch logfile;
System altered.

3. Note the current log sequence number (14)

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/ORACLE/opsdba/arch
Oldest online log sequence     12
Next log sequence to archive   14
Current log sequence           14

4. Simulate an application failure – WRONG delete!!

SQL> conn scott/tiger
 Connected.
 SQL> delete myobjects;
249410 rows deleted.
SQL> commit;
Commit complete.

The developer states that the wrong DML statement was issued AFTER 8:15 AM and is positive about this.

We need to determine the log sequence up to which we must recover:

SQL> select sequence#, first_change#, to_char(first_time,'HH24:MI:SS')
  2  from v$log order by 3;

 SEQUENCE# FIRST_CHANGE# TO_CHAR(
---------- ------------- --------
        13       2760463 07:49:36
        14       2761178 08:12:47
        15       2766622 08:18:49

Log sequence 14 was first written to at 8:12 AM, so we should recover to the end of the sequence before it – i.e. sequence# 13.

5. Shut down and mount the database

SQL> shutdown immediate;
 Database closed.
 Database dismounted.
 ORACLE instance shut down.

 SQL> startup mount;
 ORACLE instance started.
Total System Global Area 264241152 bytes
 Fixed Size 2070416 bytes
 Variable Size 163580016 bytes
 Database Buffers 92274688 bytes
 Redo Buffers 6316032 bytes
 Database mounted.

 RMAN> run {
 2> set until sequence=14; >>> UNTIL SEQUENCE is exclusive, so specify one more than the sequence we must recover through
 3> restore database;
 4> recover database;
 5> }
executing command: SET until clause
Starting restore at 29-JAN-07
 allocated channel: ORA_DISK_1
 channel ORA_DISK_1: sid=154 devtype=DISK
 allocated channel: ORA_SBT_TAPE_1
 channel ORA_SBT_TAPE_1: sid=158 devtype=SBT_TAPE
 channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.2.4.0
channel ORA_DISK_1: starting datafile backupset restore
 channel ORA_DISK_1: specifying datafile(s) to restore from backup set
 restoring datafile 00001 to /u02/ORACLE/opsdba/system01.dbf
 restoring datafile 00002 to /u02/ORACLE/opsdba/undotbs01.dbf
 restoring datafile 00003 to /u02/ORACLE/opsdba/sysaux01.dbf
 restoring datafile 00004 to /u02/ORACLE/opsdba/users01.dbf
 restoring datafile 00005 to /u02/ORACLE/opsdba/users02.dbf
 restoring datafile 00006 to /u02/ORACLE/opsdba/users03.dbf
 restoring datafile 00007 to /u02/ORACLE/opsdba/users05.dbf
 restoring datafile 00008 to /u02/ORACLE/opsdba/users06.dbf
 restoring datafile 00009 to /u02/ORACLE/opsdba/users07.dbf
 restoring datafile 00010 to /u02/ORACLE/opsdba/users04.dbf
 restoring datafile 00011 to /u02/ORACLE/opsdba/drtbs1.dbf
 restoring datafile 00012 to /u02/ORACLE/opsdba/drtbs2.dbf

 channel ORA_DISK_1: reading from backup piece
 /opt/oracle/backup/opsdba/OPSDBA.20070129.161.1.1.613122551
 channel ORA_DISK_1: 
 restored backup piece 1
 piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070129.161.1.1.613122551 
 tag=TAG20070129T074911
 channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
 Finished restore at 29-JAN-07
Starting recover at 29-JAN-07
 using channel ORA_DISK_1
 using channel ORA_SBT_TAPE_1
starting media recovery
archive log thread 1 sequence 13 is already on disk as file 
 /u02/ORACLE/opsdba/arch/arch_1_13_613052894.dbf
 channel ORA_DISK_1: starting archive log restore to default destination
 channel ORA_DISK_1: restoring archive log
 archive log thread=1 sequence=12
 channel ORA_DISK_1: reading from backup piece 
 /opt/oracle/backup/opsdba/OPSDBA.20070129.162.1.1.613122577
 channel ORA_DISK_1: restored backup piece 1
 piece handle=/opt/oracle/backup/opsdba/OPSDBA.20070129.162.1.1.613122577 
 tag=TAG20070129T074937
 channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
 archive log filename=/u02/ORACLE/opsdba/arch/arch_1_12_613052894.dbf thread=1 sequence=12
 archive log filename=/u02/ORACLE/opsdba/arch/arch_1_13_613052894.dbf thread=1 sequence=13
 media recovery complete, elapsed time: 00:00:01
 Finished recover at 29-JAN-07
RMAN> sql 'alter database open resetlogs';
sql statement: alter database open resetlogs

6. Confirm that the recovery has worked

 opsdba:/opt/tivoli/tsm/client/oracle/bin64>sqlplus scott/tiger
SQL*Plus: Release 10.2.0.2.0 - Production on Mon Jan 29 09:43:14 2007
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
 Connected to:
 Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
 With the Partitioning, Oracle Label Security, OLAP and Data Mining options
SQL> select count(*) from myobjects;
COUNT(*)
 ----------
 249410


Oracle Database Running Out of Space?

Are you running out of space in the file system where database files (data files) are installed?

Try out this tip in order to move some of your data files from one drive to another and update the datafile location in your database.

The operating system: Oracle Enterprise Linux
The Database: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0

In our environment the database’s files (data files) are installed in /oracle/oradata/mydb. That drive has reached 99% utilization. We will move system01.dbf from this location to a new location, /oracle/hdb1/oradata/mydb. /oracle/hdb1 is mounted from /dev/hdb1; that drive is nearly empty, which is why it was chosen.
Now for the real moving part, we will perform the following steps:

1. Log in to SQL*Plus and shut down the database.
2. Log out from SQL*Plus and move the files from the source directory to the destination directory.
3. Log in to SQL*Plus as /nolog.
4. Connect as SYSDBA to the idle instance.
5. Issue the STARTUP MOUNT command to bring up the database.
6. Issue the ALTER DATABASE RENAME FILE command to rename the data files from the source directory to the destination directory.
7. Finally, open the database.

These brief steps achieve our goal. The following commands demonstrate them as a live example:

Step 1
oracle@astrn10: sqlplus /nolog
SQL> conn /as sysdba
SQL> shutdown immediate;
Troubleshooting: If you cannot get the SQL prompt, check your ORACLE_HOME and ORACLE_SID.
Step 2
SQL> exit;
oracle@astrn10: mv /oracle/oradata/mydb/system01.dbf /oracle/hdb1/oradata/mydb/
Now check whether the file has been moved by issuing the following command:
oracle@astrn10: ls -l /oracle/hdb1/oradata/mydb/
total 429924352
-rw-r----- 1 oracle oinstall 429924352 Feb 12 11:00 system01.dbf
Now we are ready for the next step.
Step 3
oracle@astrn10: sqlplus /nolog
SQL>
Step 4
SQL> conn /as sysdba
Connected to an idle instance.
Step 5
SQL> startup mount;
Database mounted.
Step 6
SQL> alter database rename file '/oracle/oradata/mydb/system01.dbf' to '/oracle/hdb1/oradata/mydb/system01.dbf';
Database altered.
Step 7
SQL> alter database open;
Database opened.

That’s all. We are done with moving data files from one drive to another. If this were Windows or any other operating system, copy the files using the operating system’s own commands/interface in Step 2.
To move more files (in this demonstration we moved only one), repeat Step 2 and Step 6 for each file, as in the sketch below.
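For example, to move a second datafile (users01.dbf here is illustrative), repeat the same pattern:

oracle@astrn10: mv /oracle/oradata/mydb/users01.dbf /oracle/hdb1/oradata/mydb/
SQL> alter database rename file '/oracle/oradata/mydb/users01.dbf' to '/oracle/hdb1/oradata/mydb/users01.dbf';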
#End of tip

Source:

Running out of space? Want to move Oracle Datafiles? – Ask Anantha


Oracle Locks


Query to identify the owner, object, object type, SID, serial number, status, OS user, and machine, in order to track locks:

select c.owner, c.object_name, c.object_type,b.sid, b.serial#, b.status, b.osuser, b.machine, a.session_id
 from v$locked_object a , v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

More Queries:

#1
select
s1.username || '@' || s1.machine 
|| ' ( SID,S#=' || s1.sid || ',' || s1.serial# || ' ) is blocking '
|| s2.username || '@' || s2.machine
|| ' ( SID,S#=' || s2.sid || ',' || s2.serial# || ' )'
AS blocking_status
from
v$lock l1,
v$session s1,
v$lock l2,
v$session s2
where
s1.sid = l1.sid
and s2.sid = l2.sid
and l1.BLOCK = 1
and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2;
#2
select
 oracle_username,
 os_user_name,
 locked_mode,
 object_name,
 object_type
from
 v$locked_object a, dba_objects b
where
 a.object_id = b.object_id;

Kill the session causing lock:

alter system kill session 'SID,SERIAL';
E.g.: alter system kill session '323,1609';
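A quick way to exercise these queries (a minimal sketch; the SCOTT.EMP row is illustrative):

-- Session 1 (becomes the blocker):
SQL> update scott.emp set sal = sal where empno = 7369;   -- no commit

-- Session 2 (hangs, blocked by session 1):
SQL> update scott.emp set sal = sal where empno = 7369;

-- Session 3: run query #1 above to see who is blocking whom, then,
-- if appropriate, kill the blocking session with ALTER SYSTEM KILL SESSION.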

Source / Reference Links:

Automatic Shared Memory Management – ASMM

The Oracle Automatic Shared Memory Management is a feature that automatically readjusts the sizes of the main pools (db_cache_size, shared_pool_size, large_pool_size, java_pool_size) based on existing workloads for optimal performance.

About Automatic Shared Memory Management:

Automatic Shared Memory Management simplifies SGA memory management. You specify the total amount of SGA memory available to an instance using the SGA_TARGET initialization parameter, and Oracle Database automatically distributes this memory among the various SGA components to ensure the most effective memory utilization.

When automatic shared memory management is enabled, the sizes of the different SGA components are flexible and can adapt to the needs of a workload without requiring any additional configuration. The database automatically distributes the available memory among the various components as required, allowing the system to maximize the use of all available SGA memory.

Oracle Database remembers the sizes of the automatically tuned components across instance shutdowns if you are using a server parameter file (SPFILE). As a result, the system does not need to learn the characteristics of the workload again each time an instance is started. It can begin with information from the past instance and continue evaluating the workload where it left off at the last shutdown.

The benefits of ASMM are:

  • Reduces the chance of running out of shared pool memory
  • Uses available memory optimally
  • Improves database performance by constantly matching memory allocations and instance memory needs

Based on workload information, automatic shared memory tuning:

  • Captures statistics periodically in the background
  • Uses the different memory advisories
  • Performs "what-if" analyses to determine best distribution of memory
  • Moves memory to where it is most needed
  • Has no need to configure parameters for the worst-case scenario
  • Resurrects component sizes from last shutdown if SPFILE is used

Oracle Automatic Shared Memory Management is enabled by setting:

  • An SPFILE to specify init.ora values
  • The SGA_TARGET parameter to a non-zero value
  • The STATISTICS_LEVEL parameter to TYPICAL (the default) or ALL (the other value, BASIC, does not allow the memory pools to be changed automatically)
  • Optionally, SHARED_POOL_SIZE and the other pool parameters to non-zero values, which then act as minimum sizes for those pools

Oracle 10g introduced special double-underscore hidden parameters to control ASMM:

  • __db_cache_size
  • __shared_pool_size
  • __large_pool_size

Once enabled, Oracle ASMM will morph the pool areas within the confines of SGA_MAX_SIZE.

When ASMM is enabled, the following memory pools are automatically sized (a query to observe their current sizes follows this list):

1. Buffer cache (DB_CACHE_SIZE)
2. Shared pool (SHARED_POOL_SIZE)
3. Large pool (LARGE_POOL_SIZE)
4. Java pool (JAVA_POOL_SIZE)
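The sizes ASMM has currently assigned to these components can be checked with a quick sketch against V$SGA_DYNAMIC_COMPONENTS:

SQL> select component, current_size/1024/1024 as size_mb
     from v$sga_dynamic_components;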

The following pools are manually sized components and are not affected by ASMM:

1. DB_KEEP_CACHE_SIZE
2. DB_RECYCLE_CACHE_SIZE
3. DB_nK_CACHE_SIZE (non-default block size)
4. STREAMS_POOL_SIZE
5. LOG_BUFFER


How to Find Sessions Generating Lots of Redo

When a transaction generates undo, it will automatically generate redo as well.

Methods to find sessions generating lots of redo:

1) Query V$SESS_IO.

This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session. High values indicate a session generating lots of redo.

SQL> SELECT s.sid, s.serial#, s.username, s.program,
 i.block_changes
 FROM v$session s, v$sess_io i
 WHERE s.sid = i.sid
 ORDER BY 5 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES. Large deltas indicate high redo generation by the session.
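One way to examine the deltas is to snapshot the view into a scratch table and diff it later (a minimal sketch; the table name redo_snap is illustrative):

SQL> create table redo_snap as
     select sysdate snap_time, s.sid, s.serial#, i.block_changes
     from v$session s, v$sess_io i
     where s.sid = i.sid;

-- ...wait a representative interval, then:

SQL> select r.sid, i.block_changes - r.block_changes as delta
     from redo_snap r, v$sess_io i
     where r.sid = i.sid
     order by 2 desc;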

2) Query V$TRANSACTION.

This view contains information about the number of undo blocks and undo records accessed by each transaction (found in the USED_UBLK and USED_UREC columns).

SQL> SELECT s.sid, s.serial#, s.username, s.program,
 t.used_ublk, t.used_urec
 FROM v$session s, v$transaction t
 WHERE s.taddr = t.addr
 ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC. Large deltas indicate high redo generation by the session.

Use the first query when you need to check for programs generating lots of redo and those programs activate more than one transaction. The second query can be used to find out which particular transactions are generating redo.

Source: LOGGING OR NOLOGGING THAT IS THE QUESTION By Francisco Munoz Alvarez