Monitor Archive Logs Generated

Query on v$sysstat

Redo generated since instance started:

SELECT name, value
FROM v$sysstat
WHERE name like '%redo%'
order by name
/
NAME                                    VALUE
---------------------------------- ----------
redo blocks read for recovery               0
redo blocks written                   6075463
redo buffer allocation retries             21
redo entries                         11244851
redo log space requests                    25
redo log space wait time                  278
redo log switch interrupts                  0
redo ordering marks                    161393
redo size                          2987873528
redo subscn max counts                      0
redo synch time                          5684
redo synch writes                       96011
redo wastage                         18833908
redo write time                         43250
redo writer latching time                   6
redo writes                             72039
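Two derived figures are often more useful than the raw counters. As a sketch against the same V$SYSSTAT statistics shown above, the average size of a redo write and of a redo entry can be computed like this:

```sql
-- Hedged sketch: derive average bytes per redo write and per redo
-- entry from the cumulative V$SYSSTAT counters listed above.
SELECT ROUND(sz.value / NULLIF(wr.value, 0)) avg_bytes_per_write,
       ROUND(sz.value / NULLIF(en.value, 0)) avg_bytes_per_entry
FROM   v$sysstat sz, v$sysstat wr, v$sysstat en
WHERE  sz.name = 'redo size'
AND    wr.name = 'redo writes'
AND    en.name = 'redo entries';
```

With the sample values above this works out to roughly 41 KB per redo write (2987873528 / 72039).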

Daily Count and Size of Redo Log Space (Single Instance):

  • Number of log switches
  • Average log in MB generated per day
SELECT A.*,
       Round(A.Count# * B.AVG# / 1024 / 1024) Daily_Avg_Mb
FROM   (SELECT To_Char(First_Time, 'YYYY-MM-DD') DAY,
               Count(1) Count#,
               Min(RECID) Min#,
               Max(RECID) Max#
        FROM   v$log_history
        GROUP  BY To_Char(First_Time, 'YYYY-MM-DD')
        ORDER  BY 1 DESC) A,
       (SELECT Avg(BYTES) AVG#,
               Count(1) Count#,
               Max(BYTES) Max_Bytes,
               Min(BYTES) Min_Bytes
        FROM   v$log) B;

Total Redo Generated Since Instance Startup (All Instances):

select sum(value)/1048576 redo_MB
from sys.gv_$sysstat
where name = 'redo size';

 REDO_MB
-----------------------
1938.911716461181640625

Find the number of archive logs generated during each hour of the day:

col MidN format 999
col 1AM format 999
col 2AM format 999
col 3AM format 999
col 4AM format 999
col 5AM format 999
col 6AM format 999
col 7AM format 999
col 8AM format 999
col 9AM format 999
col 10AM format 999
col 11AM format 999
col Noon format 999
col 1PM format 999
col 2PM format 999
col 3PM format 999
col 4PM format 999
col 5PM format 999
col 6PM format 999
col 7PM format 999
col 8PM format 999
col 9PM format 999
col 10PM format 999
col 11PM format 999

select to_char(first_time,'mm/dd/yy') logdate,
sum(decode(to_char(first_time,'hh24'),'00',1,0)) "MidN",
sum(decode(to_char(first_time,'hh24'),'01',1,0)) "1AM",
sum(decode(to_char(first_time,'hh24'),'02',1,0)) "2AM",
sum(decode(to_char(first_time,'hh24'),'03',1,0)) "3AM",
sum(decode(to_char(first_time,'hh24'),'04',1,0)) "4AM",
sum(decode(to_char(first_time,'hh24'),'05',1,0)) "5AM",
sum(decode(to_char(first_time,'hh24'),'06',1,0)) "6AM",
sum(decode(to_char(first_time,'hh24'),'07',1,0)) "7AM",
sum(decode(to_char(first_time,'hh24'),'08',1,0)) "8AM",
sum(decode(to_char(first_time,'hh24'),'09',1,0)) "9AM",
sum(decode(to_char(first_time,'hh24'),'10',1,0)) "10AM",
sum(decode(to_char(first_time,'hh24'),'11',1,0)) "11AM",
sum(decode(to_char(first_time,'hh24'),'12',1,0)) "Noon",
sum(decode(to_char(first_time,'hh24'),'13',1,0)) "1PM",
sum(decode(to_char(first_time,'hh24'),'14',1,0)) "2PM",
sum(decode(to_char(first_time,'hh24'),'15',1,0)) "3PM",
sum(decode(to_char(first_time,'hh24'),'16',1,0)) "4PM",
sum(decode(to_char(first_time,'hh24'),'17',1,0)) "5PM",
sum(decode(to_char(first_time,'hh24'),'18',1,0)) "6PM",
sum(decode(to_char(first_time,'hh24'),'19',1,0)) "7PM",
sum(decode(to_char(first_time,'hh24'),'20',1,0)) "8PM",
sum(decode(to_char(first_time,'hh24'),'21',1,0)) "9PM",
sum(decode(to_char(first_time,'hh24'),'22',1,0)) "10PM",
sum(decode(to_char(first_time,'hh24'),'23',1,0)) "11PM"
from v$log_history
group by to_char(first_time,'mm/dd/yy')
order by 1
/

Segments causing redo log generation:

SELECT to_char(begin_interval_time,'YY-MM-DD HH24') snap_time,
 dhso.object_name,
 sum(db_block_changes_delta) BLOCK_CHANGED
 FROM dba_hist_seg_stat dhss,
 dba_hist_seg_stat_obj dhso,
 dba_hist_snapshot dhs
 WHERE dhs.snap_id = dhss.snap_id
 AND dhs.instance_number = dhss.instance_number
 AND dhss.obj# = dhso.obj#
 AND dhss.dataobj# = dhso.dataobj#
 AND begin_interval_time BETWEEN to_date('12-02-12 08:00','YY-MM-DD HH24:MI')
 AND to_date('12-02-13 08:00','YY-MM-DD HH24:MI')
 GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24'),
 dhso.object_name
 HAVING sum(db_block_changes_delta) > 0
ORDER BY sum(db_block_changes_delta) desc ;

Also Check:

  • SQL causing most redo log generation
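No single view attributes redo directly to SQL statements, but the undo usage of active transactions is a reasonable proxy (heavy undo implies heavy redo). A hedged sketch:

```sql
-- Sessions with the largest active transactions, by undo blocks and
-- undo records used. Undo consumption is only a proxy for redo
-- generation, not an exact measure.
SELECT s.sid, s.serial#, s.username, t.used_ublk, t.used_urec
FROM   v$session s, v$transaction t
WHERE  s.taddr = t.addr
ORDER  BY t.used_ublk DESC;
```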

Oracle error: “data got committed in another/same session, cannot update row.”

When this error continued to pop up, the only solution that worked for me was this from OTN Discussion Forums:

Just a heads up, if you really need to get that column updated:
1. Copy and edit the value to a safe place such as notepad or another 
 sql developer worksheet, don't try to save in the table.
2. In a sql developer worksheet, make an update query to set the 
 BLOB/CLOB field to NULL
3. Run the update query and commit
4. refresh your table view, the BLOB/CLOB should be NULL now
5. Copy the value back in and commit
This worked very well for me but it must be done in the same sql developer 
instance that is throwing the error. I tried to have a coworker null the 
field and it did not affect my session at all even after refreshing the 
table. Also my coworker's sql developer showed the field to be NULL even 
before I asked them to try to edit it.


Indexes on Foreign Keys

Foreign keys can produce damaging locking problems if the foreign key columns on the child table are not indexed. The query below lists all of the foreign keys that do not have an appropriate index in place on the child table, i.e. the foreign key constraints most likely to cause locking problems.

SELECT acc.owner||'-> '||acc.constraint_name||'('||acc.column_name
 ||'['||acc.position||'])'||' ***** Missing Index'
 FROM all_cons_columns acc, all_constraints ac
 WHERE ac.constraint_name = acc.constraint_name
 AND ac.constraint_type = 'R'
 AND (acc.owner, acc.table_name, acc.column_name, acc.position)
 IN
 (SELECT acc.owner, acc.table_name, acc.column_name, acc.position
 FROM all_cons_columns acc, all_constraints ac
 WHERE ac.constraint_name = acc.constraint_name
 AND ac.constraint_type = 'R'
 MINUS
 SELECT table_owner, table_name, column_name, column_position
 FROM all_ind_columns)
ORDER BY acc.owner, acc.constraint_name,
 acc.column_name, acc.position;

By creating an index on the foreign key of the child table, “table-level” locks can be avoided.

Keep in mind that you will often be creating an index on the foreign keys in order to optimize join and queries. However, if you fail to create such a foreign key index and if the parent table is subject to updates, you may see heavy lock contention. If ever in doubt, it’s often safer to create indexes on ALL foreign keys, despite the possible overhead of maintaining unneeded indexes.
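For illustration only (the table and column names here are hypothetical), indexing a child-table foreign key is a one-line fix:

```sql
-- Hypothetical schema: EMP.DEPTNO references DEPT(DEPTNO). Without
-- this index, deletes or key updates on DEPT can take a table-level
-- lock on EMP.
CREATE INDEX emp_deptno_fk_idx ON emp (deptno);
```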


Quick TKPROF

1. Find the session to be monitored

SELECT username, sid, serial#, program
FROM v$session
WHERE username = <User_Name>;

2. Enable SQL Tracing

a. Current Session:

ALTER SESSION SET sql_trace = TRUE;
or
execute dbms_session.set_sql_trace(true);

b. Different Session (as SYSDBA):

execute dbms_system.set_sql_trace_in_session(sid, serial#, sql_trace);
e.g.
execute dbms_system.set_sql_trace_in_session(114, 4667, TRUE);

3. Enable Oracle database to gather statistics

ALTER SYSTEM SET timed_statistics = true;  -- at system level
ALTER SESSION SET timed_statistics = true; -- at session level

4. Formatting the output with TKPROF 

TKPROF inputfile outputfile [OPTIONS]
e.g.
tkprof mydb_ora_11915.trc /tmp/tkprof1.txt SYS=NO

5. Find directory where trace file is generated

SELECT value
FROM v$parameter
WHERE name='user_dump_dest';
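On 11g and later, where the diagnostic destination has replaced user_dump_dest, the trace directory and even the current session's trace file can be read directly from V$DIAG_INFO:

```sql
-- 11g+ alternative to querying the user_dump_dest parameter.
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('Diag Trace', 'Default Trace File');
```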

6. Identify trace file generated

SELECT s.username, s.SID, s.serial#, s.PROGRAM, p.spid
FROM v$session s,
 v$process p
WHERE p.addr = s.paddr and s.username = <User_Name>;

7. Disable tracing for the session

ALTER SESSION SET sql_trace = FALSE;
OR
EXECUTE dbms_system.set_sql_trace_in_session (<sid>, <serial#>, false);

Daily Check List

Daily Procedures:

1. Verify that all instances are up

Make sure the database is available. Log into each instance and run daily reports or test scripts. Some sites may wish to automate this.

Optional implementation: use Oracle Enterprise Manager’s ‘probe’ event.

2. Check for new entries in alert log file

• Connect to each managed system.
• Use ‘telnet’ or comparable program.
• For each managed instance, go to the background dump destination, usually $ORACLE_BASE/admin/<SID>/bdump.

Make sure to look under each managed database’s SID.
• At the prompt, use the Unix ‘tail’ command to see the alert_<SID>.log, or otherwise examine the most recent entries in the file.
• If any ORA-errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in <file>.

3. Verify DBSNMP is running

1. Log on to each managed machine to check for the ‘dbsnmp’ process.
For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP. (Some sites have this disabled on purpose; if this is the case, remove this item from your list, or change it to “verify that DBSNMP is NOT running”.)

4. Verify success of database backup

5. Verify success of database archiving to tape

6. Verify enough resources for acceptable performance

6.1 Verify free space in tablespaces.

For each instance, verify that enough free space exists in each tablespace to handle the day’s expected growth. As of <date>, the minimum free space for <repeat for each tablespace>: [ < tablespace > is < amount > ]. When incoming data is stable, and average daily growth can be calculated, then the minimum free space should be at least <time to order, get, and install more disks> days’ data growth.

a) Go to each instance, run free.sql to check free mb in tablespaces.
Compare to the minimum free MB for that tablespace. Note any low-space conditions and correct.

free.sql

select tablespace_name as name,
       sum(blocks) as free_blk,
       trunc(sum(bytes) / (1024 * 1024)) as free_m,
       max(bytes) / 1024 as big_chunk_k,
       count(*) as num_chunks
from   dba_free_space
group by tablespace_name;

b) Go to each instance, run space.sql to check percentage free in tablespaces.
Compare to the minimum percent free for that tablespace. Note any low-space conditions and correct.
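space.sql itself is not reproduced here; a minimal sketch of what it might contain (percentage free per tablespace, joining DBA_DATA_FILES to DBA_FREE_SPACE) is:

```sql
-- Hypothetical space.sql: percent free per tablespace. The outer join
-- keeps tablespaces that have no free extents (pct_free shows NULL).
SELECT df.tablespace_name,
       ROUND(fs.free_bytes / df.total_bytes * 100, 1) AS pct_free
FROM   (SELECT tablespace_name, SUM(bytes) total_bytes
        FROM   dba_data_files
        GROUP  BY tablespace_name) df,
       (SELECT tablespace_name, SUM(bytes) free_bytes
        FROM   dba_free_space
        GROUP  BY tablespace_name) fs
WHERE  df.tablespace_name = fs.tablespace_name(+)
ORDER  BY pct_free;
```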

6.2 Verify rollback segment

Status should be ONLINE, not OFFLINE or FULL, except in some cases you may have a special rollback segment for large batch jobs whose normal status is OFFLINE.

a) Optional: each database may have a list of rollback segment names and their expected statuses.
b) For the current status of each ONLINE or FULL rollback segment (identified by ID, not by name), query V$ROLLSTAT.
c) For the storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view’s STATUS field is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively.
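The two views can be combined so that every segment appears alongside its live status where one exists. A sketch:

```sql
-- DBA_ROLLBACK_SEGS.SEGMENT_ID matches V$ROLLSTAT.USN; the outer join
-- keeps segments that are OFFLINE (and so absent from V$ROLLSTAT).
SELECT d.segment_name, d.tablespace_name,
       d.status dict_status, v.status live_status
FROM   dba_rollback_segs d, v$rollstat v
WHERE  d.segment_id = v.usn(+)
ORDER  BY d.segment_name;
```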

6.3 Identify bad growth projections.

Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object has reached 200 current extents, and it is an object that is expected to grow large, raise its MAX_EXTENTS to UNLIMITED.

a) To gather daily sizing information, run analyze5pct.sql. If you are collecting nightly volumetrics, skip this step.
b) To check current extents, run nr_extents.sql
c) Query current table sizing information
d) Query current index sizing information
e) Query growth trends

6.4 Identify space-bound objects.

Space-bound objects have a NEXT_EXTENT that is bigger than the largest free extent the tablespace can offer. Space-bound objects can harm database operation. If such an object is found, first investigate the situation; then either use ALTER TABLESPACE <tablespace> COALESCE or add another datafile.

a) Run spacebound.sql. If all is well, zero rows will be returned.
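spacebound.sql is not reproduced here; for dictionary-managed tablespaces it presumably amounts to something like:

```sql
-- Hypothetical spacebound.sql: segments whose NEXT_EXTENT exceeds the
-- largest free chunk in their tablespace (zero rows means all is well).
SELECT s.owner, s.segment_name, s.tablespace_name, s.next_extent
FROM   dba_segments s
WHERE  s.next_extent > (SELECT NVL(MAX(f.bytes), 0)
                        FROM   dba_free_space f
                        WHERE  f.tablespace_name = s.tablespace_name);
```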

6.5 Review processes for contention on CPU, memory, network or disk resources.

a) To check CPU utilization, go to x:webphase2default.htm => system metrics => CPU utilization page. 400 is the maximum CPU utilization because there are 4 CPUs on the phxdev and phxprd machines. Investigate if CPU utilization stays above 350 for an extended period.

7. Copy Archived Logs to Standby Database and Roll Forward

If you have a Standby Database, copy the appropriate Archived Logs to the expected location on the standby machine and apply those logs (roll forward the changes) to the standby database. This keeps the standby database up-to-date.

The copying of logs, the applying of them, or both, can in some cases be automated. If you have automated them, then your daily task should be to confirm that this happened correctly each day.

8. Read DBA manuals for one hour

Nothing is more valuable in the long run than that the DBA be as widely experienced, and as widely read, as possible. Readings should include DBA manuals, trade journals, and possibly newsgroups or mailing lists.

References:

  • Loney, Kevin Oracle8 DBA Handbook
  • Cook, David Database Management from Crisis to Confidence
  • Cox, Thomas B. The Database Administration Maturity Model

Automatic Workload Repository

Oracle 10g provides the Automatic Workload Repository. The AWR is a repository of performance information collected by the database to aid
in the tuning process for DBAs. The main focus for the Oracle database in version 10g is self-manageability. AWR is the place where the data to aid in self-management is stored.

AWR Features

The AWR is used to collect performance statistics including:

  • Wait events used to identify performance problems.
  • Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
  • Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
  • Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
  • Object usage statistics.
  • Resource-intensive SQL statements

AWR identifies the SQL statements with the:

  • largest CPU consumption
  • most buffer gets
  • most disk reads
  • most parse calls
  • most shared memory usage

The data is collected by the MMON background process and stored in the SYSAUX tablespace.
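How much SYSAUX space AWR is actually using can be checked through V$SYSAUX_OCCUPANTS:

```sql
-- AWR appears as the 'SM/AWR' occupant of the SYSAUX tablespace.
SELECT occupant_name, schema_name, space_usage_kbytes
FROM   v$sysaux_occupants
WHERE  occupant_name LIKE 'SM%';
```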

Several of the automatic database tuning features require information from the AWR to function correctly, including:

  • Automatic Database Diagnostic Monitor
  • SQL Tuning Advisor
  • Undo Advisor
  • Segment Advisor

Snapshots Interval and Retention Time

Check snapshots interval and retention time from DBA_HIST_WR_CONTROL.

SELECT snap_interval, retention
FROM dba_hist_wr_control;

SNAP_INTERVAL        RETENTION
-------------------- --------------------
+00000 00:30:00.0    +00007 00:00:00.0

They can be modified through DBMS_WORKLOAD_REPOSITORY. By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using the following procedure.

BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 60,         -- snapshot interval in minutes
    retention => 10*24*60    -- retention in minutes (10 days)
  );
END;
/

Extra snapshots can be taken and existing snapshots can be removed, as shown below.

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;
BEGIN
 DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
 low_snap_id => 22,
 high_snap_id => 32);
END;
/

Snapshot information can be queried from the DBA_HIST_SNAPSHOT view.
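For example, to list the snapshots available for reporting:

```sql
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;
```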

Baselines

  • Pair of snapshots that represents a specific period of usage.
  • Compare current performance against similar periods in the past.
BEGIN
 DBMS_WORKLOAD_REPOSITORY.create_baseline (
 start_snap_id => 210,
 end_snap_id => 220,
 baseline_name => 'batch baseline');
END;
/

The pair of snapshots associated with a baseline are retained until the baseline is explicitly deleted.

BEGIN
 DBMS_WORKLOAD_REPOSITORY.drop_baseline (
 baseline_name => 'batch baseline',
 cascade => FALSE); -- Deletes associated snapshots if TRUE.
END;
/

Baseline information can be queried from the DBA_HIST_BASELINE view.
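For example:

```sql
SELECT baseline_name, start_snap_id, end_snap_id
FROM   dba_hist_baseline;
```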

Automatic Workload Repository Reports

  • Generated in text or HTML
  • Database/Grid Control
  • Package DBMS_WORKLOAD_REPOSITORY
  • Scripts awrrpt.sql (uses the current DBID and instance) or awrrpti.sql (prompts for the DBID and instance, allowing you to select a single instance)
  • Very similar to Statspack reports
  • The script awrinfo.sql generates general information about AWR, such as its size and data distribution

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql

The scripts prompt you for the report format (HTML or text), the start snapshot id, the end snapshot id and the report filename. The resulting report can be opened in a browser or text editor accordingly.

Time Model Statistics

  • High-level information about the CPU and elapsed time used by different database operations, obtained from the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views
  • Amounts of time are given in microseconds
  • The statistic “DB time” represents the total time spent in database calls; it does not include the time spent by background processes
  • Since waits are part of the elapsed time, “DB time” can be much larger than the elapsed time since instance startup
  • Some of the information provided by these views is already available in V$SYSSTAT and V$SESSTAT

Workload Repository Views

The following workload repository views are available:

  • V$ACTIVE_SESSION_HISTORY – Displays the active session history (ASH) sampled every second.
  • V$METRIC – Displays metric information.
  • V$METRICNAME – Displays the metrics associated with each metric group.
  • V$METRIC_HISTORY – Displays historical metrics.
  • V$METRICGROUP – Displays all metrics groups.
  • DBA_HIST_ACTIVE_SESS_HISTORY – Displays the history contents of the active session history.
  • DBA_HIST_BASELINE – Displays baseline information.
  • DBA_HIST_DATABASE_INSTANCE – Displays database environment information.
  • DBA_HIST_SNAPSHOT – Displays snapshot information.
  • DBA_HIST_SQL_PLAN – Displays SQL execution plans.
  • DBA_HIST_WR_CONTROL – Displays AWR settings.

AWR from Enterprise Manager

The automated workload repository administration tasks have been included in Enterprise Manager. The “Automatic Workload Repository” page is accessed from the main page by clicking on the “Administration” link, then the “Workload Repository” link under the “Workload” section. The page allows you to modify AWR settings or manage snapshots without using the PL/SQL APIs.

Source / Reference Links:

trivadis.com

OracleBase

OracleUtilities.com

Jonathan Lewis

DRG-10700

Error:

Error DRG-10700: preference does not exist: CTXSYS.DEFAULT_LEXER

Reason:

This error indicates that no default preferences are installed. When the Oracle® server software is installed, interMedia should be installed automatically with the default preferences, which depend on the operating-system language. For example, the French default preferences apply if the language of your Oracle server is French, and the English ones if it is English.

Action:

Install the appropriate language-specific default preferences. There are scripts in …/ctx/admin/defaults that create language-specific default preferences. They are named in the form drdefXX.sql, where XX is the language code (from the Server Reference Manual).

To install the US defaults, for instance:

sqlplus ctxsys/<password> @defaults/drdefus.sql

To install the French defaults, for instance:

sqlplus ctxsys/<password> @defaults/drdeffrc.sql

Grant ‘all privileges’ to ctxsys within Oracle.

After these steps, the InterMedia Context server should start.
