Steps to trace a session and format the trace file with TKPROF:
1. Find the session to be monitored
SELECT username, sid, serial#, program FROM v$session WHERE username = <User_Name>;
2. Enable SQL Tracing
a) Current Session:
ALTER SESSION SET sql_trace = TRUE; or execute dbms_session.set_sql_trace(true);
b) Different Session (as SYSDBA):
execute dbms_system.set_sql_trace_in_session(sid, serial#, sql_trace); e.g. execute dbms_system.set_sql_trace_in_session(114, 4667, TRUE);
3. Enable Oracle database to gather statistics
ALTER SYSTEM SET timed_statistics = true;   -- at system level
ALTER SESSION SET timed_statistics = true;  -- at session level
4. Formatting the output with TKPROF
TKPROF inputfile outputfile [OPTIONS] e.g. tkprof mydb_ora_11915.trc /tmp/tkprof1.txt SYS=NO
5. Find directory where trace file is generated
SELECT value FROM v$parameter WHERE name='user_dump_dest';
6. Identify trace file generated
SELECT s.username, s.SID, s.serial#, s.PROGRAM, p.spid FROM v$session s, v$process p WHERE p.addr = s.paddr and s.username = <User_Name>;
7. Disable tracing for the session
ALTER SESSION SET sql_trace = FALSE; OR EXECUTE dbms_system.set_sql_trace_in_session (<sid>, <serial#>, false);
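The workflow above can be wrapped in a small script on the OS side. A minimal sketch, assuming a typical user_dump_dest layout — the UDUMP default and the output file name are hypothetical; query v$parameter as in step 5 for the real directory:

```shell
#!/bin/sh
# Locate the newest trace file in the user dump destination and format
# it with TKPROF. The UDUMP default below is a hypothetical path --
# query v$parameter for user_dump_dest (step 5) on your system.

newest_trace() {
  # Print the most recently modified *.trc file in the given directory.
  dir="$1"
  ls -t "$dir"/*.trc 2>/dev/null | head -1
}

UDUMP="${UDUMP:-/u01/app/oracle/admin/ORCL/udump}"
TRC="$(newest_trace "$UDUMP")"
if [ -n "$TRC" ]; then
  tkprof "$TRC" /tmp/tkprof_report.txt SYS=NO
fi
```

Picking the newest trace by modification time is a convenience; matching on the spid from step 6 is more precise when several sessions are tracing at once.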
Happy New Year 🙂 – dbpost
Databases in Cloud (Patrick Schwanke)
Lots of people talk about cloud these days. Having worked with Oracle databases for about nine years, I wanted to see how an Oracle database behaves inside the Amazon Web Services cloud and how to get it running from scratch.
Over the next few days I will post some experiences, stumbling blocks, and other how-to material:
- How to appropriately set up your AWS account and environment: Step 1
- Amazon Cloud Basics for Oracle DBAs: Step 2
- How to create an Oracle instance: Step 3
- How to access my Oracle instance in the cloud (SSH, RDP, File Transfer, Listener, SQL*Plus): Step 4
- How to do some very useful automation / scripting
- How about performance in the cloud?
- How to do backups inside the cloud or into the cloud
Related Articles:
Monitoring Oracle databases running on Amazon EC2
Monitoring Oracle databases running on Amazon RDS
Daily Procedures:
1. Verify that all instances are up
Make sure each database is available. Log into each instance and run daily reports or test scripts. Some sites may wish to automate this.
Optional implementation: use Oracle Enterprise Manager’s ‘probe’ event.
2. Check for new entries in alert log file
• Connect to each managed system.
• Use ‘telnet’ or comparable program.
• For each managed instance, go to the background dump destination, usually $ORACLE_BASE/admin/<SID>/bdump.
Make sure to look under each managed database’s SID.
• At the prompt, use the Unix ‘tail’ command to see the alert_<SID>.log, or otherwise examine the most recent entries in the file.
• If any ORA-errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in <file>.
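The alert-log check above lends itself to scripting. A sketch that scans the recent tail of the log for ORA- errors (the log path in the usage comment is hypothetical; substitute your bdump path and SID):

```shell
#!/bin/sh
# Report ORA- errors found in the last N lines of an alert log.
# The example path below is an assumption; point it at your own
# alert_<SID>.log in the background dump destination.

scan_alert() {
  logfile="$1"
  lines="${2:-200}"
  tail -n "$lines" "$logfile" | grep 'ORA-' || true
}

# Example (hypothetical path):
# scan_alert /u01/app/oracle/admin/ORCL/bdump/alert_ORCL.log 500
```

Remembering the last-checked file offset (or timestamp) instead of a fixed line count avoids re-reporting old errors; the sketch keeps it simple.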
3. Verify DBSNMP is running
1. Log on to each managed machine to check for the ‘dbsnmp’ process.
For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running. If not, restart DBSNMP. (Some sites have this disabled on purpose; if this is the case, remove this item from your list, or change it to “verify that DBSNMP is NOT running”.)
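A minimal sketch of that process check; the function reads ps output from stdin and uses a bracketed pattern so the grep never matches itself:

```shell
#!/bin/sh
# Count dbsnmp processes from 'ps' output read on stdin. The bracket
# trick '[d]bsnmp' keeps the grep command itself out of the count.

count_dbsnmp() {
  grep '[d]bsnmp' | wc -l | tr -d ' '
}

# Usage: n=$(ps -ef | count_dbsnmp)
# Two processes are expected when the agent is healthy.
```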
4. Verify success of database backup
5. Verify success of database archiving to tape
6. Verify enough resources for acceptable performance
6.1 Verify free space in tablespaces.
For each instance, verify that enough free space exists in each tablespace to handle the day’s expected growth. As of <date>, the minimum free space for <repeat for each tablespace>: [ < tablespace > is < amount > ]. When incoming data is stable, and average daily growth can be calculated, then the minimum free space should be at least <time to order, get, and install more disks> days’ data growth.
a) Go to each instance, run free.sql to check free mb in tablespaces.
Compare to the minimum free MB for that tablespace. Note any low-space conditions and correct.
free.sql
select tablespace_name as name,
sum ( blocks ) as free_blk,
trunc ( sum ( bytes ) / (1024*1024) ) as free_m,
max ( bytes ) / (1024) as big_chunk_k,
count (*) as num_chunks
from dba_free_space
group by tablespace_name;
b) Go to each instance, run space.sql to check percentage free in tablespaces.
Compare to the minimum percent free for that tablespace. Note any low-space conditions and correct.
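The comparison against the minimum can be automated once the free.sql figures are captured. A sketch with hypothetical threshold values — derive the real minimums from measured daily growth plus disk lead time, as described above:

```shell
#!/bin/sh
# Flag a tablespace whose free MB is below the agreed minimum.
# Threshold values are assumptions; set them per tablespace from
# your own growth measurements.

check_free() {
  ts="$1"; free_mb="$2"; min_mb="$3"
  if [ "$free_mb" -lt "$min_mb" ]; then
    echo "LOW: $ts has ${free_mb}M free (minimum ${min_mb}M)"
  else
    echo "OK: $ts"
  fi
}

# Example: check_free USERS 500 1000
```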
6.2 Verify rollback segments
Status should be ONLINE, not OFFLINE or FULL, except in some cases you may have a special rollback segment for large batch jobs whose normal status is OFFLINE.
a) Optional: each database may have a list of rollback segment names and their expected statuses.
b) For current status of each ONLINE or FULL rollback segment (by ID not by name), query on V$ROLLSTAT.
c) For storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view’s STATUS field is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively.
6.3 Identify bad growth projections.
Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object that is supposed to get large reaches 200 current extents, raise its max_extents to unlimited.
a) To gather daily sizing information, run analyze5pct.sql. If you are collecting nightly volumetrics, skip this step.
b) To check current extents, run nr_extents.sql
c) Query current table sizing information
d) Query current index sizing information
e) Query growth trends
6.4 Identify space-bound objects.
Space-bound objects’ next extents are bigger than the largest free extent the tablespace can offer. Space-bound objects can halt database operation. If such an object turns up, first investigate the situation; then either run ALTER TABLESPACE <tablespace> COALESCE or add another datafile.
a) Run spacebound.sql. If all is well, zero rows will be returned.
6.5 Review contention for CPU, memory, network, or disk resources.
a) To check CPU utilization, go to x:webphase2default.htm => system metrics => CPU utilization page. The maximum CPU utilization is 400 because there are 4 CPUs on the phxdev and phxprd machines. Investigate if CPU utilization stays above 350 for an extended period.
7. Copy Archived Logs to Standby Database and Roll Forward
If you have a Standby Database, copy the appropriate Archived Logs to the expected location on the standby machine and apply those logs (roll forward the changes) to the standby database. This keeps the standby database up-to-date.
The copying of logs, the applying of them, or both, can in some cases be automated. If you have automated them, then your daily task should be to confirm that this happened correctly each day.
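A minimal sketch of the copy step, assuming the archived logs are visible under local directories — in practice the standby is usually reached via scp or NFS, the .arc extension varies by site, and the logs are then applied with RECOVER STANDBY DATABASE:

```shell
#!/bin/sh
# Copy archived logs to the standby location, skipping logs that are
# already present there. Directory layout and the .arc extension are
# assumptions; adapt to your archive destination.

ship_logs() {
  src="$1"; dst="$2"
  for f in "$src"/*.arc; do
    [ -e "$f" ] || continue                  # no logs to ship
    base=$(basename "$f")
    [ -e "$dst/$base" ] || cp "$f" "$dst/"   # skip already-shipped logs
  done
}
```

Checking for the file on the standby side before copying is what makes re-runs safe, which matters when the script is scheduled and you only verify its work daily.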
8. Read DBA manuals for one hour
Nothing is more valuable in the long run than that the DBA be as widely experienced, and as widely read, as possible. Readings should include DBA manuals, trade journals, and possibly newsgroups or mailing lists.
Oracle 10g provides the Automatic Workload Repository (AWR), a repository of performance information collected by the database to aid DBAs in the tuning process. The main focus for the Oracle database in version 10g is self-manageability, and the AWR is where the data that drives self-management is stored.
AWR Features
The AWR is used to collect performance statistics, including wait events, time-model statistics, active session history, object usage statistics, and the SQL statements that place the highest load on the system.
The data is captured by the MMON background process and stored in the SYSAUX tablespace.
Several of the automatic database tuning features require information from the AWR to function correctly, the Automatic Database Diagnostic Monitor (ADDM) among them.
Snapshot Interval and Retention Time
Check the snapshot interval and retention time in DBA_HIST_WR_CONTROL.
SELECT snap_interval, retention FROM dba_hist_wr_control;

SNAP_INTERVAL        RETENTION
-------------------- --------------------
+00000 00:30:00.0    +00007 00:00:00.0
They can be modified through DBMS_WORKLOAD_REPOSITORY. By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using the following procedure.
BEGIN
  dbms_workload_repository.modify_snapshot_settings(
    interval  => 60,         -- minutes
    retention => 10*24*60);  -- minutes (10 days)
END;
/
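Both parameters are expressed in minutes, which is why a 10-day retention is written as 10*24*60. A quick sanity check of that arithmetic:

```shell
#!/bin/sh
# modify_snapshot_settings takes minutes: interval => 60 is one hour;
# a retention of N days must be supplied as N*24*60 minutes.

retention_minutes() {
  days="$1"
  echo $(( days * 24 * 60 ))
}

# retention_minutes 10 prints the value used in the PL/SQL call above.
```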
Extra snapshots can be taken and existing snapshots can be removed, as shown below.
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range(
    low_snap_id  => 22,
    high_snap_id => 32);
END;
/
Snapshot information can be queried from the DBA_HIST_SNAPSHOT view.
Baselines
A baseline is a pair of snapshots marking a representative period of workload, preserved so current performance can later be compared against it.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.create_baseline(
    start_snap_id => 210,
    end_snap_id   => 220,
    baseline_name => 'batch baseline');
END;
/
The pair of snapshots associated with a baseline are retained until the baseline is explicitly deleted.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.drop_baseline(
    baseline_name => 'batch baseline',
    cascade       => FALSE);  -- Deletes associated snapshots if TRUE.
END;
/
Baseline information can be queried from the DBA_HIST_BASELINE view.
Automatic Workload Repository Reports
AWR reports can be generated with the scripts below; awrrpt.sql reports on the current instance, while awrrpti.sql prompts for a specific database and instance.
@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql
Workload Repository Views
The workload repository is exposed through the DBA_HIST_* views, among them DBA_HIST_SNAPSHOT, DBA_HIST_BASELINE, and DBA_HIST_WR_CONTROL used above.
AWR from Enterprise Manager
The automated workload repository administration tasks have been included in Enterprise Manager. The “Automatic Workload Repository” page is accessed from the main page by clicking on the “Administration” link, then the “Workload Repository” link under the “Workload” section. The page allows you to modify AWR settings or manage snapshots without using the PL/SQL APIs.
Job status and run count:
select job_name,
       owner,
       nvl(to_char(next_run_date, 'DD-MON-YYYY HH24:MI:SS'), schedule_name) as next_run_date,
       to_char(last_start_date, 'DD-MON-YYYY HH24:MI:SS') as last_run_date,
       job_class,
       run_count
from dba_scheduler_jobs;
An «ALL_OBJECTS» Table with 1’000’000 Rows
The following Code is from Tom Kyte (http://asktom.oracle.com)
-- Create a table with the same structure as ALL_OBJECTS from the Oracle dictionary
create table bigtab
as
select rownum id, a.*
from all_objects a
where 1=0;
alter table bigtab nologging;

-- Fill 1'000'000 rows into the table
declare
l_cnt number;
l_rows number := 1000000;
begin
-- Copy ALL_OBJECTS
insert /*+ append */
into bigtab
select rownum, a.*
from all_objects a;
l_cnt := sql%rowcount;
commit;
-- Generate rows by doubling the table until l_rows is reached
while (l_cnt < l_rows)
loop
insert /*+ APPEND */ into bigtab
select rownum+l_cnt,
OWNER, OBJECT_NAME, SUBOBJECT_NAME,
OBJECT_ID, DATA_OBJECT_ID,
OBJECT_TYPE, CREATED, LAST_DDL_TIME,
TIMESTAMP, STATUS, TEMPORARY,
GENERATED, SECONDARY
from bigtab
where rownum <= l_rows-l_cnt;
l_cnt := l_cnt + sql%rowcount;
commit;
end loop;
end;
/

alter table bigtab add constraint
bigtab_pk primary key(id);
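The WHILE loop above grows the table geometrically: each pass copies at most the current row count (the "where rownum <= l_rows - l_cnt" predicate caps the final pass), so from roughly 50,000 seed rows only a handful of passes are needed. A sketch of the same arithmetic:

```shell
#!/bin/sh
# Simulate the row-doubling loop: each pass adds
# min(current count, rows still needed), mirroring the PL/SQL
# "insert ... where rownum <= l_rows - l_cnt".

doubling_passes() {
  cnt="$1"; target="$2"; passes=0
  while [ "$cnt" -lt "$target" ]; do
    add=$(( target - cnt ))
    [ "$add" -gt "$cnt" ] && add=$cnt
    cnt=$(( cnt + add ))
    passes=$(( passes + 1 ))
  done
  echo "$passes"
}

# doubling_passes 51502 1000000 shows how few inserts a dictionary-sized
# seed needs to reach a million rows.
```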
A Table with Random Data and same Size as ALL_OBJECTS
CREATE TABLE bigtab (
id NUMBER,
weight NUMBER,
adate DATE
);

INSERT INTO bigtab (id, weight, adate)
SELECT MOD(ROWNUM,1000),
DBMS_RANDOM.RANDOM,
SYSDATE-1000+DBMS_RANDOM.VALUE(0,1000)
FROM all_objects
/
51502 rows created.
A Table which can be used for Partition Tests
The ID of the table can be used for Range Partitioning
create table bigtab (
id number(12,6),
v1 varchar2(10),
padding varchar2(50)
)
nologging -- just to save a bit of time
/

insert /*+ append ordered full(s1) use_nl(s2) */
into bigtab
select
3000 + trunc((rownum-1)/500,6),
to_char(rownum),
rpad('x',50,'x')
from
all_objects s1, -- you'll need the privilege
all_objects s2
where
rownum <= 1000000
/
commit;

        ID V1         PADDING
---------- ---------- --------------------------------------------------
3000 1 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.002 2 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.004 3 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.006 4 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.008 5 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.01 6 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.012 7 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.014 8 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
3000.016 9 xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
A Table with Dates which can be used for Partition Tests
This code is from http://www.oracle-base.com
CREATE TABLE bigtab (
id NUMBER(10),
created_date DATE,
lookup_id NUMBER(10),
data VARCHAR2(50)
);

DECLARE
l_lookup_id NUMBER(10);
l_create_date DATE;
BEGIN
FOR i IN 1 .. 1000000 LOOP
IF MOD(i, 3) = 0 THEN
l_create_date := ADD_MONTHS(SYSDATE, -24);
l_lookup_id := 2;
ELSIF MOD(i, 2) = 0 THEN
l_create_date := ADD_MONTHS(SYSDATE, -12);
l_lookup_id := 1;
ELSE
l_create_date := SYSDATE;
l_lookup_id := 3;
END IF;
INSERT INTO bigtab (id, created_date, lookup_id, data)
VALUES (i, l_create_date, l_lookup_id, 'This is some data for ' || i);
END LOOP;
COMMIT;
END;
/
Verify:
SQL> select id, to_char(created_date,'DD.MM.YYYY'),
     lookup_id, data
     from bigtab where rownum < 10;

        ID TO_CHAR(CR  LOOKUP_ID DATA
---------- ---------- ---------- -----------------------------
1 21.08.2007 3 This is some data for 1
2 21.08.2006 1 This is some data for 2
3 21.08.2005 2 This is some data for 3
4 21.08.2006 1 This is some data for 4
5 21.08.2007 3 This is some data for 5
6 21.08.2005 2 This is some data for 6
7 21.08.2007 3 This is some data for 7
8 21.08.2006 1 This is some data for 8
9 21.08.2005 2 This is some data for 9
Error:
ORA-01127: database name 'string' exceeds size limit of string characters
Reason:
The specified database name is too long.
Action:
Shorten the database name to eight characters or less.
Draft Article
Buffer Hit Ratio
Consistent Gets – The number of accesses made to the block buffer to retrieve data in a consistent mode.
DB Blk Gets – The number of blocks accessed via single block gets (i.e. not through the consistent get mechanism).
Physical Reads – The cumulative number of blocks read from disk.
Logical reads are the sum of consistent gets and db block gets.
The db block gets statistic value is incremented when a block is read for update and when segment header blocks are accessed.
Hit Ratio should be > 80%, else increase DB_BLOCK_BUFFERS in init.ora
select sum(decode(name, 'consistent gets', value, 0)) "Consistent Gets",
       sum(decode(name, 'db block gets', value, 0))   "DB Block Gets",
       sum(decode(name, 'physical reads', value, 0))  "Physical Reads",
       round((sum(decode(name, 'consistent gets', value, 0)) +
              sum(decode(name, 'db block gets', value, 0)) -
              sum(decode(name, 'physical reads', value, 0))) /
             (sum(decode(name, 'consistent gets', value, 0)) +
              sum(decode(name, 'db block gets', value, 0))) * 100, 2) "Hit Ratio"
from v$sysstat;
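The ratio reduces to (logical reads - physical reads) / logical reads, where logical reads are consistent gets plus db block gets. A sketch of the same arithmetic outside the database (awk does the floating-point work):

```shell
#!/bin/sh
# Buffer cache hit ratio: (logical - physical) / logical * 100,
# where logical reads = consistent gets + db block gets.

hit_ratio() {
  consistent="$1"; dbblock="$2"; physical="$3"
  awk -v c="$consistent" -v d="$dbblock" -v p="$physical" \
      'BEGIN { printf "%.2f", (c + d - p) / (c + d) * 100 }'
}

# Example: hit_ratio 90 30 12 computes (120 - 12) / 120 * 100.
```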
Is the Oracle buffer hit ratio a useless metric for monitoring and tuning? – Bur
Data Dictionary Hit Ratio
Gets – Total number of requests for information on the data object.
Cache Misses – Number of data requests resulting in cache misses
Hit Ratio should be > 90%, else increase SHARED_POOL_SIZE in init.ora
select sum(GETS),
       sum(GETMISSES),
       round((1 - (sum(GETMISSES) / sum(GETS))) * 100, 2)
from v$rowcache;
SQL Cache Hit Ratio
Pins – The number of times a pin was requested for objects of this namespace.
Reloads – Any pin of an object that is not the first pin performed since the object handle was created, and which requires loading the object from disk.
Hit Ratio should be > 85%
select sum(PINS) Pins,
       sum(RELOADS) Reloads,
       round((sum(PINS) - sum(RELOADS)) / sum(PINS) * 100, 2) Hit_Ratio
from v$librarycache;
Library Cache Miss Ratio
Executions – The number of times a pin was requested for objects of this namespace.
Cache Misses – Any pin of an object that is not the first pin performed since the object handle was created, and which requires loading the object from disk.
Miss Ratio should be < 1%, else increase SHARED_POOL_SIZE in init.ora
select sum(PINS) Executions,
       sum(RELOADS) cache_misses,
       sum(RELOADS) / sum(PINS) miss_ratio
from v$librarycache;