By Mike Ault
In my 16 years of working with Oracle I have seen the database grow and change as new features are added and old ones removed or changed. However, even as things change, they remain the same in many respects. We still must index properly, set memory settings correctly and specify proper hardware.
You would think that by now everyone would have a pretty good idea what an average configuration for an Oracle small, medium and large database would be, but unfortunately these classifications also undergo mutation. It wasn’t too long ago that if you used several hundred megabytes of disks you had a large database. Now of course large databases are in the terabyte and larger size range. This makes setting tuning guidelines for Oracle databases somewhat of a moving target where the rules change almost as fast as you can derive them.
Given that change is the only constant in tuning Oracle, I will try to present in this paper the major issues I have found over the last few years while tuning several dozen different client databases. I will also try to present methods for determining set points that take into account this changing beast that is Oracle.
Major Issues of Concern
From a review of the tuning reports I have generated over the last couple of years I have come up with a list of 6 major areas that the DBA or tuning expert needs to pay attention to in an Oracle environment:
· Non-use of bind variables
· Improper Index usage
· Improper memory configuration
· Improper disk setup
· Improper initialization parameter usage
· Improper PGA setup
Each of the above areas can have a profound impact on the ability of an Oracle database to function properly. In the following sections we will cover how to measure the effect of these on your system, how to correct them once found, and how to prevent them from becoming issues in the future. Each of the above could be (and probably is, somewhere at this conference) a full hour topic, so we have a lot of ground to cover. Let's get started!
Non-Use of Bind Variables
You'll hear it from PL/SQL tuners, you'll hear it from SQL tuners, and you'll hear it from shared pool tuners: the biggest problem in many applications is the non-use of bind variables.
Why is this an issue? Well, Oracle uses a signature generation algorithm to assign a hash value to each SQL statement based on the characters in the SQL statement. Any change in a statement (generally speaking) will result in a new hash and thus Oracle assumes it is a new statement. Each new statement must be verified, parsed and have an execution plan generated and stored.
The activities needed to parse a statement and generate an execution plan are CPU intensive and generate recursive SQL against the data dictionary, which may result in physical IO as well. The added statement and parse tree also take up space in the shared pool. I have seen several databases where the shared pool was over a gigabyte in size (one where it was 4 gigabytes!) until bind variables were introduced, reducing the size to a couple of hundred megabytes at most.
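A quick way to see this effect on your own system (an ad hoc check of my own devising, not one of the figures below) is to count how much of V$SQLAREA is single-use code:

```sql
-- Hypothetical quick check: statements executed at most once are
-- candidates for "parsed but never reused" literal SQL.
select count(*) total_statements,
       sum(case when executions <= 1 then 1 else 0 end) single_use,
       round(100 * sum(case when executions <= 1 then 1 else 0 end)
             / count(*), 2) pct_single_use
from   v$sqlarea;
```

A high single-use percentage on a busy OLTP system is usually the first hint that literals, not bind variables, are being used.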
A quick method of seeing whether code is being reused (a key indicator of proper bind variable usage) is to look at the amounts of reusable and non-reusable memory in the shared pool. SQL for comparing reusable to non-reusable code is shown in Figure 1.
ttitle 'Shared Pool Utilization'
spool sql_garbage
select
  1 nopr,
  to_char(a.inst_id) inst_id,
  a.users users,
  to_char(a.garbage,'9,999,999,999') garbage,
  to_char(b.good,'9,999,999,999') good,
  to_char((b.good/(b.good+a.garbage))*100,'9,999,999.999') good_percent
from
 (select a.inst_id, b.username users,
         sum(a.sharable_mem+a.persistent_mem) Garbage,
         to_number(null) good
  from sys.gv_$sqlarea a, dba_users b
  where (a.parsing_user_id = b.user_id and a.executions<=1)
  group by a.inst_id, b.username
  union
  select distinct c.inst_id, b.username users,
         to_number(null) garbage,
         sum(c.sharable_mem+c.persistent_mem) Good
  from dba_users b, sys.gv_$sqlarea c
  where (b.user_id=c.parsing_user_id and c.executions>1)
  group by c.inst_id, b.username) a,
 (select a.inst_id, b.username users,
         sum(a.sharable_mem+a.persistent_mem) Garbage,
         to_number(null) good
  from sys.gv_$sqlarea a, dba_users b
  where (a.parsing_user_id = b.user_id and a.executions<=1)
  group by a.inst_id, b.username
  union
  select distinct c.inst_id, b.username users,
         to_number(null) garbage,
         sum(c.sharable_mem+c.persistent_mem) Good
  from dba_users b, sys.gv_$sqlarea c
  where (b.user_id=c.parsing_user_id and c.executions>1)
  group by c.inst_id, b.username) b
where a.users=b.users
and a.inst_id=b.inst_id
and a.garbage is not null and b.good is not null
union
select
  2 nopr,
  '-------' inst_id,
  '-------------' users,
  '--------------' garbage,
  '--------------' good,
  '--------------' good_percent
from dual
union
select
  3 nopr,
  to_char(a.inst_id,'999999'),
  to_char(count(a.users)) users,
  to_char(sum(a.garbage),'9,999,999,999') garbage,
  to_char(sum(b.good),'9,999,999,999') good,
  to_char(((sum(b.good)/(sum(b.good)+sum(a.garbage)))*100),'9,999,999.999') good_percent
from
 (select a.inst_id, b.username users,
         sum(a.sharable_mem+a.persistent_mem) Garbage,
         to_number(null) good
  from sys.gv_$sqlarea a,
  SEE CODE DEPOT FOR FULL SCRIPT
  where (a.parsing_user_id = b.user_id and a.executions<=1)
  group by a.inst_id, b.username
  union
  select distinct c.inst_id, b.username users,
         to_number(null) garbage,
         sum(c.sharable_mem+c.persistent_mem) Good
  from dba_users b, sys.gv_$sqlarea c
  where (b.user_id=c.parsing_user_id and c.executions>1)
  group by c.inst_id, b.username) a,
 (select a.inst_id, b.username users,
         sum(a.sharable_mem+a.persistent_mem) Garbage,
         to_number(null) good
  from sys.gv_$sqlarea a, dba_users b
  where (a.parsing_user_id = b.user_id and a.executions<=1)
  group by a.inst_id, b.username
  union
  select distinct c.inst_id, b.username users,
         to_number(null) garbage,
         sum(c.sharable_mem+c.persistent_mem) Good
  from dba_users b,
  SEE CODE DEPOT FOR FULL SCRIPT
  where (b.user_id=c.parsing_user_id and c.executions>1)
  group by c.inst_id, b.username) b
where a.users=b.users
and a.inst_id=b.inst_id
and a.garbage is not null and b.good is not null
group by a.inst_id
order by 1,2 desc
/
spool off
ttitle off
set pages 22

Figure 1: SQL Code to Show Shared versus Non-shared Code
An example report is shown in Figure 2 for an instance with poor code reuse characteristics. The names have been changed to protect the innocent.
Date: 03/25/05                                                  Page: 1
Time: 17:51 PM             Shared Pool Utilization               SYSTEM
                              whoville database

users                Non-Shared SQL     Shared SQL Percent Shared
-------------------- -------------- -------------- --------------
WHOAPP                  532,097,982      1,775,745           .333
SYS                       5,622,594      5,108,017         47.602
DBSNMP                      678,616        219,775         24.463
SYSMAN                      439,915      2,353,205         84.250
SYSTEM                      425,586         20,674          4.633
                     -------------- -------------- --------------
5                       541,308,815      9,502,046          1.725

Figure 2: Example report output for poor code reuse
As you can see from Figure 2, the majority owner in this application, WHOAPP, is showing only 0.3 percent reusable code by memory usage and is tying up an amazing 530 megabytes with non-reusable code! Let's look at a database with good reuse statistics. Look at Figure 3.
Date: 11/13/05                                                  Page: 1
Time: 03:15 PM             Shared Pool Utilization             PERFSTAT
                              dbaville database

users                Non-Shared SQL     Shared SQL Percent Shared
-------------------- -------------- -------------- --------------
DBAVILLAGE                9,601,173     81,949,581         89.513
PERFSTAT                  2,652,827        199,868          7.006
DBASTAGER                 1,168,137     35,468,687         96.812
SYS                          76,037      5,119,125         98.536
                     -------------- -------------- --------------
4                        13,498,174    122,737,261         90.092
Figure 3: Example of Good Sharing of Code
Notice in Figure 3 how the two application owners, DBAVILLAGE and DBASTAGER, show 89.513 and 96.812 percent code reuse by memory footprint.
So what else can we look at to check code reuse? The above reports give us a gross indication; how about something with a bit more usability for correcting the situation? The V$SQLAREA and V$SQLTEXT views give us the capability to look at the current code in the shared pool and determine whether or not it is using bind variables. Look at Figure 4.
set lines 140 pages 55 verify off feedback off
col num_of_times heading 'Number|Of|Repeats'
col SQL heading 'SubString - &&chars Characters'
col username format a15 heading 'User'
@title132 'Similar SQL'
spool rep_out\&db\similar_sql&&chars
select b.username,
       substr(a.sql_text,1,&&chars) SQL,
       count(a.sql_text) num_of_times
from v$sqlarea a, dba_users b
where a.parsing_user_id=b.user_id
group by b.username, substr(a.sql_text,1,&&chars)
having count(a.sql_text)>&&num_repeats
order by count(a.sql_text) desc
/
spool off
undef chars
undef num_repeats
clear columns
set lines 80 pages 22 verify on feedback on
ttitle off
Figure 4: Similar SQL report code
Figure 4 shows a simple script to determine, based on the first x characters (input when the report is executed), the number of SQL statements that are identical up to the first x characters. This shows us the repeating code in the database and helps us track down the offending statements for correction.
As you can see from Figure 5, the SQL text is pinpointed that needs fixing. Using a substring from the above SQL the V$SQLTEXT view can be used to pull an entire listing of the code.
Some may be asking: “What is a bind variable?” Simply put, a bind variable is a variable inserted into the SQL code in place of literal values. For example:
SELECT * FROM whousers WHERE first_name='ANNA';
is not using bind variables. If we issued a second query:
SELECT * FROM whousers WHERE first_name='GRINCH';
even though the queries are identical except for the name value at the end, the Oracle query engine would treat them as two different queries. By using bind variables, as shown below, we allow Oracle to parse the statement once and reuse it many times.
SELECT * FROM whousers WHERE first_name=:whoname;
The colon in front of the variable “whoname” tells Oracle this is a bind variable that will be supplied at run time.
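From PL/SQL, binding happens automatically when you use a variable in a static statement. A minimal sketch (the table and names come from the example above; the block itself is mine):

```sql
-- Both fetches share one parsed cursor because whoname is a
-- bind variable, not a literal (illustrative block, not from
-- the original application).
declare
  cursor c_who(whoname varchar2) is
    select * from whousers where first_name = whoname;
begin
  for rec in c_who('ANNA')   loop null; end loop;
  for rec in c_who('GRINCH') loop null; end loop;
end;
/
```

This is why well-written PL/SQL rarely shows the literal-SQL problem; it is dynamically built SQL strings, in any language, that do.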
So, the proper fix for non-bind variable usage is to re-write the application to use bind variables. This of course can be an expensive and time consuming process, but ultimately it provides the best fix for the problem. However, what if you can’t change the code? Maybe you have time, budget or vendor constraints that prevent you from being able to do the “proper” thing. What are your options?
Oracle has provided the CURSOR_SHARING initialization parameter, which will automatically replace the literals in your code with bind variables. The settings for CURSOR_SHARING are EXACT (the default), FORCE, and SIMILAR.
· EXACT – The statements have to match exactly to be reusable
· FORCE – Always replace literals
· SIMILAR – Perform literal peeking and replace when it makes sense
We usually suggest the SIMILAR option for CURSOR_SHARING. You can tell if cursor sharing is set to FORCE or SIMILAR either by using the SHOW PARAMETER CURSOR_SHARING command or by looking at the code in the shared pool. If you see code that looks like this:
SELECT USERNAME FROM whousers WHERE first_name=:"SYS_B_0"
then you know that CURSOR_SHARING is set to either FORCE or SIMILAR because of the replacement variable :"SYS_B_0".
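For reference, CURSOR_SHARING can be checked and changed dynamically; a sketch of the commands (verify the scope options against your release):

```sql
-- Check the current setting
show parameter cursor_sharing

-- Change it for the whole instance, or for just one session
alter system set cursor_sharing = SIMILAR scope = both;
alter session set cursor_sharing = FORCE;
```

Because it can be set at the session level, you can trial the change on one problem application before committing the whole instance to it.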
Improper Index Usage
In the good old days Oracle used the rule based optimizer (RBO), and the RBO followed essentially one simple premise from which its rules were devised:
INDEXES GOOD! FULL TABLE SCANS BAD!
Unfortunately this simple rule basis led to many less than optimal execution plans, so SQL tuners spent a lot of time doing things such as null value concatenation or 0/1 math (add zero, or multiply by 1) to suppress index usage. Of course now we have the cost based optimizer (CBO), which always gives us the correct path…not!
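For those who missed the RBO era, a sketch of those index-suppression tricks (table and column names here are illustrative, not from any client system): wrapping the indexed column in an expression made the index unusable to the RBO, forcing the full scan the tuner actually wanted.

```sql
-- The RBO would use an index on dept_no here
select * from emp where dept_no = 10;

-- Adding zero (numeric) or concatenating a null string (character)
-- turns the column into an expression, disabling the index
select * from emp where dept_no + 0 = 10;
select * from emp where ename || '' = 'SMITH';
```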
In essence we now have to look for full table scans and examine the table size, available indexes and other factors to determine if the CBO has made the proper choice. In most cases where improper full table scans are occurring I have generally found that missing or improper indexes were the cause, not the optimizer.
Prior to 9i, finding full table scans was done either live, by looking for full table scan related waits and backtracking to the objects showing the waits (as shown in Figure 6), or by periodically stripping the SQL from the V$SQLTEXT or V$SQLAREA views and running EXPLAIN PLAN on each statement into a table. The table was then searched for plans showing full table accesses. Neither approach was particularly user friendly.
rem fts_rep.sql
rem FUNCTION: Full table scan report
rem MRA
rem
col sid format 99999
col owner format a15
col segment_name format a30
col td new_value rep_date noprint
select to_char(sysdate,'ddmonyyhh24mi') td from dual;
@title80 'Full Table Scans &rep_date'
spool rep_out\&db\fts_rep_&rep_date
SELECT DISTINCT A.SID, C.OWNER, C.SEGMENT_NAME
FROM SYS.V_$SESSION_WAIT A, SYS.V_$DATAFILE B, SYS.DBA_EXTENTS C
WHERE A.P1 = B.FILE# AND
      B.FILE# = C.FILE_ID AND
      A.P2 BETWEEN C.BLOCK_ID AND (C.BLOCK_ID + C.BLOCKS) AND
      A.EVENT = 'db file scattered read';
spool off
ttitle off

Figure 6: Example real-time report to obtain objects undergoing full table scans
You will be happy to know that starting with Oracle9i there is a new view that keeps the explain plans for all current SQL in the shared pool. This view, appropriately named V$SQL_PLAN, allows DBAs to determine exactly which statements use full table scans and, more importantly, how often those statements are executed. An example report against the V$SQL_PLAN view is shown in Figure 7.
rem fts report
rem based on V$SQL_PLAN view
col operation format a15
col object_name format a32
col object_owner format a15
col options format a20
col executions format 999,999,999
set pages 55 lines 132 trims on
@title132 'Full Table/Index Scans'
spool rep_out\&&db\fts
select a.object_owner, a.object_name,
       rtrim(a.operation) operation,
       a.options, b.executions
from v$sql_plan a, v$sqlarea b
where
SEE CODE DEPOT FOR FULL SCRIPT
and a.operation IN ('TABLE ACCESS','INDEX')
and a.options in ('FULL','FULL SCAN','FAST FULL SCAN','SKIP SCAN',
                  'SAMPLE FAST FULL SCAN')
and a.object_owner not in ('SYS','SYSTEM','PERFSTAT')
group by object_owner, object_name, operation, options
order by object_owner, operation, options, object_name
/
spool off
set pages 20
ttitle off

Figure 7: Example SQL to get full table scan data from database
Notice that I didn't limit myself to just full table scans; I also looked for expensive index scans. An example excerpt from this report is shown in Figure 8.
Date: 11/09/04                                                  Page: 1
Time: 08:31 PM           Full Table/Index Scans               PERFSTAT
                            whoemail database

HASH_VALUE OWNER OBJECT_NAME               OPERATION    OPTIONS        EXECUTIONS    BYTES FTS_MEG
---------- ----- ------------------------- ------------ -------------- ---------- -------- -------
4278137387 SDM   DB_STATUS                 TABLE ACCESS FULL               30,303  1048576   30303
1977943106 SDM   DB_STATUS                 TABLE ACCESS FULL                1,863  1048576    1863
3391889070 SDM   FORWARD_RECIPIENTS        TABLE ACCESS FULL               29,785  4194304  119140
1309516963 SDM   FORWARD_RECIPIENTS        TABLE ACCESS FULL                3,454  4194304   13816
4017881007 SDM   GLOBAL_SUPPRESSION_LIST   TABLE ACCESS FULL              168,020  1048576  168020
3707567343 SDM   ORGANIZATION2             TABLE ACCESS FULL                6,008  1048576    6008
1705069780 SDM   SP_CAMPAIGN_MAILING       TABLE ACCESS FULL                1,306 10485760   13060
1047433976 SDM   SS_LAST_SENT_JOB          TABLE ACCESS FULL              572,896  1048576  572896
3556187438 SDM   SS_LAST_SENT_JOB          TABLE ACCESS FULL              572,896  1048576  572896
3207589632 SDM   SS_SEND                   TABLE ACCESS FULL               32,275 20971520  645500
 788044291 SDM   SS_SENDJOBSTATUSTYPE_NNDX INDEX        FAST FULL SCAN     25,655 20971520  513100
1417625610 SDM   SS_SENDJOBSTATUSTYPE_NNDX INDEX        FAST FULL SCAN     11,802 20971520  236040
2719565392 SDM   SS_SENDJOBSTATUSTYPE_NNDX INDEX        FAST FULL SCAN     11,379 20971520  227580
1533235337 SDM   SS_SENDJOBSTATUSTYPE_NNDX INDEX        FAST FULL SCAN     10,981 20971520  219620
3273441234 SDM   SS_SENDJOBSTATUSTYPE_NNDX INDEX        FAST FULL SCAN     10,594 20971520  211880
1823729862 SDM   SS_TASK_OWNER_NNDX        INDEX        FULL SCAN           3,992 10485760   39920
3673286081 SDM   SYSTEM_SEED_LIST          TABLE ACCESS FULL               81,249  1048576   81249
   4712038 SDM   TRACK_OPEN_MRRJ_LOCK_NNDX INDEX        FULL SCAN          11,159 62914560  669540
3797121012 SDM   TRACK_OPEN_MRRJ_LOCK_NNDX INDEX        FULL SCAN          11,159 62914560  669540
1624438202 SDM   TRACK_RAW_OPEN            TABLE ACCESS FULL               29,786 31457280  893580
4232130615 SDM   TRACK_RAW_OPEN            TABLE ACCESS FULL               29,786 31457280  893580
3109457251 SDM   TRACK_RAW_OPEN            TABLE ACCESS FULL               29,785 31457280  893550
3391889070 SDM   TRACK_RAW_OPEN            TABLE ACCESS FULL               29,785 31457280  893550
1309516963 SDM   TRACK_RAW_OPEN            TABLE ACCESS FULL                3,454 31457280  103620
3685251647 SDM   TR_M_TOP_DOMAINS          TABLE ACCESS FULL                5,925 10485760   59250
2318926907 SDM   TR_R_CLICKSTREAM          TABLE ACCESS FULL                1,155 10485760   11550
 804017134 SDM   TR_R_OPTOUT               TABLE ACCESS FULL                2,375 10485760   23750
3707567343 SDM   USER_INFO                 TABLE ACCESS FULL                6,008 10485760   60080
3328028760 SDM   VIRTUAL_MTA_LOOKUP_PK     INDEX        FAST FULL SCAN     43,265  1048576   43265
1150816681 SDM   VIRTUAL_MTA_LOOKUP_PK     INDEX        FAST FULL SCAN     20,193  1048576   20193
Figure 8: Example output from FTS Report.
Notice instead of trying to capture the full SQL statement I just grab the HASH value. I can then use the hash value to pull the interesting SQL statements using SQL similar to:
select sql_text
from v$sqltext
where hash_value = &hash
order by piece;

Once I see the SQL statement I use SQL similar to this to pull the table indexes:
set lines 132
col index_name form a30
col table_name form a30
col column_name format a30
select a.table_name, a.index_name, a.column_name, b.index_type
from dba_ind_columns a, dba_indexes b
where a.table_name = upper('&tab')
and a.table_name = b.table_name
and a.index_owner = b.owner
and a.index_name = b.index_name
order by a.table_name, a.index_name, a.column_position
/
set lines 80
Once I have both the SQL and the indexes for the full-scanned table, I can usually come quickly to a tuning decision on whether any additional indexes are needed or an existing index should be used. In some cases there is an existing index that could be used if the SQL were rewritten; in that case I will usually suggest the SQL be rewritten. An example extract from a SQL analysis of this type is shown in Figure 9.
SQL> @get_it
Enter value for hash: 605795936

SQL_TEXT
----------------------------------------------------------------
DELETE FROM BOUNCE WHERE UPDATED_TS < SYSDATE - 21

1 row selected.

SQL> @get_tab_ind
Enter value for tab: bounce

TABLE_NAME   INDEX_NAME                 COLUMN_NAME    INDEX_TYPE
------------ -------------------------- -------------- ----------
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  MAILING_ID     NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  RECIPIENT_ID   NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  JOB_ID         NORMAL
BOUNCE       BOUNCE_MAILREPRECJOB_UNDX  REPORT_ID      NORMAL
BOUNCE       BOUNCE_PK                  MAILING_ID     NORMAL
BOUNCE       BOUNCE_PK                  RECIPIENT_ID   NORMAL
BOUNCE       BOUNCE_PK                  JOB_ID         NORMAL

7 rows selected.
As you can see here, there is no index on UPDATED_TS.
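Assuming testing confirms it helps, the obvious candidate fix is an index on the predicate column; a sketch (the index name and tablespace are my placeholders, not from the client system):

```sql
-- Hypothetical supporting index for the date-range DELETE;
-- regression test against the other BOUNCE statements first.
create index bounce_updated_ts_ndx
  on bounce (updated_ts)
  tablespace app_indx;
```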
SQL> @get_it
Enter value for hash: 3347592868

SQL_TEXT
----------------------------------------------------------------
SELECT VERSION_TS, CURRENT_MAJOR, CURRENT_MINOR, CURRENT_BUILD,
CURRENT_URL, MINIMUM_MAJOR, MINIMUM_MINOR, MINIMUM_BUILD,
MINIMUM_URL, INSTALL_RA_PATH, HELP_RA_PATH
FROM CURRENT_CLIENT_VERSION

4 rows selected.
Here there is no WHERE clause, hence a FTS is required.
SQL> @get_it
Enter value for hash: 4278137387

SQL_TEXT
----------------------------------------------------------------
SELECT STATUS FROM DB_STATUS WHERE DB_NAME = 'ARCHIVE'

1 row selected.

SQL> @get_tab_ind
Enter value for tab: db_status

no rows selected

Yep, no indexes will cause a FTS every time…

Figure 9: Example SQL Analysis
Of course, even after you come up with a proposed index list you must thoroughly test it in a test environment, since new indexes may have side effects on other SQL statements; it would be a shame to improve the performance of one statement and shoot six others in the head.
Improper Memory Configuration
If you put too small a carburetor on a car, then even though the engine may be capable of 200 MPH you constrain it to much less performance. Likewise, if you do not give Oracle enough memory you will prevent it from reaching its full performance potential.
In this section we will discuss two major areas of memory, the database buffer area and the shared pool area. The PGA areas are discussed in a later section.
The Database Buffer Area
Just like the old adage that you can't fly anywhere without going through Atlanta, you aren't going to get data unless you go through the buffers. Admittedly there are some direct-read scenarios, but for the most part anything that goes to users or gets into the database must pass through the database buffers.
Gone are the days of a single (default) buffer area; now we have 2K, 4K, 8K, 16K, and 32K buffer areas, plus the keep and recycle buffer pools on top of the default area. Within these areas we have consistent read, current read, free, exclusive current, and many other types of blocks that are used in Oracle's multi-block consistency model.
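Each of these areas is sized by its own initialization parameter; a sketch of what a mixed configuration might look like (all sizes are placeholders, not recommendations):

```sql
-- Default blocksize cache plus the keep and recycle pools
alter system set db_cache_size         = 512M scope=spfile;
alter system set db_keep_cache_size    =  64M scope=spfile;
alter system set db_recycle_cache_size =  32M scope=spfile;

-- Separate caches for tablespaces built with non-default blocksizes
alter system set db_2k_cache_size  =  16M scope=spfile;
alter system set db_32k_cache_size = 256M scope=spfile;
```

Remember that a non-default blocksize tablespace cannot be created until the matching cache parameter is set.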
The V$BH view (and its parent, the X$BH table) is the major tool the DBA uses to track block usage; however, you may find that the data in V$BH can be misleading unless you also tie in blocksize data. Look at Figure 10.
rem vbh_status.sql
rem
rem Mike Ault -- Burleson
rem
col dt new_value td noprint
select to_char(sysdate,'ddmmyyyyhh24miss') dt from dual;
@title80 'Status of DB Block Buffers'
spool rep_out\&db\vbh_status&&td
select status, count(*) number_buffers
from v$bh
group by status;
spool off
ttitle off
clear columns

Figure 10: Simple V$BH Report
The report in Figure 10 is a simple version of a V$BH query. It assumes only one buffer area is in play, the default, and doesn't account for any of the multiple blocksize areas or the keep and recycle areas. By not accounting for other buffer areas that may be present, it can overstate the number of free buffers available. Look at Figure 11.
STATU NUMBER_BUFFERS
----- --------------
cr             33931
free           15829
xcur          371374
Figure 11: Simple V$BH report listing
From the results in Figure 11 we would conclude we had plenty of free buffers, however we would be mistaken. Look at the report in Figure 12.
STATUS          NUM
--------- ---------
32k cr         2930
32k xcur      29064
8k cr          1271
8k free           3
8k read           4
8k xcur      378747
free          10371
Figure 12: Detailed V$BH Status report
As you can see, while there are free buffers, only 3 of them are available to the 8K default area and none are available to our 32K area. The free buffers are actually assigned to a keep or recycle pool (hence the null value for the blocksize) and are not available for normal use. The script to generate this report is shown in Figure 13.
set pages 50
@title80 'All Buffers Status'
spool rep_out\&&db\all_vbh_status
select '32k '||status as status, count(*) as num
from v$bh
where file# in (
  select file_id from dba_data_files
  where tablespace_name in (
    select tablespace_name from dba_tablespaces
    where block_size=32768))
group by '32k '||status
union
select '16k '||status as status, count(*) as num
from v$bh
where file# in (
  select file_id from dba_data_files
  where tablespace_name in (
    select tablespace_name from dba_tablespaces
    where block_size=16384))
group by '16k '||status
union
select '8k '||status as status, count(*) as num
from v$bh
where file# in (
  select file_id from dba_data_files
  where tablespace_name in (
    select tablespace_name from dba_tablespaces
    where block_size=8192))
group by '8k '||status
union
select '4k '||status as status, count(*) as num
from v$bh
where file# in (
  select file_id from dba_data_files
  where tablespace_name in (
    select tablespace_name from dba_tablespaces
    where block_size=4096))
group by '4k '||status
union
select '2k '||status as status, count(*) as num
from v$bh
where file# in (
  select file_id from dba_data_files
  where tablespace_name in (
    select tablespace_name from dba_tablespaces
    where block_size=2048))
group by '2k '||status
union
select status, count(*) as num
from v$bh
where status='free'
group by status
order by 1
/
spool off
ttitle off

Figure 13: Script to get all Buffer Pool Status
As you can see, the script is a wee bit more complex than the simple V$BH script. No doubt there is a clever way to simplify it using the X$ tables, but then we would have to run it as SYS, and I prefer to use lower-privileged users when I go to client sites.
So, if you see buffer busy waits, db block waits, and the like, and the above report shows no free buffers, it is probably a good bet you need to increase the number of buffers for the area showing none. However, you should not immediately assume you need more buffers just because of buffer busy waits, as these can also be caused by other problems such as row lock waits and ITL waits.
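When the report does point at one starved area, grow that area rather than everything at once. For example, if the 32K area showed no free buffers, the change would be along these lines (the size is a placeholder; grow incrementally and re-measure):

```sql
-- Grow only the starved buffer area, then re-run the status report
alter system set db_32k_cache_size = 512M scope = both;
```

Note that with a fixed SGA_MAX_SIZE the extra memory has to come from somewhere, so check the other pools before growing one online.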
Luckily Oracle10g has made it relatively simple to determine if we have these other types of waits. Look at Figure 14.
-- Crosstab of object and statistic for an owner
--
col "Object" format a20
set numwidth 12
set lines 132
set pages 50
@title132 'Object Wait Statistics'
spool rep_out\&&db\obj_stat_xtab
select * from (
  select DECODE(GROUPING(a.object_name), 1, 'All Objects',
                a.object_name) AS "Object",
  sum(case when a.statistic_name = 'ITL waits'
      then a.value else null end) "ITL Waits",
  sum(case when a.statistic_name = 'buffer busy waits'
      then a.value else null end) "Buffer Busy Waits",
  sum(case when a.statistic_name = 'row lock waits'
      then a.value else null end) "Row Lock Waits",
  sum(case when a.statistic_name = 'physical reads'
      then a.value else null end) "Physical Reads",
  sum(case when a.statistic_name = 'logical reads'
      then a.value else null end) "Logical Reads"
  SEE CODE DEPOT FOR FULL SCRIPT
  where a.owner like upper('&owner')
  group by rollup(a.object_name)) b
where (b."ITL Waits">0 or b."Buffer Busy Waits">0)
/
spool off
clear columns
ttitle off
Figure 14: Object Statistic Crosstab Report
Figure 14 shows an object statistic crosstab report based on the V$SEGMENT_STATISTICS view. The crosstab report generates a listing with the statistics of concern as headers across the page rather than as rows going down the page, and summarizes them by object. This allows us to easily compare total buffer busy waits to the number of ITL or row lock waits, letting us see which objects may be experiencing contention on their ITL lists, which may be experiencing excessive locking activity, and, by comparison, which are highly contended for without row lock or ITL waits. An example of the report output, edited for length, is shown in Figure 15.
ITL Buffer Busy Row Lock Physical Logical
Object Waits Waits Waits Reads Reads
-------------- ----- ----------- -------- ---------- -----------
BILLING 0 63636 38267 1316055 410219712
BILLING_INDX1 1 16510 55 151085 21776800
...
DELIVER_INDX1 1963 36096 32962 1952600 60809744
DELIVER_INDX2 88 16250 9029 18839481 342857488
DELIVER_PK 2676 99748 29293 15256214 416206384
DELIVER_INDX3 2856 104765 31710 8505812 467240320
...
All Objects 12613 20348859 1253057 1139977207 20947864752
243 rows selected.
Figure 15: Example Object Cross Tab Report
In the above report the BILLING_INDX1 index has a large number of buffer busy waits that we cannot account for from the ITL or row lock waits. This indicates that the index is being constantly read and its blocks aged out of memory, forcing waits as they are re-read by the next process. On the other hand, almost all of the buffer busy waits for the DELIVER_INDX1 index can be attributed to ITL and row lock waits.
In situations where there are large numbers of ITL waits we need to consider increasing the INITRANS setting for the object to remove this source of contention. If the predominant wait is row lock waits, we need to determine whether we are properly using locking and cursors in the application (for example, we may be overusing SELECT…FOR UPDATE style code). If, on the other hand, the buffer busy waits are unaccounted for, then we need to consider increasing the number of database block buffers in our SGA.
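As a sketch of the INITRANS fix: raising the value only affects newly formatted blocks, so the object generally has to be rebuilt before existing blocks pick it up (object names follow Figure 15; the values are illustrative only):

```sql
-- New blocks get more ITL slots; MOVE/REBUILD reformats existing ones
alter table deliver initrans 10;
alter table deliver move;           -- table blocks rebuilt; its indexes
                                    -- go UNUSABLE and must be rebuilt

alter index deliver_pk initrans 10;
alter index deliver_pk rebuild;
```

Remember that MOVE and REBUILD take space and locks, so schedule them in a maintenance window and test first.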
As you can see, this object wait cross tab report can be a powerful addition to our tuning arsenal.
By knowing how our buffers are being used and seeing exactly what waits are causing our buffer wait indications we can quickly determine if we need to tune objects or add buffers, making sizing buffer areas fairly easy.
But what about the Automatic Memory Manager in 10g? It is a powerful tool for DBAs whose systems have a predictable load profile; however, if your system undergoes rapid changes in user and memory load, AMM is always playing catch-up and may deliver poor performance as a result. In the case of memory it may be better to hand the system too much rather than just enough, just in time (JIT).
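If AMM's resize lag is hurting a spiky system, one option is to disable automatic SGA tuning and size the pools by hand with headroom; a sketch (all sizes are placeholders to be derived from your own measurements):

```sql
-- Turn off automatic SGA management and fix the pools manually
alter system set sga_target       =    0 scope = both;
alter system set shared_pool_size = 400M scope = both;
alter system set db_cache_size    =   1G scope = both;
```

With SGA_TARGET at zero the manual parameters become hard sizes rather than minimums, so the reports shown earlier become your sizing tools again.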