
Using Oracle8


Chapter 21

Identifying and Reducing Contention


Identifying Rollback Segment Contention

Rollback segments are used concurrently by one or more transactions; any delays caused by contention for rollback segments affect performance. Packing more transactions into each rollback segment increases contention but uses space more efficiently, whereas spreading transactions across more rollback segments reduces contention but wastes more space.

Interleave rollback segments to reduce contention
Interleave the order of the rollback segments in the initialization file's ROLLBACK_SEGMENTS parameter so that the first is in one tablespace, the next in the other, and so on. This reduces contention for rollback segments.

Rollback segments hold transaction tables in their headers, so concurrent transactions contending for the same rollback segment show up as contention for the undo header block.

Each time a transaction begins, it's assigned to a rollback segment. This assignment is normally automatic, but a transaction can also name the rollback segment it wants to use.
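A minimal sketch of manual assignment, using a hypothetical large rollback segment rb_large and a hypothetical table big_table:

COMMIT;   -- SET TRANSACTION must be the first statement of the transaction
SET TRANSACTION USE ROLLBACK SEGMENT rb_large;
UPDATE big_table SET processed_flag = 'Y';
COMMIT;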

You should create rollback tablespaces to hold rollback segments. In a heavy transaction-based system, you should have at least two rollback tablespaces on separate disks. Rollback segment contention occurs when transactions request rollback segment buffers and those buffers are still busy with previous transaction rollback information.
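A sketch of creating two rollback tablespaces on separate disks (the tablespace names, file names, and sizes are illustrative):

CREATE TABLESPACE rbs1
    DATAFILE '/disk1/oradata/orcl/rbs1_01.dbf' SIZE 100M;

CREATE TABLESPACE rbs2
    DATAFILE '/disk2/oradata/orcl/rbs2_01.dbf' SIZE 100M;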

Rollback segment contention is reflected by contention for buffers that contain rollback segment blocks. V$WAITSTAT contains statistics for different classes of block. The following table shows the different classes of blocks tracked through this view for rollback information:

Block Class          Description
-------------------  ---------------------------------------------------------------
System undo header   Buffers containing header blocks of the SYSTEM rollback segment
System undo block    Buffers containing blocks of the SYSTEM rollback segment other than header blocks
Undo header          Buffers containing header blocks of rollback segments other than the SYSTEM rollback segment
Undo block           Buffers containing blocks (other than header blocks) of rollback segments other than the SYSTEM rollback segment

Use the following queries to determine the number of requests for data and the number of waits for each class of block over a period of time.

The following query gives you the total number of data requests:

SELECT SUM (value)  "DATA REQUESTS"
FROM V$SYSSTAT
WHERE name IN ('db block gets', 'consistent gets');

This query's output might look like this:

DATA REQUESTS
------------------------
                  223759

The following query provides the number of waits on rollback segments:

SELECT class, count
FROM V$WAITSTAT
WHERE class LIKE '%undo%'
AND COUNT > 0;

This query's output might look like this:

CLASS                 COUNT
---------------------------
system undo header     3145
system undo block       231
undo header            4875
undo block              774

As these results show, waits for the system undo header account for (3145 / 223759) * 100 = 1.4% of data requests, and waits for the undo header account for (4875 / 223759) * 100 = 2.2%. Contention is indicated when the number of waits for any class is greater than 1 percent of the total number of requests.
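If you prefer to compute these percentages in one step, the following sketch joins the two views (the column alias is illustrative):

SELECT w.class,
       ROUND(100 * w.count / s.total_requests, 2) "% OF REQUESTS"
FROM v$waitstat w,
     (SELECT SUM(value) total_requests
      FROM v$sysstat
      WHERE name IN ('db block gets', 'consistent gets')) s
WHERE w.class LIKE '%undo%';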

Contention is also indicated by frequent occurrences of error ORA-01555 (snapshot too old), or by transaction table wait events that are much greater than 0.

Reducing Contention for Rollback Segments

Increasing the number of rollback segments can reduce contention for them. After creating the new rollback segments, you must list them in the init.ora parameter ROLLBACK_SEGMENTS and bring them online. Use the following table as a guideline for the number of rollback segments to allocate, based on the number of concurrent transactions you anticipate; a sketch of the steps follows the table.

Number of Concurrent Transactions   Number of Rollback Segments
----------------------------------  ----------------------------------
Fewer than 16                       4
Between 16 and 32                   8
More than 32                        Number of transactions divided by 4
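If the table indicates that you need another rollback segment, a sketch of adding one (the segment name, tablespace, and storage values are illustrative) is:

CREATE ROLLBACK SEGMENT rb3
    TABLESPACE rbs1
    STORAGE (INITIAL 512K NEXT 512K MINEXTENTS 20 OPTIMAL 10M);

ALTER ROLLBACK SEGMENT rb3 ONLINE;

Then list it in init.ora so that it comes online automatically at the next startup:

rollback_segments = (rb1, rb2, rb3)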

The following query will tell you whether the OPTIMAL setting for the rollback segments is appropriate:

SELECT SUBSTR(rn.name, 1, 20), rs.extents, rs.rssize, rs.aveactive,
       rs.aveshrink, rs.extends, rs.shrinks
    FROM v$rollname rn, v$rollstat rs
    WHERE rn.usn = rs.usn;

This query's output might look like this:

NAME                 EXTENTS RSSIZE AVEACTIVE AVESHRINK EXTENDS SHRINKS
-----------------------------------------------------------------------
SYSTEM                     4 207639         0         0       0       0
RB1                        2 207489         0         0       0       0
RB2                        4 207698         0         0       0       0

OPTIMAL is set properly if the average size of the rollback segments is close to the OPTIMAL value; otherwise, you need to change the value of OPTIMAL. A very large value in the SHRINKS column also indicates that OPTIMAL is set improperly (typically too low, causing the segments to extend and shrink repeatedly).

You can follow several other recommendations with respect to rollback segments to reduce contention. For example, you can size rollback segment extents based on the largest amount of undo your transactions generate, which the following query reports (in undo blocks) for currently active transactions:

     SELECT MAX(used_ublk)
     FROM v$transaction;

Use rollback segments wisely
Assign large rollback segments to long-running queries and to transactions that modify a large amount of data. Avoid dynamic expansion and reduction of rollback space. Also, don't create more rollback segments than your instance's maximum number of concurrently active transactions.

Identifying and Reducing Contention for Multithreaded Server Processes (MTS)

The components of the MTS configuration are the processes on the system, the communication software, and the shared global area (SGA). Figure 21.1 shows the processes involved in a system that uses MTS.

Figure 21.1 : The components of the MTS configuration consist of the processes on the system, communication software, and the shared global area (SGA).

Use Net8 to connect to shared servers
To use shared servers, a user process must connect through Net8, even if it's on the same machine as the Oracle instance.

When you configure Oracle8 to use Multithreaded Server architecture, you have to deal with the contention for dispatcher and shared server processes. You can configure the number of dispatchers and server processes; there's no direct relationship between the number of dispatchers and shared servers.

SGA enhancements include additional memory structures, such as request and response queues for handling service requests and returning responses to those requests. Session information is migrated from the PGA into the SGA (this section of the SGA is known as the user global area, or UGA), so that the correct response goes to the appropriate client.

PGA in an MTS environment
The program global area (PGA) of a shared-server process doesn't contain user-related data because this information needs to be accessible to all shared servers. The PGA of shared servers contains only stack space and process-specific variables. Session-related information is moved to the SGA, which should have enough space to store all session-specific information.

How MTS server processes are run

  1. When the listener is started, it starts listening on the addresses defined in listener.ora and establishes a communication path through which users connect to Oracle. The only services it's aware of initially are those defined in listener.ora.
  2. When an Oracle instance configured for MTS is started, each dispatcher gets a random listen address, at which it listens for connection requests, and contacts the listener by using the address specified in the init.ora parameter MTS_LISTENER_ADDRESS to register that address.
  3. The listener adds the dispatcher's MTS_SERVICE name and address to its list of known services.
  4. The network listener process waits for incoming connection requests and determines whether a shared server process can be used.
  5. If a dedicated server process is requested, the listener creates a dedicated server process and connects the user process to it; otherwise, it gives the user process the address of the dispatcher process with the lightest load. (Windows NT currently supports only the TCP/IP protocol for MTS connections.)
  6. The user process connects to the dispatcher and remains connected to it throughout the life of the user process. After the connection is established, the dispatcher creates a virtual circuit, which it uses to communicate with the shared servers.
  7. The user process issues a request, which the dispatcher places in the request queue in the SGA, where it's picked up by the next available shared server process. The request queue is common to all dispatchers.
  8. The shared server process does all the necessary processing and returns the results to the response queue of the dispatcher in the SGA. Each dispatcher has its own response queue.
  9. The dispatcher process returns the completed request to the user process.

Because dispatchers have few responsibilities, each dispatcher can serve many clients, allowing a significantly larger number of clients to be connected to the server.

Identifying Contention for Dispatcher Processes

Contention for dispatcher processes is indicated by high busy rates for the existing dispatcher processes and by a steady increase in the time that responses wait in the dispatchers' response queues. The following sections show how to measure both.

Privileges needed for views
By default, the V$DISPATCHER and V$QUEUE views are available only to the user SYS and to other users with the SELECT ANY TABLE system privilege.

Examining Busy Rates for Dispatcher Processes

You can examine the efficiency of the dispatchers by using the following query over a period of time with the application running:

SELECT name, network, status, (busy / (busy + idle)) * 100 "% of time busy"
FROM v$dispatcher;

NAME      NETWORK         STATUS       % of time busy
--------- --------------- ------------ --------------
D000      TCP             WAIT             .141240092
D001      TCP             CONNECT          .146774237
D002      TCP             WAIT              .04165972
D003      TCP             WAIT              .04052496

Four dispatchers are running on this instance. All have the status WAIT except D001, whose status is CONNECT; it's servicing a connect request from a client that remains connected to this dispatcher for the lifetime of its session. None of the dispatchers is busy a significant percentage of the time, so you could conclude that fewer dispatchers would suffice.

You can use another query to determine whether you have enough dispatchers for each protocol:

SELECT network                            "PROTOCOL",
       ( SUM(busy) / (SUM(busy) + SUM(idle)) ) * 100
       "% TIME BUSY"
FROM v$dispatcher
GROUP BY network;

PROTOCOL    %  TIME BUSY
==========  ============
decnet         0.5676849
tcp           61.4379021

From this result, you can see that the DECnet dispatchers are busy less than 1 percent of the time, whereas the TCP dispatchers are busy more than 61 percent of the time. Thus, you can conclude that there's contention for the TCP dispatchers, and you can improve performance by adding dispatchers for the TCP protocol.

Examining Contention for Dispatcher Process Response Queues

You can determine whether you have the correct number of dispatchers by finding the average wait time for the dispatchers of that protocol. You can use the following query for this purpose:

SELECT network            "PROTOCOL",
        DECODE( SUM(totalq), 0, 'No Responses',
        SUM(wait)/SUM(totalq) )
    "Average wait time per response (1/100th of seconds)"
FROM v$queue q, v$dispatcher d
WHERE q.type = 'DISPATCHER'
AND    q.paddr = d.paddr
GROUP BY network;

This query will return the average time, in hundredths of a second, that a response waits in the response queue. A steady increase in the wait time would indicate that you need to add more dispatchers.

Protocol Average wait time per response (1/100th of a second)
-------- ---------------------------------------------------
decnet   0.134180
tcp      235.38760

This result shows that responses in the response queue of the DECnet dispatcher processes wait an average of 0.13 hundredths of a second, whereas the wait is much higher for the TCP dispatcher processes.

Reducing Contention for Dispatcher Processes

You have two options for reducing dispatcher contention: add more dispatcher processes for the busy protocol, or enable connection pooling so that each dispatcher can handle more connections.

Oracle8 provides an optional attribute POOL (or POO), which can be used with the parameter MTS_DISPATCHERS to enable the Net8 connection pooling feature in the init.ora file. The following example allows you to start the database with four TCP dispatchers and enables the Net8 connection pooling feature:

MTS_DISPATCHERS = "(PROTOCOL=TCP) (DISPATCHERS=4) (POOL)"

Limit the number of dispatchers
The MTS_MAX_DISPATCHERS parameter determines the total number of dispatcher processes across all protocols. The default value of this parameter is 5; the maximum value is operating-system dependent.
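If the busy rates show that a particular protocol needs more dispatchers, you can usually add them without restarting the instance. A sketch, assuming the TCP protocol and a new total of six dispatchers:

ALTER SYSTEM SET MTS_DISPATCHERS = 'TCP, 6';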

Identifying Contention for Shared Server Processes

Contention for shared server processes can be determined by monitoring the wait times for requests in the request queue (V$QUEUE and V$SHARED_SERVER). A steady increase in the wait time indicates contention and that you need to increase the number of shared server processes.

The query for examining the shared servers is similar to the one for monitoring dispatchers, but it uses the dynamic view V$SHARED_SERVER:

SELECT name, status, requests, (busy / (busy + idle)) * 100 "% of time busy"
FROM v$shared_server;

NAME        STATUS           REQUESTS   % of time busy
-------- ----------------- ------------ --------------
S000        WAIT(COMMON)         1122        .05956579
S001        WAIT(ENQ)               1       99.1001586
S002        WAIT(COMMON)            0       .000031547

Servers S000 and S002 are idle, whereas server S001 is very busy; its client could benefit from a dedicated server. The value of the REQUESTS column is cumulative since instance startup.

You can use the following query to determine the average wait time for the requests in the request queue:

SELECT DECODE( totalq, 0, 'No Requests',  wait/totalq )
   "Average wait time per requests (hundredths of seconds) "
FROM v$queue
WHERE type = 'COMMON';

Average wait time per requests (hundredths of seconds)
-----------------------------------------------------------
                           0.293734950

The result shows an average wait time of 0.29 hundredths of a second in the request queue.

Reducing Contention for Shared Servers

Oracle8 automatically adjusts the number of shared servers (between MTS_SERVERS and MTS_MAX_SERVERS) based on the load, so your main task is to identify requests that should be serviced by dedicated servers rather than shared servers.

Limit the number of shared servers
The MTS_MAX_SERVERS parameter determines the maximum number of shared server processes in the system. The default value of this parameter is 20; the maximum value is operating-system dependent.
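If requests routinely queue while all shared servers are busy, you can also raise the number of shared servers dynamically. A sketch, assuming you want at least six:

ALTER SYSTEM SET MTS_SERVERS = 6;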

Theoretically, any application can be used with MTS, but in reality some applications will benefit from its usage, whereas others won't. Client/server applications that don't require a lot of processing from the client will benefit from the usage of MTS. An example of such an application is Oracle's own BUG database. During the day, bugs may be filed, updated, and closed by various people working in any number of departments. These transactions are short and don't tie up the server for an extended period of time, allowing the server to work on many requests simultaneously. However, not all the BUG database's functionality will benefit from MTS, such as running weekend reports and complex SELECT statements, which can tie up the shared servers.

Suppose that an MTS instance has three dispatchers and two shared servers. If one client submits such a report during the day, you're left with just one shared server, because the other is tied up processing the report. If a second client then submits another such report, the instance appears locked up to the other clients, because both shared servers are busy with large reports; new requests won't be processed until a server becomes free. It would be better if such clients requested a dedicated connection, as shown in the sketch that follows. As you can see, a mixed environment usually gives the best results.
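One way to give such clients a dedicated connection is a separate Net8 alias whose connect data requests a dedicated server. A sketch of a tnsnames.ora entry, reusing the host and SID from the init.ora example that follows (the alias name is hypothetical, and on some Net8 versions the keyword is SRVR rather than SERVER):

REPORTS_DEDICATED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = your_machine)(PORT = 1521))
    (CONNECT_DATA =
      (SID = YOUR_SID)
      (SERVER = DEDICATED)))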

The following shows the init.ora file with MTS parameters:

mts_service= "YOUR_SID"
mts_listener_address="(ADDRESS=(PROTOCOL=tcp)(port=1521)
                      (host=your_machine))"
mts_dispatchers= "tcp, 1"
mts_max_dispatchers=10
mts_max_servers=10
mts_servers=4

Identifying and Reducing Contention for Parallel Server Processes

You can use the dictionary view V$PQ_SYSSTAT for determining the appropriate number of parallel server processes for an instance:

SELECT statistic, value
FROM v$pq_sysstat
WHERE statistic IN ('Servers Started', 'Servers Shutdown');

STATISTIC          VALUE
------------------------
Servers Started       48
Servers Shutdown      45

Statistics such as Servers Busy, Servers Started, and Servers Shutdown from V$PQ_SYSSTAT can help you identify contention for parallel server processes.

Use the following query to determine whether the configured parallel server processes are actually busy:

SELECT statistic, value
FROM v$pq_sysstat
WHERE statistic = 'Servers Busy';

STATISTIC          VALUE
------------------------
Servers Busy          65

If the query's result shows that the number of busy parallel server processes is typically less than PARALLEL_MIN_SERVERS, you should consider decreasing the value of PARALLEL_MIN_SERVERS in initsid.ora. If servers are continuously starting and shutting down, however, consider increasing PARALLEL_MIN_SERVERS. This parameter should be set to the number of concurrent parallel operations multiplied by the average number of parallel server processes used by a parallel operation.

The maximum parallel servers should not exceed your machine capacity
The maximum number of parallel server processes for an instance depends heavily on the capacity of your CPUs and your I/O bandwidth. Set PARALLEL_MAX_SERVERS to the maximum number of concurrent parallel server processes that your machine can manage.
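As a worked example of the guideline above, if you anticipate about 4 concurrent parallel operations that each use an average of 8 parallel server processes, illustrative initsid.ora settings might be:

parallel_min_servers = 32    # 4 concurrent operations x 8 servers each
parallel_max_servers = 64    # bounded by your CPU and I/O capacity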

Identifying and Reducing Latch Contention

Oracle uses several types of locking mechanisms, including enqueues (locks) and latches; this section concentrates on latches.

You can't control what latches to use and when to use them, but you can adjust certain init.ora parameters to tune Oracle to most efficiently use latches and reduce latch contention. Latches protect internal data structures by maintaining a defined method of accessing them. If a process can't obtain a latch immediately, it waits for the latch, resulting in a slowdown and additional CPU usage. The process, in the meantime, is "spinning."

Latch contention occurs when two or more Oracle processes attempt to obtain the same latch concurrently. You can detect latch contention by using the V$LATCH, V$LATCHHOLDER, and V$LATCHNAME data dictionary views.

The following queries can be used to provide latch information:

SELECT ln.name
FROM v$latchname ln, v$latch l
WHERE l.addr = '&addr'
AND l.latch# = ln.latch#;

SELECT ln.name, l.addr, l.gets, l.misses, l.sleeps,
       l.immediate_gets, l.immediate_misses, lh.pid
FROM v$latch l, v$latchholder lh, v$latchname ln
WHERE l.addr = lh.laddr (+)
AND l.latch# = ln.latch#
ORDER BY l.latch#;

SELECT ln.name, l.addr, l.gets, l.misses, l.sleeps,
       l.immediate_gets, l.immediate_misses, lh.pid
FROM v$latch l, v$latchholder lh, v$latchname ln
WHERE l.addr = lh.laddr (+)
AND l.latch# = ln.latch#
AND ln.name LIKE '%&latch_name%'
ORDER BY l.latch#;

The following table lists all the latches that are of concern to Oracle DBAs:

Latch Number  Name
------------  --------------------------------
0             Latch wait list
1             Process allocation
2             Session allocation
3             Session switching
4             Session idle bit
5             Messages
6             Enqueues
7             Trace latch
8             Cache buffers chain
9             Cache buffers LRU chain
10            Cache buffer handles
11            Multiblock read objects
12            Cache protection latch
13            System commit number
14            Archive control
15            Redo allocation
16            Redo copy
17            Instance latch
18            Lock element parent latch
19            DML lock allocation
20            Transaction allocation
21            Undo global data
22            Sequence cache
23            Sequence cache entry
24            Row cache objects
25            Cost function
26            User lock
27            Global transaction mapping table
28            Global transaction
29            Shared pool
30            Library cache
31            Library cache pin
32            Library cache load lock
33            Virtual circuit buffers
34            Virtual circuit queues
35            Virtual circuits
36            Query server process
37            Query server free lists
38            Error message lists
39            Process queue
40            Process queue reference
41            Parallel query stats

The cache buffers chains latch is needed when user processes try to scan the SGA for database cache buffers. Adjusting the DB_BLOCK_BUFFERS parameter can reduce contention for this latch.

The cache buffers LRU chain latch is needed when user processes try to scan the LRU chain containing all the dirty blocks in the buffer cache. Increasing the DB_BLOCK_BUFFERS and DB_BLOCK_WRITE_BATCH parameters can reduce contention for this latch.

The row cache objects latch is needed when user processes try to access the cached data dictionary values. Tuning the data dictionary cache can reduce contention for this latch. Increasing the size of the shared pool (SHARED_POOL_SIZE) can be used to achieve this result.
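A sketch of the corresponding init.ora parameters (the values are purely illustrative; size them to your system's memory, and note that DB_BLOCK_WRITE_BATCH may not be settable on all releases):

db_block_buffers     = 20000       # cache buffers chains and LRU chain latches
db_block_write_batch = 64          # cache buffers LRU chain latch
shared_pool_size     = 50000000    # row cache objects latch (dictionary cache)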

Identifying and Reducing Contention for the LRU Latch

The least recently used (LRU) latch controls the replacement of buffers in the buffer cache. Contention for the LRU latch can be identified by querying V$LATCH, V$SESSION_EVENT, and V$SYSTEM_EVENT. V$LATCH shows cumulative values since instance startup.

Design your applications to reduce LRU contention
Application design can have a significant influence on LRU contention because the design will ultimately determine the queries and DML statements that in turn can affect contention for LRU latches.

The initialization parameter DB_BLOCK_LRU_LATCHES specifies the maximum number of LRU latches on your system. Each LRU latch controls a set of buffers. Use the guideline in the following sidebar to determine an appropriate value for DB_BLOCK_LRU_LATCHES.

LRU latches for SMP and non-SMP machines
For SMP machines, Oracle automatically sets the number of LRU latches to be one half the number of CPUs on the system. One LRU latch is sufficient for non-SMP machines.
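For example, on a hypothetical eight-CPU SMP machine, the sidebar's guideline of one latch per two CPUs gives:

db_block_lru_latches = 4    # half the number of CPUs on an 8-CPU SMP machine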

Identifying and Reducing Contention for Space in Redo Log Buffers

Redo information must be written to the redo log before a transaction is completed. Any kind of bottleneck in the redo log can cause the performance of all the processes in the system to be affected. You should therefore check for contention of the redo log buffers and the redo log buffer latches.

The dynamic performance table V$SYSSTAT stores systemwide statistics that can be used to identify contention for various resources. The statistic REDO BUFFER ALLOCATION RETRIES reflects the number of times a user process waits for space in the redo log buffer. The process may have to wait if the LGWR process isn't fast enough to write the redo entries from the redo log buffer to a redo log file. These statistics are available only to the SYS user or users with the SELECT ANY TABLE privilege.

You can use the following query to determine contention for space in the redo log buffer:

SELECT name, value
FROM V$SYSSTAT
WHERE name = 'redo buffer allocation retries';

The value returned by this query should be near 0. If it increases consistently, processes have had to wait for space in the redo log buffer. The most direct remedy is to increase the size of the redo log buffer by raising the LOG_BUFFER initialization parameter; you can also speed up log writing by placing the online redo log files on fast, dedicated devices and by making sure that checkpointing and archiving keep pace.
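A sketch of the most direct remedy, increasing the redo log buffer in init.ora (the value is illustrative; size it to your transaction volume):

log_buffer = 1048576    # 1MB redo log buffer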

Identifying and Reducing Contention for Redo Log Buffer Latches

The V$LATCH dynamic performance view contains statistics for the activity of the various latches. A process can request a latch in one of two ways: with a willing-to-wait request, in which the process waits, sleeps, and requests the latch again until it gets it, or with an immediate request, in which the process doesn't wait for the latch but continues processing. V$LATCH records the following statistics for these requests:

GETS                Number of successful willing-to-wait requests for a latch
MISSES              Number of unsuccessful, initial willing-to-wait requests for a latch
SLEEPS              Number of times a process waited and requested a latch again after an initial request

IMMEDIATE GETS      Number of successful immediate requests for a latch
IMMEDIATE MISSES    Number of unsuccessful immediate requests for a latch

The following query can be used to monitor contention of the redo allocation and the redo copy latch:

SELECT ln.name, gets, misses, immediate_gets, immediate_misses
FROM V$LATCH l, V$LATCHNAME ln
WHERE ln.name IN ('redo allocation', 'redo copy')
AND ln.latch# = l.latch#;

This query's output might look like this:

NAME            GETS  MISSES  IMMEDIATE_GETS IMMEDIATE_MISSES
-------------- ----- ------- --------------- ---------------
redo alloc...  12580     215               5            0
redo copy         12       0            1223            2

Contention exists for a latch if either of the following is true: the ratio of MISSES to GETS exceeds 1 percent, or the ratio of IMMEDIATE_MISSES to the sum of IMMEDIATE_GETS and IMMEDIATE_MISSES exceeds 1 percent.

The example shows redo allocation latch contention: the ratio of misses to gets is 215 / 12580 = 1.7 percent. The redo allocation latch controls space allocation for redo entries in the redo log buffer. An Oracle process must obtain the redo allocation latch before allocating space in the redo log buffer. There's only one redo allocation latch; therefore, only one process can allocate space in the redo log buffer at a time.

Latch contention doesn't occur on single-CPU machines
Only one process can be active at a given time on a single-CPU machine; therefore, latch contention rarely occurs.

You can reduce contention for the redo allocation latch by minimizing the copying done while holding it, which in turn reduces the time that any single process holds the latch. To do so, decrease the value of the LOG_SMALL_ENTRY_MAX_SIZE parameter, which determines the maximum size of a redo entry that is copied on the redo allocation latch.

Whereas the redo allocation latch is held only for a short period of time, the redo copy latch is held for a longer amount of time because the user process first obtains the redo copy latch, and then the redo allocation latch. The process performs allocation and then releases the allocation latch. The copy is then performed under the copy latch, after which the redo copy latch is released.

On multiple CPU machines, the LOG_SIMULTANEOUS_COPIES parameter determines the number of redo copy latches. Multiple redo copy latches allow multiple processes to concurrently copy entries to the redo log buffer. The default value of this parameter is the number of CPUs available to the instance; the maximum value is twice the number of CPUs. To reduce contention, increase the value of LOG_SIMULTANEOUS_COPIES.

Another way in which you can reduce redo copy latch contention is by prebuilding the redo entry before requesting the latch. The LOG_ENTRY_PREBUILD_THRESHOLD parameter can be set to achieve this result. The default value for this parameter is 0. When this parameter is set to a nonzero value, any redo entry smaller than that value is prebuilt before the latch is requested.
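A sketch of these redo latch parameters in init.ora for a hypothetical four-CPU machine (the values are illustrative):

log_simultaneous_copies      = 8      # up to twice the number of CPUs
log_small_entry_max_size     = 40     # entries this size or smaller are copied on the allocation latch
log_entry_prebuild_threshold = 2048   # prebuild redo entries smaller than this before requesting a latch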

Identifying and Reducing Contention for Library Cache Latches

The library cache latches prevent conflicting concurrent access to shared library cache entries. There are three types of library cache latches: the library cache latch, the library cache pin latch, and the library cache load lock latch.

The following query can be used to identify library cache latch contention:

SELECT count(*) number_of_waiters
FROM v$session_wait sw, v$latch l
WHERE sw.wait_time = 0
AND sw.event = 'latch free'
AND sw.p2 = l.latch#
AND l.name like 'library%';

In this query, a large number of waiters indicates that you have contention.

You can reduce contention for library cache latches primarily by reducing unnecessary parsing, so that fewer processes need the latches at the same time. Start by identifying statements that are parsed repeatedly relative to how often they're executed:

     SELECT sql_text, parse_calls, executions
     FROM v$sqlarea
     WHERE parse_calls > 100
     AND executions < 2*parse_calls;
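Much of that repeated parsing usually comes from statements that differ only in their literal values. A minimal SQL*Plus sketch of the difference bind variables make, assuming the demo EMP table from the SCOTT schema:

-- Literal values: each statement is parsed as a distinct cursor
SELECT ename FROM emp WHERE empno = 7369;
SELECT ename FROM emp WHERE empno = 7499;

-- A bind variable lets every execution share one cursor
VARIABLE emp_id NUMBER
EXECUTE :emp_id := 7369
SELECT ename FROM emp WHERE empno = :emp_id;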

Identifying Free-List Contention

Oracle maintains information in memory that allows instantaneous access to information on disk. Some of this information keeps track of the space available in the tablespaces for inserting new rows. You can indicate at table-creation time how much information to keep in memory for tracking available blocks for creating new records. This is the free list for the table.

The following statement creates a table with 12 free lists:

create table test_table (id          number,
                         description varchar2(20))
    storage (freelists 12);

Free-list contention is generally indicated by excessive disk I/O for block ID requests to create new rows. This contention occurs when multiple processes are searching the free list at the same time.

The dynamic view V$WAITSTAT contains statistics for block contention. Free-list contention is reflected by contention for free data blocks in the buffer cache. You can use the following query to determine what percentage of the total requests was actually waiting for free blocks:

SELECT class,count
    FROM V$WAITSTAT
    WHERE class = 'free list';

This query's output might look like this:

CLASS       COUNT
-----------------
free list      39

You can use the following query to determine the total number of block requests (db block gets and consistent gets):

SELECT name, value
    FROM v$sysstat
    WHERE name in ('db block gets', 'consistent gets');

This query's output might look like this:

NAME             VALUE
----------------------
db block gets    14236
consistent gets  10437

By using the results of the two preceding queries, you can determine the percentage of requests that waited for free lists:

free list waits
  = ( (free list count) /
      (db block gets + consistent gets) ) * 100
  = ( 39 / (14236 + 10437) ) * 100
  = 0.16

Identify free-list contention

  1. Query V$WAITSTAT for contention on DATA BLOCKS.
  2. Query V$SYSTEM_EVENT to determine the number of BUFFER BUSY WAITS.
  3. For each buffer busy wait, use V$SESSION_WAIT to determine the FILE, BLOCK, and ID involved.
  4. The following query shows the segment_name and segment_type for the object that owns the block being waited for (substitute the file and block values from the previous step):
SELECT segment_name "segment", segment_type "type"
FROM dba_extents
WHERE file_id = &file
AND &block BETWEEN block_id AND block_id + blocks;

  5. Use the segment and type from the preceding query in the following query to determine the number of free lists on the segment that has contention:
SELECT freelists
FROM dba_segments
WHERE segment_name = '&segment'
AND segment_type = '&type';

If the number of free-list wait events is greater than 1 percent of the total number of requests, you have contention for free list.

Reducing Contention for Free Lists

Recreating the table with a larger value for the FREELISTS storage parameter can reduce contention for the table's free lists. There are two common ways to recreate a table that has free-list contention; the first example uses the table TEST.

Using Export/Import to reduce free-list contention

  1. Export the table with the grants and indexes.
  2. Drop the table.
  3. Recreate the table with an increased value for FREELISTS.
  4. Import the table with IGNORE=y.
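A sketch of these steps, assuming a table TEST owned by SCOTT, a new FREELISTS value of 8, and illustrative column definitions:

exp scott/tiger tables=TEST grants=y indexes=y file=test.dmp

Then, in SQL*Plus, drop and recreate the table with more free lists:

DROP TABLE test;
CREATE TABLE test (id   NUMBER,
                   name VARCHAR2(20))
    STORAGE (FREELISTS 8);

Finally, reload the rows into the new table:

imp scott/tiger tables=TEST ignore=y file=test.dmp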

You should have at least enough free lists to accommodate the number of concurrent processes that will be inserting into the table.

Using CTAS (Create Table as SELECT) to reduce free-list contention

  1. Create a new table with the desired value for FREELISTS and the same structure as the old table.
  2. Select data from the old table into the new table.
  3. Drop the old table.
  4. Rename the new table to the original table's name.
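A sketch of the CTAS approach, reusing the test_table example from earlier and an assumed FREELISTS value of 8 (remember to recreate any indexes, grants, and constraints on the new table):

CREATE TABLE test_table_new
    STORAGE (FREELISTS 8)
    AS SELECT * FROM test_table;

DROP TABLE test_table;

RENAME test_table_new TO test_table;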

© Copyright, Macmillan Computer Publishing. All rights reserved.