Determining How the Memory Is Allocated - Memory Structures, Part 2

This script sorts the big table T using automatic PGA memory management. Then, for that session, it captures all of the PGA/UGA memory statistics as well as the sort-to-disk activity. In addition, a UNION ALL adds the same metrics at the system level (total PGA memory, total UGA memory, and so on). I ran that script against a database started with the following initialization settings:

memory_target = 0
pga_aggregate_target = 300m
sga_target = 1500m

These settings show I was using automatic PGA memory management with a PGA_AGGREGATE_TARGET of 300MB, meaning I wanted Oracle to use up to about 300MB of PGA memory for sorting.
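
The capture script itself is not reproduced in this excerpt. As a rough sketch of the idea only (the SESS_STATS table name follows the reporting query later in this section; the exact statistic list and the way the active-session count is taken are assumptions, not the book's actual code), it might look something like this:

-- Sketch: capture this session's stats plus instance-wide totals.
-- The 'total: ' prefix marks the system-level rows added by the UNION ALL.
insert into sess_stats ( active, name, val )
select act.cnt, sn.name, st.value
  from v$statname sn, v$mystat st,
       ( select count(*) cnt from v$session where status = 'ACTIVE' ) act
 where sn.statistic# = st.statistic#
   and sn.name in ( 'session pga memory', 'session uga memory',
                    'physical writes direct temporary tablespace' )
union all
select act.cnt, 'total: ' || sn.name, sum(st.value)
  from v$statname sn, v$sesstat st,
       ( select count(*) cnt from v$session where status = 'ACTIVE' ) act
 where sn.statistic# = st.statistic#
   and sn.name in ( 'session pga memory', 'session uga memory',
                    'physical writes direct temporary tablespace' )
 group by act.cnt, sn.name;
commit;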

I set up another script to be run in other sessions to generate a large sorting load on the machine. This script loops and uses a built-in package, DBMS_ALERT, to see if it should continue processing. If it should, it runs the same big query, sorting the entire T table. When the simulation was finished, a session could signal all of the sorting processes (the load generators) to "stop" and exit. Here's the script (stored in a file named gen_load.sql) used to perform the sort:
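
The listing did not survive in this copy of the text. The following is a sketch consistent with the description above; the alert name STOP_SORTING and the ORDER BY columns are assumptions, and the real script is in the ch04 directory mentioned in the tip below:

declare
    l_msg    varchar2(1800);
    l_status integer;
begin
    dbms_alert.register( 'STOP_SORTING' );  -- alert name is an assumption
    for i in 1 .. 999999
    loop
        -- poll with a zero-second timeout to see if we were told to stop
        dbms_alert.waitone( 'STOP_SORTING', l_msg, l_status, timeout => 0 );
        exit when l_status = 0;   -- status 0 means the alert was signaled
        -- otherwise, sort the entire T table again
        for x in ( select * from t order by 1, 2, 3, 4 )
        loop
            null;
        end loop;
    end loop;
end;
/

A controlling session would stop all of the load generators with dbms_alert.signal( 'STOP_SORTING', 'done' ) followed by a COMMIT, since DBMS_ALERT delivers alerts on commit.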

To observe the differing amounts of RAM allocated to the session I was measuring, I initially ran the SELECT in isolation, as the only session. I captured the statistics and saved them into the SESS_STATS table, along with the count of active sessions. Then I added 25 sessions to the system (that is, I ran the preceding benchmark script, gen_load.sql, with its for i in 1 .. 999999 loop, in 25 new sessions). I waited a short period of time (one minute) for the system to adjust to this new load, and then I created a new session and ran the single sort query from earlier, capturing the metrics the first time through the loop. I did this repeatedly, working my way up to 300 concurrent users.

Tip On the GitHub source code site for this book, you can download the scripts used for this experiment. In the ch04 directory, the run.sql script automates the test described in this section.

It should be noted that I asked the database instance to do an impossible thing here. At 300 users, we would be very close to the PGA_AGGREGATE_TARGET setting just by having them all logged in, let alone actually doing any work: as we'll see below, merely logging in allocates about 0.5MB of PGA per session, and each load generator running PL/SQL needs more on top of that. This drives home the point that PGA_AGGREGATE_TARGET is just that: a target, not a directive. We can and will exceed this value for various reasons.
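
As an aside (this query is not part of the original test scripts; the statistic names come from the standard V$PGASTAT view), you can compare the instance-wide PGA figures against the target while such a load runs:

-- instance-wide PGA usage versus the target, in megabytes
select name, round( value / 1024 / 1024, 1 ) mb
  from v$pgastat
 where name in ( 'aggregate PGA target parameter',
                 'total PGA allocated',
                 'maximum PGA allocated' );

The 'over allocation count' row in the same view tracks how often the instance has been forced over the target since startup (that row is a count, not bytes, which is why it is excluded from the conversion above).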

Now we are ready to report on the findings; for reasons of space, we'll stop the output at 275 users, since the data starts to get quite repetitive:

SQL> column active format 999
SQL> column pga format 999.9
SQL> column "tot PGA" format 999.9
SQL> column pga_diff format 999.99
SQL> column "temp write" format 9,999
SQL> column "tot writes temp" format 99,999,999
SQL> column writes_diff format 9,999,999
SQL> select active,

Before we analyze the results, let’s look at the query I used for reporting. My query uses a feature called pivot to pivot a resultset. Here’s an alternate way to write lines 11 through 22 of that SQL query (without the pivot feature):

select active,
       max( decode( name, 'session pga memory', val ) ) pga,
       max( decode( name, 'total: session pga memory', val ) ) as "tot PGA",
       max( decode( name, 'physical writes direct temporary tablespace', val ) )
           as "temp write",
       max( decode( name, 'total: physical writes direct temporary tablespace', val ) )
           as "tot writes temp"  -- assumed fourth metric; this line was cut off in this copy

This part of the query retrieved the records from the table of metrics when there were fewer than 275 active sessions, converted the memory metrics (UGA/PGA memory) from bytes into megabytes, and then pivoted (turned rows into columns) on the four interesting metrics. Once we had those four metrics in a single record, we used analytics (the LAG() function specifically) to add to each row the prior row's total observed PGA and total observed I/O to temp, so we could easily see the incremental differences in these values. Back to the data: as you can see, when I had a few active sessions, my sorts were performed entirely in memory.
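
To make the full reporting flow concrete, here is a self-contained sketch that combines the pivot and the LAG() step. It assumes the SESS_STATS table from earlier; the book's actual query may differ in detail:

select active, pga, "tot PGA",
       "tot PGA" - lag( "tot PGA" ) over ( order by active ) pga_diff,
       "temp write", "tot writes temp",
       "tot writes temp" - lag( "tot writes temp" ) over ( order by active ) writes_diff
  from ( -- convert the memory statistics from bytes to megabytes first
         select active, name,
                case when name like '%memory%' then val / 1024 / 1024 else val end val
           from sess_stats
          where active < 275 )
 pivot ( max( val ) for name in (
           'session pga memory'                                 as pga,
           'total: session pga memory'                          as "tot PGA",
           'physical writes direct temporary tablespace'        as "temp write",
           'total: physical writes direct temporary tablespace' as "tot writes temp" ) )
 order by active;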

For an active session count of 1 to somewhere less than 50, I could sort entirely in memory. However, by the time I had 50 users logged in and actively sorting, the database started reining in the amount of memory I was allowed to use at a time. It would take a couple of minutes before the amount of PGA in use fell back within acceptable limits (the 300MB request), but at these low concurrent-user levels it eventually would.

The amount of PGA memory allocated to the session we were watching dropped from 15.2MB to 7.7MB and settled at around 5.2MB. (Remember, parts of that PGA are not for work area (sorting) allocations but for other operations; just the act of logging in created a 0.5MB PGA allocation.)

The total PGA in use by the system remained within tolerable limits until somewhere around 126 users. At that point, I started to exceed the PGA_AGGREGATE_TARGET on a regular basis and continued to do so until the end of the test. I had given the database instance an impossible task; the very act of having 126 users, most executing PL/SQL, plus the sort they were all requesting, simply did not fit into the 300MB of RAM I had targeted. It could not be done.

Each session therefore used as little memory as possible, but still had to allocate the memory it needed to get its work done. By the time I finished this test, the active sessions were using a total of about 560MB of PGA memory, as little as they could.

Automatic PGA memory management was designed specifically to allow a small community of users to use as much RAM as possible when it is available. As the load increases, it backs off on that allocation over time; as the load decreases, it raises the amount of RAM allocated to individual operations again.

