Apr 22, 2024
Using PGA_AGGREGATE_TARGET to Control Memory Allocation - Memory Structures
Earlier, I wrote that “in theory” we can use the PGA_AGGREGATE_TARGET to control the overall amount of PGA memory used by the instance. We saw in the last example that this is not a hard limit, however. The instance will attempt to stay within the bounds of the PGA_AGGREGATE_TARGET, but if it can’t, it won’t stop processing; rather, it will just be forced to exceed that threshold.
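For reference, the current target can be inspected (and, with the appropriate privilege, changed) from SQL*Plus. The 300M value here is only illustrative, chosen to match the target used in the example below:

SQL> show parameter pga_aggregate_target

SQL> alter system set pga_aggregate_target = 300M;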
Another reason this limit is a “theory” is because work areas, though large contributors to PGA memory, are not the only contributors to PGA memory. Many factors contribute to PGA memory allocation, and only the work areas are under the control of the database instance. If you create and execute a PL/SQL block of code that fills in a large array with data in dedicated server mode where the UGA is in the PGA, Oracle can’t do anything but allow you to do it.
Consider the following quick example. We’ll create a package that can hold some persistent (global) data in the server:
$ sqlplus eoda/foo@PDB1
SQL> create or replace package demo_pkg
     as
         type array is table of char(2000) index by binary_integer;
         g_data array;
     end;
     /
Package created.
Now we’ll measure the amount of memory our session is currently using in the PGA/UGA (I used a dedicated server in this example, so the UGA is a subset of the PGA memory):
SQL> select a.name, to_char(b.value, '999,999,999') bytes,
            to_char(round(b.value/1024/1024, 1), '99,999.9') mbytes
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like '%ga memory%';
Initially, we are using about 6MB of PGA memory in our session (as a result of compiling a PL/SQL package, running this query, etc.). Now, we’ll run our query against T again using the same 300MB PGA_AGGREGATE_TARGET (this was done in a recently restarted and otherwise idle instance; we are the only session requiring memory right now):
SQL> set autotrace traceonly statistics;
SQL> select * from t order by 1,2,3,4;

72616 rows selected.
As you can see, the sort was done entirely in memory, and in fact if we peek at our session’s PGA/UGA usage, we can see how much we used:
SQL> select a.name, to_char(b.value, '999,999,999') bytes,
            to_char(round(b.value/1024/1024, 1), '99,999.9') mbytes
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like '%ga memory%';
We see about 17MB of RAM being used. Now we’ll fill up that CHAR array we have in the package (a CHAR datatype is blank-padded so each of these array elements is exactly 2000 characters in length):
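A PL/SQL block along the following lines fills the array. The element count of 200,000 is illustrative (at 2,000 bytes per element it allocates on the order of 400MB of UGA, comfortably past the 300MB target); any count large enough to blow past the target makes the point:

SQL> begin
         -- each assignment stores a blank-padded CHAR(2000) element
         for i in 1 .. 200000
         loop
             demo_pkg.g_data(i) := 'x';
         end loop;
     end;
     /

PL/SQL procedure successfully completed.

Because the package variable lives in the UGA, and in dedicated server mode the UGA is part of the PGA, all of this memory is charged to our session's PGA.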
Now, that is memory allocated in the PGA that the instance itself can't control. We already exceeded the PGA_AGGREGATE_TARGET set for the entire instance in this single session—and there is quite simply nothing the database can do about it. Its only recourse would be to fail our request, and it will do that only when the OS reports back that there is no more memory to give (ORA-04030). If we wanted, we could allocate more space in that array and place more data in it, and the instance would just have to do it for us.
However, the instance is aware of what we have done. It does not ignore the memory it can't control; it simply recognizes that the memory is being used and backs off the size of memory allocated for work areas accordingly. So if we rerun the same sort query, we see that this time we sorted to disk—the instance did not give us the 12MB or so of RAM needed to do this in memory since we had already exceeded the PGA_AGGREGATE_TARGET:
SQL> set autotrace traceonly statistics;
SQL> select * from t order by 1,2,3,4;

67180 rows selected.
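To confirm the sort spilled to disk, we can also query the session's sort statistics directly (this query is my addition, not part of the original example):

SQL> select a.name, b.value
       from v$statname a, v$mystat b
      where a.statistic# = b.statistic#
        and a.name like 'sorts (%';

A nonzero value for "sorts (disk)" shows that the work area could not be kept entirely in memory.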
So, because some PGA memory is outside of Oracle's control, it is easy to exceed the PGA_AGGREGATE_TARGET simply by allocating lots of really large data structures in our PL/SQL code. I am not recommending you do this by any means. I'm just pointing out that the PGA_AGGREGATE_TARGET is more of a request than a hard limit.