_pga_max_size = 200M : Maximum size of the PGA memory for one process ?!?
For almost two years I have watched a lot of people play with UNSUPPORTED Oracle parameters to manage Oracle's SQL workarea size policy. Although Mr. Lewis has published tens of illustrations proving that it is impossible to make general claims about those parameters, people insist on playing with them, hoping that their assumptions about the nature of those parameters are valid.
In this post I will discuss the PGA_AGGREGATE_TARGET (“PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available to all server processes attached to the instance”) and _PGA_MAX_SIZE (“Maximum size of the PGA memory for one process”) parameters from the perspective of PL/SQL development, illustrating things with examples. The bottom line is that _PGA_MAX_SIZE is not the maximum PGA memory for one process; it is the turning point at which SQL workarea content is purged to a temporary tablespace. Moreover, although PL/SQL local memory structures are also allocated within the PGA, those parameters have nothing to do with the memory allocation of PL/SQL code. They exist for SQL workarea requirements like hash joins, group bys, sorts, and direct path load buffers.
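The _PGA_MAX_SIZE description quoted above is what the x$ksppi.ksppdesc column reports. If you want to check a hidden parameter's value and description on your own instance, a query like the following sketch works; keep in mind that the x$ fixed tables are undocumented and visible only when connected as SYS:

-- Value and description of _pga_max_size (run as SYS;
-- x$ tables are not exposed to ordinary users).
select i.ksppinm  name,
       v.ksppstvl value,
       i.ksppdesc description
  from x$ksppi i, x$ksppcv v
 where i.indx = v.indx
   and i.ksppinm = '_pga_max_size';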
Filling up the World’s Simplest Collection
The machine I ran these tests on was a 4-CPU Sun SPARC machine with 64 GB of memory running Sun Solaris. The database version was 10.2.0.4 and the PGA_AGGREGATE_TARGET parameter was set to 24M. All UNSUPPORTED parameters had their default values, so _PGA_MAX_SIZE was 200M for that instance. Once you read the definition, you would think that a single Oracle process is not allowed to exceed 200M of allocated PGA memory (or used PGA memory, if you like). Look at the following demonstration:
declare
  type myarray_t is table of varchar2(4000);
  myarray myarray_t := myarray_t();
  i pls_integer := 1;
begin
  dbms_application_info.set_module('PGA USAGE TEST', 'SIMPLEST COLLECTION : 0');
  loop
    myarray.extend;
    /** Longest string that dbms_random.string can generate is 4000,
        even if you set the len parameter to higher values. */
    myarray(i) := dbms_random.string('u', 4000);
    dbms_application_info.set_module('PGA USAGE TEST', 'SIMPLEST COLLECTION : ' || to_char(i));
    i := i + 1;
  end loop;
end;
/
While this script is executing in one session, monitor the PGA usage from the v$process view periodically from a second session by using:
set linesize 500
column display_value format a5

select * from v$sql_workarea_active;

select p.PGA_USED_MEM / 1024 / 1024     pga_used,
       p.PGA_ALLOC_MEM / 1024 / 1024    pga_alloc,
       p.PGA_FREEABLE_MEM / 1024 / 1024 pga_freeable,
       s.MODULE,
       s.ACTION
  from v$process p, v$session s
 where p.ADDR = s.PADDR
   and s.MODULE = 'PGA USAGE TEST'
 order by p.PGA_ALLOC_MEM desc;
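As a cross-check, the same number can be read from the session statistics. Here is a small sketch using the documented 'session pga memory' statistic, assuming the test session is still tagged with the module name set above:

-- Current PGA memory of the test session, in MB, via v$sesstat.
select round(st.value / 1024 / 1024, 1) session_pga_mb
  from v$sesstat st, v$statname sn, v$session s
 where st.statistic# = sn.statistic#
   and sn.name = 'session pga memory'
   and st.sid = s.sid
   and s.module = 'PGA USAGE TEST';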
If there is no other running task on the system, you won't see any kind of workarea allocation by the server, so all those PGA parameters have nothing to do with your execution. And if you plot the overall PGA memory allocation for our execution, you will see something similar to:

[Figure: PGA allocated memory of the test session over time, growing steadily well past 200M]
As you can see, the PGA usage of a single process can be much larger than the magic 200M when it comes to PL/SQL. Those parameters have nothing to do with PL/SQL collections' memory usage. More dramatically, if you continue to extend the collection, at some point your OS starts to swap the Oracle memory to disk. This means an evil developer can cause a system crash just by doing some careless memory allocation.
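To contrast, here is a hypothetical workload that does go through a SQL workarea, where these parameters do apply: a large sort. The row count below is an arbitrary assumption, chosen just to overflow the workarea. Run something like this in a session and v$sql_workarea_active will show a sort operation spilling to the temporary tablespace once its workarea limit is reached:

-- A large ORDER BY is a SQL workarea operation, unlike a PL/SQL
-- collection; with enough rows the sort spills to temp instead of
-- growing the process PGA without bound.
select *
  from (select dbms_random.string('u', 4000) s
          from dual
        connect by level <= 1000000)   -- arbitrary size, adjust as needed
 order by s;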
One final thing to mention is the ratio between PGA_ALLOC_MEM and PGA_USED_MEM. I noticed that as the size of the collection increases over time, Oracle converges to a fixed ratio (in my case it was ~2.65) of PGA_ALLOC_MEM to PGA_USED_MEM. This is good in the sense that Oracle keeps a constant correlation between used PGA and allocated PGA for PL/SQL collections.
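To watch that ratio yourself while the fill script runs, a trivial variation of the monitoring query above works (column names as in v$process):

-- Allocated-to-used PGA ratio for the test session.
select round(p.PGA_ALLOC_MEM / p.PGA_USED_MEM, 2) alloc_to_used
  from v$process p, v$session s
 where p.ADDR = s.PADDR
   and s.MODULE = 'PGA USAGE TEST';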
Conclusion
Sorry, but Oracle doesn't promise anything it has not documented. Although the x$ksppi.ksppdesc column describes the _PGA_MAX_SIZE parameter as the maximum PGA allocated for an Oracle process, it doesn't say anything specific about PL/SQL memory usage. So you should ensure that your code is not using TOO MUCH MEMORY. For example, you should guarantee that you are not trying to bulk fetch a very large table in one pass, and that you are not assuming the table you are fetching will never grow over time.
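As a closing illustration, here is a minimal sketch of the bounded fetch pattern; big_table and text_col are hypothetical names, and the batch size of 1000 is an arbitrary assumption. The point is that memory use stays proportional to the LIMIT, not to the table size:

declare
  cursor c is select text_col from big_table;   -- hypothetical table
  type buf_t is table of big_table.text_col%type;
  buf buf_t;
begin
  open c;
  loop
    fetch c bulk collect into buf limit 1000;   -- cap rows per fetch
    exit when buf.count = 0;
    -- process buf here; at most 1000 elements live in the PGA at a time
  end loop;
  close c;
end;
/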