Paul,

If you don't set anything for project.max-shm-memory and don't set 
shmmax in /etc/system either, the default shmmax of 1/4 of physical 
memory applies --- and that is the maximum size of a single shared 
memory segment --- so everything will work just fine.

You also have the option of setting shmmax to 0xffffffffffffffff in 
/etc/system and forgetting about any other shared memory restrictions --- 
this may or may not be acceptable, but it has been heavily used in the 
past, and YES, shmmax, while obsolete, still works and is honored on 
Solaris 10 (remember, someone decided to make its default value 1/4 of 
physical memory :-)
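For the archives, that legacy tunable looks like this in /etc/system 
(comment lines in that file start with '*'; a reboot is required for it 
to take effect):

```
* Legacy System V tunable -- obsolete on Solaris 10 but still honored.
* Effectively removes the per-segment shared memory size cap.
set shmsys:shminfo_shmmax=0xffffffffffffffff
```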

Now, if you do set project.max-shm-memory, it is the maximum size of 
all shared memory segments together under that specific project. 
Ideally you would want one project per Oracle instance... this way you 
can restrict use of the SGA, if you want to, with finer granularity.
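A sketch of the per-instance setup --- the 'oracle' user, the project 
name, and the 6G cap below are all made up for illustration:

```shell
# One project per Oracle instance; name and size are illustrative.
projadd -U oracle -c "PROD instance" \
    -K "project.max-shm-memory=(priv,6G,deny)" oracle.prod

# The DBA switches into the project before starting that instance:
newtask -p oracle.prod

# ...and can verify the control from inside it:
prctl -n project.max-shm-memory -i process $$
```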

My personal preference is to leave shmmax at either the default or the 
unlimited value above.

Regards,
Jay

Paul Kraus wrote on 01/16/07 07:36 AM:

> First, thanks to those who responded privately with suggestions.
>
> Now, here is what we found, although I don't fully understand *why* yet.
>
> Once again, it is an E2900 with 12 CPUs and 64 GB RAM and 8 non-global
> zones (all running Oracle, either DB or App Server)
> -> vmstat shows about 32 GB RAM free, therefore 32 GB RAM in use,
> plenty of swap free
> -> prstat shows hundreds of GB of RAM in use (it doesn't take into
> account shared memory)
> -> Oracle DB startup is failing to allocate 6 GB shared memory segment
>
> Using ipcs I found that the vast majority of the memory in use was in
> shared memory segments, so we had over 30 GB allocated in shared
> memory over the entire system.
>
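A quick way to total that up from ipcs --- note the awk field number 
assumes the Solaris `ipcs -ma` column layout, where SEGSZ is the 10th 
column; check the header on your release:

```shell
# Sum SEGSZ over all shared memory segments, system-wide.
ipcs -ma | awk '$1 == "m" { total += $10 }
                END { printf "%.1f GB\n", total/(1024^3) }'
```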
> I created a project for Oracle in the non-global zone we were working
> on, with a project.max-shm-memory of 16 GB (priv,17179869184,deny). I
> had the DBA login in again (new login to get into the new project) and
> try starting the DB. It started up just fine. So, we seem to *need* to
> create projects for Oracle even if Oracle is trying to allocate shared
> memory segments less than the max-shm-memory value.
>
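That behavior makes sense: a user.NAME project is joined automatically 
at login, which is why the fresh login was needed. Roughly what Paul 
describes (the 16 GB figure is from his message):

```shell
# user.oracle is picked up automatically when the oracle user logs in.
projadd -K "project.max-shm-memory=(priv,16G,deny)" user.oracle

# From the DBA's new session, confirm the project and its cap:
id -p
prctl -n project.max-shm-memory -i project user.oracle
```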
> Now, doing some research, I ran across a description of the
> project.max-shm-memory resource control as the maximum total shared
> memory for the project. This means that we need to make sure that we
> set this limit high enough in each non-global zone to accommodate *all*
> the Oracle instances running in that NG zone (or create separate
> projects for each instance).
>
> The value for max-shm-memory is 16 GB (1/4 the physical RAM) in the
> global zone, but this does not appear to work the same as the
> project.max-shm-memory resource. I say this because according to ipcs
> we have more than 16 GB allocated to shared memory. The behavior
> *feels* like there is a limit at 32 GB (1/2 the physical RAM), but I
> can't find that documented anywhere. Nor can I find whether the global
> zone max-shm-memory is a per-segment limit or a total shared memory
> limit.
>
> Our fix is to now build a project for the Oracle group (so that
> anything the DBAs do individually is covered as well as the oracle
> user) in each non-global zone with sufficient project.max-shm-memory
> resources for that NG zone.
>
> I still don't have a very good understanding of *how* the shared
> memory resource controls work under Solaris 10, as they seem to behave
> slightly differently than the comparable system tunings from Solaris 9
> and earlier.
>

-- 
>>>>>>>>>>>       http://bizapp.sfbay       >>>>>>>>>>>>>>
Jay Sisodiya
US Systems Practice - Business Applications Team
Sun Microsystems
Phone: 650-804-8805
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

      Solaris - The #1 Platform for Oracle Database

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
