On 2021-01-20 08:37, Bob Friesenhahn wrote:
On Wed, 20 Jan 2021, Hung Nguyen Gia via openindiana-discuss wrote:

Regardless of whether it's good behavior or not, this does give Linux a huge advantage over us.
The difference is significant.
If we want to continue to keep our Solaris heritage and continue to ridicule Linux, then OK, it's fine.

I did not see anyone here "ridiculing" Linux. Different decisions were made based on the target market: Solaris made decisions that prioritize robustness, while
Linux made decisions that prioritize running on cheap hardware.

I use Linux on tiny hardware with tremendous memory over-commit (as much as 120x), and it is a wonder that apps run at all (sometimes they run exceedingly
slowly). It is nice that this is possible to do.

It is possible to disable over-commit in Linux, but then even many desktop systems
would fail to initialize at all.
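For reference, the over-commit policy can be inspected at runtime. A minimal sketch in Python, assuming a Linux /proc filesystem (the sysctl involved is vm.overcommit_memory):

```python
def overcommit_mode():
    """Return Linux's vm.overcommit_memory setting, or None if unavailable.

    0 = heuristic over-commit (the default), 1 = always over-commit,
    2 = strict accounting (over-commit effectively disabled).
    """
    try:
        with open("/proc/sys/vm/overcommit_memory") as f:
            return int(f.read().strip())
    except OSError:
        return None  # not Linux, or /proc not mounted

print("overcommit mode:", overcommit_mode())
```

Switching to mode 2 (e.g. via `sysctl vm.overcommit_memory=2`) is the strict accounting described above: allocations are refused up front, which is exactly why many desktop systems then fail to start.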

Memory allocation via mmap() is useful, but there is a decision point as to whether to allocate backing storage in swap space or not. By default, allocated pages are zero and actual memory is not used until something writes data to a page; at that point there is a "page fault", and the kernel allocates a real memory page and initializes it with zeroed bytes. Likewise, memory which is "duplicated" by fork() and its COW principle is not used until it has been modified.

So Linux (by default) is very optimistic: it assumes that the app will not actually use the
memory it requested, or might never modify memory inherited by the forked
process.
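The lazy zero-fill behavior can be observed directly. A small sketch in Python, under two Linux assumptions: anonymous mmap pages are faulted in on first write, and getrusage reports ru_maxrss in KiB:

```python
import mmap
import resource

def peak_rss_kib():
    # On Linux, ru_maxrss is the peak resident set size in KiB.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

SIZE = 256 * 1024 * 1024           # reserve 256 MiB of address space
buf = mmap.mmap(-1, SIZE)          # anonymous mapping; no physical pages yet
before = peak_rss_kib()

# Touch every page: each first write triggers a page fault, and the
# kernel hands out a real, zero-initialized page.
for offset in range(0, SIZE, mmap.PAGESIZE):
    buf[offset] = 1

grown_mib = (peak_rss_kib() - before) // 1024
print(f"resident set grew by roughly {grown_mib} MiB after touching the pages")
buf.close()
```

The mmap() call itself barely moves the resident set; only the writes do, one page at a time.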

If one is running a large database server or other critical large apps, then relying on
over-commit is not helpful, since once the system runs even slightly short of
resources, either an app or the whole system has to die.

IBM's AIX was the earliest system I recall where over-commit was common and
processes were scored based on memory usage. When the system ran short of memory
it would usually kill the largest process.

Linux has followed this same strategy and computes an OOM score for each process. When the system runs out of already-allocated memory, then either a process has to die, the system has to panic and reboot, or new activity must be disallowed.
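That score is visible per process. A minimal sketch in Python, assuming Linux's /proc interface (the files are oom_score and the administrator-tunable oom_score_adj):

```python
def oom_badness(pid="self"):
    """Return (oom_score, oom_score_adj) for a process, or Nones off-Linux.

    When memory runs out, the kernel kills the process with the highest
    oom_score; oom_score_adj (-1000..1000) lets an administrator bias the
    choice, with -1000 exempting a process from the OOM killer entirely.
    """
    values = []
    for name in ("oom_score", "oom_score_adj"):
        try:
            with open(f"/proc/{pid}/{name}") as f:
                values.append(int(f.read()))
        except OSError:
            values.append(None)
    return tuple(values)

print("OOM badness of this process:", oom_badness())
```

This is how one protects a critical daemon (say, a database) from the behavior described above: lower its oom_score_adj so the killer prefers other victims.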
Another difference with Linux, at least as compared to FreeBSD, is that Linux favors disk backing. IOW, they'd rather keep RAM free, whereas the BSDs would rather use RAM. Granted, ZFS brings COW, which helps. But when given a choice, why not choose RAM over
disk? It's *much* faster.
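On Linux that preference is at least partly tunable: vm.swappiness controls how willing the kernel is to push anonymous pages to disk rather than shrink caches. A minimal Python sketch reading it (an illustration of the Linux knob only, not a FreeBSD comparison):

```python
def swappiness():
    """Return vm.swappiness (0-100 classically, up to 200 on newer
    kernels), or None if unavailable.

    Higher values make the kernel more willing to swap anonymous pages
    out to disk; lower values make it keep them resident in RAM and
    reclaim page cache instead.
    """
    try:
        with open("/proc/sys/vm/swappiness") as f:
            return int(f.read().strip())
    except OSError:
        return None

print("vm.swappiness:", swappiness())
```

Lowering this value nudges Linux toward the BSD-style "rather use RAM" behavior described above.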

--Chris

Bob

--

_______________________________________________
openindiana-discuss mailing list
[email protected]
https://openindiana.org/mailman/listinfo/openindiana-discuss
