> My simple formula
>     swap needed = total memory need - physical memory size
> works much better than the "twice physical memory" one.
Questionable. Aside from not being computable (everyone can easily tell what their physical memory size is, but few people know what their "total memory need" will be when they're installing Linux) and vaguely defined (the unqualified term "memory" should almost never be used to mean anything but physical memory), it's not really accurate if by "total memory need" you mean "total VM space needed." The kernel takes up a chunk of wired memory for its own needs, and if less than, say, 10% of physical memory is available for buffering files on top of that, your performance is going to go through the floor.

There's another, insidious problem with many VM systems (I don't honestly know if Linux's is included) that affects the above computation. Let's say I have 32MB of physical memory and 24MB of swap, and let's pretend that a constant 8MB of my physical memory is consumed by disk buffers and kernel overhead. Okay, you say, now I have 48MB of virtual memory to play with.

Now let's say that all of the running processes on my system are long-running processes, and they consume a total of 25MB of virtual memory. Obviously, at any given time, a minimum of 1MB of that data has been paged out. Let's say that at some time we have 24MB of the pages resident and 1MB of swap space allocated to hold the 1MB of data that's been paged out. Now suppose that 1MB needs to be paged back in. The system needs to page out another 1MB of data, so it allocates another 1MB of swap space and pages it out. (Obviously, this all happens one page at a time.)

Now, here's the catch: under most VM systems, the old 1MB of swap space remains allocated for the old data, in order to save time if those pages need to be swapped out again and haven't been dirtied by writes. So now we have 24MB resident and 2MB of swap space allocated.
We can keep going this way, paging in the data that was just paged out, until finally the last 1MB of data needs to be paged out and all of the swap space has been allocated. At this point, a clever VM system will deallocate swap pages that aren't strictly needed in order to make room. But the VM systems I have experience with will simply fail catastrophically. So with only 25MB of data, we can swamp what was theoretically a 48MB playground.

Even if the VM system is clever, your performance may (depending on your applications) go through the floor if your total amount of data exceeds the amount of swap space on disk, because it reduces your ability to keep valid on-disk copies of paged-in data, and forces you to write out pages which would otherwise still have a clean copy on disk.

My conclusion: allocate as much swap space as you believe your virtual memory needs will be. For most applications, twice your physical memory size is about the upper limit you can use without thrashing.
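For the curious, the scenario above can be sketched as a toy simulation (this is a hypothetical model I made up to illustrate the argument, not how any real kernel is implemented; the numbers match the 32MB/24MB example, with one "page" per MB of data):

```python
# Toy model of the swap-reservation pathology: 32MB RAM - 8MB of
# kernel/buffer overhead = 24MB resident capacity, 24MB of swap,
# and 25MB of long-running process data.
RESIDENT_CAPACITY = 24          # MB of RAM usable by processes
SWAP_SIZE = 24                  # MB of swap on disk
PAGES = list(range(25))         # 25 one-MB pages of process data

resident = set(PAGES[:RESIDENT_CAPACITY])   # pages 0..23 start in RAM
swapped_out = {PAGES[-1]}                   # page 24 starts on disk
swap_slots = {PAGES[-1]}                    # slots, once allocated, stay allocated

def page_in(page, victim):
    """Bring `page` back into RAM by evicting `victim`.  Under the
    naive VM modeled here, `page`'s old swap slot remains allocated
    even after `page` is resident again."""
    resident.discard(victim); swapped_out.add(victim)
    swapped_out.discard(page); resident.add(page)
    if victim not in swap_slots:            # first eviction needs a fresh slot
        if len(swap_slots) >= SWAP_SIZE:
            raise MemoryError("swap exhausted by only 25MB of data")
        swap_slots.add(victim)

# Repeatedly touch whichever page was just paged out, evicting the
# not-yet-swapped pages one by one (pages 23 down to 0).
try:
    for victim in range(RESIDENT_CAPACITY - 1, -1, -1):
        page_in(next(iter(swapped_out)), victim)
    print("the naive VM survived")
except MemoryError as err:
    print(f"{err}; swap slots allocated: {len(swap_slots)}")
```

Each round trip reserves one more slot for a page that has been paged back in, so by the time the 25th distinct page needs a slot, all 24 are spoken for and the naive system fails, exactly as described. A clever system would instead reclaim one of the slots still reserved for a clean, resident page.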