On 12/29/2012 11:18 AM, Ondřej Bílka wrote:
> alloca caused segfault on oom condition and null pointer
> access has equivalent behaviour.
alloca doesn't always cause a SEGV on out of memory.  That's part of
the problem that we're trying to cure.  If alloca is given too large a
number, it might SEGV, and it might not.  malloca should not have this
problem: it should reliably fail when out of memory.

Unfortunately, when out of memory, the proposed use of malloca does
not reliably SEGV.  Here's a trivial example:

  size_t n = ... some big number ...;
  char *p = malloca (n);
  strcpy (p + n - 10, "x");
  freea (p);

This might not SEGV when malloca returns NULL, depending on the
architecture; for example, if n happens to be 100000010 and
(char *) 100000000 happens to be a valid address.

> And on linux it will always succeed and be killed by oom later.

Sorry, I'm not following, or perhaps I'm misunderstanding you.  malloc
does not always succeed on GNU/Linux; it sometimes returns NULL.
malloc (SIZE_MAX) is a trivial example of this.

> Of course. In this benchmark
> http://kam.mff.cuni.cz/~ondra/malloca_benchmark.tar.bz2
> with my implementation is 20% faster than gnulib one.

First, we need a correct implementation before we can start doing
benchmark comparisons, as fixing the problems will slow things down, I
expect.  It's not just the SEGV problem mentioned above; it's also the
problem with very large allocation requests that I mentioned earlier.

Second, that benchmark invokes malloca on a constant.  But actual code
rarely does this:

  char *p = alloca (100);

as what would be the point?  It's more portable to do this:

  char buf[100];
  char *p = buf;

and one doesn't need either alloca or malloca in this case.  A
more-realistic benchmark would invoke malloca with a non-constant, as
that's how alloca is typically used in practice.
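To make the NULL-plus-offset point concrete, here is a minimal sketch
(not gnulib's actual code): checked_alloc is a hypothetical
malloca-like wrapper, implemented here with plain malloc, whose
failure is deterministic because the caller checks for NULL before
forming any address from the result.

  /* Minimal sketch, assuming a hosted C environment; checked_alloc
     is a hypothetical stand-in for malloca, not gnulib's API.  */
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  static void *
  checked_alloc (size_t n)
  {
    return malloc (n);   /* returns NULL on failure, reliably */
  }

  int
  main (void)
  {
    size_t n = SIZE_MAX;           /* some big number */
    char *p = checked_alloc (n);
    if (p == NULL)
      {
        /* Without this check, strcpy (p + n - 10, "x") would compute
           the address (char *) (n - 10), which might happen to be
           mapped: silent corruption instead of a SEGV.  */
        puts ("allocation failed");
        return 0;
      }
    /* ... use p ..., then release it.  */
    free (p);
    return 0;
  }

The check is the whole point: once the failure path is explicit, the
behavior no longer depends on which addresses happen to be valid.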