On 4/7/07, Joe Buck <[EMAIL PROTECTED]> wrote:
> Consider an implementation that, when given
>
>     Foo* array_of_foo = new Foo[n_elements];
>
> passes __compute_size(n_elements, sizeof(Foo)) instead of
> n_elements * sizeof(Foo) to operator new, where __compute_size is
>
>     inline size_t __compute_size(size_t num, size_t size) {
>         size_t product = num * size;
>         return product >= num ? product : ~size_t(0);
>     }
>
> This counts on the fact that any operator new implementation has to
> fail when asked to supply every single addressable byte, less one.

This statement is true only for linear address spaces. For segmented
address spaces, it is quite feasible to have a ~size_t(0) much smaller
than addressable memory.
The optimization above would be wrong for such machines, because on
overflow operator new could succeed and return an allocation smaller
than the n_elements * sizeof(Foo) bytes the program actually needs.
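
For illustration only (a sketch; the helper name and the use of an
exception are my assumptions, not anything an existing implementation
is claimed to do): one way to stay correct on such a target is to
report the overflow directly rather than pass a sentinel size and hope
the allocation fails. Using the usual division-based overflow test:

    #include <cstddef>  // size_t
    #include <new>      // std::bad_alloc

    // Hypothetical helper: signal the overflow itself instead of relying
    // on operator new rejecting a request of ~size_t(0) bytes.
    inline size_t __checked_size(size_t num, size_t size) {
        size_t product = num * size;
        if (size != 0 && product / size != num)  // multiplication wrapped
            throw std::bad_alloc();              // correct on any address-space model
        return product;
    }
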
It would appear that the extra cost, for the non-overflow case, is
two instructions (on most architectures): the compare and the
branch, which can be arranged so that the prediction is not-taken.
That is the dynamic count. The static count, which could affect
instruction-cache density, should also include the code that
materializes the alternate return value.
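
For what it is worth, here is a sketch (not what the compiler would
actually emit) of how the check can be written so the overflow path is
laid out as the unlikely branch, using GCC's __builtin_expect and
keeping the quoted test:

    #include <cstddef>  // size_t

    inline size_t __compute_size(size_t num, size_t size) {
        size_t product = num * size;
        // Overflow marked unlikely: the common case falls straight through,
        // so the dynamic cost is the compare plus a predicted-not-taken branch.
        if (__builtin_expect(product < num, 0))
            return ~size_t(0);
        return product;
    }
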
--
Lawrence Crowl