Joe Buck writes:
>Consider an implementation that, when given
>
> Foo* array_of_foo = new Foo[n_elements];
>
>passes __compute_size(n_elements, sizeof Foo) instead of n_elements*sizeof Foo
>to operator new, where __compute_size is
>
>inline size_t __compute_size(size_t num, size_t size) {
> size_t product = num * size;
> return product >= num ? product : ~size_t(0);
>}
Yes, doing something like this instead would largely answer my concerns.
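For what it's worth, though, the "product >= num" test can miss some
overflowing products: with a 32-bit size_t, num = 0x90000000 and
size = 3 wrap to product = 0xB0000000, which is >= num. A check that
divides instead catches every case. Just a sketch, and the name
__compute_size_checked is mine, not anything GCC actually uses:

#include <stddef.h>

inline size_t __compute_size_checked(size_t num, size_t size) {
    /* Overflow happened iff num > SIZE_MAX / size; return the
       largest size_t so that operator new is forced to fail. */
    if (size != 0 && num > ~size_t(0) / size)
        return ~size_t(0);
    return num * size;
}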
>This counts on the fact that any operator new implementation has to fail
>when asked to supply every single addressable byte, less one.
I don't know if you can assume "~size_t(0)" is equal to the number of
addressable bytes, less one. A counterexample would be 16-bit 80x86
compilers where size_t is 16 bits and an allocation of 65535 bytes can
succeed, but I don't know if GCC supports any targets where something
similar can happen.
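To illustrate the collision (a hypothetical sketch -- unsigned short
here just stands in for a 16-bit size_t; this isn't a real 80x86
target):

#include <stdio.h>

int main() {
    typedef unsigned short size16;          /* pretend 16-bit size_t */
    size16 sentinel = size16(~size16(0));   /* the "must fail" value */
    /* sentinel is 65535: exactly the 65535-byte request that a
       16-bit allocator can legitimately satisfy, so returning it
       from __compute_size would make a valid allocation fail. */
    printf("sentinel = %u\n", unsigned(sentinel));
    return 0;
}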
>I haven't memorized the standard, but I don't believe that this
>implementation would violate it. The behavior differs only when more
>memory is requested than can be delivered.
It differs because the actual amount of memory requested is the result
of the unsigned multiplication "n_elements * sizeof Foo", using your
example above. Since the result of this calculation isn't undefined,
even if it "overflows" (unsigned arithmetic is defined to wrap modulo
2^N), there's no room for the compiler to calculate a different value
to pass to operator new().
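For example (another sketch; the 16-byte Foo is just an assumption
for illustration), the wrapped product can even come out to zero:

#include <stdio.h>
#include <stddef.h>

struct Foo { char bytes[16]; };     /* hypothetical element type */

int main() {
    /* One element more than ~size_t(0) / sizeof(Foo) allows: */
    size_t n_elements = ~size_t(0) / sizeof(Foo) + 1;
    /* Unsigned multiplication wraps modulo 2^N, so the byte count
       that would be handed to operator new wraps around to 0. */
    size_t requested = n_elements * sizeof(Foo);
    printf("n_elements = %zu, requested = %zu bytes\n",
           n_elements, requested);
    return 0;
}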
Ross Ridge