James Kanze wrote:

> On Jun 18, 5:40 pm, Kai-Uwe Bux <(E-Mail Removed)> wrote:
>> James Kanze wrote:
>> > On Jun 18, 11:44 am, "Angel Tsankov" <(E-Mail Removed)-sofia.bg> wrote:
>> >> Does the C++ standard define what happens when the size
>> >> argument of void* operator new(size_t size) cannot represent
>> >> the total number of bytes to be allocated?
>> >>
>> >> For example:
>> >>
>> >> struct S
>> >> {
>> >>     char a[64];
>> >> };
>> >>
>> >> S* allocate(int size)
>> >> {
>> >>     return new S[size]; // What happens here?
>> >> }
>> >>
>> >> int main()
>> >> {
>> >>     allocate(0x7FFFFFFF);
>> >> }

>> > Supposing that all values in an int can be represented in a
>> > size_t (i.e. that size_t is unsigned int or larger---very, very
>> > probably), then you should either get the memory, or get a
>> > bad_alloc exception (which you don't catch). That's according
>> > to the standard; a lot of implementations seem to have bugs
>> > here.
>
>> I think, you are missing a twist that the OP has hidden within
>> his posting: the size of S is at least 64. The number of S
>> objects that he requests is close to
>> numeric_limits<size_t>::max().
>
> It's not on the systems I usually use, but that's not the point.
>
>> So when new S[size] is translated into raw memory allocation,
>> the number of bytes (not the number of S objects) requested
>> might exceed numeric_limits<size_t>::max().
>
> And? That's the implementation's problem, not mine. I don't
> see anything in the standard which authorizes special behavior
> in this case.
The question is which behavior counts as "special". I do not see which
behavior the standard requires in this case.

>> I think (based on my understanding of [5.3.4/12]) that in such
>> a case, the unsigned arithmetic will just silently overflow
>> and you end up allocating a probably unexpected amount of
>> memory.
>
> Could you please point to something in §5.3.4/12 (or elsewhere)
> that says anything about "unsigned arithmetic".
I qualified my statement with "I think" simply because the standard is vague
to me. However, it says, for instance, that

    new T[5] results in a call of operator new[](sizeof(T)*5+x),

and operator new takes its argument as a std::size_t. Now, whenever any
arithmetic type is converted to std::size_t, I would expect [4.7/2] to
apply, since size_t is unsigned. When the standard does not say that the
usual conversion rules do not apply in the evaluation of the expression

    sizeof(T)*5+x

what am I to conclude?

> I only have a
> recent draft here, but it doesn't say anything about using
> unsigned arithmetic, or that the rules of unsigned arithmetic
> apply for this calcule, or even that there is a calcule.
It gives the formula above. It does not really matter whether you interpret

    sizeof(T)*5+x

as unsigned arithmetic or as plain math. A conversion to std::size_t has to
happen at some point because of the signature of the allocation function.
If [4.7/2] is not meant to apply to that conversion, the standard should
say so somewhere.

> (It is
> a bit vague, I'll admit, since it says "A new-expression passes
> the amount of space requested to the allocation function as the
> first argument of type std::size_t." It doesn't really say
> what happens if the "amount of space" isn't representable in a
> size_t.
So you see: taken literally, the standard guarantees something that is
impossible.

> But since it's clear that the request can't be honored,
> the only reasonable interpretation is that you get a bad_alloc.)
Hm, that is a mixture of common sense and wishful thinking.

I agree that a bad_alloc is clearly what I would _want_ to get. I do not
see, however, how to argue from the wording of the standard that I _will_
get that.

Best

Kai-Uwe Bux