Velocity Reviews - Computer Hardware Reviews



forcing new to fail (or throw an exception)

 
 
H.S.
Guest
Posts: n/a
 
      07-25-2007

Hello,

Here is a little question. I was reading up on the FAQ on pointers:
http://www.parashift.com/c++-faq-lit....html#faq-16.6

and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
enough memory, by trying to allocate a huge amount. Here is what I am trying:
int main(){
double *ldP;
ldP = new double [2048*2048*2048];

delete ldP;
return 0;
}

It compiles okay. It runs okay too.

What am I missing here? How can I try to allocate memory huge enough
that new throws an exception?

thanks,
->HS
 
 
 
 
 
Victor Bazarov
Guest
Posts: n/a
 
      07-25-2007
H.S. wrote:
> Hello,
>
> Here is a little question. I was reading up on the FAQ on pointers:
> http://www.parashift.com/c++-faq-lit....html#faq-16.6
>
> and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
> enough memory, by trying to allocate a huge amount. Here is what I am
> trying: int main(){
> double *ldP;
> ldP = new double [2048*2048*2048];


Try

size_t s = 2048*2048*2048;
std::cout << "About to allocate " << s << " doubles" << std::endl;
double *ldP = new double[s];

>
> delete ldP;


Should be

delete[] ldP;

> return 0;
> }
>
> It compiles okay. It runs okay too.
>
> What am I missing here? How can I try to allocate memory huge enough
> that new throws an exception?


Hard to say. Your program (due to wrong 'delete') had undefined
behaviour. Try fixing it.

V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask


 
 
 
 
 
H.S.
Guest
Posts: n/a
 
      07-25-2007
Victor Bazarov wrote:
> H.S. wrote:
>> Hello,
>>
>> Here is a little question. I was reading up on the FAQ on pointers:
>> http://www.parashift.com/c++-faq-lit....html#faq-16.6
>>
>> and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
>> enough memory, by trying to allocate a huge amount. Here is what I am
>> trying: int main(){
>> double *ldP;
>> ldP = new double [2048*2048*2048];

>
> Try
>
> size_t s = 2048*2048*2048;


which generates:
$> g++ -o testmem testmem.cc
testmem.cc: In function ‘int main()’:
testmem.cc:5: warning: overflow in implicit constant conversion
$> ./testmem
About to allocate 0 doubles

> std::cout << "About to allocate " << s << " doubles" << std::endl;
> double *ldP = new double[s];
>
>> delete ldP;

>
> Should be
>
> delete[] ldP;


Thanks for the correction.

>> return 0;
>> }
>>
>> It compiles okay. It runs okay too.
>>
>> What am I missing here? How can I try to allocate memory huge enough
>> that new throws an exception?

>
> Hard to say. Your program (due to wrong 'delete') had undefined
> behaviour. Try fixing it.


So after removing my mistakes, and correcting the one in your code (sort
of), here is what throws the exception (this is on a Debian Testing
kernel, 2.6.21, since the maximum memory allocation depends on kernel
options(?)):

#include <iostream>
int main(){
double *ldP;
size_t s = 2048*2048*58;
std::cout << "About to allocate " << s << " doubles" << std::endl;
ldP = new double [s];

delete [] ldP;
return 0;
}

$> g++ -o testmem testmem.cc
$> ./testmem
About to allocate 243269632 doubles
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted

thanks,
->HS
 
 
BobR
Guest
Posts: n/a
 
      07-25-2007

H.S. <(E-Mail Removed)> wrote in message...
>
> #include <iostream>
> int main(){

// > double *ldP;
> size_t s = 2048*2048*58;
> std::cout << "About to allocate " << s << " doubles" << std::endl;

// > ldP = new double [s];
double *ldP( new double[ s ] );

> delete [] ldP;
> return 0;
> }
>
> $> g++ -o testmem testmem.cc
> $> ./testmem
> About to allocate 243269632 doubles
> terminate called after throwing an instance of 'std::bad_alloc'
> what(): std::bad_alloc
> Aborted
>
> thanks,
> ->HS


FYI: When you want a BIG number, try this:

// std::size_t big(-1); // compiler 'warning', but usually works (a)
int bigint(-1);
std::size_t big( bigint );
// #include <limits>
std::size_t big2( std::numeric_limits<std::size_t>::max() );

std::cout<<"size_t big()="<<big<<std::endl;
std::cout<<"size_t big2()="<<big2<<std::endl;

// out: size_t big()=4294967295
// out: size_t big2()=4294967295

Of course the '-1' trick (a) only works on 'unsigned' types
(and may be UB on some systems?).
Use the 'numeric_limits<>' version.

--
Bob R
POVrookie


 
 
Robert Bauck Hamar
Guest
Posts: n/a
 
      07-26-2007
BobR wrote:
> FYI: When you want a BIG number, try this:
>
> // std::size_t big(-1); // compiler 'warning', but usually works (a)
> int bigint(-1);
> std::size_t big( bigint );
> // #include <limits>
> std::size_t big2( std::numeric_limits<std::size_t>::max() );
>
> std::cout<<"size_t big()="<<big<<std::endl;
> std::cout<<"size_t big2()="<<big2<<std::endl;
>
> // out: size_t big()=4294967295
> // out: size_t big2()=4294967295
>
> Of course the '-1' trick (a) only works on 'unsigned' types
> (and may be UB on some systems?).


No, it's well defined. The result should be the least unsigned integer
congruent to -1 modulo 2**N (where ** means power, and N is the number of
bits in std::size_t), and that would be 2**N - 1.

> Use the 'numeric_limits<>' version.


The numeric_limits<> version also works on signed integers. And it doesn't
confuse readers who don't know that std::size_t is unsigned, or who haven't
studied the technicalities of integral conversions.

And: std::size_t is defined in <cstddef> (and some of the other C headers).
That header should be included in order to use std::size_t.

--
rbh
 
 
James Kanze
Guest
Posts: n/a
 
      07-26-2007
On Jul 25, 6:11 pm, "H.S." <(E-Mail Removed)> wrote:
> Victor Bazarov wrote:
> > H.S. wrote:
> >> Here is a little question. I was reading up on the FAQ on pointers:
> >>http://www.parashift.com/c++-faq-lit....html#faq-16.6


> >> and wanted to see what g++ (ver. 4.1.3) does if it cannot allocate
> >> enough memory by trying to allocating huge amount. Here is what I am
> >> trying: int main(){
> >> double *ldP;
> >> ldP = new double [2048*2048*2048];


> > Try


> > size_t s = 2048*2048*2048;


> which generates:
> $> g++ -o testmem testmem.cc
> testmem.cc: In function ‘int main()’:
> testmem.cc:5: warning: overflow in implicit constant conversion
> $> ./testmem
> About to allocate 0 doubles


Curious that he didn't get the warning for his code. Or maybe
he didn't notice it. In fact, of course, according to the
standard, that shouldn't be a warning, but an error. (Strictly
speaking: the program is ill formed, and the compiler must issue
a diagnostic. Formally speaking, once the compiler has issued
the diagnostic, it can do whatever it likes, including reformat
your hard drive. From a quality of implementation point of
view, of course, either the program should not compile, or the
compiler should document this as an extension. In this case, at
any rate I'd definitely post a bug report to g++. Supposing 32-bit
ints, of course.)

> > std::cout << "About to allocate " << s << " doubles" << std::endl;
> > double *ldP = new double[s];


> >> delete ldP;


> > Should be


> > delete[] ldP;


> Thanks for the correction.


> >> return 0;
> >> }


> >> It compiles okay. It runs okay too.


> >> What am I missing here? How can I try to allocate memory huge enough
> >> that new throws an exception?


> > Hard to say. Your program (due to wrong 'delete') had undefined
> > behaviour. Try fixing it.


Come now. We both know that the wrong delete wasn't the
problem. The problem was the overflow, which would have been
undefined behavior if the expression hadn't been a constant
expression.

> So after removing my mistakes, and correcting the one in your code (sort
> of), here is what throws the exception (this is on a Debian Testing
> kernel, 2.6.21, since max memory allocation depends on the kernel
> options(?)):


> #include <iostream>
> int main(){
> double *ldP;
> size_t s = 2048*2048*58;
> std::cout << "About to allocate " << s << " doubles" << std::endl;
> ldP = new double [s];


> delete [] ldP;
> return 0;
> }


> $> g++ -o testmem testmem.cc
> $> ./testmem
> About to allocate 243269632 doubles
> terminate called after throwing an instance of 'std::bad_alloc'
> what(): std::bad_alloc
> Aborted


You still haven't tested much. (I know, because operator new
doesn't work correctly with the default configuration of Linux.)
Try smaller blocks, and then accessing the allocated memory.
For some configurations, you'll get a core dump. (It may be
hard to simulate if you have a lot of memory.)

Basically, operator new can fail for three reasons: there's not
enough space available in the address space of the process (what
you're seeing, probably), the allocation would cause the process
to exceed some artificially imposed system limits (e.g. with
ulimit -m under Linux), or there really isn't enough virtual
memory. In its default configuration, Linux doesn't work in
this last case: operator new (based on what the OS told it) will
return an apparently valid pointer, which will cause a core dump
when dereferenced. (Older versions of AIX had a similar
problem, and Linux can be configured so that it behaves
correctly, too.)

Note that in this last case, at least some configurations of
some versions of Windows will pop-up a Window, asking you to
stop some other programs in order to make more memory available.
(I think some other configurations will just silently increase
the size of the swap space, and silently continue.)

--
James Kanze (GABI Software) email:(E-Mail Removed)
Conseils en informatique oriente objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34

 
 
BobR
Guest
Posts: n/a
 
      07-26-2007

Robert Bauck Hamar <(E-Mail Removed)> wrote in message...
> BobR wrote:
> > // std::size_t big(-1); // compiler 'warning', but usually works (a)

[snip]
> > Of course the '-1' trick (a) only works on 'unsigned' types
> > (and may be UB on some systems?).

>
> No, it's well defined. The result should be the least unsigned integer
> congruent to -1 modulo 2**N (where ** means power, and N is the number of
> bits in std::size_t), and that would be 2**N - 1.


Thanks.

--
Bob R
POVrookie


 
 
 
 