
Velocity Reviews > Newsgroups > Programming > C++ > bad alloc

bad alloc

 
 
Paul, 09-06-2011
On Sep 6, 2:36 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 3, 7:14 pm, James Kanze <(E-Mail Removed)> wrote:
>
> > On Sep 2, 5:01 am, Adam Skutt <(E-Mail Removed)> wrote:

>
> > > On Sep 1, 7:58 pm, James Kanze <(E-Mail Removed)> wrote:
> > > > If you can't write your output, then logging will fail. But
> > > > that's a different problem from running out of memory.
> > > Not particularly. Our goal here is to never fail. It's by
> > > definition impossible.

>
> > Logging is not normally part of the "essential" job of the
> > application.

>
> If your goal is to improve robustness by logging the OOM condition,
> then it's not just essential, it's mandatory. If you fail to do it,
> you failed to improve robustness.
>
> > I've yet to see a logging system in a large system which didn't
> > use custom streambuf. (For that matter, at least on large scale
> > servers, I'd say that outputting to custom streambuf's is far
> > more frequent than outputting to the standard streambuf's.)

>
> > I'm not sure what you mean by "the facilities out of the box".
> > No language that I know provides the logging facilities
> > necessary for a large scale server "out of the box"; C++
> > actually does far better than most in this regard. And no
> > language, to my knowledge, provides any sort of transaction
> > management (e.g. with rollback).

>
> Then you don't know many languages! Java, Python, and many others
> provide robust, enterprise-grade logging facilities out of the box.
> Haskell, Erlang, and many others provide all sorts of transactional
> facilities, depending on exactly what you want, out of the box.
>
>
>
> > Or may not, if it is designed correctly. (This is one of the
> > rare cases where just dumping the memory image of certain
> > struct's might be appropriate.)

>
> You're just making the cost/value proposition worse, not better.
> That's even more code I have to write!
>
> > I've not followed the discussion completely, but most of what
> > I've seen seems to concern returning an error for the request
> > triggering the problem, and continuing to handle other requests.

>
> No, that's not even how the discussion started. One of the major
> advocates for handling OOM suggested this was not only possible, but
> trivial.
>
> > Unless the reason you've run out of memory is a memory leak,
> > you can.

>
> Nope. All the other requests can die while you're trying to handle
> the OOM condition. Or the other side could drop the request because
> they got tired of waiting. The reality of the matter is that both
> will happen.
>
> > The effort isn't that hard,

>
> Yes, it is. It requires me to rewrite a considerable number of
> language and sometimes even OS facilities, something you have admitted
> yourself! The entire reason I'm using a programming language is
> because it provides useful facilities for me. As a result, it isn't
> the least bit unreasonable to conclude that rewriting language
> facilities is hard. If I wanted to be writing language
> facilities, then I'd just write my own damn programming language in
> the first place!


But how much effort is it to just enclose all allocations in a
try-catch block? It's only one try block, and it's all provided as a
language feature.

>
> > and it's a basic requirement for
> > some applications. If you don't handle OOM correctly, your
> > application isn't correct.

>
> Applications that require a response to OOM other than terminate are
> an insubstantial minority. Systems that cannot permit termination as
> an OOM response are almost certainly broken.


Looking at a user application on Windows, termination on a low memory
condition is permitted, but if the program were just to terminate at
the slightest whiff of low memory it wouldn't be a very well written
program IMHO. At least give an error message and inform the user why
you are closing before termination, or just abandon the operation that
the memory was requested for and keep the program alive, thus allowing
the user to close other apps and free up some memory.

If some other program is leaking memory, your program will get the
blame as a rogue program because it just crashed in an
unprofessional way. This alone is a good reason for error checking.
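Paul's "one try block" suggestion can be sketched in a few lines. (`try_allocate` is a hypothetical helper for illustration, not something from the thread; the catch clause is where a real program would put its error message.)

```cpp
#include <cassert>
#include <cstddef>
#include <new>      // std::bad_alloc, ::operator new/delete

// Hypothetical helper: attempt to allocate n bytes for some operation
// and report failure to the caller instead of letting the process die.
bool try_allocate(std::size_t n)
{
    try {
        void* p = ::operator new(n);   // throws std::bad_alloc on failure
        // ... use the n bytes for the operation ...
        ::operator delete(p);
        return true;
    } catch (const std::bad_alloc&) {
        // This is where the error message to the user would go.
        return false;
    }
}
```

The caller can then abandon just the failed operation and keep the rest of the program alive, which is the behaviour Paul argues for above.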
 
Yannick Tremblay, 09-06-2011
In article <(E-Mail Removed)>,
Adam Skutt <(E-Mail Removed)> wrote:
>On Sep 4, 1:56 pm, James Kanze <(E-Mail Removed)> wrote:
>> On Sep 4, 1:03 am, Ian Collins <(E-Mail Removed)> wrote:
>>
>> > On 09/ 4/11 11:20 AM, James Kanze wrote:
>> > > On Sep 2, 6:45 am, Ian Collins<(E-Mail Removed)> wrote:
>> > >> On 09/ 2/11 04:37 PM, Adam Skutt wrote:
>> > > [...]
>> > >> I agree. On a decent hosted environment, memory exhaustion is usually
>> > >> down to either a system wide problem, or a programming error.
>> > > Or an overly complex client request.
>> > Not spotting those is a programming (or specification) error!

>>
>> And the way you spot them is by catching bad_alloc.
>>

>
>No, you set upfront bounds on allowable inputs. This is what other
>engineering disciplines do, so I'm not sure why computer programmers
>would do something different. Algorithms that permit bounded response
>to unbounded input are pretty rare in the grand scheme of things.
>Even when they exist, they may carry tradeoffs that make them
>undesirable or unsuitable (e.g., internal vs. external sort).


So, if I understand you correctly, you are saying that you must always
set up some artificial limits on the external inputs, set artificially
low, so that no matter what is happening in the rest of the system the
program will never run out of resources...

This seems like a very bad proposition to me. The only way to win is
to reserve and grab, at startup time, all of the resources you might
potentially ever need in order to meet the worst-case scenario of your
inputs.

Yannick




 
Adam Skutt, 09-06-2011
On Sep 6, 9:43 am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
wrote:
> In article <(E-Mail Removed)..com>,
> Adam Skutt <(E-Mail Removed)> wrote:
> So, if I understand you correctly, you are saying that you must always
> setup some artificial limits to the external inputs and set
> artificially low so that no matter what is happening in the rest of
> the system, the program will never run out of resources....


Yes, if you want to isolate failure processing one request from
another (esp. in a threaded system), you set limits on how much input
can be provided with each request. You reject requests that exceed
the limit.

However, this doesn't mean the limits are set artificially low.
Usually memory isn't your bounding constraint, so you'll run out of
database handles, CPU, etc. long before you run out of memory. Per-
request memory limits can be generous and not create an issue.

Of course, I'm personally fine with unbounded input as long as the
user understands the system will break at some point and they get to
keep both of the pieces.

> This seems like a very bad proposition to me. The only way to win is
> to reserve and grab at startup time all of the resources you might
> potentially ever need in order to meet the worst-case scenario of your
> inputs.


No, I'm not sure why you think this follows in the least. I also
think I've explained why this isn't the case several times already.
If you have per-request bounds and the OS can't give you memory when
you ask for it you either need to rewrite your code (so you'll be
terminating anyway) or the OS is likely to terminate (so you'll be
terminating anyway).

Adam
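The per-request bound Adam describes can be sketched as a simple admission check. (The name `accept_request` and the 1 MiB figure are illustrative assumptions, not values from the thread.)

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Illustrative cap, chosen from capacity planning rather than from how
// much memory happens to be free at the moment.
constexpr std::size_t kMaxRequestBytes = 1 << 20;  // 1 MiB

// Reject oversized requests up front, before any per-request
// allocation happens, so one outsized input cannot push the whole
// process toward OOM.
bool accept_request(const std::string& payload)
{
    return payload.size() <= kMaxRequestBytes;
}
```

Rejected requests get an error response immediately; accepted ones are known to fit inside the service's planned memory budget.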
 
Paul, 09-06-2011
On Sep 6, 2:43 pm, yatremblay@bel1lin202.(none) (Yannick Tremblay)
wrote:
> In article <(E-Mail Removed)..com>,
> Adam Skutt <(E-Mail Removed)> wrote:
>
>
>
>
>
> >On Sep 4, 1:56 pm, James Kanze <(E-Mail Removed)> wrote:
> >> On Sep 4, 1:03 am, Ian Collins <(E-Mail Removed)> wrote:

>
> >> > On 09/ 4/11 11:20 AM, James Kanze wrote:
> >> > > On Sep 2, 6:45 am, Ian Collins<(E-Mail Removed)> wrote:
> >> > >> On 09/ 2/11 04:37 PM, Adam Skutt wrote:
> >> > > [...]
> >> > >> I agree. On a decent hosted environment, memory exhaustion is usually
> >> > >> down to either a system wide problem, or a programming error.
> >> > > Or an overly complex client request.
> >> > Not spotting those is a programming (or specification) error!

>
> >> And the way you spot them is by catching bad_alloc.

>
> >No, you set upfront bounds on allowable inputs. This is what other
> >engineering disciplines do, so I'm not sure why computer programmers
> >would do something different. Algorithms that permit bounded response
> >to unbounded input are pretty rare in the grand scheme of things.
> >Even when they exist, they may carry tradeoffs that make them
> >undesirable or unsuitable (e.g., internal vs. external sort).

>
> So, if I understand you correctly, you are saying that you must always
> setup some artificial limits to the external inputs and set
> artificially low so that no matter what is happening in the rest of
> the system, the program will never run out of resources....
>
> This seems like a very bad proposition to me. The only way to win is
> to reserve and grab at startup time all of the resources you might
> potentially ever need in order to meet the worst-case scenario of your
> inputs.
>

This is not possible in the situation where a program is limited by
system memory. As a crude example, take a text editor opening a new
window to display each text file: the number of windows is limited by
available system RAM.

The type of program you describe is not limited by system RAM, it is
limited by the procedures it executes. For example, Freecell or
Minesweeper are bounded in the amount of RAM they can consume.

I think this is why programs specify system requirements, but many
games and stuff specify it as a minimum and suggest that much better
performance will be achieved if more RAM is available.


The language provides us with a mechanism to catch errors and I don't
know why people shun it. I think it has a lot to do with laziness TBH.
 
Paul, 09-06-2011
On Sep 6, 2:58 pm, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 6, 9:43 am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
> wrote:
>
> > In article <(E-Mail Removed)>,
> > Adam Skutt <(E-Mail Removed)> wrote:
> > So, if I understand you correctly, you are saying that you must always
> > setup some artificial limits to the external inputs and set
> > artificially low so that no matter what is happening in the rest of
> > the system, the program will never run out of resources....

>
> Yes, if you want to isolate failure processing one request from
> another (esp. in a threaded system), you set limits on how much input
> can be provided with each request. You reject requests that exceed
> the limit.
>
> However, this doesn't mean the limits are set artificially low.
> Usually memory isn't your bounding constraint, so you'll run out of
> database handles, CPU, etc. long before you run out of memory. Per-
> request memory limits can be generous and not create an issue.
>
> Of course, I'm personally fine with unbounded input as long as the
> user understands the system will break at some point and they get to
> keep both of the pieces.
>
> > This seems like a very bad proposition to me. The only way to win is
> > to reserve and grab at startup time all of the resources you might
> > potentially ever need in order to meet the worst-case scenario of your
> > inputs.

>
> No, I'm not sure why you think this follows in the least. I also
> think I've explained why this isn't the case several times already.
> If you have per-request bounds and the OS can't give you memory when
> you ask for it you either need to rewrite your code (so you'll be
> terminating anyway) or the OS is likely to terminate (so you'll be
> terminating anyway).
>

I used to play World of Warcraft, and my system would sometimes become
slow and sluggish; when I added more RAM the game ran much faster and
smoother.

These programs must be designed in such a way that they operate using
the maximum resources without crashing. If termination was the answer,
I wouldn't have been able to play this game, prior to upgrading my
RAM, without it crashing all the time. It didn't crash; it played OK
but became a bit slow when resources were low.
 
James Kanze, 09-06-2011
On Sep 6, 2:20 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 3, 7:02 pm, James Kanze <(E-Mail Removed)> wrote:


> > On Sep 2, 1:25 am, Joshua Maurice <(E-Mail Removed)> wrote:
> > > For most processes which are not the ones
> > > being "abusive" but merely innocent bystanders, including system
> > > processes, I suspect they will behave similarly. With overcommit on,
> > > they will be killed with great prejudice. With overcommit off, when
> > > they get the malloc error, most will respond just the same and die a
> > > quick death.


> > That would be a very poorly written service that died just
> > because of a malloc failure.


> Then are you willing to make the claim most Linux, UNIX, OS X and
> Windows services are poorly written? Because that's what you just
> said, merely using different words.


I don't know about OS X, since I've never used it, but it's true
that Linux and Windows services are far from robust. Neither
are really what I'd take as a model. (I wouldn't use either
Linux or Windows if I needed a reliable server.)

> > All services, critical or not, should have some sort of
> > reasonable response to all possible error conditions.


> Crashing is a perfectly reasonable response to an error condition,
> really most error conditions. I'm not sure why anyone would think
> otherwise even for a second.


It depends on the application. For most systems, crashing is
the only acceptable response for a detected programming error.
But depending on the application, running out of memory isn't
necessarily a programming error.

--
James Kanze
 
James Kanze, 09-06-2011
On Sep 6, 3:13 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 5, 9:20 pm, Adam Skutt <(E-Mail Removed)> wrote:


> > On Sep 3, 7:02 pm, James Kanze <(E-Mail Removed)> wrote:
> > > All of the Unices I know do implement per-process limits. Which
> > > can be useful in specific cases as well.


> > You only know a few then. POSIX specifies some crude (and useless)
> > per-/user/ limits, but per-process limits are not standard by any
> > stretch of the imagination. As I already explained, they also do nothing
> > to solve the problem, nor can they really.


> I should clarify this part. All Unices provide some limits through
> ulimit, which provides mostly per-process limits that are user
> discretionary. Few provide the ability to enforce limits on all
> invocations of binary X, which is what I assumed you were talking
> about and what would be necessary.


That's not what I was talking about, and that's absolutely
unnecessary. Any reliable server will be running on a dedicated
machine, in a dedicated environment. It's impossible to have
any sort of reliable system otherwise.

> ulimit isn't helpful because the
> limits have to be high enough for the largest program the user wishes
> to run, and you cannot prevent the user from setting those limits for
> running other binaries.


That's just bullshit. A reliable server will be started from a
shell script, which will set ulimits just before starting it
(and will do nothing else which might use significant amounts of
memory).

--
James Kanze
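On POSIX systems the `ulimit` call in such a startup script has an in-process equivalent, `setrlimit`. A sketch, assuming Linux's `RLIMIT_AS` (address-space limit); the 512 MiB figure used when testing it is illustrative, not from the thread:

```cpp
#include <cassert>
#include <cstddef>
#include <new>              // std::bad_alloc
#include <sys/resource.h>   // POSIX getrlimit/setrlimit

// Lower the soft address-space limit to `bytes` (never above the hard
// limit), so runaway allocations fail fast instead of dragging the
// whole machine into swap.
bool cap_address_space(rlim_t bytes)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) return false;
    rl.rlim_cur = (bytes < rl.rlim_max) ? bytes : rl.rlim_max;
    return setrlimit(RLIMIT_AS, &rl) == 0;
}

// Once capped, an oversized allocation throws std::bad_alloc promptly,
// which is what makes catching it a workable strategy.
bool huge_alloc_fails(std::size_t bytes)
{
    try {
        void* p = ::operator new(bytes);
        ::operator delete(p);
        return false;
    } catch (const std::bad_alloc&) {
        return true;
    }
}
```

Doing it in the startup script, as James describes, has the advantage that the limit also covers memory the runtime allocates before `main` runs.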
 
James Kanze, 09-06-2011
On Sep 6, 2:36 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 3, 7:14 pm, James Kanze <(E-Mail Removed)> wrote:


[...]
> > I'm not sure what you mean by "the facilities out of the box".
> > No language that I know provides the logging facilities
> > necessary for a large scale server "out of the box"; C++
> > actually does far better than most in this regard. And no
> > language, to my knowledge, provides any sort of transaction
> > management (e.g. with rollback).


> Then you don't know many languages! Java, Python, and many others
> provide robust, enterprise-grade logging facilities out of the box.
> Haskell, Erlang, and many others provide all sorts of transactional
> facilities, depending on exactly what you want, out of the box.


I'm familiar with Java and Python, and neither provides any sort
of transactional management, nor adequate logging for large
scale applications.

> > Or may not, if it is designed correctly. (This is one of the
> > rare cases where just dumping the memory image of certain
> > struct's might be appropriate.)


> You're just making the cost/value proposition worse, not better.
> That's even more code I have to write!


You have to write it anyway, since it's part of making the
application robust.

> > I've not followed the discussion completely, but most of what
> > I've seen seems to concern returning an error for the request
> > triggering the problem, and continuing to handle other requests.


> No, that's not even how the discussion started. One of the major
> advocates for handling OOM suggested this was not only possible, but
> trivial.


Nothing in a large scale application is trivial, but handling
out of memory isn't more difficult than any number of other
things you have to do.

> > Unless the reason you've run out of memory is a memory leak,
> > you can.


> Nope. All the other requests can die while you're trying to handle
> the OOM condition. Or the other side could drop the request because
> they got tired of waiting. The reality of the matter is that both
> will happen.


The reality of the matter is that neither happens, if you
program correctly. I've written robust applications which
handled out of memory, and they worked.

> > The effort isn't that hard,


> Yes, it is.


It's no harder than a lot of other things necessary to write
correct software.

> It requires me to rewrite a considerable number of
> language and sometimes even OS facilities, something you have admitted
> yourself!


But so do logging, and transaction management, and a lot of
other things.

> The entire reason I'm using a programming language is
> because it provides useful facilities for me. As a result, it isn't
> the least bit unreasonable to conclude that rewriting language
> facilities is hard. If I wanted to be writing language
> facilities, then I'd just write my own damn programming language in
> the first place!


But no language has adequate logging facilities, nor transaction
management facilities, nor a lot of other things you need.

> > and it's a basic requirement for
> > some applications. If you don't handle OOM correctly, your
> > application isn't correct.


> Applications that require a response to OOM other than terminate are
> an insubstantial minority.


Finally something I can agree with. They're definitely a
minority. But they do exist. (I'd guess, on the whole, they
represent less than 10% of the applications I've worked on. But
it's hard to quantify, since a lot of lower level applications
I've worked on in the past didn't use dynamic memory, period.)

> Systems that cannot permit termination as
> an OOM response are almost certainly broken.


A system is broken if it doesn't meet its requirements.

> > I'm not sure either. But it is something that must be kept in
> > mind when you are writing an application which has to handle
> > OOM.


> And it makes justifying handling OOM only harder, not easier! You're
> making my case for me!


You're just ignoring the facts. Some applications (a minority)
have to handle OOM, at least in certain cases or configurations.
If they don't they're broken.

--
James Kanze
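One C++ facility worth mentioning alongside catching bad_alloc: `std::set_new_handler` installs a callback that operator new invokes on failure before throwing, giving an application a single place to log, release an emergency reserve, or terminate deliberately. The reserve-buffer pattern below is a common textbook sketch, not something proposed in the thread.

```cpp
#include <cassert>
#include <cstdio>
#include <new>      // std::set_new_handler, std::nothrow

// Emergency reserve, released by the handler on the first failure.
static char* reserve = nullptr;

// Called by operator new when an allocation fails, before it throws.
void on_out_of_memory()
{
    if (reserve) {
        delete[] reserve;       // give 1 MiB back; operator new retries
        reserve = nullptr;
        std::fputs("low memory: released emergency reserve\n", stderr);
    } else {
        // Nothing left to free: next failure throws std::bad_alloc.
        std::set_new_handler(nullptr);
    }
}

// Set aside the reserve and install the handler; returns false if even
// the reserve cannot be allocated at startup.
bool install_oom_handler()
{
    reserve = new (std::nothrow) char[1 << 20];
    if (!reserve) return false;
    std::set_new_handler(on_out_of_memory);
    return true;
}
```

This gives the "handle OOM in certain configurations" behaviour a central hook, rather than scattering try blocks through the code.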
 
James Kanze, 09-06-2011
On Sep 6, 2:50 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 4, 1:56 pm, James Kanze <(E-Mail Removed)> wrote:


> > On Sep 4, 1:03 am, Ian Collins <(E-Mail Removed)> wrote:


> > > On 09/ 4/11 11:20 AM, James Kanze wrote:
> > > > On Sep 2, 6:45 am, Ian Collins<(E-Mail Removed)> wrote:
> > > >> On 09/ 2/11 04:37 PM, Adam Skutt wrote:
> > > > [...]
> > > >> I agree. On a decent hosted environment, memory exhaustion is usually
> > > >> down to either a system wide problem, or a programming error.
> > > > Or an overly complex client request.
> > > Not spotting those is a programming (or specification) error!


> > And the way you spot them is by catching bad_alloc.


> No, you set upfront bounds on allowable inputs.


That's another possible solution. Not acceptable for all
applications, and sometimes more difficult to implement than
handling OOM. Applications vary.

> This is what other
> engineering disciplines do,


Actually, they don't. There's a good reason why soldiers are
required to break step when crossing a bridge.

--
James Kanze
 
James Kanze, 09-06-2011
On Sep 6, 6:00 am, Ian Collins <(E-Mail Removed)> wrote:
> On 09/ 5/11 05:56 AM, James Kanze wrote:


[...]
> > Seriously, the problem is very much like that of a compiler.
> > Nest parentheses too deep, and the compiler will run out of
> > memory. There are two solutions: specify an artificial nesting
> > limit, which you know you can handle (regardless of how many
> > connections are active, etc.), or react when you run out of
> > resources. There are valid arguments for both solutions, and
> > I've used both, in different applications.


> I have also seen both in compilers. For example last time I played, g++
> didn't have a recursion limit for templates (as used in
> meta-programming) while Sun CC does.


What you mean, of course, is that the recursion limit in g++ is
determined by the available resources; not that there isn't
one.

There are arguments for both strategies. I've used both,
depending on the application. There is no one "correct"
solution.

--
James Kanze
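The "artificial nesting limit" strategy can be sketched as a depth-checked scan that rejects over-deep input outright, instead of recursing until memory or stack runs out. (`kMaxDepth` is an illustrative bound.)

```cpp
#include <cassert>
#include <string>

// Illustrative bound on nesting depth; a real compiler would document
// its chosen limit.
constexpr int kMaxDepth = 256;

// Check that parentheses in s are balanced and never nest deeper than
// kMaxDepth; over-deep input is rejected rather than processed.
bool balanced_within_limit(const std::string& s)
{
    int depth = 0;
    for (char c : s) {
        if (c == '(') {
            if (++depth > kMaxDepth) return false;  // too deep: reject
        } else if (c == ')') {
            if (--depth < 0) return false;          // unbalanced
        }
    }
    return depth == 0;
}
```

The resource-driven alternative is the same scan with no `kMaxDepth` test, relying on bad_alloc (or stack limits) to stop pathological input; as James says, both strategies are defensible depending on the application.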
 