Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > C++ > bad alloc


bad alloc

 
 
Noah Roberts
08-31-2011
On Aug 30, 6:38 pm, Paul <(E-Mail Removed)> wrote:
> On Aug 31, 1:24 am, Leigh Johnston <(E-Mail Removed)> wrote:
> > On 31/08/2011 01:11, Paul wrote:

>
> > > On Aug 31, 12:56 am, Leigh Johnston<(E-Mail Removed)> wrote:
> > >> On 31/08/2011 00:17, Paul wrote:

>
> > >>> On Aug 30, 7:18 pm, Leigh Johnston<(E-Mail Removed)> wrote:
> > >>>> On 30/08/2011 18:58, Leigh Johnston wrote:

>
> > >>>>> On 30/08/2011 18:31, Paul wrote:
> > >>>>>> On Aug 30, 4:15 pm, Leigh Johnston<(E-Mail Removed)> wrote:
> > >>>>>>> On 30/08/2011 16:00, Paul wrote:

>
> > >>>>>>>> *******s.
> > >>>>>>>> You have spouted bullshit and now that you realise it you are trying to get
> > >>>>>>>> out of the argument with some bullshit about noobish guff.

>
> > >>>>>>>> It's you that's noobish saying that the only sane option is to allow a
> > >>>>>>>> program to crash if an allocation fails, and you know it.

>
> > >>>>>>> I said the following:

>
> > >>>>>>> "The only sane remedy for most cases of allocation failure is to
> > >>>>>>> terminate the application which is what will happen with an uncaught
> > >>>>>>> exception."

>
> > >>>>>>> I stand by what I said so I am not trying to "get out of" anything. As
> > >>>>>>> usual you fail and continue to fail with your bullshit: note the words
> > >>>>>>> "most cases of" in what I said.

>
> > >>>>>> It's not the only sane remedy. It's the most insane remedy.

>
> > >>>>> In your own deluded, ill-informed opinion.

>
> > >>>>>> Imagine a program that controlled a robot:
> > >>>>>> One process for each limb, one for the head and one for the central
> > >>>>>> gyromatic balance system. The robot is in walk mode and you run out of
> > >>>>>> memory because a screeching parrot flies past.
> > >>>>>> What is the "sane" option your program should take:
> > >>>>>> Shut down eye and hearing modules and slow his walking speed?
> > >>>>>> Just let the program crash and allow the robot to fall over in a heap?

>
> > >>>>> Are you blind as well as dense? I said:

>
> > >>>>> "The only sane remedy for *most cases of* allocation failure is to
> > >>>>> terminate the application which is what will happen with an uncaught
> > >>>>> exception."

>
> > >>>>> *most cases of*

>
> > >>>>> If I was designing robot control software then I would design it in such
> > >>>>> a way that certain individual sub-systems allocate all their required
> > >>>>> memory up-front so that if some fatal error does occur then the robot
> > >>>>> can safely come to a stand still and then reboot its systems.

>
> > >>>>>> If you were working on critical systems such as aircraft controls
> > >>>>>> would you still think that allowing errors to go uncaught is the only
> > >>>>>> sane remedy?

>
> > >>>>> Critical systems such as aircraft controls will (I assume) allocate all
> > >>>>> the memory that is needed for all of their operations up-front so the
> > >>>>> situation of a memory allocation failure during flight will never occur.

>
> > >>>> I actually have real world experience of doing something similar as I
> > >>>> was on the Telephony Application Call Control team for two different
> > >>>> smartphones. If the user exhausts their memory running apps Telephony
> > >>>> would still need to be able to create a call object if there was an
> > >>>> incoming call so the call objects were created up-front rather than
> > >>>> on-demand. This may be an over-simplification but you get the idea.
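
The up-front scheme described above can be sketched roughly as a fixed pool. Everything here (CallObject, CallPool, the pool size) is an illustrative guess, not the actual smartphone code:

```cpp
#include <array>

// Sketch of up-front allocation: a fixed pool of call objects created
// once at startup, so an incoming call never needs a fresh heap
// allocation. All names and the pool size are illustrative.
struct CallObject {
    bool in_use = false;
    int call_id = 0;
};

class CallPool {
    std::array<CallObject, 8> slots_{};  // allocated up-front, never grows
public:
    // Returns a free slot, or nullptr if every slot is busy -- but it can
    // never fail with bad_alloc, because nothing is allocated here.
    CallObject* acquire(int call_id) {
        for (auto& slot : slots_) {
            if (!slot.in_use) {
                slot.in_use = true;
                slot.call_id = call_id;
                return &slot;
            }
        }
        return nullptr;
    }
    void release(CallObject* c) { if (c) c->in_use = false; }
};
```

The point is that running out of pool slots is an ordinary, checkable condition, while heap exhaustion never enters the picture for this subsystem.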

>
> > >>> I can understand a scenario where you might want to reserve some
> > >>> memory for a critical part of the program. But I don't see many
> > >>> programs that would be able to predict their memory requirements up
> > >>> front. Take your phone example, it would be impossible to predict
> > >>> every possible user scenario and there just wouldn't be enough memory
> > >>> to create all the objects required for every possible scenario or even
> > >>> a fractional representation of every possible scenario.

>
> > >> You don't know what you are talking about.

>
> > > Oh yes I do.

>
> > It is quite obvious that you don't

>
> > > A phone has limited memory. You cannot pre-allocate all the objects a
> > > user *may* require to run all apps on his/her phone.

>
> > Why are you so dense? Where did I say that? I said the Telephony App
> > could create all the objects it needs up front, not other less critical apps.

>
> > > video jukebox, photo editor, soundblaster master etc etc.

>
> > Obviously these apps are not critical so wouldn't be required to create
> > any objects up front.

>
> But according to your programming style, if these apps did not check
> for allocation errors the phone would crash regularly thus incoming
> calls could be missed during phone reboots.


I'm confused now. I thought that was YOUR argument. Is this an
example of a self-induced reductio ad absurdum?
 
Adam Skutt
08-31-2011
On Aug 30, 11:02 pm, Noah Roberts <(E-Mail Removed)> wrote:
> On Aug 30, 4:04 pm, Adam Skutt <(E-Mail Removed)> wrote:
>
> > Properly written, exception-safe C++ code will do the right thing when
> > std::bad_alloc is thrown, and most C++ code cannot sensibly handle
> > std::bad_alloc. As a result, the automatic behavior, which is to let
> > the exception propagate up to main() and terminate the program there,
> > is the correct behavior for the overwhelming majority of C++
> > applications. As programmers, we win anytime the automatic behavior
> > is the correct behavior.

>
> What if you're within a try block well before main in the call stack?


Doesn't change anything and why would it?

> What if the catch for that block attempts to do something like save
> what the user's working on, or do some other seemingly reasonable
> attempt at something that seems reasonable?


It will almost certainly fail. But that's OK, because that could happen
anyway. Consider entry due to another exception, with std::bad_alloc being
thrown somehow during the write-out process. You're stuck with the
exact same result: the attempt to save the state fails.

If anything, this is why catch blocks (especially for root exceptions)
shouldn't try to do much at all. In many cases, the exception lacks
the information to determine what behaviors are safe after the
exception has occurred. Determining what behavior is safe is often
quite difficult. std::bad_alloc is not unique in this regard.
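
A minimal sketch of such a restrained top-level handler, under the assumption of a hypothetical guarded_run wrapper: the catch blocks only print a fixed message (no allocation happens on the failure path) and convert the failure into an exit status.

```cpp
#include <cstdio>
#include <exception>
#include <new>

// Illustrative "do very little" top-level handler: report with a string
// literal via fputs (no allocation) and turn the failure into a status
// code. guarded_run and the status values are assumptions, not an API.
int guarded_run(void (*app)()) {
    try {
        app();
        return 0;
    } catch (const std::bad_alloc&) {
        std::fputs("fatal: out of memory\n", stderr);
        return 1;
    } catch (const std::exception&) {
        std::fputs("fatal: unhandled exception\n", stderr);
        return 2;
    }
}
```

In main() this would simply be `return guarded_run(run_app);`, keeping the OOM path free of any work that could itself fail.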

>
> A lot of people will catch std::exception. I don't see a big problem
> with this but I do see potential in what you're talking about to cause
> problems with that. If you get a bad_alloc and then attempt to do
> something like write to disk or whatnot as a response to that
> exception, you may get the same problem again, only now you're writing
> to disk, something that's inherently irreversible from the program's
> point of view. This could result in corruption of important user data
> in some cases.


Who's to say I wasn't already in the process of writing to disk when
bad_alloc was initially raised? The argument 'catch blocks can have
side effects too' is not an argument against letting bad_alloc bubble
up.

Adam
 
 
Goran
08-31-2011
On Aug 30, 4:00 pm, Paul <(E-Mail Removed)> wrote:
> On Aug 30, 2:30 pm, Leigh Johnston <(E-Mail Removed)> wrote:
> > On 30/08/2011 13:54, Paul wrote:

>
> > > On Aug 30, 11:26 am, Goran<(E-Mail Removed)> wrote:
> > >> On Aug 30, 8:26 am, Paul<(E-Mail Removed)> wrote:

>
> > >>> There are numerous C++ examples and code snippets which use STL
> > >>> allocators in containers such as STL vector and string.
> > >>> It has come to my attention that nobody ever seems to care about
> > >>> checking that the allocation has been successful. As we have this
> > >>> exception handling why is it not used very often in practice?
> > >>> Surely it should be common practice to "try" all allocations, or do
> > >>> people just not care and allow the program to crash if it runs out of
> > >>> memory?

>
> > >> Good-style exception-aware C++ code does __NOT__ check for allocation
> > >> failures. Instead, it's written in such a manner that said (and other)
> > >> failures don't break it __LOGICALLY__. This is done through a careful
> > >> design and observation of exception safety guarantees (http://
> > >> en.wikipedia.org/wiki/Exception_handling#Exception_safety) of the
> > >> code. Some simple example:

>
> > >> FILE* f = fopen(...);
> > >> if (!f) throw whatever();
> > >> vector<int> v;
> > >> v.push_back(2);
> > >> fclose(f);

>
> > >> This snippet should have the no-leak (basic) exception safety guarantee,
> > >> but it doesn't (possible resource leak: FILE*, if there's an exception
> > >> between vector creation and fclose; for example, push_back will throw
> > >> if there's no memory to allocate internal vector storage).

>
> > >> To satisfy no-leak guarantee, the above should be:

>
> > >> FILE* f = fopen(...);
> > >> if (!f) throw whatever();
> > >> try
> > >> {
> > >>   vector<int> v;
> > >>   v.push_back(2);
> > >>   fclose(f);
> > >> }

>
> > >> catch(...)
> > >> {
> > >>   fclose(f);
> > >>   throw;
> > >> }

>
> > >> The above is pretty horrible, hence one would reach for RAII and use
> > >> fstream in lieu of FILE* and no try/catch would be needed for the code
> > >> to function correctly.
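
That RAII version might be sketched like so, with std::ofstream standing in for FILE* (the path handling and error policy are assumptions):

```cpp
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>

// RAII version of the snippet above: std::ofstream closes itself in its
// destructor, so if push_back throws std::bad_alloc the file is still
// closed during stack unwinding -- no try/catch needed for cleanup.
void write_with_raii(const std::string& path) {
    std::ofstream f(path);
    if (!f) throw std::runtime_error("open failed");
    std::vector<int> v;
    v.push_back(2);           // may throw std::bad_alloc
    f << v.back() << '\n';    // f is closed automatically on any exit path
}
```

The no-leak guarantee now falls out of the destructor rather than being hand-coded at every early exit.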

>
> > >> Another example:

>
> > >> container1 c1;
> > >> container2 c2;
> > >> c1.add(stuff);
> > >> c2.add(stuff);

>
> > >> Suppose that "stuff" needs to be in both c1 and c2, otherwise something
> > >> is wrong. If so, the above needs strong exception safety. Correction
> > >> would be:

>
> > >> c1.add(stuff);
> > >> try
> > >> {
> > >>   c2.add(stuff);
> > >> }
> > >> catch(...)
> > >> {
> > >>   c1.remove(stuff);
> > >>   throw;
> > >> }

>
> > >> Again, writing this is annoying, and for this sort of things there's
> > >> an application of RAII in a trick called "scope guard". Using scope
> > >> guard, this should turn out as:

>
> > >> c1.add(stuff);
> > >> ScopeGuard guardc1 = MakeObjGuard(c1, &container1::remove, ByRef(stuff));
> > >> c2.add(stuff);
> > >> guardc1.Dismiss();

>
> > >> Similar examples can be made for other exception safety levels but IMO
> > >> the above two happen in the vast majority of cases.
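
A minimal lambda-based scope guard in the same spirit can be sketched as follows; this is illustrative, not the original Alexandrescu ScopeGuard/MakeObjGuard implementation:

```cpp
#include <utility>
#include <vector>

// Minimal scope-guard sketch: runs a rollback action on scope exit
// unless dismiss() was called first.
template <typename F>
class ScopeGuard {
    F rollback_;
    bool active_ = true;
public:
    explicit ScopeGuard(F f) : rollback_(std::move(f)) {}
    ~ScopeGuard() { if (active_) rollback_(); }
    void dismiss() { active_ = false; }
    ScopeGuard(const ScopeGuard&) = delete;
    ScopeGuard& operator=(const ScopeGuard&) = delete;
};

// Keep c1 and c2 in sync: if the second push_back throws, the guard
// undoes the first one during stack unwinding (strong guarantee).
void add_to_both(std::vector<int>& c1, std::vector<int>& c2, int stuff) {
    c1.push_back(stuff);
    ScopeGuard guard([&] { c1.pop_back(); });
    c2.push_back(stuff);   // if this throws, c1 is rolled back
    guard.dismiss();       // both succeeded: keep the change
}
```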

>
> > > I am not familiar with the scope guard objects; I will need to look them
> > > up.

>
> > You are not familiar with scope guard objects as you are not familiar
> > with RAII hence your n00bish question. At least you have admitted a gap
> > in your knowledge for once.

>
> > >>> I think if people were more conscious of this error checking the
> > >>> reserve function would be used more often.

>
> > What is so special about reserve? See below.

>
> > >> I am very convinced that this is a wrong way to go reasoning about
> > >> error checking with exception-aware code. First off, using reserve
> > >> lulls into a false sense of security. So space is reserved for the
> > >> vector, and elements will be copied in it. What if copy ctor/
> > >> assignment throws in the process? Code that is sensitive to exceptions
> > >> will still be at risk. Second, it pollutes the code with gratuitous
> > >> snippets no one cares about. There's a better way, see above.

>
> > >> What's so wrong with this reasoning? The idea that one needs to make
> > >> sure that each possible failure mode is looked after. This is
> > >> possible, but is __extremely__ tedious. Instead, one should think in
> > >> this manner: here are pieces of code that might throw (that should be
> > >> a vaaaaaast majority of total code). If they throw, what will go wrong
> > >> with the code (internal state, resources etc)? (e.g. FILE* will leak,
> > >> c1 and c2 won't be in sync...) For those things, appropriate cleanup
> > >> action should be taken (in vaaast majority of cases, said cleanup
> > >> action is going to be "apply RAII"). Said cleanup action must be a no-
> > >> throw operation (hence use of destructors w/RAII). There should also
> > >> be a clear idea where no-throw areas are, and they should be a tiny
> > >> amount of the code (in C++, these are typically primitive type
> > >> assignments, C-function calls and use of no-throwing swap).

>
> > > Granted there are other possible exceptions that could be thrown and
> > > should also be considered. I was using the reserve as an example to
> > > guard against the program crashing from OOM. I would have thought to at least
> > > inform the user then close the program in a civilised manner, or
> > > postpone the current operation and inform the user of the memory
> > > situation and handle this without closing the program (i.e. by freeing
> > > other resources).

>
> > Why does reserve help? If it is a std::vector of std::string for
> > example then reserve will only allocate space for the std::string
> > object; it will not allocate space for what each std::string element may
> > subsequently allocate. Also not all containers have reserve.

>
> Well reserve would cause the vector to allocate space. Thus you only
> need to put the reserve operation in the try-catch block and if this
> doesn't throw you know the allocation was successful and the program
> can continue.


Not necessarily. For example:

// Adds stuff to v, returns false if no memory
bool foo(vector<string>& v)
{
try { v.reserve(whatever); }
catch(const bad_alloc&) { return false; }
// Yay, we can continue!
v.push_back("whatever");
return true;
}

The above is a hallmark example of wrong C++. Suppose that reserve
worked, but OOM happened when constructing/copying a string object:
foo does not return false, but throws. IOW, foo lies about what it
does.

The problem? Programmer set out to guess all failure modes and failed.
My contention is: programmer will, by and large, fail to guess all
possible failure modes. Therefore, programmer is best off not doing
that, but thinking about handling code/data state in face of
unexpected failures (that is, apply exception safety stuff).

BTW, given that programmer will fail to guess all failure modes,
programmer could wrap each function into a try/catch. That, however:

1. is a massive PITA
2. will fail to propagate good information of what went wrong, and
that is almost just as bad as not reporting an error at all.
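
For comparison, a sketch of one way to make the earlier foo honest, assuming the goal really is a bool result: widen the catch so an OOM during the string copy is reported the same way as one from reserve (add_stuff is an illustrative stand-in):

```cpp
#include <new>
#include <string>
#include <vector>

// Instead of guessing which single call can fail (reserve), wrap the
// whole operation, so bad_alloc thrown while constructing/copying the
// string is reported the same way as one thrown by reserve.
bool add_stuff(std::vector<std::string>& v) {
    try {
        v.reserve(v.size() + 1);
        v.push_back("whatever");   // string construction may also throw
        return true;
    } catch (const std::bad_alloc&) {
        return false;              // false now covers all OOM paths
    }
}
```

Even this only fixes one function; done everywhere it becomes the per-function try/catch tedium described above, which is why relying on exception safety plus one top-level report scales better.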

Goran.
 
Goran
08-31-2011
On Aug 31, 1:04 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Aug 30, 6:26 am, Goran <(E-Mail Removed)> wrote:
> > There's a school of thought that says that allocation failure should
> > simply terminate everything. This is based on the notion that, once
> > there's no memory, the world has ended for the code anyhow. This notion is
> > false in a significant number of cases (and is not in the spirit of C
> > or C++; if it were, malloc or new would terminate()). Why is the notion
> > wrong? Because normally, code goes like this: work, work, allocate
> > some resources, work (with those), free those, allocate more, work,
> > allocate, work, free, free etc. That is, for many a code path, there's
> > a "hill climb" where resources are allocated while working "up", and
> > they are deallocated, all or at least a significant part, while going
> > "down" (e.g. perhaps only the calculation result is kept allocated). So once
> > code hits a brick wall going up, there's a big chance there will be
> > plenty of breathing room once it comes down (due to all that
> > freeing). IOW, there's no __immediate need__ to die. Go down, clean up
> > behind you and you'll be fine. Going back and saying "I
> > couldn't do X due to OOM" is kinda better than dying on the spot,
> > wouldn't you say?

>
> For most code, on most platforms, the two will be one and the same.
> The OS cleans up most resources when a process dies, and most processes
> have no choice but to die.


I disagree (obviously). Here's the way I see it: it all depends on the
number of functions a program executes. A simple program that only does
one thing (e.g. a "main" function with no "until something external
says stop" in the call chain) in C++ benefits slightly from the "die on
OOM" approach (in C, or something else without exceptions, the benefit is
greater because there, error checking is very labor-demanding). In
fact, it benefits from a "die on any problem" approach.

Programs that perform more than one function are at a net loss with the "die on
OOM" approach, and the loss is bigger the more functions there are
(and the more important they are). Imagine an image processing program.
So you apply a transformation, and that OOMs. You die, and your user loses
his latest changes that worked. But if you go back down the stack, clean up
all those resources the transformation needed and say "sorry, OOM", he
could have saved (heck, you could have done it for the user, given
that we hit OOM). And... Dig this: trying to do the same at the spot
you hit OOM is a __mighty__ bad idea. Why? Because memory, and other
resources, are likely already scarce, and an attempt to do anything
might fail due to that.

Or imagine an HTTP server. One request OOMs, you die. You terminate
and restart, and you cut off all other concurrent request processing:
not nice, nor necessary. And so on.

> Straightforward C++ on most implementations will deallocate memory as
> it goes, so when the application runs out of memory, there won't be
> anything to free up: retrying the operation will cause the code to
> fail in the same place. Making more memory available requires
> rewriting the code to avoid unnecessarily holding on to resources that
> it no longer needs.


That is true, but only if peak memory use is actually used to
hold program state (heap fragmentation plays its part, too). My
contention is that this is the case much less often than you make it
out to be.

> Even when there's memory to free up, writing an exception handler that
> actually safely runs under an out-of-memory condition is impressively
> difficult.


I disagree with that, too. First off, when you actually hit the top-
level exception handler, chances are, you will have freed some memory.
Second, OOM-handling facilities are already made not to allocate
anything. E.g. throwing bad_alloc will not allocate in any implementation
I've seen. I've also seen OOM exception objects pre-allocated
statically in non-C++ environments, too (what else?). There is
difficulty, I agree with that, but it's actually trivial: keep in mind
that, once you hit that OOM handler (most likely, some top-level
exception handler not necessarily tied to OOM), you have all you might
need prepared upfront. That's +/- all. For a "catastrophe", prepare the
required resources upfront.
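
One way to "prepare required resources upfront" can be sketched with std::set_new_handler and a preallocated emergency reserve; the size and the release-once policy here are illustrative assumptions:

```cpp
#include <cstddef>
#include <new>

// Sketch: an emergency reserve released by the new-handler, giving the
// OOM path some breathing room before bad_alloc finally propagates.
namespace {
    char* emergency_reserve = nullptr;

    void oom_handler() {
        if (emergency_reserve) {
            delete[] emergency_reserve;   // free the reserve once
            emergency_reserve = nullptr;
            return;                       // operator new retries
        }
        std::set_new_handler(nullptr);    // second failure: give up; the
    }                                     // next attempt throws bad_alloc
}

void install_oom_reserve(std::size_t bytes) {
    emergency_reserve = new char[bytes];
    std::set_new_handler(oom_handler);
}
```

Called once at startup (e.g. `install_oom_reserve(1 << 20);`), this guarantees the handler itself never needs to allocate when memory is exhausted.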

> Properly written, exception-safe C++ code will do the right thing when
> std::bad_alloc is thrown, and most C++ code cannot sensibly handle
> std::bad_alloc. As a result, the automatic behavior, which is to let
> the exception propagate up to main() and terminate the program there,
> is the correct behavior for the overwhelming majority of C++
> applications. As programmers, we win anytime the automatic behavior
> is the correct behavior.


Yeah, I agree that one cannot sensibly "handle" bad_alloc. Code can
sensibly __report__ it though. The thing is though, for a vaaaast majority
of exceptions, code can't "handle" them. It can only report them, and in
rare cases, retry upon some sort of operator's reaction (like, check
the network and retry saving a file on a share). That makes OOM much
less special than any other exception, and less of a reason to
terminate.

Goran.
 
Goran
08-31-2011
On Aug 31, 5:33 am, Adam Skutt <(E-Mail Removed)> wrote:
> On Aug 30, 11:02 pm, Noah Roberts <(E-Mail Removed)> wrote:
>
> > On Aug 30, 4:04 pm, Adam Skutt <(E-Mail Removed)> wrote:

>
> > > Properly written, exception-safe C++ code will do the right thing when
> > > std::bad_alloc is thrown, and most C++ code cannot sensibly handle
> > > std::bad_alloc. As a result, the automatic behavior, which is to let
> > > the exception propagate up to main() and terminate the program there,
> > > is the correct behavior for the overwhelming majority of C++
> > > applications. As programmers, we win anytime the automatic behavior
> > > is the correct behavior.

>
> > What if you're within a try block well before main in the call stack?

>
> Doesn't change anything and why would it?
>
> > What if the catch for that block attempts to do something like save
> > what the user's working on, or do some other seemingly reasonable
> > attempt at something that seems reasonable?

>
> It will almost certainly fail. But that's OK, because that could happen
> anyway. Consider entry due to another exception, with std::bad_alloc being
> thrown somehow during the write-out process. You're stuck with the
> exact same result: the attempt to save the state fails.


Why would bad_alloc be thrown while writing to disk? Guess: because
writing is made in such a way as to modify program state. Doesn't seem
all that logical.

(Repeating myself) there are two factors in play:

1. walking down the stack upon OOM (or other exception) normally frees
resources.
2. the top-level handler is not a place to do anything resource-sensitive,
exactly due to the OOM possibility.

It's not as complicated as you make it out to be. Going "nice" in case
of OOM might not be worth it in all cases, but terminating is not an
end-all response to all concerns either.

Goran.
 
Asger-P
08-31-2011

Hi Paul

Please, please, please, please, please, please
delete some of all that unneeded quotation; it is such a
waste of time, all the scrolling...

P.s. You are of course not the only one.

Thanks in advance
Best regards
Asger-P
 
none
08-31-2011
In article <(E-Mail Removed)>,
Goran <(E-Mail Removed)> wrote:
>On Aug 31, 5:33 am, Adam Skutt <(E-Mail Removed)> wrote:
>> On Aug 30, 11:02 pm, Noah Roberts <(E-Mail Removed)> wrote:
>>
>> > On Aug 30, 4:04 pm, Adam Skutt <(E-Mail Removed)> wrote:

>>
>> > > Properly written, exception-safe C++ code will do the right thing when
>> > > std::bad_alloc is thrown, and most C++ code cannot sensibly handle
>> > > std::bad_alloc. As a result, the automatic behavior, which is to let
>> > > the exception propagate up to main() and terminate the program there,
>> > > is the correct behavior for the overwhelming majority of C++
>> > > applications. As programmers, we win anytime the automatic behavior
>> > > is the correct behavior.

>>
>> > What if you're within a try block well before main in the call stack?

>>
>> Doesn't change anything and why would it?
>>
>> > What if the catch for that block attempts to do something like save
>> > what the user's working on, or do some other seemingly reasonable
>> > attempt at something that seems reasonable?

>>
>> It will almost certainly fail. But that's OK, because that could happen
>> anyway. Consider entry due to another exception, with std::bad_alloc being
>> thrown somehow during the write-out process. You're stuck with the
>> exact same result: the attempt to save the state fails.

>
>Why would bad_alloc be thrown while writing to disk? Guess: because
>writing is made in such a way as to modify program state. Doesn't seem
>all that logical.
>
>(Repeating myself) there's two factors that play:
>
>1. walking down the stack upon OOM (or other exception) normally frees
>resources.
>2. top-level error is not a place to do anything resource-sensitive,
>exactly due to OOM possibility.
>
>It's not as complicated as you make it out to be. Going "nice" in case
>of OOM might not be worth it in all cases, but is not an and-all
>response to all concerns.


I absolutely agree with Goran and disagree that terminate on OOM is
*always* the best approach. There may be programs where it is the
best approach but it is far from always the case.

A concrete example:

Network server using a standard pattern of one listener/producer and
multiple worker/consumer threads. The listener receives a job
request and hands the processing of the job to one of the worker
threads.

It is very much possible that processing one particular job might
actually require too much memory for the system. The correct thing to
do in that case is to stop processing this one oversized job, release
all the resources acquired to process this job, mark it as an error and
continue processing further jobs.
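
That worker pattern might be sketched like this (Job, process and the job details are illustrative assumptions, and unwinding out of process releases the job's resources):

```cpp
#include <cstddef>
#include <new>
#include <vector>

// Sketch: an oversized job throws bad_alloc, the worker lets unwinding
// release that job's resources, marks the job as failed, and keeps
// serving further jobs instead of terminating.
struct Job {
    std::size_t size;
    bool failed = false;
};

void process(Job& job) {
    std::vector<char> buffer(job.size);   // may throw std::bad_alloc
    buffer.at(0) = 'x';                   // real work on buffer goes here
}

void worker_loop(std::vector<Job>& jobs) {
    for (auto& job : jobs) {
        try {
            process(job);
        } catch (const std::bad_alloc&) {
            job.failed = true;            // mark it, don't terminate
        }
    }
}
```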

Since this is a persistent server that needs to be on and alive 24/7,
it would be totally inappropriate to permanently terminate the server.
Even if there was an additional monitoring process that restarts the
server if it dies, this would not be a good thing because you would
lose current progress in the currently running worker threads and
would also kill current external client connections.

Yan

 
none
08-31-2011
In article <(E-Mail Removed)>,
Paul <(E-Mail Removed)> wrote:
>On Aug 30, 9:58 pm, Dombo <(E-Mail Removed)> wrote:
>> On 30-Aug-11 11:14, Krice wrote:
>>
>> > On Aug 30, 09:26, Paul<(E-Mail Removed)> wrote:

>> That is the real question: what do you do when you have run out of
>> memory? The application could inform the user that the operation failed
>> because there was not enough memory and keep on running. If a program
>> has cached a lot of information it could flush its caches to free some
>> memory and retry the operation. But chances are that it will never get
>> to that point. My experience is that in a typical PC environment the
>> system tends to become non-responsive or worse long before memory
>> allocation fails. Before that happens the user has probably restarted
>> his PC.

>
>This also depends on how the memory is fragmented. Say for example a
>system has 1GB RAM with 500MB free, 500MB used. Even though you have
>500MB free an attempt to allocate 40MB may fail because there is no
>contiguous block of 40MB of free memory.


Google "Virtual Memory". The two of you are talking about totally unrelated
things. And given that all modern OSes use virtual memory, the described
situation will never happen as such. But once the app starts
swapping, things get very slow.



 
Paul
08-31-2011
On Aug 31, 4:07 am, Noah Roberts <(E-Mail Removed)> wrote:
> On Aug 30, 6:38 pm, Paul <(E-Mail Removed)> wrote:

<snip>
>
> I'm confused now. I thought that was YOUR argument. Is this an
> example of a self-induced reductio ad absurdum?
>

My argument was that bad_alloc exceptions should be handled. I don't
see a reason to ignore them when all the mechanics are in place to
catch such exceptions and handle them in some appropriate way.

 
Paul
08-31-2011
On Aug 31, 10:28 am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
wrote:
> In article <(E-Mail Removed)..com>,

<snip>
>
> >This also depends on how the memory is fragmented. Say for example a
> >system has 1GB RAM with 500MB free, 500MB used. Even though you have
> >500MB free an attempt to allocate 40MB may fail because there is no
> >contiguous block of 40MB of free memory.

>
> Google "Virtual Memory". The two of you are talking about totally unrelated
> things. And given that all modern OSes use virtual memory, the described
> situation will never happen as such. But once the app starts
> swapping, things get very slow.
>

I was talking about fragmentation here.

If a large allocation fails it is often not the case that the system
is completely OOM; there will usually be a few free bytes scattered
around in fragmented memory.

For example, if a 1KB allocation fails, this doesn't mean an 8-byte
allocation will fail. Also, four allocations of 250 bytes will not
necessarily fail.

Example, u=used, f=free:
[uuuuuuuuffffuuuuuuuuffffuuuuffff]
The memory above shows 12 bytes free, but any single allocation over 4
bytes will fail.

allocate(8) // will throw an exception: no contiguous run of 8 free bytes

allocate(4) // ok, 12 bytes free in runs of 4
allocate(4) // ok
allocate(4) // ok
 