Re: C++ Memory Management Question

 
 
Victor Bazarov
12-31-2011
On 12/31/2011 8:56 AM, Datesfat Chicks wrote:
> I'm just learning C++ (after being a 19-year C veteran), so forgive
> any naive questions or concerns.
>
> It seems to me that C++ has more inherent reliance on dynamic memory
> allocation than C. It doesn't have to be that way in all cases (many
> classes would be implemented without dynamic allocation), but it seems
> more natural in C++ that dynamic allocation would appear in programs.


The reality is simpler: it's easier to do dynamic memory properly in C++
(purposefully so, actually), and that's the reason more people resort to
it. In fact I've written plenty of code that never allocated anything
in the free store (except by any library mechanisms that I had no
control over). You don't have to use dynamic memory if you don't need it.

> Here are my questions:
>
> a) I'm assuming that malloc() and free() are still available in C++?


That's not a question, you know. And, they are.

> b) Do these draw from the same memory pool as "new" and "delete"?


Usually. Unless you've overridden 'new' and 'delete' and made THEM
"draw" from some other place.

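For illustration, a minimal sketch of such a replacement -- here the
global 'new' just forwards to malloc() (the array forms, operator
new[]/delete[], would be replaced the same way):

    #include <cstdlib>
    #include <new>

    // Sketch only: route the global allocation functions through
    // malloc's pool (or any other pool you substitute here).
    void* operator new(std::size_t size)
    {
        if (size == 0)
            size = 1;                // malloc(0) may return 0; new must not
        if (void* p = std::malloc(size))
            return p;
        throw std::bad_alloc();      // a replacement 'new' must never return 0
    }

    void operator delete(void* p) throw()
    {
        std::free(p);
    }
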
> My reason for asking that question is that I'm looking for a graceful
> strategy if dynamic memory gets exhausted.


Good luck.

> The obvious strategy that comes to mind is:
>
> a) When the program starts, grab a good chunk of memory using malloc().


And do what with it? Just sit on it? That's a serious waste of memory.

> b) If the dynamic memory gets exhausted via "new" and friends, handle
> the exception. In the exception handler, free() the memory allocated
> by (a) and set a flag to signal the program to gracefully exit as soon
> as it can.


Explain "gracefully exit". I mean, explain it to yourself, keeping in
mind that your process was *interrupted* by a *failed attempt* to get
more memory.

> c) Hopefully the memory released by free() would be enough to allow the
> program to exit gracefully (rather than a sudden less controlled
> exit).


So, your strategy is "reserve [a preset amount of] some resource until
it's desperately needed, then release it and pray that it's enough to
kill the process", right?
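
(For the record, the standard hook for this scheme is
std::set_new_handler; a minimal sketch, with all names made up and
error handling elided:)

    #include <new>

    namespace {
        char* reserve = 0;            // emergency block grabbed at startup
        bool  out_of_memory = false;  // polled by the main loop

        void emergency_handler()
        {
            if (reserve) {
                delete[] reserve;     // first failure: drop the reserve so
                reserve = 0;          // the failed 'new' retries and (we
                out_of_memory = true; // hope) succeeds; flag a shutdown
            } else {
                std::set_new_handler(0);  // second failure: let 'new' throw
            }                             // std::bad_alloc as usual
        }
    }

    int main()
    {
        reserve = new char[1024 * 1024];  // the "good chunk" from step (a)
        std::set_new_handler(emergency_handler);
        // ... run; exit cleanly as soon as out_of_memory is set ...
    }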

> Is this sane?


How about I don't answer this, and you instead get a second look at it
at your convenience?

> Is there a better way?


This question is asked here countless times every year. How about *you*
google for it and try to find out how successful your predecessors were?
And if you find something that suits you, by all means use it. If you
don't, *and* you have spare time, by all means look for it. But my
advice (based on experience trying to handle it and seeing others trying
to handle it) is, don't waste your time. If your process has run out of
memory, the best solution is to restart it with *more memory*.

And, again, good luck!

V
--
I do not respond to top-posted replies, please don't ask
 
Carlo Milanesi
12-31-2011
On 12/31/2011 3:44 PM, Victor Bazarov wrote:
> > Is there a better way?

>
> my
> advice (based on experience trying to handle it and seeing others trying
> to handle it) is, don't waste your time. If your process has run out of
> memory, the best solution is to restart it with *more memory*.


So, if your word processor cannot load a huge document, or your video
editor cannot load a huge piece of footage, you'd like the application
to terminate at once without saving anything?
I think it would be better to undo the last command and inform the user
that the command failed because of a memory shortage (relative to the
size of the document).

To guarantee a graceful exit, I would ensure that the shutdown routines
do not allocate any memory, by reserving all the needed memory space
beforehand.

--

Carlo Milanesi
http://carlomilanesi.wordpress.com/
 
Victor Bazarov
12-31-2011
On 12/31/2011 12:18 PM, Carlo Milanesi wrote:
> On 12/31/2011 3:44 PM, Victor Bazarov wrote:
>> > Is there a better way?

>>
>> my
>> advice (based on experience trying to handle it and seeing others trying
>> to handle it) is, don't waste your time. If your process has run out of
>> memory, the best solution is to restart it with *more memory*.

>
> So, if your word processor cannot load a huge document, or your video
> editor cannot load a huge piece of footage, you'd like the application
> to terminate at once without saving anything?


No, not without saying anything.  But saying anything often does NOT
require memory allocation, nor do I consider "saying anything" a
graceful exit.

Neither a word processor nor a video editor should even attempt loading
anything if it determines that the attempt might cause it to run out of
memory. If they can't determine that before opening the file, they
aren't worth our time to discuss them.

Now, the usual way WRT handling memory constraints is *not to let the
process run out of memory* in the first place, instead of trying to
do anything after it has happened.

> I think it would be better to undo the last command and inform the user
> that the command failed because of a memory shortage (relative to the
> size of the document).


Undoing the last command can require allocating memory...

> To guarantee a graceful exit, I would ensure that the shutdown routines
> do not allocate any memory, by reserving all the needed memory space
> beforehand.


That's good. What happens if the process runs out of memory while
trying to allocate the memory for those shutdown routines?

V
--
I do not respond to top-posted replies, please don't ask
 
Carlo Milanesi
12-31-2011
On 31/12/2011 19:04, Victor Bazarov wrote:
>> terminate at once without saving anything

>
> No, not without saying anything. But saying anything often does NOT
> require memory allocation, nor do I consider "saying anything" a
> graceful exit.


I wrote "saving", not "saying". I mean, if an application has several
documents open and one document cannot be loaded, it's better to abort
only the loading of that document, rather than closing all the documents
and possibly losing changes to them.

> Neither a word processor nor a video editor should even attempt loading
> anything if it determines that the attempt might cause it to run out of
> memory. If they can't determine that before opening the file, they
> aren't worth our time to discuss them.


Is there a way to determine if the next memory allocation will succeed?
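
The closest thing I can think of is a probe allocation with the nothrow
form of 'new' (a sketch only; even a successful probe guarantees
nothing, of course):

    #include <cstddef>
    #include <new>

    // Crude probe: try to grab the block without throwing, then hand it
    // straight back.  Another thread or process may still take the memory
    // before the "real" allocation runs, so this is a hint, not a promise.
    bool probably_can_allocate(std::size_t bytes)
    {
        char* p = new (std::nothrow) char[bytes];  // 0 on failure, no throw
        if (p == 0)
            return false;
        delete[] p;
        return true;
    }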

To execute Adobe Flash applets, the Firefox Web browser runs a process
named "plugin-container.exe". I've seen that if there is not enough
memory to run one Adobe Flash applet, all Adobe Flash applets running
in different Firefox windows terminate at once, showing an error
message in a gray area.
I don't think that is good behavior, but I think it is rather typical.
It's good that Firefox itself doesn't terminate.

> Now, the usual way WRT handling memory constraints is *not to let the
> process run out of memory* in the first place, instead of trying to
> do anything after it has happened.


With some programs that is not possible (or it is quite hard to do), as
it is the user who decides the size of the data to keep in memory.

>> I think it would be better to undo the last command and inform the user
>> that the command failed because of a memory shortage (relative to the
>> size of the document).

>
> Undoing the last command can require allocating memory...


Undoing a partially performed (uncommitted) document load may mean just
unwinding the stack and letting destructors do their deallocations, and
the exception mechanism already does that. If everything has been loaded
properly, a pointer to the loaded contents is then inserted into the
document.
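
A minimal sketch of that commit-only-at-the-end shape (the types and
sizes are made up, and it leans on C++11's unique_ptr):

    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    struct Contents { std::vector<std::string> lines; };

    class Document {
        std::unique_ptr<Contents> contents_;
    public:
        void load() {
            // Everything built here is owned by a local; a bad_alloc
            // part-way through unwinds the stack and frees the partial data.
            std::unique_ptr<Contents> fresh(new Contents);
            fresh->lines.resize(100 * 1000);  // may throw std::bad_alloc
            // ... parse into *fresh; any throw leaves *this untouched ...
            contents_ = std::move(fresh);     // commit: this step can't throw
        }
    };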

>> To guarantee a graceful exit, I would ensure that the shutdown routines
>> do not allocate any memory, by reserving all the needed memory space
>> beforehand.

>
> That's good. What happens if the process runs out of memory while trying
> to allocate the memory for those shutdown routines?


That allocation should happen at application startup, before any larger
allocations. If that fails, nothing can be done.

--

Carlo Milanesi
http://carlomilanesi.wordpress.com/
 
MikeWhy
12-31-2011
"Datesfat Chicks" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)...
> For the type of applications I have in mind, the task would already
> have been run with the maximum amount of memory available, and short
> of rephrasing the computation or using a different computer, no
> options would be available to the user.


What system? The virtual address space for a 32-bit Windows app is 2 GB;
much larger on 64-bit. Are you really in danger of running out of heap? What
is your process doing?

 
Jens Thoms Toerring
12-31-2011
Datesfat Chicks <(E-Mail Removed)> wrote:
> The reason for releasing memory when "new" fails is that without doing
> that, the application couldn't even make it to a point where it could
> finish writing log files and let the user know they're copulated.


There could be at least two things that might throw a spanner
into the works when you try to "pre-allocate" memory. First,
on multi-tasking systems, when you release the pre-allocated
memory you're not always guaranteed that you will get it back
when you then call 'new' - a different process may have been
run in between and just grabbed the memory you released
(whether this can happen depends a lot on the system and how
it handles memory). And then there's something called "memory
overcommitment", i.e. even though malloc() returns successfully
you still might have your process killed when you try to use
the memory. This feature was added on some systems because
there are a lot of programs that pre-allocate memory they
never use, so the system signals that memory is available but
may not actually give it to you when you need it and memory
has become exhausted. This can only be avoided by either
getting the administrator of the machine to switch off
overcommitment or by "using" the memory you got (by writing
to it). Of course, the details again will be rather system
specific.
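
A sketch of what "using" the memory means in practice (the reserve
size is arbitrary, and whether the write is needed at all is, again,
system specific):

    #include <cstddef>
    #include <cstring>

    int main()
    {
        const std::size_t kReserve = 4 * 1024 * 1024;
        char* reserve = new char[kReserve];
        std::memset(reserve, 1, kReserve);  // write every page so an
                                            // overcommitting kernel must
                                            // back the block right now
        // ... rest of the program; release the block when it's needed ...
        delete[] reserve;
    }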

On the other hand, is a scenario where memory is so exhausted
that you won't get enough even for simple clean-up tasks
really something you have to worry that much about? Normally
you will run out of memory when you need really lots of it
and then there will typically still be enough available for
the clean-up (as long as they don't also need huge amounts).
So, is that a problem you need to solve in a way that there
is not the slightest chance that it can ever occur? As I
read in one of Tanenbaum's books, sometimes the best solution
to a rare problem can be to stick your head in the sand and
pretend that it won't happen (see also "ostrich algorithm").
Regards, Jens
--
 \ Jens Thoms Toerring ___ (E-Mail Removed)
\__________________________ http://toerring.de
 
Goran
01-01-2012
On Dec 31 2011, 7:04 pm, Victor Bazarov <(E-Mail Removed)>
wrote:
> On 12/31/2011 12:18 PM, Carlo Milanesi wrote:
>
> > On 12/31/2011 3:44 PM, Victor Bazarov wrote:
> >> > Is there a better way?

>
> >> my
> >> advice (based on experience trying to handle it and seeing others trying
> >> to handle it) is, don't waste your time. If your process has run out of
> >> memory, the best solution is to restart it with *more memory*.

>
> > So, if your word processor cannot load a huge document, or your video
> > editor cannot load a huge piece of footage, you'd like the application
> > to terminate at once without saving anything?

>
> No, not without saying anything. But saying anything often does NOT
> require memory allocation, nor do I consider "saying anything" a
> graceful exit.
>
> Neither a word processor nor a video editor should even attempt loading
> anything if it determines that the attempt might cause it to run out of
> memory. If they can't determine that before opening the file, they
> aren't worth our time to discuss them.


I disagree. There's no guarantee whatsoever that any allocation will
succeed, so theoretically, your idea is unsound. Practically, it's
difficult to implement more often than not. I was doing it, and
looking back, it was an error. E.g. it's easy only if you merely load
file contents into memory. I don't believe that happens all that
often.

What you're saying is equivalent to going to the supermarket to see
whether it has mayonnaise, then going back for money, then coming back
to buy it.

Goran.
 
Victor Bazarov
01-01-2012
On 1/1/2012 2:25 AM, Goran wrote:
> On Dec 31 2011, 7:04 pm, Victor Bazarov<(E-Mail Removed)>
> wrote:
>> Neither a word processor nor a video editor should even attempt loading
>> anything if it determines that the attempt might cause it to run out of
>> memory. If they can't determine that before opening the file, they
>> aren't worth our time to discuss them.

>
> I disagree. There's no guarantee whatsoever that any allocation will
> succeed, so theoretically, your idea is unsound. Practically, it's
> difficult to implement more often than not. I was doing it, and
> looking back, it was an error. E.g. it's easy only if you merely load
> file contents into memory. I don't believe that happens all that
> often.
>
> What you're saying is equivalent to going to the supermarket to see
> whether it has mayonnaise, then going back for money, then coming back
> to buy it.


Not at all. There is always some reasonable portion that, when
allocated fresh, is likely to be granted to the process. And if
allocating that reasonable portion fails, quit and tell the user he
can't use that system for that operation without any changes (e.g.
making some more resources available by either adding spare ones or
releasing the ones currently used). I am saying that if you come to the
store to buy mayonnaise, there is (a) no sense to reach into your pocket
if the mayo is not on the shelf (file is missing), (b) no need to buy
the lifetime's worth in advance (even if you know what your lifetime is
going to be). We buy mayo in portions, consume, then come to buy some
more. Besides, we have replenishment of money, so it's a bad analogy
anyway.

In the 1960s programmers wrote programs solving large systems of linear
equations (hundreds and even thousands of them) on machines that only
had 100 cells of memory. How did they manage? What is so different
about it when it's not 100 but 100 billion cells? There will be problems
that aren't going to fit in memory whole. Another example: on MS-DOS
there was a text editor called MultiEdit that managed to deal with files
much larger than the available 640KB. How did they do it?

What I am saying is that there is always a way to deal with large
amounts of data without having to load it all in memory.
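
In code, the idea is as simple as a fixed-size window over the input
(a sketch; the file name is made up):

    #include <fstream>
    #include <vector>

    int main()
    {
        std::ifstream in("huge.dat", std::ios::binary);  // made-up input
        std::vector<char> buf(1 << 20);                  // fixed 1 MB window
        while (in.read(&buf[0], buf.size()) || in.gcount() > 0) {
            // ... process buf[0 .. in.gcount()) and discard; memory use
            // stays bounded no matter how large the file is ...
        }
    }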

V
--
I do not respond to top-posted replies, please don't ask
 
Goran
01-01-2012
On Jan 1, 3:39 pm, Victor Bazarov <(E-Mail Removed)> wrote:
> On 1/1/2012 2:25 AM, Goran wrote:
>
> > On Dec 31 2011, 7:04 pm, Victor Bazarov<(E-Mail Removed)>
> > wrote:
> >> Neither a word processor nor a video editor should even attempt loading
> >> anything if it determines that the attempt might cause it to run out of
> >> memory. If they can't determine that before opening the file, they
> >> aren't worth our time to discuss them.

>
> > I disagree. There's no guarantee whatsoever that any allocation will
> > succeed, so theoretically, your idea is unsound. Practically, it's
> > difficult to implement more often than not. I was doing it, and
> > looking back, it was an error. E.g. it's easy only if you merely load
> > file contents into memory. I don't believe that happens all that
> > often.

>
> > What you're saying is equivalent to going to the supermarket to see
> > whether it has mayonnaise, then going back for money, then coming back
> > to buy it.

>
> Not at all. There is always some reasonable portion that, when
> allocated fresh, is likely to be granted to the process. And if
> allocating that reasonable portion fails,


But in your previous post you said

>> Neither a word processor nor a video editor should even attempt loading
>> anything if it determines that the attempt might cause it to run out of
>> memory.


... and now you're saying that you will try to allocate memory.

That doesn't make good sense.

> quit and tell the user he
> can't use that system for that operation without any changes (e.g.
> making some more resources available by either adding spare ones or
> releasing the ones currently used). I am saying that if you come to the
> store to buy mayonnaise, there is (a) no sense to reach into your pocket
> if the mayo is not on the shelf (file is missing), (b) no need to buy
> the lifetime's worth in advance (even if you know what your lifetime is
> going to be). We buy mayo in portions, consume, then come to buy some
> more. Besides, we have replenishment of money, so it's a bad analogy
> anyway.


I disagree. We're possibly lacking mayonnaise, not money. In my
analogy, it is assumed that one does have money, just like it is
assumed that one can _call_ the allocation function. You changed my
analogy!

> In the 1960s programmers wrote programs solving large systems of linear
> equations (hundreds and even thousands of them) on machines that only
> had 100 cells of memory. How did they manage? What is so different
> about it when it's not 100 but 100 billion cells? There will be problems
> that aren't going to fit in memory whole. Another example: on MS-DOS
> there was a text editor called MultiEdit that managed to deal with files
> much larger than the available 640KB. How did they do it?
>
> What I am saying is that there is always a way to deal with large
> amounts of data without having to load it all in memory.


I agree with that. But that's another problem, really, one of design.
The question here is clearly how to handle the situation where an
expected resource isn't there __without__ changing the design (not
right away, at least).

Goran.
 
Jorgen Grahn
01-01-2012
On Sat, 2011-12-31, Victor Bazarov wrote:
> On 12/31/2011 8:56 AM, Datesfat Chicks wrote:
>> I'm just learning C++ (after being a 19-year C veteran), so forgive
>> any naive questions or concerns.
>>
>> It seems to me that C++ has more inherent reliance on dynamic memory
>> allocation than C. It doesn't have to be that way in all cases (many
>> classes would be implemented without dynamic allocation), but it seems
>> more natural in C++ that dynamic allocation would appear in programs.

>
> The reality is simpler: it's easier to do dynamic memory properly in C++
> (purposefully so, actually), and that's the reason more people resort to
> it. In fact I've written plenty of code that never allocated anything
> in the free store (except by any library mechanisms that I had no
> control over). You don't have to use dynamic memory if you don't need it.


Another angle on the same answer:

I tend to use much less explicit dynamic allocation in C++ than in C,
but that's because the standard containers like std::vector,
std::string and std::map replace malloc()ed arrays (and the nasty
fixed-size arrays).

I believe I do /more/ dynamic allocation in C++ -- if you count the
ones done for me by the standard containers.
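
The sort of thing I mean, as a trivial sketch (the point is where the
allocations moved, not what the code does):

    #include <string>
    #include <vector>

    int main()
    {
        std::vector<std::string> names;  // no malloc()/free() in sight...
        names.reserve(1000);             // ...yet this line allocates,
        names.push_back("malloc");       // and these may allocate and
        names.push_back("free");         // free behind the scenes
    }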

/Jorgen

--
// Jorgen Grahn <grahn@ Oo o. . .
\X/ snipabacken.se> O o .
 