


bad alloc

 
 
Adam Skutt
 
      09-15-2011
On Sep 15, 10:31 am, Goran <(E-Mail Removed)> wrote:
>
> No, it's __not__ difficult writing a piece of no-throw code. If
> nothing else, it's try{code here}catch(...){swallow}. It's actually
> trivial.


Now I remember why I chose to stop holding a discussion with you.
This does not provide a robust no-throw exception safety guarantee. A
robust no-throw guarantee means the code always succeeds (or has no
effect) and never throws an exception. Swallowing the exception does
not achieve this standard and is only done when no other alternative
is possible. Again, robustness is the goal, so this is not a solution.

There's no point in discussing exception safety in any context with
someone who doesn't even know what the exception safety guarantees
are! Doubly so when the person is making statements that are obviously
completely disconnected from reality. I won't make the mistake of
responding to you again.

Adam
 
 
 
 
 
Goran
 
      09-16-2011
On Sep 15, 4:59 pm, Adam Skutt <(E-Mail Removed)> wrote:
> On Sep 15, 10:31 am, Goran <(E-Mail Removed)> wrote:
>
>
>
> > No, it's __not__ difficult writing a piece of no-throw code. If
> > nothing else, it's try{code here}catch(...){swallow}. It's actually
> > trivial.

>
> Now I remember why I choose to stop holding a discussion with you.
> This does not provide a robust no-throw exception safety guarantee. A
> robust no-throw guarantee means the code always succeeds (or has no
> effect) and never throws an exception.


Obviously, I disagree with you, and so does Wikipedia
(http://en.wikipedia.org/wiki/Exception_handling#Exception_safety).
"Failure transparency" is the name rightly chosen by the article's
authors. Indeed, I told you that already in my previous post, but you
chose to disregard it. There's a difference between "no-throw" and
"robust". Your mixing them up doesn't serve you well, you just aren't
realizing it.

> Swallowing the exception does
> not achieve this standard and is only done when no other alternative
> is possible. Again, robustness is the goal, so this is not a solution.


Of course swallowing an exception does not achieve robustness,
and I never claimed otherwise. I chose my words carefully. And I
addressed the robustness argument, too:

"Second, even robustness is not as hard as you're making it out to be,
not once you accept low resources. You simply prepare the resources
you need up-front and make sure you don't do something really dumb.
IOW, you lower your expectations and are done with it."

IOW... Code is __always__ subject to failures outside its control,
and that can be made orthogonal to resource problems (see quote
above).

Here's an example: remember that last-ditch fprintf call that was
mentioned earlier? You only have so much guarantee that it will work
at any given time. In fact, on common systems, you have an equal
amount of guarantee that it will work regardless of whether you
OOM-ed or not, and you know it, don't you?
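
To make "prepare resources up-front" concrete, here is a minimal
sketch (the names, the reserve size and the deliberately huge request
are mine, purely for illustration; this is not code from anyone's
post):

#include <cstdio>
#include <new>
#include <vector>

// Reserve some memory at startup so the last-ditch path does not need a
// fresh allocation once the heap is exhausted.
static std::vector<char> emergency_reserve(64 * 1024);

// Last-ditch reporting: release the reserve, then use only fprintf() and
// stack space, and never throw.
void report_oom() noexcept
{
    std::vector<char>().swap(emergency_reserve);  // hand the reserve back
    std::fprintf(stderr, "out of memory, shutting down\n");
}

int main()
{
    try {
        // A deliberately unreasonable request; whether it throws bad_alloc
        // or the OS kills the process depends on the overcommit policy.
        std::vector<char> huge(1ull << 42);
    } catch (const std::bad_alloc&) {
        report_oom();
        return 1;
    }
    return 0;
}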

> There's no point in discussing exception safety in any context with
> someone who doesn't even know what the exception safety guarantees!
> Doubly so when the person is making statements that are obviously
> completely disconnected from reality. *I won't make the mistake of
> responding to you again.


Yeah, yeah... I believe you won't be responding because you're out of
arguments. Note how you didn't, AT ALL, address the crux of my
argument in my previous post. I'll repeat it, in case that wasn't clear:

You: "Except OOM is not in the same bag as all other exceptions. "

Me: "As far as writing exception-safe code goes, this is very, very
wrong.
It's wrong because exception safety almost exclusively deals with
recovery and cleanup techniques, who almost never require additional
resources."

(Wait, I changed my mind: I believe that you DO understand at least
some of this, but are refusing to acknowledge it merely because it
doesn't fit your world.)


Goran.
 
 
 
 
 
none
 
      09-16-2011
In article <(E-Mail Removed)>,
Adam Skutt <(E-Mail Removed)> wrote:
>On Sep 14, 11:11am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
>wrote:
>> In article <(E-Mail Removed)>,
>> Adam Skutt <(E-Mail Removed)> wrote:
>>
>> >On Sep 13, 1:06pm, yatremblay@bel1lin202.(none) (Yannick Tremblay)
>> >wrote:
>> >Which is what I told you. It's still not sufficiently general to
>> >support the counterargument, "You only have to handle some of them".

>>
>> You are being obtuse and refuse to even try to understand.
>>

>
>No, I understand your code and your point perfectly. You're simply
>wrong, because you have zero evidence that any real application will
>behave as your little example.


You are amazing. You post here accusing people who actually post
code samples of providing "zero evidence", while you provide
absolutely no evidence for your claims!

Do you have mirrors in your house?

>Real applications can behave in a
>myriad of ways, including in the two I suggested: total failure to
>allocate memory and where "little" allocations fail while
>"large" allocations do not.
>Your example plainly does not work as intended in either of those two
>situations. More importantly, it does not achieve your original
>stated goal: to improve the robustness of the application and isolate
>the failure of one processing from impacting the others.


It works perfectly as intended in the scenario it is intended for.
Obviously, it does not protect against failures it is not designed to
protect against.

>Your lack of understanding about how programs, runtimes, and operating
>systems allocate memory and refusal to consider the situations I've
>posed does not make me obtuse.


I see. Since you can't supply any evidence of anything whatsoever,
you just make unsupported claims that the other party is ignorant.
Interesting technique, but I prefer not to use it personally.

>The fact you're unwilling or unable to accept that the code may never
>throw std::bad_alloc where you claim it will is simply gobsmacking.
>You've provided zero evidence to believe that the code will fail where
>you claim it will with any regularity. As such, we have no reason
>whatsoever to believe your design will improve robustness. Even if we
>believed you have managed to place the catch block correctly (you
>didn't), the entire discussion started over the difficulty of writing
>what goes inside the catch block!


No, the discussion started with the question of whether bad_alloc can
be handled or should be handled.

>What you've written is plainly
>insufficient,


It works! It does not cover every possible failure scenario, but it
improves robustness (i.e. following a specific type of OOM error, the
application is still up and running and able to process further inputs).

Obviously it is not a full application, but the principle will work in
a large application.
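
For illustration, a minimal sketch of the pattern being discussed (the
code actually posted earlier in the thread is not reproduced above, so
the names and structure here are mine):

#include <cstddef>
#include <iostream>
#include <new>
#include <vector>

// Process one input. The allocation size depends on the input, so a
// sufficiently complex input can exceed the memory available right now.
void process(std::size_t n)
{
    std::vector<double> work(n);   // may throw std::bad_alloc
    // ... real work on 'work' would go here ...
    std::cout << "processed " << n << " elements\n";
}

int main()
{
    std::size_t n;
    while (std::cin >> n) {
        try {
            process(n);
        } catch (const std::bad_alloc&) {
            // This request was too large for the resources available now.
            // Report it and keep serving further inputs instead of dying.
            std::cerr << "input too complex, try something smaller\n";
        }
    }
}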

> so you still haven't met the burden of proof upon you.


You claim that there is a burden of proof upon me. But you
seem totally happy to assume that there is absolutely no burden of
proof whatsoever upon you. You are amazing.

>Until you actually present something that will demonstrably improve
>robustness in a real application under a wide variety of failure cases
>(or demonstrate their irrelevance), there's no reason to discuss this
>any further than this e-mail. You've been presented with plenty of
>reasoning as to why your views are invalid and your example will not
>work in a real application. If what you've been given isn't sufficient
>proof, then I doubt anything will be sufficient proof.


I will try to be polite here despite you:

There ain't no silver bullet. There is no simple solution that
will improve the robustness of applications under a wide variety of
unknown and unspecified failure cases.

The best way to improve the robustness of an application is by using
multiple techniques that are useful for a limited set of error
cases.

The design I posted does improve the robustness of real applications
under a particular type of failure scenario. It was never intended to
solve 100% of all possible failure scenarios.

If we go back to the cleaning lady pulling out the plug example: does
the fact that you cannot write code to protect against the cleaning
lady mean that you should not bother to attempt to handle any error
whatsoever?

>> There is no need to be able to recover from all allocation failures.
>> You seem to be the one that claims that because it is impossible to
>> recover from all possible allocation failures, then you should never
>> ever under any circumstances even consider attempting to recover from
>> any allocation failure whatsoever.

>
>No, I say that only handling some failures doesn't buy you anything
>because you have no way to tell which allocations will fail.


Why?
Why is it impossible to be able to estimate which allocations are
more likely to fail?
Is it impossible to be able to know which allocations are likely to be
large and which allocations are likely to be small?
Why is it impossible to design a system in such a way that some
allocations are more likely to be larger than others?

Obviously, if your program always leaks one byte at a time, any
allocation anywhere may fail. But a better design might not be in the
same situation.

>> 1- It works
>> 2- Something is being done in the catch block
>>
>> No problem. Not difficult. I am not sure what your problem with it is.

>
>It doesn't work if there is no memory remaining, because the iostream
>might allocate behind your back.


Did you run the program? Did it work? Did iostream fail?
Please supply the input value that makes iostream fail.

>Ergo, it is not robust. Just
>because you believe your code proves there must be memory remaining
>does not mean it has actually done so. You have not tested all the
>possibilities yet.


The code is designed to cope with errors due to input complexity being
too great for the available resources.

It is more robust than without the try/catch since, without it, the
program would terminate immediately following the first complex
input. This program instead can cope with complex input, return an
error to the user, and keep processing future inputs.

Obviously, the code will not protect against all other possible
failure causes.

>Inductive fallacies are not proof, and they are
>just another reason I see no point in discussing this topic with you
>further.


Bah.... pots and kettles again.

>> I have no idea what claim you claim that I made that you claim that I
>> cannot support.

>
>All of them, ignoring the fact you've seriously backed off your
>original claims. Which is what I still want to see, because they
>would actually be interesting as opposed to what you are trying to do
>now.


Please tell me, what was my original claim?
I never claimed that it is easy and simple to protect against all
failure scenarios. You must have imagined that.

>> As far as I can understand, you seem to make the generic claim that
>> recovering from any allocation failure whatsoever is impossible to do
>> and never worth doing.

>
>I've never claimed any such thing. I've claimed it's almost never
>worth doing because it's simply too hard to do robustly, and it's far
>easier to improve robustness in other ways.


So I demonstrated that for at least one specific scenario (there is
memory available, but the quantity of memory available is not
sufficient to process a complex input) it is relatively simple to
design the program in such a way that it can handle OOM errors that
are due to input complexity.

I have also demonstrated that for this particular problem, it is
easy to improve robustness by catching the bad_alloc.

>I've also claimed that it
>is considerably harder than its proponents suggest and demonstrated
>the issues with the proposed solutions and why they are not actually
>robust.


I have no issue with your claim that correctly handling all types of
OOM errors is very difficult. I have an issue with your refusal to
admit that there are circumstances where OOM errors can be handled.

>> In the system as designed, you *know* that in this particular location,
>> the allocation size is possibly too large (because it depends on external
>> input). You *know* that because you designed it that way.
>>
>> Hence two possibilities:
>>
>> 1- The allocation failure was due to the requested allocation being
>> too large. Then the recovery attempt will succeed and it is fine to
>> continue.
>>
>> 2- The allocation failure was not due to the requested allocation
>> being too large and instead due to allocation being totally impossible
>> on the system now. Then the recovery attempt will fail and the
>> program will terminate.
>>

>
>Those are not the only two possibilities, which is part of the problem
>with your reasoning. Consider:
>3. The allocation succeeds but then causes all future allocations to
>fail due to its size (i.e., it filled the freestore).


Yes, there is a possibility that a request fills memory to 99.9999999%,
forcing the next allocation to fail.

The next allocation will either be inside the try block, and will be
handled (albeit the recovery attempt may fail), or it will be elsewhere
in the program, in which case the program terminates.

In either case, the program is more robust than if it did not include
the try/catch. You are locked into "handle 100% or nothing is worth
doing". I maintain that even 1% is better than 0.

>This should be plainly be handled in the same manner as the first
>possibility, since the size of the user's request caused the failure.
>However, your code example will not handle this possibility correctly.


There are two problems with your argument:
1- What is the probability of an input being the exact size required
to fill memory to 99.99999999% (succeeding but making all further
allocations fail)? You are assuming that the probability is large. I
am assuming that with a good understanding of the problem space and
good design, it is quite possible that this probability is small.

If you were willing to demonstrate and prove any of your claims, you
could try to reset the multiplier to 1 and demonstrate that you can
input a specific value that makes "int * q = new int[5];" fail.

2- You are still assuming that OOM handling is all or nothing: either
you are able to handle all possible OOM errors at all possible places
in the program for any possible reason, or it is not worth doing
anything whatsoever. IMO, this reasoning is wrong.

>> Can we at least agree that the application can't know if an allocation
>> failed because of heap fragmentation, allocator limitation, maximum
>> per-process OS enforced limits or OS actually having run out of memory
>> altogether? The visible result for the application will be the same.

>
>Your first statement is right and your second statement is wrong. The
>visible result for the application will plainly not be the same. If
>you've reached the commit limit, then all requests for more memory
>from the OS will fail until the limit is increased or memory is
>returned the operating system. If a request fails because it's simply
>too large (e.g., it's bigger than your virtual address space), then
>reasonable requests will still succeed.


No.

If you have reached the commit limit, then the maximum allocation
possible is 0 bytes. If you request more than 0 bytes, the allocation
will fail. If you haven't reached the commit limit, then the maximum
allocation possible is X; if you request more than X, the allocation
will fail.

From the application's point of view, it doesn't know. All it knows is
that a request failed. It has no idea if X was 0 or anything else.

Any subsequent allocation request will be in exactly the same position.
The application can have no guarantee that any allocation will ever
succeed, nor can it ever be certain that any allocation will fail,
regardless of the result of the previous allocation request.

>> You choose to quit at the first hurdle. I choose to at least attempt
>> to jump. If I fail, no loss. I am no worse than you. If I succeed,
>> I live to fight another day.

>
>No, you're far worse off because you've spent a large amount of money
>designing, implementing, and (badly) testing code that's effectively
>dead. I'm better off because I got to keep all that money. Writing
>code is not free.


So are you the one who wrote that web browser that crashes when
there's a large picture on the page rather than displaying an empty box?

>> I am not the designer of your application so I can't know what are all
>> the factors that need to be considered for *your* design. I have no
>> ideas of *your* requirements.

>
>Then you cannot possibly claim it's generally worthwhile to handle OOM
>conditions, yet you've attempted to do precisely that several times
>over!


This is a lie and you know it. You are attempting to claim that I
claimed a generality when I never claimed such a thing.

I strongly disagree with your generalities. I claim that under
specific circumstances with intelligent design you can handle some OOM
errors.

>> >Just for grins, try allocating all that space one byte at a time (go
>> >ahead and leak it), so you actually fill up the freestore before
>> >making the failed allocation. Then see how much space you have
>> >available, if you don't outright crash your computer[2][3].

>
>> So you are advocating that I should design my application in such a
>> way that OOM errors are always fatal. In such a way that I
>> purposefully micro-allocate lots and lots of memory so that failure
>> will most likely happen at totally random places. Euh?!?

>
>No, I'm suggesting you try test cases that better exercise all of the
>conditions under which OOM can occur. Many applications include many
>small allocations to go along with their large allocations. Many
>application may never make singular large allocations, perhaps they
>structure their data using a linked list or a tree instead of an
>array.

I told you and fully agree with you that the design can't protect
against byte-by-byte allocation. It's not meant to do it.

Are you claiming that because some applications include many small
allocations, no applications will ever do large allocations?

>Your example is simply not realistic, and I gave one example
>of how it is unrealistic. The large allocation may be the last
>allocation in a long string of small allocations. It may succeed,
>filling the address space, causing all future allocations to fail.


And you are the one claiming that others are unrealistic!
The situation you describe is possible. Is it really probable?
(Please supply the input value in the example code that makes it
happen; the example code does have some small allocations and a
potentially large one.)

In any case, the result is as I keep telling you:
if this happens, the application will terminate.

Why are you so convinced that terminating on all OOM errors is a good
thing, but terminating only on the OOM errors that you are not able to
handle is a bad thing?

>> Given your claimed superior knowledge of allocators, please enlighten us
>> on why the same pattern would fail on a multithreaded setup?

>
>> Can you clarify why thread #1 *failing* to allocate a large block of
>> memory directly stop thread #2 from being able to allocate a small
>> block of memory? Is your allocator not thread-safe?

>
>If the first allocation failed because the virtual address space is
>full, the second allocation will fail for the same reason. Your
>original claim was that you could isolate failure in one thread from
>failure in others simply by catching OOM in the first thread. This is
>obviously not possible when the reason for failure has nothing to do
>with the size of the request.


Let me get this straight. There is never any point in an application's
life where the memory available to the application is less than 0.

So if the currently available memory is X and thread A attempts to
allocate Y bytes, where Y is potentially large and happens to be larger
than X, the allocation request will fail. Immediately following this
failed allocation request, the application has had no direct effect on
the quantity of available memory. It will either still be X or have
been modified for external reasons (multi-tasking OS).

*Failing* to allocate memory does not directly decrease the amount of
memory available to an application. Thread A failing to allocate
memory does not directly decrease the amount of memory available to
thread B.

And please, I agree with you, the design will not protect against 0
bytes available. It protects against a specific request that happens
to be large, requiring too much memory.

>> >[3] This is of course one reason that restarting is inherently
>> >superior to handling OOM: if your program does leak memory, then
>> >restarting the program will get that memory back. Plus, you will
>> >eventually have to take it out of service anyway to plug the leak for
>> >good.

>
>> If you program leaks memory, you should fix it.

>
>Yes, I agree. Doing that means restarting the program. That means I
>must have built a system that can tolerate restarting the program.
>That seriously diminishes the value of writing code with the sole
>purpose of avoiding program restarts.


A crashing application or a crashing server is often undesirable even
if it will be restarted afterwards. You must have very tolerant
users.

Note: I am not arguing against planning a recovery mechanism, I am
arguing against using the existence of a recovery mechanism to avoid
doing due diligence in avoiding crashes and writing quality code.


 
 
Adam Skutt
 
      09-16-2011
On Sep 16, 11:41 am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
wrote:
> In article <(E-Mail Removed)>,
> Adam Skutt <(E-Mail Removed)> wrote:
>
> >On Sep 14, 11:11am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
> >wrote:
> >> In article <(E-Mail Removed)>,
> >> Adam Skutt <(E-Mail Removed)> wrote:

>
> >> >On Sep 13, 1:06pm, yatremblay@bel1lin202.(none) (Yannick Tremblay)
> >> >wrote:
> >> >Which is what I told you. It's still not sufficiently general to
> >> >support the counterargument, "You only have to handle some of them".

>
> >> You are being obtuse and refuse to even try to understand.

>
> >No, I understand your code and your point perfectly. You're simply
> >wrong, because you have zero evidence that any real application will
> >behave as your little example.

>
> You are amazing. You post here accusing people who actually post
> code samples of providing "zero evidence", while you provide
> absolutely no evidence for your claims!
>


I'm responding to this against my better judgement. Don't make me
regret it.

I've provided plenty of evidence, but most of my claims are rather
basic, elementary facts. They don't really need any substantiation,
since they can be trivially verified by using Google and/or some basic
reasoning.

Which of these facts do you believe aren't basic or elementary?
* That there's no general way to tell why std::bad_alloc was
thrown?
* That the odds of a kernel panic or other seriously bad behavior on
the part of the OS when the commit limit is reached are good?
* That overcommitting operating systems will only give you
std::bad_alloc on virtual address space exhaustion (or overcommit
limit exhaustion)? That overcommitting operating systems may kill
your program when out of memory without ever causing a std::bad_alloc
to occur?
* That operating systems and language runtimes can treat allocations
of different types and sizes differently, so you can see 'unintuitive'
behavior such as large allocations succeeding when little ones fail?
That you can see behavior such as the runtime asking for more memory
from the OS than strictly necessary to fulfill the programmer's
request? That heap fragmentation can cause similar issues? That
merely stack unwinding in response to an OOM condition doesn't
necessarily ensure other parts of the code can leverage the
deallocated resources?
* That modern language runtimes and libraries, including C++, treat
memory allocation as a common and generally innocuous side-effect?
Meaning that, generally speaking, you cannot assume whether a given
function allocates memory or not?
* That the default behavior when std::bad_alloc is thrown is program
termination in some fashion? (See the sketch below the list.)
* That most programs aren't failing on OOM because they tried to
allocate a singular array, in a singular allocation that's too large?
(This list isn't exhaustive, mind you)
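
On the default-termination point, a minimal sketch (mine, not from the
thread): with no handler anywhere, the exception propagates out of
main() and the runtime calls std::terminate().

#include <cstddef>
#include <new>

int main()
{
    // A request far beyond what any system can satisfy. On most platforms
    // this throws std::bad_alloc (or a type derived from it); on an
    // overcommitting OS a merely "too big" request might instead appear to
    // succeed and the process be killed later, as listed above.
    char* p = new char[static_cast<std::size_t>(-1) / 2];

    // No try/catch anywhere: the exception escapes main(), std::terminate()
    // runs, and the program aborts.
    delete[] p;
    return 0;
}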

From all of that, I concluded (my original text):
> "Properly written, exception-safe C++ code will do the right thing when
> std::bad_alloc is thrown, and most C++ code cannot sensibly handle
> std::bad_alloc. As a result, the automatic behavior, which is to let
> the exception propagate up to main() and terminate the program there,
> is the correct behavior for the overwhelming majority of C++
> applications. As programmers, we win anytime the automatic behavior
> is the correct behavior. "


I also noted earlier (my original text):
> Even when there's memory to free up, writing an exception handler that
> actually safely runs under an out-of-memory condition is impressively
> difficult. In some situations, it may not be possible to do what you
> want, and it may not be possible to do anything at all. Moreover, you
> may not have any way to detect these conditions while your code is
> running. Your only recourse may be to crash and crash hard.


Later I noted, to you no less (my original text):
> This can happen anyway, so your solution must simply be prepared to
> deal with this eventuality. Making your code handle memory allocation
> failure gracefully does not save you from the cleaning staff. If your
> system can handle the cleaning staff, then it can handle memory
> allocation failure terminating a process, too.


> I don't know why people think it's interesting to talk about super
> reliable software but neglect super reliable hardware too. It's
> impossible to make hardware that never fails (pesky physics) so why
> would I ever bother writing software that never fails? Software that
> never crashes is useless if the cleaning staff kicks out the power
> cable every night.


Note this is a general-case practical argument based on the economic
cost. There are three thrusts:
* Most code lacks a sensible response to memory allocation failure.
This may be due to the fact that there's just no sensible way to
continue onward, or it may be due to the fact that allocation failure
only occurs in a catastrophic situation.
* Even when a sensible response exists, the difficulty (cost) involved
in writing a sensible response is too hard. This includes all the
code changes necessary to ensure the program can continue onward.
These are not free, nor are they small.
* When a response is necessary, higher-level mechanisms to achieve
reliability, isolation, and/or problem avoidance are typically
superior because they handle other situations that you care about as
well. I'll repeat two: properly redundant and clustered server
instances also avoid power interruptions; smart limits on input values
also defend against malicious inputs.

In short, it's too hard to do and even when it can be done, the cost
doesn't outweigh the benefit. I could go one step further and note
that it's impossible or even more difficult in many common languages
other than C++, so techniques that benefit those languages as well are
clearly of more value than inherently C++ specific techniques.
Unsurprisingly, most, if not all, of what I've suggested one do is
language agnostic.

Since it's a general-case and not absolute argument, specific
counterexamples cannot disprove it. You need to either defeat my
assumptions (e.g., the cost of a handler is too high generally) or
show I'm mistaken in the generality of my claims (i.e., in reality I'm
talking about a niche).

Showing you can provide a sensible response to a singular large array
is not interesting. I write a considerable amount of scientific code
professionally. Such applications frequently use large arrays in the
bulk of their processing. In my applications, none of them use a
single array, there's always two at a minimum: one for the input and
one for output, as much of the stuff I write lacks easy in-place
algorithms. Much of the time there's several inputs being combined
into a single output, so there are many arrays. Many of them need
temporary copies in one state or another, so there's even more arrays
beyond the input and output. But on top of that, my code still has to
load data from disk and write it back out to disk. That involves a
multitude of smaller objects, many of which I don't directly create or
control: I/O buffers, strings to hold individual lines and portions of
lines, state objects for my parsers, application objects holding
command-line parameters and other configuration items, etc.

All of that is involved in performing a single operation. Related to
your original case, almost all of that would be unique to each request
if my application were threaded in a per-request manner. I don't
think the assumption that generally, most applications make many
"extra" allocations in support of their main algorithm is
unreasonable. Nor do I think it's reasonable to assume that
generally, the main algorithm is using some big array or that the
array gets allocated all at once. Moreover, if you disagree with
these generalities, I really have no interest in discussing this with
you at all, as you're too clearly colored by your own perceptions to
have a reasonable discussion.

Let me pose it to you another way: glibc specializes small allocations
by including a small block allocator that's intentionally designed to
be fast for small allocations. If small allocations weren't frequent,
what would be the value of specifically optimizing for them?
Moreover, large allocations are very slow: each one involves a system
call to mmap(2) on request and again to munmap(2) on release. System
calls are very, very slow on modern computers. Likewise, the Python
interpreter intentionally optimizes for the creation of several small
objects. Finally, some generational garbage collectors consider
allocation size when determining generation for the object. So do you
really believe your example is that general purpose, and that
relevant, when so much time is spent on optimizing the small? Do you
really believe all of these people are mistaken for focusing on the
small?
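
To make that split concrete, a small glibc-specific sketch (the 4 KiB
threshold below is arbitrary and only for illustration; glibc's
default M_MMAP_THRESHOLD is larger and adjusts dynamically):

#include <cstdlib>
#include <malloc.h>   // glibc: mallopt, M_MMAP_THRESHOLD

int main()
{
    // Ask glibc to service requests above 4 KiB directly via mmap(2);
    // smaller requests keep going through the fast small-block paths.
    mallopt(M_MMAP_THRESHOLD, 4 * 1024);

    void* small = std::malloc(128);                 // served from the arena
    void* large = std::malloc(16 * 1024 * 1024);    // served by a private mmap

    std::free(large);   // munmap(2): returned to the OS immediately
    std::free(small);   // kept cached in the arena for reuse
    return 0;
}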

> It works perfectly as intended in the scenario it is intended to work.


Bully. I don't know any other way to explain to you that your scenario
is uninteresting and not a valid rebuttal. More importantly, it can't
ever be a valid rebuttal to what I've said.

> You claim that there is a burden of proof upon me. But you
> seem totally happy to assume that there is absolutely no burden of
> proof whatsoever upon you. You are amazing.


The burden of proof upon me is substantially less since I'm arguing
for the status quo. Simply because you find my statements
controversial doesn't mean they are actually controversial.

>
> The best way to improve the robustness of an application is by using
> multiple techniques that are useful for a limited set of error
> cases.
>


No, I don't think so. As I've noted, plenty of systems with safety-
critical requirements take a unilateral action in response to all
abnormal conditions, and that response is frequently to terminate and restart.
As I've noted, there are plenty of techniques to improve robustness
that cover a bevy of error conditions.

> The design I posted does improve the robustness of real applications
> under a particular type of failure scenario.


No, it does not, because you haven't proven that failure scenario
actually occurs. The issue is not, "It doesn't work if X happens,"
the issue is, "X 'never' happens". Improving robustness requires
showing that you've actually solved a real problem that occurs
regularly. Thus far, you have not shown it will happen regularly as
you describe. You haven't even shown most applications behave as you
describe.

> It never intended nor
> tried to solve 100% of all possible failure scenarios.
>
> If we go back to the cleaning lady pulling of the plug example, does
> this mean that because you can write code to protect against the
> cleaning lady, you should not bother to attempt to handle any error
> whatsoever?
>


Yes it absolutely can, from a cost/benefit perspective. Especially
when the error in question is rare. Again, my argument is almost
entirely cost/benefit. If I have to build a system that tolerates
power failure of individual servers, then it also handles individual
processes running out of memory. As such, the benefit of handling out
of memory specially must be quite large to justify further costs, or
the cost must be incredibly cheap.

> Why?
> Why is it impossible to be able to estimate which allocations are
> more likely to fail?


Because _size_ has very little to do with the probability. They're
just not strongly correlated factors, unless you're talking about
allocations that can never succeed.

> Is it impossible to be able to know which allocations are likely to be
> large and which allocation are likely to be small?


> Why is it impossible to design a system in such a way to some
> allocations are more likely to be larger than than other?


No and it's not, but it's not enough information to determine anything
useful. Moreover, if you use a structure such as a linked list or a
tree, all of your allocations may be the same size!

Again, you would be saying something interesting if you would stop
talking about a singular allocation. Even using a std::vector means
you have to contend with more than one allocation.

> Please tell me what my original claim was?
> I never claimed that it is easy and simple to protect against all
> failure scenarios. You must have imagined that.


No, but you claimed it was easy and simple to isolate failure in one
thread from failure in the others. In your defense, you were probably
talking only about the case you coded in your example, but you hardly
made that clear from the outset. Regardless, it only holds while your
assumptions hold, and as I've said many times, your assumptions are
very, very poor.

> I have no issue with your claim that correctly handling all types of
> OOM errors is very difficult. I have an issue with your refusal to admit
> that there are circumstances where OOM errors can be handled.
>


I have never done the latter at any point. I don't know why you
persist in believing that I have. There are obviously situations in
which it can be done, because it has been done in the past.

>
> >Then you cannot possibly claim it's generally worthwhile to handle OOM
> >conditions, yet you've attempted to do precisely that several times
> >over!

>
> This is a lie and you know it. You are attempting to claim that I
> claimed a generality when I never claimed such a thing.
>
> I strongly disagree with your generalities. I claim that under
> specific circumstances with intelligent design you can handle some OOM
> errors.
>


If you disagree with my generalities then you too must be talking in
generalities in order to have anything worth discussing at all.

>
> >Your example is simply not realistic, and I gave one example
> >of how it is unrealistic. The large allocation may be the last
> >allocation in a long string of small allocations. It may succeed,
> >filling the address space, causing all future allocations to fail.

>
> And you are the one claiming that others are unrealistic!
> The situation you describe is possible. Is it really probable?
> (please supply the input value in the example code that make it
> happen, the example code does have some small allocations and a
> potentially large one).


Instead of using an array, just use a set, map, or linked list with a
type of some fixed, small size. Such cases are quite probable. I'm
not sure why you're so resistant to the notion that most applications
do not see std::bad_alloc until they are almost out of memory or the
operating system is almost out of memory.
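
A minimal sketch of that situation (mine, for illustration only; it
deliberately exhausts memory one node at a time, so don't run it
casually, and on an overcommitting OS the process may simply be killed
before the exception ever appears):

#include <cstddef>
#include <iostream>
#include <list>
#include <new>

int main()
{
    std::list<int> nodes;   // every element is a separate small allocation
    try {
        for (;;)
            nodes.push_back(0);   // fails only once memory is nearly gone
    } catch (const std::bad_alloc&) {
        std::size_t n = nodes.size();
        nodes.clear();   // give the memory back before doing anything else
        std::cerr << "bad_alloc after " << n << " small allocations\n";
    }
    return 0;
}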

Like I said, on common 64-bit operating systems and platforms, your
program has terabytes of virtual address space. The operating system
probably only has gigabytes to tens of gigabytes of commit to serve
that space. That means your singular request has to be 2^32 or so
in size, minimum, in order to trigger std::bad_alloc for the /sole/
reason you can handle. Are you honestly telling me, with a straight
face, you believe that is more common than what I suggest happens?

Even on a 32-bit operating system, that request would have to be
3*2^30 or so in order to trigger the response you can handle
robustly. That's a lot of memory. On my system right now, the
largest singular process is Eclipse at ~1.4GiB virtual size. The
second largest is Firefox at ~210MiB virtual size. All of the rest
are considerably under 100MiB.

Now sure, I could cause a large allocation (in aggregate) to occur in
both by attempting to open a very big document. However, even if
they respond to the failure in the fashion you describe, they still
haven't managed to do what I want: open the document. In order to
open that document, they're going to have to be rewritten to support
handling files that are very large. There are three important
consequences to this reality:
* I will have to start a new version of the application in order to do
what I want: open the document.
* Since the application couldn't do what I want, I'm going to be
closing it anyway. As such, while termination without my input is
hardly the best response, it's also not that divorced from what was
going to happen anyway. As such, the negativity associated with
termination isn't nearly as bad as it could be.

You'll probably point out that both can handle multiple inputs, and
that failure to process one shouldn't impact the others. I'll note
that Firefox has tab isolation (now) and session management for tabs,
so that lost work is minimized on failure. Firefox needs this
functionality /anyway/, since it can crash for reasons entirely beyond
its control (e.g., a plugin forcibly kills the process). Eclipse isn't
nearly as wise, but Eclipse has a cavalier attitude towards my data in
many, many aspects.

Moreover, back to the original point, if those applications behave
like that every day (and they do), then why would I believe that
singular large allocations are common events?

> >Yes, I agree. Doing that means restarting the program. That means I
> >must have built a system that can tolerate restarting the program.
> >That seriously diminishes the value of writing code with the sole
> >purpose of avoiding program restarts.

>
> A crashing application or a crashing server is often undesirable even
> if it will be restarted afterwards. You must have very tolerant
> users.


No, I have mechanisms in place so the loss of a server is generally
transparent to the users.

> Note: I am not arguing against planning a recovery mechanism, I am
> arguing against using the existence of a recovery mechanism to avoid
> doing due diligence in avoiding crashes and writing quality code.


Again, the notion that code is of lower quality merely because it
crashes is simply not true.
It's easily the most absurd notion, by far and away, proffered in this
whole thread.

Adam
 
 
Joshua Maurice
 
      09-16-2011
On Sep 14, 11:17 am, Adam Skutt <(E-Mail Removed)> wrote:
> Inductive fallacies are not proof, and they are
> just another reason I see no point in discussing this topic with you
> further.


Sorry, this is just one of my pet peeves, so let me segue here. There
are formal logic proofs, otherwise known as deductive proofs. Then
there is science. Science is the art and practice of learning by
inductive reasoning. Inductive proofs are /not/ fallacies. IMHO they
are the only way to learn about the world around us. Everything you
know about reality is from an inductive proof.

Of course, inductive proofs have differing layers of strength. If I
try some "hack" in code and it works once, not a very strong inductive
proof. If I try it 1000 times, then I have a stronger inductive proof
that it will work the next time I try it.

Do note how standards come into play with this. If I could test a hack
on every system a billion times, each under different configurations
and kinds of load, then I would feel quite comfortable relying on it.
However, no one can reasonably do that. Instead, the people who make
the hardware know what does and does not work (again through physics,
aka a kind of inductive reasoning), and they document it. We learn
about the guarantees from the trusted hardware people (trust is itself
a form of inductive reasoning), and then we apply deductive reasoning
to the facts we learned with inductive reasoning. This tends to work
out better in practice. Notice the words I used - "tends to work out
better in practice". That is itself an inductive argument. I code to
standards because /that is what works/!

PS: Also, there's the problem that the hardware makers, when
publishing their standards, want to leave wiggle room so that they can
weaken the guarantees of what the hardware actually does now in future
hardware. Thus, even if you test all possible configurations many
times, a new piece of hardware may break your code. Another reason to
code to standards. Again, because it works.

PPS: to add to the analysis, you claimed that it's impossible to
recover from all out of memory conditions. Code was presented which
showed that you could recover from an out of memory condition. This is
not an inductive proof. This is a counterexample to a universal claim.
Entirely different things. The first is generalized from a small
subset to a larger set, and the second is falsifying a global claim
with a counterexample.

PPPS: I might be open to the term "inductive fallacy" if used more
restrictively, specifically when referring to very weak inductive
arguments, such as "I saw two brown horses. Ergo all horses are
brown.".
 
 
Adam Skutt
 
      09-17-2011
On Sep 16, 6:38 pm, Joshua Maurice <(E-Mail Removed)> wrote:
> On Sep 14, 11:17 am, Adam Skutt <(E-Mail Removed)> wrote:
>
> > Inductive fallacies are not proof, and they are
> > just another reason I see no point in discussing this topic with you
> > further.

>
> Sorry, this is just one of my pet peeves, so let me segue here. There
> are formal logic proofs, otherwise known as deductive proofs. Then
> there is science. Science is the art and practice of learning by
> inductive reasoning. Inductive proofs are /not/ fallacies.


They are when presented as a proof one can solve the general case.
He's making the assumption that because he can solve one trivial case,
he can solve most cases. That may not have been his intent, but it is
certainly what happened. That is the textbook definition of an
inductive fallacy.

> PPS: to add to the analysis, you claimed that it's impossible to
> recover from all out of memory conditions.


No, I did nothing of the sort. If you're going to criticize what I
said, then make sure you're criticizing what I actually said. I said
it was generally impossible to do such a thing, using "impossible" as
intentional hyperbole to underscore the difficulty. I've been
quite consistent in making it clear that I'm making a general-case
argument, even if I've failed to do it every last time.

>
> PPPS: I might be open to the term "inductive fallacy" if used more
> restrictively, specifically when referring to very weak inductive
> arguments, such as "I saw two brown horses. Ergo all horses are
> brown.".


The term has a widely accepted and known definition. Just because you
don't know what it is doesn't give you cause to rail against its
use.

Adam
 
 
Noah Roberts
 
      09-17-2011
On Sep 16, 12:19 am, Goran <(E-Mail Removed)> wrote:
> On Sep 15, 4:59 pm, Adam Skutt <(E-Mail Removed)> wrote:
>
> > On Sep 15, 10:31 am, Goran <(E-Mail Removed)> wrote:

>
> > > No, it's __not__ difficult writing a piece of no-throw code. If
> > > nothing else, it's try{code here}catch(...){swallow}. It's actually
> > > trivial.

>
> > Now I remember why I choose to stop holding a discussion with you.
> > This does not provide a robust no-throw exception safety guarantee. A
> > robust no-throw guarantee means the code always succeeds (or has no
> > effect) and never throws an exception.

>
> Obviously, I disagree with you, and so does Wikipedia
> (http://en.wikipedia.org/wiki/Exception_handling#Exception_safety). "Failure
> transparency" is the name rightly chosen by article authors. Indeed, I
> told you that already in my previous post, but you chose to disregard.
> There's a difference between "no-throw" and "robust". Your mixing them
> up doesn't serve you well, you just aren't realizing it.


Wrong resource. See this:

http://www.boost.org/community/exception_safety.html

In that list he does simply say the "no-throw" *guarantee* does not
throw. However, later when he's going into more depth he also says,
"[I]t always completes successfully." He also calls it the strongest
guarantee and the guarantee upon which the other two depend. I think
it could be well argued that this means the operation the function is
supposed to perform was accomplished, since if something went wrong the
program can't be guaranteed to be in a consistent state by the basic
and strong guarantees.

Even your resource actually calls "Failure transparency" the "no-throw
guarantee" and claims that, "Operations are guaranteed to succeed and
satisfy all requirements even in presence of exceptional situations."

I think that anyone serious about C++ should learn about the no-throw
*guarantee* and how it differs from a function that simply does not
throw. Even if you've never used the term, it is quite common since
one of the biggest names in C++ codified it with his 3 exception
guarantees.
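
A small sketch of the distinction (mine, not from either cited
source): the first function gives the real no-throw guarantee, the
second merely never lets an exception escape.

#include <string>
#include <utility>
#include <vector>

struct Config {
    std::vector<std::string> values;

    // Genuine no-throw guarantee: swapping two vectors only exchanges
    // internal pointers, so the operation always completes successfully.
    void swap(Config& other) noexcept
    {
        values.swap(other.values);
    }

    // Merely "does not throw": if push_back fails, the exception is
    // swallowed and the value is silently not added. The caller cannot
    // tell success from failure, so this is weaker than the no-throw
    // guarantee discussed above.
    void try_add(std::string v) noexcept
    {
        try {
            values.push_back(std::move(v));
        } catch (...) {
            // swallowed: the operation may not have taken effect
        }
    }
};

int main()
{
    Config a, b;
    a.try_add("answer=42");
    a.swap(b);   // no-throw: guaranteed to succeed
    return 0;
}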
 
 
Goran
 
      09-18-2011
On Sep 17, 7:13 pm, Noah Roberts <(E-Mail Removed)> wrote:
> On Sep 16, 12:19 am, Goran <(E-Mail Removed)> wrote:
>
> > On Sep 15, 4:59 pm, Adam Skutt <(E-Mail Removed)> wrote:

>
> > > On Sep 15, 10:31 am, Goran <(E-Mail Removed)> wrote:

>
> > > > No, it's __not__ difficult writing a piece of no-throw code. If
> > > > nothing else, it's try{code here}catch(...){swallow}. It's actually
> > > > trivial.

>
> > > Now I remember why I choose to stop holding a discussion with you.
> > > This does not provide a robust no-throw exception safety guarantee. A
> > > robust no-throw guarantee means the code always succeeds (or has no
> > > effect) and never throws an exception.

>
> > Obviously, I disagree with you, and so does Wikipedia
> > (http://en.wikipedia.org/wiki/Exception_handling#Exception_safety). "Failure
> > transparency" is the name rightly chosen by article authors. Indeed, I
> > told you that already in my previous post, but you chose to disregard.
> > There's a difference between "no-throw" and "robust". Your mixing them
> > up doesn't serve you well, you just aren't realizing it.

>
> Wrong resource. See this:
>
> http://www.boost.org/community/exception_safety.html
>
> In that list he does simply say the "no-throw" *guarantee* does not
> throw. However, later when he's going into more depth he also says,
> "[I]t always completes successfully." He also calls it the strongest
> guarantee and the guarantee upon which the other two depend. I think
> it could be well argued that this means the operation the function is
> supposed to perform was accomplished since if something went wrong the
> program can't be guaranteed to be in a consistent state by the basic
> and strong guarantees.
>
> Even your resource actually calls "Failure transparency" the "no-throw
> guarantee" and claims that, "Operations are guaranteed to succeed and
> satisfy all requirements even in presence of exceptional situations."
>
> I think that anyone serious about C++ should learn about the no-throw
> *guarantee* and how it differs from a function that simply does not
> throw. Even if you've never used the term, it is quite common since
> one of the biggest names in C++ codified it with his 3 exception
> guarantees.


I both agree and disagree.

(BTW, C++ or something else, it does not matter; as far as exception
safety goes, the situation is the same for Java or even C; yes, C, too.)

The full sentence is: "The no-throw guarantee is the strongest of all,
and it says that an operation is guaranteed not to throw an exception:
it always completes successfully."

Note how "guaranteed not to throw" comes first.

In my mind, there are two kinds of no-throw guarantees: those that
actually succeed, and those that merely make failure transparent. And
as far as designing code WRT exceptions goes, the two are __one and
the same__. Because when doing that, you only care whether an exception
can be thrown or not, and whether there was any change to the program
state (important with "strong", but also important with "failure
transparency", because clearly, if failure has been hushed up here,
but can be visible elsewhere, it's not good).

Indeed, I have made design choices that distinguish between the two;
they have served me well, and I would make the same choices again
without a blink. Example: a logging function. Normally, that can fail,
right? So I normally used a throwing version. For places where I
wanted to log from cleanup, I used a "silent" version that simply
swallowed any possible error, i.e. had the "failure transparency" kind
of no-throw. Why is that a good choice? Because a cleanup path (e.g. a
destructor) must not throw; a throwing destructor is a massive gaping
hole of a bug. So there, logging must be no-throw, even if that merely
means failure-transparent. At any rate (and this is the clincher,
really), the choice is between

* failing to log, and
* failing to log + borking up the program state by breaking unwinding.

That's no choice, really. A small sketch of such a pair of logging
functions follows below.
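
A minimal sketch of the idea (names are mine, this is not code from
the thread): log() reports failure by throwing, log_silently() wraps
it for cleanup paths such as destructors.

#include <fstream>
#include <stdexcept>

// Throwing version: callers on normal code paths want to know when
// logging fails.
void log(const char* message)
{
    std::ofstream out("app.log", std::ios::app);
    if (!(out << message << '\n'))
        throw std::runtime_error("log write failed");
}

// "Failure transparency" version: never throws, so it is safe to call
// while unwinding, e.g. from destructors. A failure to log is swallowed.
void log_silently(const char* message) noexcept
{
    try {
        log(message);
    } catch (...) {
        // deliberately ignored: failing to log beats breaking unwinding
    }
}

struct Connection {
    ~Connection()
    {
        log_silently("connection closed");   // must not throw from here
    }
};

int main()
{
    Connection c;
    log("starting up");
    return 0;   // ~Connection logs silently on the way out
}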

Goran.
 
 
none
 
      09-26-2011
In article <(E-Mail Removed)>,
Adam Skutt <(E-Mail Removed)> wrote:
>On Sep 16, 11:41am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
>wrote:
>> In article <(E-Mail Removed)>,
>> Adam Skutt <(E-Mail Removed)> wrote:
>>
>> >On Sep 14, 11:11am, yatremblay@bel1lin202.(none) (Yannick Tremblay)
>> >wrote:
>> >> In article <(E-Mail Removed)>,
>> >> Adam Skutt <(E-Mail Removed)> wrote:

>>
>> >> >On Sep 13, 1:06pm, yatremblay@bel1lin202.(none) (Yannick Tremblay)
>> >> >wrote:
>> >> >Which is what I told you. It's still not sufficiently general to
>> >> >support the counterargument, "You only have to handle some of them".

>>
>> >> You are being obtuse and refuse to even try to understand.

>>
>> >No, I understand your code and your point perfectly. You're simply
>> >wrong, because you have zero evidence that any real application will
>> >behave as your little example.

>>
>> You are amazing. You post here accusing people who actually post
>> code samples of providing "zero evidence", while you provide
>> absolutely no evidence for your claims!
>>

>
>I'm responding to this against my better judgement. Don't make me
>regret it.


Sorry about the late reply. Usenet is not my main purpose in life.

>I've provided plenty of evidence, but most of my claims are rather
>basic, elementary facts. They don't really need any substantiation,
>since they can be trivially verified by using Google and/or some basic
>reasoning.
>
>Which of these facts do you believe aren't basic or elementary?
>* That there's no general way to tell why std::bad_alloc was
>thrown?
>* That the odds of a kernel panic or other seriously bad behavior on
>the part of the OS when the commit limit is reached are good?


Euh, assuming you mean that kernel panics are undesirable, I agree.
Unless you mean that kernel panics sometimes occur and there's nothing
you can do about them. Then I also agree.

>* That overcommitting operating systems will only give you
>std::bad_alloc on virtual address space exhaustion (or overcommit
>limit exhaustion)? That overcommitting operating systems may kill
>your program when out of memory without ever causing a std::bad_alloc
>to occur?
>* That operating systems and language runtimes can treat allocations
>of different types and sizes differently, so you can see 'unintuitive'
>behavior such as large allocations succeeding when little ones fail?
>That you can see behavior such as the runtime asking for more memory
>from the OS than strictly necessary to fulfill the programmer's
>request? That heap fragmentation can cause similar issues? That
>merely stack unwinding in response to an OOM condition doesn't
>necessarily ensure other parts of the code can leverage the
>deallocated resources?
>* That modern language runtimes and libraries, including C++, treat
>memory allocation as a common and generally innocuous side-effect?
>Meaning that, generally speaking, you cannot assume whether a given
>function allocates memory or not?
>* That the default behavior when std::bad_alloc is thrown is program
>termination in some fashion?
>* That most programs aren't failing on OOM because they tried to
>allocate a singular array, in a singular allocation that's too large?
>(This list isn't exhaustive, mind you)


I don't disagree with any of that. But that's part of your problem. You
over-generalise.

>From all of that, I concluded (my original text):
>> "Properly written, exception-safe C++ code will do the right thing when
>> std::bad_alloc is thrown, and most C++ code cannot sensibly handle
>> std::bad_alloc. As a result, the automatic behavior, which is to let
>> the exception propagate up to main() and terminate the program there,
>> is the correct behavior for the overwhelming majority of C++
>> applications. As programmers, we win anytime the automatic behavior
>> is the correct behavior. "


OK.

>I also noted earlier (my original text):
>> Even when there's memory to free up, writing an exception handler that
>> actually safely runs under an out-of-memory condition is impressively
>> difficult. In some situations, it may not be possible to do what you
>> want, and it may not be possible to do anything at all. Moreover, you
>> may not have any way to detect these conditions while your code is
>> running. Your only recourse may be to crash and crash hard.


OK

>Later I noted, to you no less (my original text):
>> This can happen anyway, so your solution must simply be prepared to
>> deal with this eventuality. Making your code handle memory allocation
>> failure gracefully does not save you from the cleaning staff. If your
>> system can handle the cleaning staff, then it can handle memory
>> allocation failure terminating a process, too.


Not relevant.

>> I don't know why people think it's interesting to talk about super
>> reliable software but neglect super reliable hardware too. It's
>> impossible to make hardware that never fails (pesky physics) so why
>> would I ever bother writing software that never fails? Software that
>> never crashes is useless if the cleaning staff kicks out the power
>> cable every night.

>
>Note this is a general-case practical argument based on the economic
>cost. There are three thrusts:
>* Most code lacks a sensible response to memory allocation failure.
>This may be due to the fact that there's just no sensible way to
>continue onward, or it may be due to the fact that allocation failure
>only occurs in a catastrophic situation.
>* Even when a sensible response exists, the difficulty (cost) involved
>in writing a sensible response is too hard. This includes all the
>code changes necessary to ensure the program can continue onward.
>These are not free, nor are they small.
>* When a response is necessary, higher-level mechanisms to achieve
>reliability, isolation, and/or problem avoidance are typically
>superior because they handle other situations that you care about as
>well. I'll repeat two: properly redundant and clustered server
>instances also avoid power interruptions; smart limits on input values
>also defend against malicious inputs.
>
>In short, it's too hard to do and even when it can be done, the cost
>doesn't outweigh the benefit.


That's your problem. You generalise and generalise.

My argument is that it may or may not be too hard to do, and the cost
may or may not outweigh the benefit.

>I could go one step further and note
>that it's impossible or even more difficult in many common languages
>other than C++, so techniques that benefit those languages as well are
>clearly of more value than inherently C++ specific techniques.
>Unsurprisingly, most, if not all, of what I've suggested one do is
>language agnostic.
>
>Since it's a general-case and not absolute argument, specific
>counterexamples cannot disprove it. You need to either defeat my
>assumptions (e.g., the cost of a handler is too high generally) or
>show I'm mistaken in the generality of my claims (i.e., in reality I'm
>talking about a niche).



>Showing you can provide a sensible response to a singular large array
>is not interesting.


You make one basic error in the above: you assume that the pattern
only works for a singular large array. It would work for multiple
large allocations as well. The only thing required is that, unless a
catastrophic situation or a bug occurs, the large allocations are the
ones that are likely to fail (see the sketch below).

It's interesting from my perspective because I can think of a whole
collection of applications where memory allocations follow exactly
that pattern.
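Something like this, to make the pattern concrete (a sketch only, with
made-up names; this is not the example code posted earlier in the thread):

#include <cstddef>
#include <new>
#include <vector>

// Hypothetical per-request function: the few allocations that are actually
// likely to fail (the big ones) are guarded; everything else is left to
// propagate and terminate the process.
bool process_request(std::size_t n_samples) {
    std::vector<double> input;
    std::vector<double> output;
    try {
        input.reserve(n_samples);    // the potentially huge allocations
        output.reserve(n_samples);
    } catch (const std::bad_alloc&) {
        return false;                // reject this request, keep serving others
    }
    // ... read input, compute output, write results ...
    // The small allocations made along the way (strings, parser state, I/O
    // buffers) are not guarded; if one of those throws, the exception
    // propagates and the process terminates -- the catastrophic case.
    return true;
}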

>I write a considerable amount of scientific code
>professionally. Such applications frequently use large arrays in the
>bulk of their processing. In my applications, none of them use a
>single array, there's always two at a minimum: one for the input and
>one for output, as much of the stuff I write lacks easy in-place
>algorithms. Much of the time there's several inputs being combined
>into a single output, so there are many arrays. Many of them need
>temporary copies in one state or another, so there's even more arrays
>beyond the input and output. But on top of that, my code still has to
>load data from disk and write it back out to disk. That involves a
>multitude of smaller objects, many of which I don't directly create or
>control: I/O buffers, strings to hold individual lines and portions of
>lines, state objects for my parsers, application objects holding
>command-line parameters and other configuration items, etc.
>
>All of that is involved in performing a single operation. Related to
>your original case, almost all of that would be unique to each request
>if my application were threaded in a per-request manner. I don't
>think the assumption that generally, most applications make many
>"extra" allocations in support of their main algorithm is
>unreasonable. Nor do I think it's reasonable to assume that
>generally, the main algorithm is using some big array or that the
>array gets allocated all at once. Moreover, if you disagree with
>these generalities, I really have no interest in discussing this with
>you at all, as you're too clearly colored by your own perceptions to
>have a reasonable discussion.


Pot kettle black...

>Let me pose it to you another way: glibc specializes small allocations
>by including a small block allocator that's intentionally designed to
>be fast for small allocations. If small allocations weren't frequent,
>what would be the value of specifically optimizing for them?
>Moreover, large allocations are very slow: each one involves a system
>call to mmap(2) on request and again to munmap(2) on release. System
>calls are very, very slow on modern computers. Likewise, the Python
>interpreter intentionally optimizes for the creation of several small
>objects. Finally, some generational garbage collectors consider
>allocation size when determining generation for the object. So do you
>really believe your example is that general purpose, and that
>relevant, when so much time is spent on optimizing the small? Do you
>really believe all of these people are mistaken for focusing on the
>small?


I believe that in a large class of applications, the total amount of
memory used by the many small allocations is extremely unlikely to
ever cause an OOM error and, as such, OOM errors caused by the common
small allocations are better treated as catastrophic failures.
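To illustrate that asymmetry (assuming a 64-bit, Linux-like system with
default overcommit settings; behaviour elsewhere may differ):

#include <cstdio>
#include <new>

int main() {
    try {
        // A single absurd request is typically refused outright and throws
        // std::bad_alloc while the process is still perfectly healthy.
        char* p = new char[1ull << 45];   // ~32 TiB, arbitrary size
        delete[] p;
        std::puts("request unexpectedly succeeded (overcommit)");
    } catch (const std::bad_alloc&) {
        std::puts("large request refused cleanly");
    }
    // Exhausting memory through many small node allocations, by contrast,
    // tends to drag the whole machine down (swapping, the OOM killer) long
    // before std::bad_alloc is ever thrown -- the catastrophic case above.
    return 0;
}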


>> It works perfectly as intended in the scenario it is intended to work.

>
>Bully. I don't know any other way to explain to you that your scenario
>is uninteresting and not a valid rebuttal. More importantly, it can't
>ever be a valid rebuttal to what I've said.


Bah. Millions of end-users may think otherwise.

>> You claim that there is burden of proof upon me. But you
>> seem totally happy to assume that there is absolutely no burden of
>> proof whatsoever upon you. You are amazing.

>
>The burden of proof upon me is substantially less since I'm arguing
>for the status quo. Simply because you find my statements
>controversial doesn't mean they are actually controversial.


Interesting: you can come up with a generality to apply to everything
with no proof, but when someone comes up with a specific example of an
application that can and probably should attempt to handle some
specific OOM failures, the burden of proof is suddenly all on them.

>> The best way to improve robustness of an application is by using
>> multiple techniques that are useful for a limited type of error
>> cases.
>>

>
>No, I don't think so. As I've noted, plenty of systems with safety-
>critical requirements take a unilateral action in response to all abnormal
>conditions, and that response is frequently to terminate and restart.
>As I've noted, there are plenty of techniques to improve robustness
>that cover a bevy of error conditions.


You are going against the es


>> The design I posted does improve the robustness of real applications
>> under a particular type of failure scenario.

>
>No, it does not, because you haven't proven that failure scenario
>actually occurs. The issue is not, "It doesn't work if X happens,"
>the issue is, "X 'never' happens". Improving robustness requires
>showing that you've actually solved a real problem that occurs
>regularly. Thus far, you have not shown it will happen regularly as
>you describe. You haven't even shown most applications behave as you
>describe.
>
>> It never intended nor
>> tried to solve 100% of all possible failure scenarios.
>>
>> If we go back to the cleaning lady pulling the plug example, does
>> this mean that because you can't write code to protect against the
>> cleaning lady, you should not bother to attempt to handle any error
>> whatsoever?
>>

>
>Yes it absolutely can, from a cost/benefit perspective. Especially
>when the error in question is rare. Again, my argument is almost
>entirely cost/benefit. If I have to build a system that tolerates
>power failure of individual servers, then it also handles individual
>processes running out of memory. As such, the benefit of handling out
>of memory specially must be quite large to justify further costs, or
>the cost must be incredibly cheap.
>
>> Why?
>> Why is it impossible to be able to estimate which allocations are
>> more likely to fail?

>
>Because _size_ has very little to do with the probability. They're
>just not strongly correlated factors, unless you're talking about
>allocations that can never succeed.
>
>> Is it impossible to be able to know which allocations are likely to be
>> large and which allocations are likely to be small?

>
>> Why is it impossible to design a system in such a way that some
>> allocations are more likely to be larger than others?

>
>No and it's not, but it's not enough information to determine anything
>useful. Moreover, if you use a structure such as a linked list or a
>tree, all of your allocations may be the same size!
>
>Again, you would be saying something interesting if you would stop
>talking about a singular allocation. Even using a std::vector means
>you have to contend with more than one allocation.
>
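To make those two points concrete (element counts are arbitrary):

#include <list>
#include <vector>

int main() {
    std::vector<int> v;
    for (int i = 0; i < 1000000; ++i)
        v.push_back(i);   // capacity grows geometrically: several separate
                          // allocations, any one of which can throw bad_alloc

    std::list<int> l;
    for (int i = 0; i < 1000000; ++i)
        l.push_back(i);   // one small node per element: every allocation is
                          // the same size, and none of them is "large"
    return 0;
}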
>> Please tell me what my original claim was?
>> I never claimed that it is easy and simple to protect against all
>> failure scenarios. You must have imagined that.

>
>No, but you claimed it was easy and simple to isolate failure in one
>thread from failure in the others. In your defense, you were probably
>talking only about the case you coded in your example, but you hardly
>made that clear from the outset. Regardless, it only hold while your
>assumptions hold, and as I've said many times, your assumptions are
>very, very poor.
>
>> I have no issue with your claim that correctly handling all types of
>> OOM errors is very difficult. I have issue with your refusal to admit
>> that there are circumstances where OOM errors can be handled.
>>

>
>I have never done the latter at any point. I don't know why you
>persist in believing that I have. There are obviously situations in
>which it can be done, because it has been done in the past.
>
>>
>> >Then you cannot possibly claim it's generally worthwhile to handle OOM
>> >conditions, yet you've attempted to do precisely that several times
>> >over!

>>
>> This is a lie and you know it. You are attempting to claim that I
>> claimed a generality when I never claimed such a thing.
>>
>> I strongly disagree with your generalities. I claim that under
>> specific circumstances with intelligent design you can handle some OOM
>> errors.
>>

>
>If you disagree with my generalities then you too must be talking in
>generalities in order to have anything worth discussing at all.
>
>>
>> >Your example is simply not realistic, and I gave one example
>> >of how it is unrealistic. The large allocation may be the last
>> >allocation in a long string of small allocations. It may succeed,
>> >filling the address space, causing all future allocations to fail.

>>
>> And you are the one claiming that others are unrealistic!
>> The situation you describe is possible. Is it really probable?
>> (please supply the input value in the example code that makes it
>> happen, the example code does have some small allocations and a
>> potentially large one).

>
>Instead of using an array, just use a set, map, or linked list with a
>type of some fixed, small size. Such cases are quite probable. I'm
>not sure why you're so resistant to the notion that most applications
>do not see std::bad_alloc until they are almost out of memory or the
>operating system is almost out of memory.
>
>Like I said, on a common 64-bit operating system and platforms, your
>program has terabytes of virtual address space. The operating system
>probably only has gigabytes to tens of gigabytes of commit to serve
>that space. That means your singular request has to be 2^32 or so
>in size, minimum, in order to trigger std::bad_alloc for the /sole/
>reason you can handle. Are you honestly telling me, with a straight
>face, you believe that is more common than what I suggest happens?
>
>Even on a 32-bit operating system, that request would have to be
>3*2^30 or so in order to trigger the response you can handle
>robustly. That's a lot of memory. On my system right now, the
>largest singular process is Eclipse at ~1.4GiB virtual size. The
>second largest is Firefox at ~210MiB virtual size. All of the rest
>are considerably under 100MiB.
>
>Now sure, I could cause a large allocation (in aggregate) to occur in
>both by attempting to open a very big document. However, even if
>they respond to the failure in the fashion you describe, they still
>haven't managed to do what I want: open the document. In order to
>open that document, they're going to have to be rewritten to support
>handling files that are very large. There are two important
>consequences to this reality:
>* I will have to start a new version of the application in order to do
>what I want: open the document.
>* Since the application couldn't do what I want, I'm going to be
>closing it anyway. As such, while termination without my input is
>hardly the best response, it's also not that divorced from what was
>going to happen anyway. As such, the negativity associated with
>termination isn't nearly as bad as it could be.
>
>You'll probably point out that both can handle multiple inputs, and
>that failure to process one shouldn't impact the others. I'll note
>that Firefox has tab isolation (now) and session management for tabs,
>so that lost work is minimized on failure. Firefox needs this
>functionality /anyway/, since it can crash for reasons entirely beyond
>its control (e.g., a plugin forcibly kills the process). Eclipse isn't
>nearly as wise, but Eclipse has a cavalier attitude towards my data in
>many, many aspects.
>
>Moreover, back to the original point, if those applications behave
>like that every day (and they do), then why would I believe that
>singular large allocations are common events?
>
>> >Yes, I agree. Doing that means restarting the program. That means I
>> >must have built a system that can tolerate restarting the program.
>> >That seriously diminishes the value of writing code with the sole
>> >purpose of avoiding program restarts.

>>
>> Crashing applications or crashing servers are often undesirable even if
>> they will be restarted afterwards. You must have very tolerant
>> users.

>
>No, I have mechanisms in place so the loss of a server is generally
>transparent to the users.
>
>> Note: I am not arguing against planning a recovery mechanism, I am
>> arguing against using the existence of a recovery mechanism to avoid
>> doing due diligence in avoiding crashes and writing quality code.

>
>Again, the notion that code is of lower quality merely because it
>crashes is simply not true.
>It's easily the most absurd notion, by far and away, proffered in this
>whole thread.
>
>Adam



 