Velocity Reviews - Computer Hardware Reviews

Velocity Reviews > Newsgroups > Programming > C++ > program is not crashing, after 10 iteration


program is not crashing, after 10 iteration

 
 
Pallav singh
      07-18-2009
Hi All,

the program is not crashing after 10 iterations of the loop:

#include <stdio.h>
#include <stdlib.h>

int main( )
{
    int * ptr = (int *)malloc( 10 );
    while( 1 )
    {
        printf(" %d \n", *ptr);
        ptr++ ;
    }
}

Thanks
Pallav Singh
 
Donkey Hottie
      07-18-2009
"Pallav singh" <(E-Mail Removed)> wrote in message
news:(E-Mail Removed)
> Hi All ,
>
> the program is not crashing, after 10 iteration of the
> loop;
>
> int main( )
> {
> int * ptr = (int *)malloc( 10) ;
> while( 1 )
> {
> printf(" %d \n",*ptr);
> ptr++ ;
> }
> }
>
> Thanks
> Pallav Singh


Why do you think it should crash after 10 iterations? You allocate 10 bytes,
which may hold only 2 ints on a 32-bit machine, so if it crashes, it might
do so after 2 iterations.

But undefined behaviour is undefined behaviour. It may do whatever it
pleases. As it does not write to the unallocated memory, it probably just
prints whatever happens to be there. And as you do not initialise the
allocated memory at all, the result is undefined from the first iteration
already.

 
Jonathan Lee
 
      07-18-2009
On Jul 18, 1:25 pm, Pallav singh <(E-Mail Removed)> wrote:
> the program is not crashing, after 10 iteration of the loop;


There's really no reason why it should. Operations like malloc() and
new[] don't allocate the exact amount of memory you ask for, and then
return a pointer to it. Instead they find _at least_ that much memory
free and give you a pointer to that, expecting that you'll obey the
bounds. This is to make memory management faster and easier.

BTW, this behaviour is usually called buffer overrun/overflow. Check
out the wikipedia article. Note also that this is one of the areas
where C/C++ lets you hang yourself. Java (AFAIK) checks bounds on
every array access and will raise an exception if you go out of
bounds.

--Jonathan
 
Juha Nieminen
 
      07-18-2009
Andy Champ wrote:
> I'd say to anyone stay away from malloc. It's dangerous. Use the STL
> stuff to manage memory, it's much safer.
>
> And if you can come up with a good reason why to use malloc you probably
> know enough to know when to break the rule. It occurs to me that _I_
> haven't used malloc all year.


A related question: If you really need to allocate uninitialized
memory for whatever reason (eg. you are writing your own memory
allocator, some kind of memory pool, or other such low-level thing), is
there any practical difference between using std::malloc() and ::new?
Should one be preferred over the other?
 
Alf P. Steinbach
 
      07-18-2009
* Juha Nieminen:
> Andy Champ wrote:
>> I'd say to anyone stay away from malloc. It's dangerous. Use the STL
>> stuff to manage memory, it's much safer.
>>
>> And if you can come up with a good reason why to use malloc you probably
>> know enough to know when to break the rule. It occurs to me that _I_
>> haven't used malloc all year.

>
> A related question: If you really need to allocate uninitialized
> memory for whatever reason (eg. you are writing your own memory
> allocator, some kind of memory pool, or other such low-level thing), is
> there any practical difference between using std::malloc() and ::new?


The latter is compatible with the rest of your app's error handling.

How to deal with memory exhaustion is a difficult topic, though.

For small allocations you want to rely on a terminating new-handler or other
scheme that terminates, instead of the default std::bad_alloc exception. But say
you're loading a large picture and that allocation might fail. Your app is
usually /not/ hopelessly screwed if that allocation fails, so there you may want
the exception, and just report the load failure to the user.

What's difficult is what to do for the terminating case.

Do you just log (if you have logging) and terminate, or do you let your app have
a go at cleaning up, via some "hard exception" scheme with stack unwinding up to
the top? The problem with the unwinding is that if the failure is caused by a
small allocation, or e.g. if there's a memory-gobbling thread or process around,
then even just making an attempt at cleanup might make matters worse, e.g. one
might end up with a hanging program and no log entry. Debates about this have been
endless with no firm conclusion, only that some people find one or the other
idea horrible and signifying the utter incompetence and perhaps even lack of
basic intellect of those arguing the other view.


> Should one be preferred over the other?


Yes.


Cheers & hth.,

- Alf
 
James Kanze
 
      07-19-2009
On Jul 18, 10:10 pm, "Alf P. Steinbach" <(E-Mail Removed)> wrote:
> * Juha Nieminen:


> > Andy Champ wrote:
> >> I'd say to anyone stay away from malloc. It's dangerous.
> >> Use the STL stuff to manage memory, it's much safer.


> >> And if you can come up with a good reason why to use malloc
> >> you probably know enough to know when to break the rule.
> >> It occurs to me that _I_ haven't used malloc all year.


> > A related question: If you really need to allocate
> > uninitialized memory for whatever reason (eg. you are
> > writing your own memory allocator, some kind of memory pool,
> > or other such low-level thing), is there any practical
> > difference between using std::malloc() and ::new?


> The latter is compatible with the rest of your app's error
> handling.


Note that when dealing with raw memory, I prefer ::operator
new(n) to ::new char[n]. IMHO, it expresses the intent better.

> How to deal with memory exhaustion is a difficult topic,
> though.


Except when it's easy.

> For small allocations you want to rely on a terminating
> new-handler or other scheme that terminates, instead of the
> default std::bad_alloc exception.


Which makes it easy. (And a lot of applications can use this
strategy.)

> But say you're loading a large picture and that allocation
> might fail. In that case your app is usually /not/ hopelessly
> screwed if the allocation fails, so in that case you may want
> the exception, and just report load failure to the user.


Or you might want to change strategies, spilling part of the
data to disk, in which case, you'd use new(nothrow).

The problem here is that when it "just fits", you might still
end up using so much memory that the machine starts thrashing.
This is one of the cases where memory exhaustion is a difficult
topic.

> What's difficult is what to do for the terminating case.


> Do you just log (if you have logging) and terminate, or do you
> let your app have a go at cleaning up, via some "hard
> exception" scheme with stack unwinding up to the top? The
> problem with the unwinding is that if the failure is caused by
> a small allocation, or e.g. if there's a memory-gobbling
> thread or process around, then even just making an attempt at
> cleanup might make matters worse, e.g. one might end up with
> hanging program and no log entry. Debates about this have been
> endless with no firm conclusion, only that some people find
> one or the other idea horrible and signifying the utter
> incompetence and perhaps even lack of basic intellect of those
> arguing the other view.


There's also the problem that the logging mechanism might try to
allocate (which will fail).

One strategy that I've seen used with success (although I don't
think there's any hard guarantee) is to "pre-allocate" a couple
of KB (or maybe an MB today) up front---the new handler then
frees this before starting the log and abort procedure.

> > Should one be preferred over the other?


> Yes.


--
James Kanze (GABI Software) email:(E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
 
Pascal J. Bourguignon
 
      07-20-2009
Pallav singh <(E-Mail Removed)> writes:

> Hi All ,
>
> the program is not crashing, after 10 iteration of the loop;
>
> int main( )
> {
> int * ptr = (int *)malloc( 10) ;
> while( 1 )
> {
> printf(" %d \n",*ptr);
> ptr++ ;
> }
> }


Check the other discussion "Books for advanced C++ debugging".

This is formally an "undefined behavior" situation.

We plain programmers would like the implementation to throw an
exception when such a situation occurs.

But it would be more work for compiler writers, so they don't want to
provide such a feature (much less optionally, since that would be even
more work). And therefore the industry must bear the cost of uncaught
bugs, from which viruses and worms profit.


Personally, the only solution I see is to forget C and C++ and instead
use implementations of different programming languages that provide
such run-time checks and error detection, whether because the
implementers of those languages are not as stubborn as C or C++
implementers, or because those languages define such error-checking
behaviour.


Just for an example of such another culture, here is what you'd get
with Common Lisp (but most other programming languages detect these
errors):

C/USER[29]> (declaim (optimize (safety 0) (debug 0) (speed 3) (space 2)))
NIL
C/USER[30]> (defun main ()
              (let ((ptr (make-array 10 :element-type 'integer :initial-element 0)))
                (loop for i from 0 do (format t " ~D ~^" (aref ptr i)))))
MAIN
C/USER[31]> (main)
0 0 0 0 0 0 0 0 0 0
*** - AREF: index 10 for #(0 0 0 0 0 0 0 0 0 0) is out of range
The following restarts are available:
ABORT :R1 Abort main loop
C/Break 1 USER[32]>


--
__Pascal Bourguignon__
 
Bo Persson
 
      07-20-2009
Pascal J. Bourguignon wrote:
> Pallav singh <(E-Mail Removed)> writes:
>
>> Hi All ,
>>
>> the program is not crashing, after 10 iteration of the loop;
>>
>> int main( )
>> {
>> int * ptr = (int *)malloc( 10) ;
>> while( 1 )
>> {
>> printf(" %d \n",*ptr);
>> ptr++ ;
>> }
>> }

>
> Check the other discussion "Books for advanced C++ debugging".
>
> This is formally an "undefined behavior" situation.
>
> We plain programmers would like the implementation to throw an
> exception when such a situation occurs.
>
> But it would be more work for compiler writers, so they don't want
> to provide such a feature (much less optionally, since that would be
> even more work). And therefore the industry must bear the cost of
> uncaught bugs, from which viruses and worms profit.
>
>
> Personally, the only solution I see is to forget C and C++ and
> instead use implementations of different programming languages that
> provide such run-time checks and error detection, whether because the
> implementers of those languages are not as stubborn as C or C++
> implementers, or because those languages define such error-checking
> behaviour.
>


Not the only solution.

The other solution is of course to use those parts of C++ that gives
you the security you want (and leave the unchecked code for the places
where it is really needed).

#include <iostream>
#include <vector>

int main()
{
    std::vector<int> p(10);

    int i = 0;
    while(1)
    {
        std::cout << p.at(i) << std::endl;
        ++i;
    }
}

Bug caught at runtime!


Of course, if you choose to iterate from p.begin() to p.end() you have
no chance of stepping outside the vector. Safety!



Bo Persson


 
Jerry Coffin
 
      07-20-2009
In article <(E-Mail Removed)>,
(E-Mail Removed) says...

[ ... ]

> Personally, the only solution I see is to forget C and C++ and
> instead use implementations of different programming languages that
> provide such run-time checks and error detection, whether because
> the implementers of those languages are not as stubborn as C or C++
> implementers, or because those languages define such error-checking
> behaviour.


Actually, there's a much better solution than changing languages.
Though it's virtually never put to use, x86 CPUs (for one example)
could implement such checks in hardware if we just put to use what's
been there for years.

Specifically, an x86 running protected mode uses segments, and each
segment has a base and a limit. The problem is that right now, every
major OS basically ignores the segments -- they're all set up with a
base of 0 and a limit of 4 GiB (technically there _are_ segments for
which this isn't true, but the stack, data and code segments are set
up this way, so most normal uses have access to all memory).

Now, it is true that as currently implemented, segments have some
shortcomings -- in particular, there are only 6 available segment
registers, and loading a segment register is a comparatively
expensive operation, so when you use segments code will usually run a
bit slower. The slowdown from doing the job in hardware is a lot
smaller than the slowdown from doing it in software though.

The other problem is that Intel's current implementation uses only 16
bit segment registers, so you can only define 65,536 segments at a
time. This is mostly an accident of history though -- the
segmentation scheme originated with the 286, when none of the
registers was larger than that. The only OS that ever put it to use was
OS/2 1.x, so it's received only the most minimal updates necessary to
allow access to more memory.

OTOH, keep in mind that the 286 had four segment registers and all
the associated hardware for checking segment access. Expanding the
hardware to include (for example) 32 segments of 32 bits apiece
wouldn't be a huge addition to the size of today's chips.

Absent that, C++ provides (but doesn't force you to use) about as
good of checking as most other languages. In particular, things like
std::vector include both operator[] and the at() member function. The
former doesn't require bounds checking, but the latter does.

IMO, the biggest problem is that they should have reversed the two:
the easy, more or less default choice, should also be the safe one.
Bypassing the safety of a bounds check should be what requires longer
code with rather oddball syntax that's easy to spot. In that case,
seeing 'at(x)' during a code inspection would be almost the same
level of red flag as, say, a reinterpret_cast is.

--
Later,
Jerry.
 
Balog Pal
 
      07-20-2009

"Jerry Coffin" <(E-Mail Removed)>

> Specifically, an x86 running protected mode uses segments, and each
> segment has a base and a limit. The problem is that right now, every
> major OS basically ignores the segments -- they're all set up with a
> base of 0 and a limit of 4 GiB (technically there _are_ segments for
> which this isn't true, but the stack, data and code segments are set
> up this way, so most normal uses have access to all memory).
>
> Now, it is true that as currently implemented, segments have some
> shortcomings -- in particular, there are only 6 available segment
> registers, and loading a segment register is a comparatively
> expensive operation, so when you use segments code will usually run a
> bit slower. The slowdown from doing the job in hardware is a lot
> smaller than the slowdown from doing it in software though.
>
> The other problem is that Intel's current implementation uses only 16
> bit segment registers, so you can only define 65,536 segments at a
> time.


The selector's low bits are reserved (two for the requested privilege level
and one to choose between the GDT and LDT), so IIRC across the descriptor
tables you only have 2 x 8k segments. Though there are ways to manipulate
the descriptor tables. OTOH, in practice having 8k segments for a process
looks like quite enough.

> This is mostly an accident of history though -- the
> segmentation scheme originated with the 286, when none of the
> registers was larger than that.


The 386 introduced 32-bit extended registers for all the regular registers;
it could have done the same for the selectors... but that would not have
solved a real-life problem.

> The OS that ever put it to use was OS/2 1.x,


IIRC Xenix used the segmented model too, in both its 286 and 386 versions.
NT uses segments deep in the kernel.



 