Re: Pointer Arithmetic & UB

 
 
Shao Miller
 
      01-16-2013
On 1/16/2013 10:34, Ben Bacarisse wrote:
> Tim Rentsch <(E-Mail Removed)> writes:
>
>> Ben Bacarisse <(E-Mail Removed)> writes:
>>
>>> Tim Rentsch <(E-Mail Removed)> writes:
>>>
>>>> Ben Bacarisse <(E-Mail Removed)> writes:
>>>>
>>>>> Tim Rentsch <(E-Mail Removed)> writes:
>>>>>>
>>>>>> [discussing the "other unknown side effects" related to volatile
>>>>>> objects] They might modify the object being read, or they might
>>>>>> not; they might modify some other scalar object, or they might
>>>>>> not;
>>>>>
>>>>> [Aside: could it? Only if it, too, is volatile I'd have thought.]
>>>>
>>>> The Standard says only that there may be other unknown side
>>>> effects. AFAIK the Standard puts no limitations on what those
>>>> other side effects might be, so they could include changes to other
>>>> objects, even non-volatile ones.
>>>
>>> Footnote 34 on 6.2.4 p2 seems to suggest otherwise, does it not?
>>>
>>> 2. An object exists, has a constant address, and retains
>>> its last-stored value throughout its lifetime.[34]
>>>
>>> 34) In the case of a volatile object, the last store need not be
>>> explicit in the program.
>>>

>>
>> An object declared volatile is allowed to change at any time,
>> with no explicit program actions. That's why this footnote
>> is there.

>
> My point is that it's worded in a very odd way. It might also say:
>
> 34) In the case of a program that declares at least one volatile
> object, the last store on *any* object need not be explicit in the
> program.
>
> Why is the current wording so very conservative? The retention of an
> object's explicitly last-stored value does not apply to any object in a
> program that has at least one volatile, so it seems to be a rather
> measly clarification.
>
>> More significantly, this paragraph (and indeed almost all of the
>> Standard) is concerned only with what an implementation may do (or
>> not do). The consequences of accessing a volatile are outside the
>> control of, and even the knowledge of, the implementation. If
>> there is text in the Standard that is meant to impose limitations
>> outside of the domain of just implementations, that distinction
>> would have to be made apparent in the text. I just don't know of
>> any such cases (or at least I can't remember any).

>
> I don't have any trouble with that interpretation but it sets up a
> tension. The standard doesn't want to restrict what an implementation
> does in the presence of volatile accesses, but it also wants to
> document what is likely to happen with implementations that define more
> normal behaviour for volatile objects. The result is some rather coy
> wording.
>


An observation regarding "restricting" an implementation:

struct s {
    int a;
    int b;
};

void func(void) {
    volatile struct s foo;
    int x;

    foo.a = 0;
    x = foo.a + (foo.b = 42);
}

If the implementation defines (and documents) that a modification to a
member of a volatile-qualified struct involves reading and storing to
the whole struct, then there would appear to be a side effect on the
scalar object 'foo.a' caused by the evaluation of the RHS operand of '+'
that is unsequenced relative to the evaluation of the LHS operand.
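
For comparison, here is a minimal sketch of the analogous hazard with an
ordinary scalar and no volatile anywhere (the function is hypothetical and
only for illustration); the read of 'i' in the left operand is unsequenced
relative to the side effect of the assignment in the right operand, so the
behavior is undefined:

void func2(void) {
    int i = 0;
    int x;

    x = i + (i = 42);   /* undefined: unsequenced read and write of 'i' */
}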

--
- Shao Miller
--
"Thank you for the kind words; those are the kind of words I like to hear.

Cheerily," -- Richard Harter
 
 
 
 
 
Tim Rentsch
 
      01-16-2013
Ben Bacarisse <(E-Mail Removed)> writes:

> Tim Rentsch <(E-Mail Removed)> writes:
>
>> Ben Bacarisse <(E-Mail Removed)> writes:
>>
>>> Tim Rentsch <(E-Mail Removed)> writes:
>>>
>>>> Ben Bacarisse <(E-Mail Removed)> writes:
>>>>
>>>>> Tim Rentsch <(E-Mail Removed)> writes:
>>>>>>
>>>>>> [discussing the "other unknown side effects" related to volatile
>>>>>> objects] They might modify the object being read, or they might
>>>>>> not; they might modify some other scalar object, or they might
>>>>>> not;
>>>>>
>>>>> [Aside: could it? Only if it, too, is volatile I'd have thought.]
>>>>
>>>> The Standard says only that there may be other unknown side
>>>> effects. AFAIK the Standard puts no limitations on what those
>>>> other side effects might be, so they could include changes to other
>>>> objects, even non-volatile ones.
>>>
>>> Footnote 34 on 6.2.4 p2 seems to suggest otherwise, does it not?
>>>
>>> 2. An object exists, has a constant address, and retains
>>> its last-stored value throughout its lifetime.[34]
>>>
>>> 34) In the case of a volatile object, the last store need not be
>>> explicit in the program.
>>>

>>
>> An object declared volatile is allowed to change at any time,
>> with no explicit program actions. That's why this footnote
>> is there.

>
> My point is that it's worded in a very odd way. It might also say:
>
> 34) In the case of a program that declares at least one volatile
> object, the last store on *any* object need not be explicit in the
> program.
>
> Why is the current wording so very conservative? The retention of an
> object's explicitly last-stored value does not apply to any object in a
> program that has at least one volatile, so it seems to be a rather
> measly clarification.


Oh, but that's not true. An object declared volatile may change at
any time no matter what else the program is doing. An object not
declared volatile might "spontaneously" change _only when there is
an access to a volatile-qualified object_. Merely _declaring_ a
volatile object doesn't have any effect on non-volatiles -- only
accessing one.
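
To illustrate the distinction with a minimal sketch (the names are
hypothetical; nothing here is required by the Standard):

volatile int flag;   /* declared volatile: may change at any time        */
int data;            /* not volatile: on this reading, it may appear to  */
                     /* change only across an access to a volatile       */
                     /* object, such as a read of 'flag' below           */

int wait_for_data(void) {
    while (flag == 0)   /* each test of 'flag' is a volatile access */
        ;
    return data;
}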

>> More significantly, this paragraph (and indeed almost all of the
>> Standard) is concerned only with what an implementation may do (or
>> not do). The consequences of accessing a volatile are outside the
>> control of, and even the knowledge of, the implementation. If
>> there is text in the Standard that is meant to impose limitations
>> outside of the domain of just implementations, that distinction
>> would have to be made apparent in the text. I just don't know of
>> any such cases (or at least I can't remember any).

>
> I don't have any trouble with that interpretation but it sets up a
> tension. The standard doesn't want to restrict what an implementation
> does in the presence of volatile accesses, but it also wants to
> document what is likely to happen with implementations that define more
> normal behaviour for volatile objects. The result is some rather coy
> wording.


What happens when a volatile object is accessed is, by definition,
outside of what the Standard knows. An implementation can choose
to define anything it wants, but when accessing a volatile, it must
act in accordance with what the Standard stipulates. Obviously
the Standard can't document something that it explicitly doesn't
know about, and implementations aren't allowed to assume something
(like what side effects are caused by some volatile access) that
the Standard deems outside of what they know. What constitutes
an access to a volatile object is implementation defined, but
what happens when a volatile access occurs is not, and cannot be,
defined by an implementation, because the Standard specifically
allows side effects unknown to the implementation to take place,
and implementations are obliged to proceed under that assumption.
 
 
 
 
 
glen herrmannsfeldt
 
      01-16-2013
Tim Rentsch <(E-Mail Removed)> wrote:

(snip)

> Oh, but that's not true. An object declared volatile may change at
> any time no matter what else the program is doing. An object not
> declared volatile might "spontaneously" change _only when there is
> an access to a volatile-qualified object_. Merely _declaring_ a
> volatile object doesn't have any effect on non-volatiles -- only
> accessing one.


(snip)

> What happens when a volatile object is accessed is, by definition,
> outside of what the Standard knows. An implementation can choose
> to define anything it wants, but when accessing a volatile, it must
> act in accordance with what the Standard stipulates. Obviously
> the Standard can't document something that it explicitly doesn't
> know about,


I agree with that one, though in the case of multithreading,
it seems to me that the standard does allow for some possibilities
for variables to change at unusual times.

> and implementations aren't allowed to assume something
> (like what side effects are caused by some volatile access) that
> the Standard deems outside of what they know.


This is the one I don't agree with. An implementation might,
for example, and as an extension, allow access to I/O registers.
The implementation might, then, document that I/O registers should
be declared volatile, and might even document that no other kinds
of volatile data are supported.

> What constitutes an access to a volatile object is
> implementation defined, but what happens when a volatile
> access occurs is not, and cannot be,


Why can't the implementation define exactly what types
of volatile access it allows?

> defined by an implementation, because the Standard specifically
> allows side effects unknown to the implementation to take place,
> and implementations are obliged to proceed under that assumption.


OK, then, the implementation can, as an extension to the standard,
define which types of volatile access it allows and which ones
it doesn't. That would be especially useful for an implementation
that, as an extension, added ways for variables to change other
than the usual ones.

--------------------

PL/I has the ABNORMAL attribute, similar to "volatile" in that it
allows for a variable to change at unexpected times. But then PL/I
has supported multitasking pretty much from the beginning, which
provides a way for one task to change a variable that another task
can read.

Then again, last time I looked, the compilers ignored the attribute.

-- glen
 
 
Ben Bacarisse
 
      01-16-2013
Tim Rentsch <(E-Mail Removed)> writes:
<snip>
> Here is what I'm left with: I understand what but I don't
> understand why. That is, I think I know what you believe about how
> the phrases should be read, but I don't understand what your
> reasons are for believing that. To me it looks like you're saying,
> "phrase X should be read as <something> because that's what makes
> sense". (I don't mean for this to be an unfair characterization.
> I have taken liberties in the paraphrase only for the sake of
> brevity -- please adjust accordingly.) Furthermore your conclusions
> don't fit with my sense of how the language (ie, the English language)
> is normally read here. For example, suppose we try a parallel
> construction:
>
> "Building a house is a /construction event/, which is a set
> or series of actions that creates a new physical arrangement
> in the environment."
>
> Now, after reading this, would I say that "building a house is a
> construction event on a house"? No, of course I wouldn't. I think
> most people who speak English as a first language would be surprised
> to hear such a (linguistic) construction -- the word "on" just isn't
> right there. Yet this example seems parallel to what you are saying
> (or at least what I understand you to be saying). Assuming I
> understand you, your reasoning is basically linguistic in nature
> (eg, referring to "ordinary meanings"). But I don't see what those
> reasons are. You see my problem? I'm looking for examples of
> similar phrasings that seem natural, and also match what you said
> (ie, what I understood you to say) about how the statement regarding
> "Accessing a volatile object" being a side effect means that there
> is a side effect on that particular object (or for that matter, on
> any object).
>
> If your response now is that this is just what seems linguistically
> natural to you, then there probably isn't much more to say, and I
> will just be left baffled. On the other hand, if you have some
> parallel examples or other supporting linguistic evidence to offer,
> I would very much like to hear some.


Here's another analogy:

"Checking out a book and shredding a book are both /lender actions/,
which are changes in the state of the library."

Now, after reading this, would I say that "checking out a book is a
lender action on a book"? Yes I would.

But I don't think either analogy tells us much. Changing the words
changes the meaning and one can support both sides by picking the right
phrases to substitute.

But that's all irrelevant: I was wrong. The phrase "side effect on an
object" can only refer to modifications. Other side effects (accesses
to volatile objects) that are simply associated with an object can't
reasonably be said to be "on that object"; the effect of a side effect
on an object can't be "none".

--
Ben.
 
 
Richard Damon
 
      01-17-2013
On 1/16/13 2:09 PM, glen herrmannsfeldt wrote:

> This is the one I don't agree with. An implementation might,
> for example, and as an extension, allow access to I/O registers.
> The implementation might, then, document that I/O registers should
> be declared volatile, and might even document that no other kinds
> of volatile data are supported.


I don't think that this would be a conforming implementation. A program
is allowed to take the address of an object of type T, convert that T*
pointer (implicitly) to a volatile T*, and then dereference it with
defined behavior (similar to converting a T* to a const T*).

This implies that the access through a pointer to volatile needs to be
somewhat similar in effect to a non-volatile access. Since, in general,
the implementation can't know, when passed a pointer to volatile, whether
the object pointed to is truly volatile or not, it becomes hard to make
the definition of access dependent on that.
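
A minimal sketch of that conversion (the names are just for illustration):

#include <stdio.h>

int main(void) {
    int x = 0;                /* an ordinary, non-volatile object             */
    volatile int *vp = &x;    /* implicit conversion: int * -> volatile int * */

    *vp = 42;                 /* access through a volatile-qualified lvalue;  */
                              /* it must still behave as a store to x         */
    printf("x = %d\n", *vp);
    return 0;
}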

Also, I don't believe that an implementation can fail to translate
something like

struct foo {
    int i;
    volatile int v;
} myfoo;

...

myfoo.v = 1;

(i.e., I believe a program is allowed to arbitrarily give any object a
volatile qualifier).

What the implementation would be allowed to do (I believe) is to reserve
certain "addresses" (values of pointers) as something special, and only
decode that "specialness" for volatiles. That way normal pointer accesses
would not be slowed down (but would not "work" for these addresses), while
volatile pointer accesses to these special addresses could be converted to
a different instruction that accesses a separate I/O space.
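
For illustration, a minimal sketch of that kind of scheme; the address
below is invented and would be entirely implementation-specific:

/* Hypothetical memory-mapped I/O register at an invented address. */
#define STATUS_REG (*(volatile unsigned int *)0x40001000u)

unsigned int read_status(void) {
    return STATUS_REG;   /* volatile read: the load must actually be performed */
}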
 
 
Tim Rentsch
 
      01-18-2013
Ben Bacarisse <(E-Mail Removed)> writes:

> Tim Rentsch <(E-Mail Removed)> writes:
> <snip>
>> Here is what I'm left with: I understand what but I don't
>> understand why. That is, I think I know what you believe about how
>> the phrases should be read, but I don't understand what your
>> reasons are for believing that. To me it looks like you're saying,
>> "phrase X should be read as <something> because that's what makes
>> sense". (I don't mean for this to be an unfair characterization.
>> I have taken liberties in the paraphrase only for the sake of
>> brevity -- please adjust accordingly.) Furthermore your conclusions
>> don't fit with my sense of how the language (ie, the English language)
>> is normally read here. For example, suppose we try a parallel
>> construction:
>>
>> "Building a house is a /construction event/, which is a set
>> or series of actions that creates a new physical arrangement
>> in the environment."
>>
>> Now, after reading this, would I say that "building a house is a
>> construction event on a house"? No, of course I wouldn't. I think
>> most people who speak English as a first language would be surprised
>> to hear such a (linguistic) construction -- the word "on" just isn't
>> right there. Yet this example seems parallel to what you are saying
>> (or at least what I understand you to be saying). Assuming I
>> understand you, your reasoning is basically linguistic in nature
>> (eg, referring to "ordinary meanings"). But I don't see what those
>> reasons are. You see my problem? I'm looking for examples of
>> similar phrasings that seem natural, and also match what you said
>> (ie, what I understood you to say) about how the statement regarding
>> "Accessing a volatile object" being a side effect means that there
>> is a side effect on that particular object (or for that matter, on
>> any object).
>>
>> If your response now is that this is just what seems linguistically
>> natural to you, then there probably isn't much more to say, and I
>> will just be left baffled. On the other hand, if you have some
>> parallel examples or other supporting linguistic evidence to offer,
>> I would very much like to hear some.

>
> Here's another analogy:
>
> "Checking out a book and shredding a book are both /lender actions/,
> which are changes in the state of the library."
>
> Now, after reading this, would I say that "checking out a book is a
> lender action on a book"? Yes I would.
>
> But I don't think either analogy tells us much. Changing the words
> changes the meaning and one can support both sides by picking the right
> phrases to substitute.


I don't really disagree, but I thought some examples might illuminate
your thinking (this one doesn't, at least not to me). I'm looking
for understanding more than I am for argument.

> But that's all irrelevant: I was wrong. The phrase "side effect on an
> object" can only refer to modifications. Other side effects (accesses
> to volatile objects) that are simply associated with an object can't
> reasonably be said to be "on that object"; the effect of a side effect
> on an object can't be "none".


Well, okay! I didn't understand why you said what you did in the
first place, and now I don't understand what made you switch. But,
no matter, at least there was ultimately agreement...
 
 
Tim Rentsch
 
      01-18-2013
Richard Damon <(E-Mail Removed)> writes:

> [snip] A program
> is allowed to take the address of and object of type T, convert that T*
> pointer (implicitly) to a volatile T*, and then dereference it with
> defined behavior. (similar to converting a T* to a const T*).
>
> This implies that the access through a pointer to volatile needs to be
> somewhat similar in effect as a non-volatile access. Since, in general,
> the implementation can't know when passed a pointer to volatile if the
> object pointer to is truly volatile or not it becomes hard to make the
> definition of access dependent on that.


This idea isn't right. An expression that accesses an object is
subject to volatile-access semantics exactly when the lvalue that
designates the object is volatile-qualified. Whether the object
actually being accessed was declared volatile or not makes no
difference (as far as how this particular access must be performed).
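
A minimal sketch of that point (the names are just for illustration):

int plain;   /* the object itself is not declared volatile */

void touch(void) {
    *(volatile int *)&plain = 1;   /* the lvalue is volatile-qualified, so   */
                                   /* volatile-access semantics apply here   */
    plain = 2;                     /* ordinary lvalue, same object: no       */
                                   /* volatile-access semantics              */
}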

> Also, I don't believe that an implementation can fail to translate
> something like
>
> struct foo {
> int i;
> volatile int v;
> } myfoo;
>
> ...
>
> myfoo.v = 1;
>
>> (i.e., I believe a program is allowed to arbitrarily give any object a
>> volatile qualifier).


If the implementation can prove that the assignment must be
evaluated, then strictly speaking the program doesn't have
to be accepted, because it is not strictly conforming. Not
counting that technicality, though, this is right.

> What the implementation would be allowed to do (I believe) is to reserve
> certain "addresses" (values of pointers) as something special, and only
> decode that "specialness" for volatiles, thus normal pointer access
> would not be slowed down (but not "work" for these addresses, but
> volatile pointer access to these special addresses could be converted to
> a different instruction to access a different I/O space.


Any access arising from a volatile-qualified lvalue must be
performed using volatile-access semantics, whether the address
is "in range" or not. Unless there is an actual declaration for
a variable (aka object) at such an address that includes a
volatile qualifier, the implementation must allow access
through an lvalue that is not volatile-qualified as well as a
volatile-qualified one. Moreover, the non-volatile lvalues must
access the same area of memory as the volatile lvalues.
 
 
Tim Rentsch
 
      01-18-2013
glen herrmannsfeldt <(E-Mail Removed)> writes:

> Tim Rentsch <(E-Mail Removed)> wrote:
> [snip]
>
>> and implementations aren't allowed to assume something (like
>> what side effects are caused by some volatile access) that the
>> Standard deems outside of what they know.

>
> This is the one I don't agree with. An implementation might, for
> example, and as an extension, allow access to I/O registers. The
> implementation might, then, document that I/O registers should be
> declared volatile, and might even document that no other kinds of
> volatile data are supported.


Implementations don't get to make that choice. The volatile
keyword is allowed in declaring/defining any variable, and also
in type names (eg, casts or compound literals). Any expression
doing an access through a volatile-qualified lvalue is obliged to
perform that access under the rules that the Standard gives for
the semantics of such accesses. Implementations don't get to
choose which volatile-qualified accesses are "supported" and
which ones aren't -- the Standard obliges them to perform them
all, and without making any assumptions about what side effects
might take place as a result.
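
For example (a minimal sketch; the names are just for illustration):

void sketch(int x) {
    volatile int v = x;                    /* volatile in an object declaration */
    int y = *(volatile int *)&x;           /* volatile in a cast (type name)     */
    volatile int *p = &(volatile int){x};  /* volatile in a compound literal     */
    (void)v; (void)y; (void)p;             /* silence unused-variable warnings   */
}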

>> What constitutes an access to a volatile object is
>> implementation defined, but what happens when a volatile
>> access occurs is not, and cannot be,

>
> Why can't the implementation define exactly what types of
> volatile access it allows?


Because it isn't consistent with the requirements that all
accesses be performed, and that volatile-qualified accesses be
performed according to how the Standard describes the semantics
for such accesses.

>> defined by an implementation, because the Standard specifically
>> allows side effects unknown to the implementation to take place,
>> and implementations are obliged to proceed under that assumption.

>
> OK, then, the implementation can, as an extension to the standard,
> define which types of volatile access it allows and which ones it
> doesn't. [snip]


That simply isn't consistent with what the Standard requires.
Implementations must carry out volatile-qualified accesses
according to the required semantics; they don't get to choose
which of those might be "allowed" and which aren't. It's like
saying an implementation could define an extension where array
indexing only works for even index values -- it cannot, because
such an implementation fails to meet the requirements that the
Standard otherwise imposes.
 
 
Keith Thompson
 
      01-18-2013
Tim Rentsch <(E-Mail Removed)> writes:
> Richard Damon <(E-Mail Removed)> writes:

[...]
>> Also, I don't believe that an implementation can fail to translate
>> something like
>>
>> struct foo {
>> int i;
>> volatile int v;
>> } myfoo;
>>
>> ...
>>
>> myfoo.v = 1;
>>
>> (i.e., I believe a program is allowed to arbitrarily give any object a
>> volatile qualifier).

>
> If the implementation can prove that the assignment must be
> evaluated, then strictly speaking the program doesn't have
> to be accepted, because it is not strictly conforming. Not
> counting that technicality though this is right.


Are you saying that an implementation is free to reject a program just
because it's not strictly conforming?

4p6 (quoting N1570) does say:

A *conforming hosted implementation* shall accept any strictly
conforming program. A *conforming freestanding implementation*
shall accept any strictly conforming program in which [SNIP]

which could be taken to imply that an implementation needn't accept a
program that's not strictly conforming.

But paragraph 3 says:

A program that is correct in all other aspects, operating on correct
data, containing unspecified behavior shall be a correct program
and act in accordance with 5.1.2.3.

A program can't "act in accordance with 5.1.2.3" if the implementation
doesn't accept it.

A concrete example: I don't believe a conforming hosted implementation
may reject this:

#include <stdio.h>
#include <limits.h>

int main(void) {
    printf("INT_MAX = %d\n", INT_MAX);
    return 0;
}

even though it's not strictly conforming.

--
Keith Thompson (The_Other_Keith) (E-Mail Removed) <http://www.ghoti.net/~kst>
Working, but not speaking, for JetHead Development, Inc.
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
 
 
glen herrmannsfeldt
 
      01-18-2013
Tim Rentsch <(E-Mail Removed)> wrote:
> glen herrmannsfeldt <(E-Mail Removed)> writes:


>> Tim Rentsch <(E-Mail Removed)> wrote:
>> [snip]


>>> and implementations aren't allowed to assume something (like
>>> what side effects are caused by some volatile access) that the
>>> Standard deems outside of what they know.

>
>> This is the one I don't agree with. An implementation might, for
>> example, and as an extension, allow access to I/O registers. The
>> implementation might, then, document that I/O registers should be
>> declared volatile, and might even document that no other kinds of
>> volatile data are supported.


> Implementations don't get to make that choice. The volatile
> keyword is allowed in declaring/defining any variable, and also
> in type names (eg, casts or compound literals). Any expression
> doing an access through a volatile-qualified lvalue is obliged to
> perform that access under the rules that the Standard gives for
> the semantics of such accesses. Implementations don't get to
> choose which volatile-qualified accesses are "supported" and
> which ones aren't -- the Standard obliges them to perform them
> all, and without making any assumptions about what side effects
> might take place as a result.


I don't say this very often, but that is just dumb.

Except for the case of multithreading, which has been pretty
much left out of the discussion, only the implementation knows
which possible accesses need to be protected against, and how
to protect against them.

Someone whose name I forget used to post in comp.lang.fortran
on how useless the C rules were for volatile, but I never
understood why they were so useless.

Now, on many processors with memory mapped I/O registers,
it is simple to access one by dereferencing a pointer
to a specific address. Volatile pointers and pointers
(that aren't volatile) to volatile locations also have
been pretty much left out of the discussion.

>>> What constitutes an access to a volatile object is
>>> implementation defined, but what happens when a volatile
>>> access occurs is not, and cannot be,


>> Why can't the implementation define exactly what types of
>> volatile access it allows?


> Because it isn't consistent with the requirements that all
> accesses be performed, and volatile-qualified accesses be
> performed according to how the Standard describes the semantics
> for such accesses.


There are way too many different ways to access memory, most
of which only the implementation can know about.

OK, say a program has:

    volatile int x = 1;
    while (x) ;

waiting until someone, somewhere, changes x, and, as required, the
implementation makes the appropriate accesses and, among other things,
doesn't optimize the loop away. Now, say that the implementation has a
cache such that, even though memory fetch instructions are executed,
they never actually reach real memory. Maybe the user needs to go to
the BIOS setup and turn off caching. Yet you say that the standard
understands how to do that?

>>> defined by an implementation, because the Standard specifically
>>> allows side effects unknown to the implementation to take place,
>>> and implementations are obliged to proceed under that assumption.


>> OK, then, the implementation can, as an extension to the standard,
>> define which types of volatile access it allow and which ones it
>> doesn't. [snip]


> That simply isn't consistent with what the Standard requires.
> Implementations must carry out volatile-qualified accesses
> according to the required semantics; they don't get to choose
> which of those might be "allowed" and which aren't. It's like
> saying an implementation could define an extension where array
> indexing only works for even index values -- it cannot, because
> such an implementation fails to meet the requirements that the
> Standard otherwise imposes.


Probably a bad example because, with alignment restrictions,
implementations can do just that. The implementation has to
add the appropriate padding, as required.

Fortran now has the ASYNCHRONOUS attribute, and ASYNCHRONOUS I/O
statements. That is another way that variables can change at
unexpected times. I know I used asynchronous I/O in SunOS C
many years ago, but it wasn't in the standard, and may still
not be.

I previously wrote about the PL/I ABNORMAL attribute. PL/I is
interesting as, pretty much, the whole language was defined
before any implementations were written. They had to anticipate
what could and couldn't be implemented, and ABNORMAL has a
pretty simple definition. But it is then up to the implementation
to figure out what can actually happen, and what to do about it.

-- glen
 