Operator Overloading: Assume rational use

Tomás Ó hÉilidhe
06-07-2008
Operator overloads are just like any other member function, you
can make them do whatever you want. However, of course, we might
expect them to behave in a certain way. The ++ operator should perform
some sort of increment, the / operator should do something along the
lines of division.

Do you think it would have been worthwhile for the C++ Standard to
"codify" this expected use of operator overloads? I'll be specific:

Let's say you overload the ++ operator, both the pre and the post
form. In our class, they both behave as expected: The pre gives you
the new value, the post gives you the old value. In our particular
implementation, the pre version is much more efficient than the post
version because the post version involves the creation of a temporary
(and let's say our class object is quite expensive to construct).

Let's say we have code such as the following:

for (OurClass obj = 0; obj < 77; obj++) DoSomething();

When the compiler looks at this, it can see straight away that the
result of the incrementation is discarded. If it has some sort of
"codified expected usage" of operator overloads, it could invoke
"++obj" instead.

Similarly, if you had a function such as:

ClassObject Func(ClassObject const arg)
{
    return arg * 7 - 2;
}

it could treat it as:

ClassObject Func(ClassObject arg)
{
    arg *= 7;
    arg -= 2;
    return arg;
}

thus getting rid of temporary objects.

If there were a "codified expected usage" then I don't think it
would be too far removed from the current situation we have with
constructor elision. With constructor elision, the compiler just
assumes that the creation of a temporary object won't result in
something important happening, like a rocket being sent to the moon, so
it just gets rid of the temporary. For it to have this "way of
thinking" though, the Standard basically had to say "well, constructors
aren't supposed to do something important outside of the object". This
wouldn't be very different at all from saying "well, the pre-increment
should be identical to the post-increment if the result is
discarded".

The net result of this is that code could be written more naturally;
for instance take the following function:

int Func(int const i)
{
    return i * 7 - 2;
}

If we introduce a class object, it could be left as:

OurClass Func(OurClass const i)
{
    return i * 7 - 2;
}

instead of having to change it to:

OurClass Func(OurClass i)
{
    i *= 7;
    i -= 2;
    return i;
}

But then again, even if the Standard did have some sort of expected
usage of operator overloads, there would probably still be people who
wouldn't trust the compiler to do the right thing.



 
Rolf Magnus
      06-07-2008
Tomás Ó hÉilidhe wrote:

> Operator overloads are just like any other member function, you
> can make them do whatever you want. However, of course, we might
> expect them to behave in a certain way. The ++ operator should perform
> some sort of increment, the / operator should do something along the
> lines of division.


Yes. However, there are cases where this rule is ignored. Think about the
bit shift operators that are used for stream I/O in the standard library.
Boost has a lot of operator abuse too (look at Spirit).
One could even consider operator+ for strings as misuse of operator
overloading, since a concatenation isn't really the same as an addition.

> Let's say we have code such as the following:
>
> for (OurClass obj = 0; obj < 77; obj++) DoSomething();
>
> When the compiler looks at this, it can see straight away that the
> result of the incrementation is discarded. If it has some sort of
> "codified expected usage" of operator overloads, it could invoke
> "++obj" instead.


If the operator ++ can be inlined, the compiler might be able to optimize
the copied object away. But I see your point. I think it wouldn't be a good
idea to let the compiler make assumptions about what the overloaded
operators do.

> Similarly, if you had a function such as:
>
> ClassObject Func(ClassObject const arg)
> {
>     return arg * 7 - 2;
> }
>
> it could treat it as:
>
> ClassObject Func(ClassObject arg)
> {
>     arg *= 7;
>     arg -= 2;
>     return arg;
> }
>
> thus getting rid of temporary objects.


Now consider your ClassObject to be a matrix, and instead of 7, you multiply
it by another matrix. Such a multiplication needs a temporary anyway, so
you might choose to implement operator*= by using operator* instead of the
other way round. So in some cases, such a transformation might actually
_add_ another temporary object.
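
A rough sketch of that situation (Matrix here is a made-up square-matrix
class, not anything from the thread):

#include <cstddef>
#include <vector>

// Hypothetical square matrix; the product of two matrices cannot be
// computed in place, so operator*= is the one written in terms of
// operator*, not the other way round.
struct Matrix
{
    std::size_t n;
    std::vector<double> a;                      // row-major, n*n elements
    explicit Matrix(std::size_t size) : n(size), a(size * size, 0.0) {}
    double &at(std::size_t r, std::size_t c) { return a[r * n + c]; }
    double at(std::size_t r, std::size_t c) const { return a[r * n + c]; }
};

Matrix operator*(Matrix const &x, Matrix const &y)
{
    Matrix result(x.n);                         // this temporary is unavoidable:
    for (std::size_t i = 0; i != x.n; ++i)      // each result element reads a whole
        for (std::size_t j = 0; j != x.n; ++j)  // row of x and column of y, so x
            for (std::size_t k = 0; k != x.n; ++k)  // cannot be overwritten in place
                result.at(i, j) += x.at(i, k) * y.at(k, j);
    return result;
}

Matrix &operator*=(Matrix &x, Matrix const &y)
{
    x = x * y;                                  // *= delegates to *, so "a *= b" is no cheaper
    return x;
}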

What I could imagine is that there could be a way for the programmer to
explicitly request such transformations to happen. But I don't think that
it's a good idea to do it implicitly.

 
James Kanze
      06-07-2008
On Jun 7, 9:53 am, Tomás Ó hÉilidhe <(E-Mail Removed)> wrote:
> Operator overloads are just like any other member function,
> you can make them do whatever you want. However, of course, we
> might expect them to behave in a certain way. The ++ operator
> should perform some sort of increment, the / operator should
> do something along the lines of division.


And of course, + and * are commutative, whereas - and / aren't.

> Do you think it would have been worthwhile for the C++
> Standard to "codify" this expected use of operator overloads?


I think this was rejected in the early days of C++. I'm not
sure I agree with this, but it's far too late to change it now.
It's even violated regularly in the standard: think of operator+
on strings, for example, and there are examples in mathematics
where operator* wouldn't be commutative either.

The rejection was complete; I think it arguable that there are
two different cases: one concerning such "external" rules, and
another concerning internal rules, e.g. the relationship between
+ and +=, or between prefix and postfix ++.

> I'll be specific:


> Let's say you overload the ++ operator, both the pre and the
> post form. In our class, they both behave as expected: The pre
> gives you the new value, the post gives you the old value. In
> our particular implementation, the pre version is much more
> efficient that the post version because the post version
> involves the creation of a temporary (and let's say our class
> object is quite expensive to construct).


> Let's say we have code such as the following:


> for (OurClass obj = 0; obj < 77; obj++) DoSomething();


> When the compiler looks at this, it can see straight away that
> the result of the incrementation is discarded. If it has some
> sort of "codified expected usage" of operator overloads, it
> could invoke "++obj" instead.


If the functions involved are all inline, it can skip the
construction of the extra object anyway. And if they aren't,
and can't reasonably be made inline, then it is probable that
skipping the copy won't make a measurable difference anyway.
(It's easy to invent perverse cases where it would, but they
don't occur in real code.)

The important difference would be applying the rules of
associativity, commutativity, and why not, distributivity (should
the compiler also assume that addition is cheaper than
multiplication?). Even more useful, IMHO, would be if the
compiler would automatically generate +=, given + and a copy
constructor, or vice versa. (In practice, today, all you have
to do is derive from an appropriate class template for this, so
it probably isn't worth it. But it certainly would have been
back before we had templates.)
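
For reference, the class-template trick looks roughly like this
(Boost.Operators does essentially the same thing; Addable and Money are
made-up names for the sketch):

// Given operator+= and a copy constructor in the derived class, the base
// template supplies operator+ automatically (the Barton-Nackman trick).
template <class Derived>
struct Addable
{
    friend Derived operator+(Derived lhs, Derived const &rhs)
    {
        lhs += rhs;         // uses only Derived's += and its copy constructor
        return lhs;
    }
};

// Hypothetical value type: only += is written by hand.
struct Money : Addable<Money>
{
    long cents;
    explicit Money(long c) : cents(c) {}
    Money &operator+=(Money const &rhs) { cents += rhs.cents; return *this; }
};

// Usage: Money(100) + Money(250) works without a hand-written operator+.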

> Similarly, if you had a function such as:


> ClassObject Func(ClassObject const arg)
> {
>     return arg * 7 - 2;
> }


> it could treat it as:


> ClassObject Func(ClassObject arg)
> {
>     arg *= 7;
>     arg -= 2;
>     return arg;
> }


> thus getting rid of temporary objects.


Well, you certainly wouldn't want the compiler to do this as an
"optimizing" measure, if you had written both functions, since
it's not clear which version will be faster. (Of course, if the
compiler can see enough of the functions to know which one will
be faster, it can do this transformation today, under the "as
if" rule.)

> If there were a "codified expected usage" then I don't think it
> would be too far removed from the current situation we have
> with constructor elision.


Agree. Constructor elision is precisely an example of this.

--
James Kanze (GABI Software) email:(E-Mail Removed)
Conseils en informatique orientée objet/
Beratung in objektorientierter Datenverarbeitung
9 place Sémard, 78210 St.-Cyr-l'École, France, +33 (0)1 30 23 00 34
 
Erik Wikström
      06-07-2008
On 2008-06-07 09:53, Tomás Ó hÉilidhe wrote:
> Operator overloads are just like any other member function, you
> can make them do whatever you want. However, of course, we might
> expect them to behave in a certain way. The ++ operator should perform
> some sort of increment, the / operator should do something along the
> lines of division.
>
> [snip]
>
> Similarly, if you had a function such as:
>
> ClassObject Func(ClassObject const arg)
> {
>     return arg * 7 - 2;
> }
>
> it could treat it as:
>
> ClassObject Func(ClassObject arg)
> {
>     arg *= 7;
>     arg -= 2;
>     return arg;
> }
>
> thus getting rid of temporary objects.
>
> [snip]
>
> But then again, even if the Standard did have some sort of expected
> usage of operator overloads, there would probably still be people who
> wouldn't trust the compiler to do the right thing.


You might want to look into expression templates if the performance hit
of using arg * 7 - 2 is too great (or educate your users).
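
A bare-bones illustration of the expression-template idea (Vec, Expr,
Scale and Shift are all made up for this sketch; real libraries are far
more elaborate): the operators return lightweight expression objects, and
nothing is evaluated until the result is assigned, so an expression like
v * 7.0 - 2.0 builds no intermediate container.

#include <cstddef>
#include <vector>

struct Scale { double k; double operator()(double x) const { return x * k; } };
struct Shift { double k; double operator()(double x) const { return x + k; } };

// Lazy node: applies op to lhs[i] on demand, stores no elements itself.
template <class L, class Op>
struct Expr
{
    L const &lhs;
    Op op;
    double operator[](std::size_t i) const { return op(lhs[i]); }
    std::size_t size() const { return lhs.size(); }
};

struct Vec
{
    std::vector<double> data;
    explicit Vec(std::size_t n, double v = 0.0) : data(n, v) {}
    double operator[](std::size_t i) const { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning from an expression evaluates the whole chain in one pass.
    template <class L, class Op>
    Vec &operator=(Expr<L, Op> const &e)
    {
        for (std::size_t i = 0; i != data.size(); ++i)
            data[i] = e[i];
        return *this;
    }
};

// v * 7 and expr - 2 only wrap their operands; no element is touched yet.
template <class L>
Expr<L, Scale> operator*(L const &v, double k) { return Expr<L, Scale>{v, Scale{k}}; }

template <class L>
Expr<L, Shift> operator-(L const &v, double k) { return Expr<L, Shift>{v, Shift{-k}}; }

int main()
{
    Vec v(1000, 1.0), result(1000);
    result = v * 7.0 - 2.0;     // one loop over the data, no temporary Vec
}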

I do not think that it would be a good idea to have some kind of
expected behaviour in the standard since it would change the semantics
of the language and there might be cases where those expectations are
not true.

--
Erik Wikström
 
Kai-Uwe Bux
      06-07-2008
James Kanze wrote:

> On Jun 7, 9:53 am, Tomás Ó hÉilidhe <(E-Mail Removed)> wrote:
>> Operator overloads are just like any other member function,
>> you can make them do whatever you want. However, of course, we
>> might expect them to behave in a certain way. The ++ operator
>> should perform some sort of increment, the / operator should
>> do something along the lines of division.

>
> And of course, + and * are commutative, whereas - and / aren't.


That would rule out many reasonable uses of *, such as matrix multiplication
or multiplication in other groups, as well as the use of + for string
concatenation.



>> Do you think it would have been worthwhile for the C++
>> Standard to "codify" this expected use of operator overloads?

>
> I think this was rejected in the early days of C++. I'm not
> sure I agree with this, but it's far too late to change it now.
> It's even violated regularly in the standard: think of operator+
> on strings, for example, and there are examples in mathematics
> where operator* wouldn't be commutative either.


Exactly. I would really deplore a language that allows operator overloading
but does not acknowledge the possibility of non-commutative multiplication.

I guess it all comes down to your attitude toward operator overloading in
general. I see two main possible operator coding styles: (a) have your
operators mimic the built-in versions so that a person with C++ knowledge
will understand the code easily, or (b) try to make client code look
similar to the formulas in textbooks about the problem domain so that a
person with background knowledge can understand the code easily. I usually
follow (b), and in that case, * is clearly to be used for matrix
multiplication (since we cannot overload whitespace). But I do see that
those coding guidelines should be local and do not generalize from one
place to another. Therefore, I think the standard made the right decision
not to legislate style.


> The rejection was complete; I think it arguable that there are
> two different cases: one concerning such "external" rules, and
> another concerning internal rules, e.g. the relationship between
> + and +=, or between prefix and postfix ++.
>
>> I'll be specific:

>
>> Let's say you overload the ++ operator, both the pre and the
>> post form. In our class, they both behave as expected: The pre
>> gives you the new value, the post gives you the old value. In
>> our particular implementation, the pre version is much more
>> efficient that the post version because the post version
>> involves the creation of a temporary (and let's say our class
>> object is quite expensive to construct).

>
>> Let's say we have code such as the following:

>
>> for (OurClass obj = 0; obj < 77; obj++) DoSomething();

>
>> When the compiler looks at this, it can see straight away that
>> the result of the incrementation is discarded. If it has some
>> sort of "codified expected usage" of operator overloads, it
>> could invoke "++obj" instead.

>
> If the functions involved are all inline, it can skip the
> construction of the extra object anyway. And if they aren't,
> and can't reasonably be made inline, then it is probable that
> skipping the copy won't make a measurable difference anyway.
> (It's easy to invent perverse cases where it would, but they
> don't occur in real code.)
>
> The important difference would be applying the rules of
> associativity, commutativity, and why not, distributivity (should
> the compiler also assume that addition is cheaper than
> multiplication?).


Such rules cannot be applied by the compiler even for signed integral types,
as intermediate results could differ, which in case of overflow might turn
defined behavior into undefined behavior (if you are on a platform where
signed overflow really causes trouble, that is). Similarly for floating
point arithmetic: some evaluation path might yield NaN whereas a
mathematically equivalent expression might yield 1.0.

For better or worse, arithmetic on computers simply does not obey the usual
mathematical laws; and pretending it does is a surefire method to get into
trouble.
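
For instance (a tiny sketch, nothing domain-specific): floating-point
addition is not associative, so a compiler that silently regrouped
operands would change the result.

#include <cstdio>

int main()
{
    double a = 1e20, b = -1e20, c = 1.0;
    double left  = (a + b) + c;     // (1e20 - 1e20) + 1  ->  1.0
    double right = a + (b + c);     // 1e20 + (-1e20)     ->  0.0, the 1.0 is absorbed
    std::printf("%g vs %g\n", left, right);   // prints "1 vs 0"
}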


[snip]


Best

Kai-Uwe Bux
 