Velocity Reviews - Computer Hardware Reviews


relative speed of incrementation syntaxes (or "i=i+1" vs "i+=1")

 
 
Laurent (Guest), 08-21-2011
Hi Folks,

I was arguing with a guy who was sure that incrementing a variable i with "i += 1" is faster than "i = i + 1". I couldn't tell whether he was right or wrong, so I did a little benchmark with the very useful timeit module.
Here are the results on my little Linux Eeepc Netbook (using Python 3.2):


Computing, please wait...

Results for 1000000 times "i = i + 1":
0.37591004371643066
0.3827171325683594
0.37238597869873047
0.37305116653442383
0.3725881576538086
0.37294602394104004
0.3712761402130127
0.37357497215270996
0.371567964553833
0.37359118461608887
Total 3.7396 seconds.

Results for 1000000 times "i += 1":
0.3821070194244385
0.3802030086517334
0.3828878402709961
0.3823058605194092
0.3801591396331787
0.38340115547180176
0.3795340061187744
0.38153910636901855
0.3835160732269287
0.381864070892334
Total 3.8175 seconds.

==> "i = i + 1" is 2.08% faster than "i += 1".



I did many tests and "i = i + 1" always seems to be around 2% faster than "i += 1". This is no surprise, as the += notation seems to be a syntactic sugar layer that has to be converted to i = i + 1 anyway. Am I wrong in my interpretation?

Btw here's the trivial Python 3.2 script I made for this benchmark:


import timeit

r = 10
n = 1000000

s1 = "i = i + 1"
s2 = "i += 1"

t1 = timeit.Timer(stmt=s1, setup="i = 0")
t2 = timeit.Timer(stmt=s2, setup="i = 0")

print("Computing, please wait...")

results1 = t1.repeat(repeat=r, number=n)
results2 = t2.repeat(repeat=r, number=n)

print('\nResults for {} times "{}":'.format(n, s1))
sum1 = 0
for result in results1:
    print(result)
    sum1 += result
print("Total {:.5} seconds.".format(sum1))

print('\nResults for {} times "{}":'.format(n, s2))
sum2 = 0
for result in results2:
    print(result)
    sum2 += result
print("Total {:.5} seconds.".format(sum2))

print('\n==> "{}" is {:.3}% faster than "{}".'.format(s1, (sum2 / sum1) * 100 - 100, s2))



Comments are welcome...
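[A side note on methodology: the timeit documentation recommends taking the min() of the repeat results rather than their sum, since the minimum is the run least disturbed by other system activity. A minimal variant of the benchmark above (same statements, my own variable names):]

```python
import timeit

r, n = 10, 1000000

# Best (fastest) of r runs of n executions each, for each statement.
best1 = min(timeit.Timer(stmt="i = i + 1", setup="i = 0").repeat(repeat=r, number=n))
best2 = min(timeit.Timer(stmt="i += 1", setup="i = 0").repeat(repeat=r, number=n))

print('"i = i + 1": {:.4f} s for {} loops'.format(best1, n))
print('"i += 1"   : {:.4f} s for {} loops'.format(best2, n))
```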

woooee (Guest), 08-21-2011
> as the += notation seems to be a syntactic sugar layer that has to be
> converted to i = i + 1 anyway.

That has always been my understanding. For strings, the faster way is to
append to a list and join at the end, as repeated concatenation requires
the original string, an intermediate block of memory, and the memory for
the final string:
x_list.append(value)
to_string = "".join(x_list)
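[The list-and-join idiom described above, next to naive concatenation, as a toy sketch (illustrative names, not a benchmark):]

```python
# Naive approach: repeated += on a string may reallocate and copy each time.
concatenated = ""
for word in ["to", "be", "or", "not"]:
    concatenated += word

# Idiom from the post: accumulate pieces in a list, join once at the end.
x_list = []
for word in ["to", "be", "or", "not"]:
    x_list.append(word)
to_string = "".join(x_list)

print(concatenated == to_string)  # same result, different allocation pattern
```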
 
Laurent (Guest), 08-21-2011
Well, I agree with you about string concatenation, but here I'm talking about integer incrementation...
Irmen de Jong (Guest), 08-21-2011
On 21-08-11 19:03, Laurent wrote:
> Well I agree with you about string concatenation, but here I'm talking about integers incrementation...


Seems the two forms are not 100% identical:

>>> import dis
>>> def f1(x):
...     x = x + 1
...
>>> def f2(x):
...     x += 1
...
>>> dis.dis(f1)
  2           0 LOAD_FAST                0 (x)
              3 LOAD_CONST               1 (1)
              6 BINARY_ADD
              7 STORE_FAST               0 (x)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(f2)
  2           0 LOAD_FAST                0 (x)
              3 LOAD_CONST               1 (1)
              6 INPLACE_ADD
              7 STORE_FAST               0 (x)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>>



What the precise difference (semantics and speed) is between the
BINARY_ADD and INPLACE_ADD opcodes, I dunno. Look in the Python source
code, or maybe someone knows it from memory.

Irmen
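[For what it's worth, the same comparison can be made programmatically with `dis.get_instructions` (available since Python 3.4). Note that on 3.11+ both forms compile to a single BINARY_OP opcode and differ only in its argument, so comparing (opname, argrepr) pairs is the version-robust check; the helper below is my own sketch:]

```python
import dis

def f1(x):
    x = x + 1

def f2(x):
    x += 1

def opcode_signature(func):
    """Return (opname, argrepr) pairs for a function's bytecode.
    These differ between the two forms: BINARY_ADD vs INPLACE_ADD on
    older Pythons, BINARY_OP with '+' vs '+=' on 3.11 and later."""
    return [(ins.opname, ins.argrepr) for ins in dis.get_instructions(func)]

sig1 = opcode_signature(f1)
sig2 = opcode_signature(f2)
print(sig1 != sig2)  # the two forms compile to different bytecode
```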

Laurent (Guest), 08-21-2011
Thanks for these explanations. So 2% speed difference just between "B..." and "I..." entries in a huge alphabetically-ordered switch case? Wow. Maybe there is some material for speed enhancement here...
Hans Mulder (Guest), 08-21-2011
On 21/08/11 19:14:19, Irmen de Jong wrote:

> What the precise difference (semantics and speed) is between the
> BINARY_ADD and INPLACE_ADD opcodes, I dunno. Look in the Python source
> code or maybe someone knows it from memory


There is a clear difference in semantics: BINARY_ADD always produces
a new object, INPLACE_ADD may modify its left-hand operand in situ
(if it's mutable).

Integers are immutable, so for integers the semantics are the same,
but for lists, for example, the two are different:

>>> x = [2, 3, 5, 7]
>>> y = [11, 13]
>>> x + y
[2, 3, 5, 7, 11, 13]
>>> x
[2, 3, 5, 7]             # x still has its original value
>>> x += y
>>> x
[2, 3, 5, 7, 11, 13]     # x is now modified
>>>


For integers, I would not expect a measurable difference in speed.

Hope this helps,

-- HansM
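[The aliasing consequence of that difference is worth spelling out; a tiny sketch:]

```python
# += mutates the list object in place, so every name bound to it sees the change.
x = [2, 3, 5, 7]
also_x = x
x += [11, 13]
print(also_x)      # [2, 3, 5, 7, 11, 13] -- the alias changed too

# x = x + y builds a brand-new list and rebinds x; the alias keeps the old list.
x = [2, 3, 5, 7]
also_x = x
x = x + [11, 13]
print(also_x)      # [2, 3, 5, 7] -- the alias still holds the original
```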

Laurent (Guest), 08-21-2011
Well, it's 2% more time after 1 million iterations, so you're right, I won't consider it.
Roy Smith (Guest), 08-21-2011
In article <(E-Mail Removed)>,
Christian Heimes <(E-Mail Removed)> wrote:

> On 21.08.2011 19:27, Andreas Löscher wrote:
> > As for using integers, the conditions (lines 1319 and 1535) are true and
> > there is no difference in the code. However, Python uses a huge switch-case
> > construct to execute its opcodes, and INPLACE_ADD comes after
> > BINARY_ADD, hence the difference in speed.
>
> I don't think that's the reason. Modern compilers turn a switch statement
> into a jump or branch table rather than a linear search like chained
> elif statements.


This is true even for very small values of "modern". I remember the
Unix v6 C compiler (circa 1977) was able to do this.
Terry Reedy (Guest), 08-21-2011
On 8/21/2011 1:27 PM, Andreas Löscher wrote:
>> What the precise difference (semantics and speed) is between the
>> BINARY_ADD and INPLACE_ADD opcodes, I dunno. Look in the Python source
>> code or maybe someone knows it from memory
>>
>> Irmen
>>

> from Python/ceval.c:
>
> 1316 case BINARY_ADD:
> 1317     w = POP();
> 1318     v = TOP();
> 1319     if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
> 1320         /* INLINE: int + int */
> 1321         register long a, b, i;
> 1322         a = PyInt_AS_LONG(v);
> 1323         b = PyInt_AS_LONG(w);
> 1324         /* cast to avoid undefined behaviour
> 1325            on overflow */
> 1326         i = (long)((unsigned long)a + b);
> 1327         if ((i^a) < 0 && (i^b) < 0)
> 1328             goto slow_add;
> 1329         x = PyInt_FromLong(i);
> 1330     }
> 1331     else if (PyString_CheckExact(v) &&
> 1332              PyString_CheckExact(w)) {
> 1333         x = string_concatenate(v, w, f, next_instr);
> 1334         /* string_concatenate consumed the ref to v */
> 1335         goto skip_decref_vx;
> 1336     }
> 1337     else {
> 1338     slow_add:
> 1339         x = PyNumber_Add(v, w);
> 1340     }
> 1341     Py_DECREF(v);
> 1342 skip_decref_vx:
> 1343     Py_DECREF(w);
> 1344     SET_TOP(x);
> 1345     if (x != NULL) continue;
> 1346     break;
>
> 1532 case INPLACE_ADD:
> 1533     w = POP();
> 1534     v = TOP();
> 1535     if (PyInt_CheckExact(v) && PyInt_CheckExact(w)) {
> 1536         /* INLINE: int + int */
> 1537         register long a, b, i;
> 1538         a = PyInt_AS_LONG(v);
> 1539         b = PyInt_AS_LONG(w);
> 1540         i = a + b;
> 1541         if ((i^a) < 0 && (i^b) < 0)
> 1542             goto slow_iadd;
> 1543         x = PyInt_FromLong(i);
> 1544     }
> 1545     else if (PyString_CheckExact(v) &&
> 1546              PyString_CheckExact(w)) {
> 1547         x = string_concatenate(v, w, f, next_instr);
> 1548         /* string_concatenate consumed the ref to v */
> 1549         goto skip_decref_v;
> 1550     }
> 1551     else {
> 1552     slow_iadd:
> 1553         x = PyNumber_InPlaceAdd(v, w);
> 1554     }
> 1555     Py_DECREF(v);
> 1556 skip_decref_v:
> 1557     Py_DECREF(w);
> 1558     SET_TOP(x);
> 1559     if (x != NULL) continue;
> 1560     break;
>
> As for using integers, the conditions (lines 1319 and 1535) are true and
> there is no difference in the code. However, Python uses a huge switch-case
> construct to execute its opcodes, and INPLACE_ADD comes after
> BINARY_ADD, hence the difference in speed.
>
> To be clear, this is nothing you should consider when writing fast code.
> Complexity-wise they both are the same.


With 64-bit 3.2.2 on my Win 7 Pentium, the difference was 4%, and with
floats (0.0 and 1.0), 6%.

--
Terry Jan Reedy
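[The overflow guard in the quoted C code, `if ((i^a) < 0 && (i^b) < 0) goto slow_add;`, tests whether the sum's sign differs from the sign of both operands, which can only happen when the signed addition wrapped. Since Python ints are arbitrary precision, the sketch below simulates 64-bit two's-complement arithmetic to illustrate the same idea (the helper names are mine, not CPython's):]

```python
BITS = 64
MASK = (1 << BITS) - 1

def to_signed(v):
    """Interpret the low 64 bits of v as a two's-complement signed integer."""
    v &= MASK
    return v - (1 << BITS) if v >> (BITS - 1) else v

def add_overflows(a, b):
    """Mirror of the ceval.c check: the wrapped sum's sign differs
    from the signs of both operands exactly when overflow occurred."""
    i = to_signed(a + b)          # wrapped 64-bit sum
    return (i ^ a) < 0 and (i ^ b) < 0

print(add_overflows(2**62, 2**62))   # positive + positive wraps negative
print(add_overflows(1, 2))           # small sum, no overflow
```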
Laurent (Guest), 08-21-2011

> With 64-bit 3.2.2 on my Win 7 Pentium, the difference was 4%, and with
> floats (0.0 and 1.0), 6%.


For floats it is understandable. But for integers, seriously, 4% is a lot. I would never have thought an interpreter would show differences like this between syntaxes for something as fundamental as adding 1.