Velocity Reviews - Computer Hardware Reviews

Parallel Assignments and Elegance/Complexity Ratio.

 
 
Charles Oliver Nutter
      01-14-2011
On Wed, Jan 12, 2011 at 10:44 AM, Colin Bartlett
<(E-Mail Removed)> wrote:
> First, whenever I've benchmarked parallel assignment against
> individual assignment, I've found the parallel assignment somewhat
> slower. The swap is more elegant, but even that seems slower.


Parallel assignment is generally slower than straight-up assignment in
1.9 and JRuby because it stands up a full Ruby Array for the RHS and
result of the entire assignment expression:

~/projects/jruby ➔ jruby -e "p((a, b, c = 1, 2, 3))"
[1, 2, 3]

As you would expect this is a significant cost compared to just
assigning the values directly. JRuby can improve this when it knows
that the assignment is not being used as an expression:

~/projects/jruby ➔ jruby -rbenchmark -e "2.times{ puts Benchmark.measure { 10000000.times {a,b,c=1,2,3} } }"
1.924000 0.000000 1.924000 ( 1.812000)
1.810000 0.000000 1.810000 ( 1.810000)

~/projects/jruby ➔ jruby -rbenchmark -e "2.times{ puts Benchmark.measure { 10000000.times {a,b,c=1,2,3; nil} } }"
1.073000 0.000000 1.073000 ( 0.959000)
0.884000 0.000000 0.884000 ( 0.884000)

There are also some implementations that "cheat" (I mean that in the
nicest way possible) and don't bother producing that array return
value at all, and they perform much better on parallel assignment as a
result.
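To see the Array that has to be materialized, you can use the assignment itself as an expression. This is plain Ruby, nothing implementation-specific:

```ruby
# A parenthesized parallel assignment is an expression whose value is
# a full Array of the right-hand side, so that Array must be built
# whenever the value is actually consumed.
result = (a, b, c = 1, 2, 3)

p result        # => [1, 2, 3]
p result.class  # => Array

# If the expression value is discarded (e.g. the assignment is followed
# by another statement), an implementation is free to skip building it.
a, b, c = 1, 2, 3
nil
```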

- Charlie

 
Colin Bartlett
      01-14-2011
On Fri, Jan 14, 2011 at 4:41 PM, Charles Oliver Nutter
<(E-Mail Removed)> wrote:
> Parallel assignment is generally slower than straight-up assignment in
> 1.9 and JRuby because it stands up a full Ruby Array for the RHS and
> result of the entire assignment expression:


I'd assumed it was something like that. Thanks for the explanation.

> As you would expect this is a significant cost compared to just
> assigning the values directly. JRuby can improve this when it knows
> that the assignment is not being used as an expression:
> ...
> There are also some implementations that "cheat" (I mean that in the
> nicest way possible)


I don't know if you've heard of the English magician Paul Daniels, but
I'm very fond of a phrase he uses to contrast himself with some people
who are, let us say, not self-admitted magicians. "I cheat, but I
cheat honestly".

> and don't bother producing that array return value at all,
> and they perform much better on parallel assignment as a result.


Cheating honestly, I think!

That's interesting, and the "cheating" idea hadn't occurred to me. From
the benchmarks below (similar to those you quoted; I did actually run
the benchmarks twice, but the runs were very similar, so to avoid
clutter I've only given one run) it seems MRI 1.9.1 is also cheating
honestly if it can.

Now the only thing that's puzzling me is why the MRI 1.9.1 "cheating
honestly" version of parallel assignment seems to be slightly but
clearly faster than the MRI 1.9.1 single assignment!

require "benchmark"
kt = 10_000_000
nn = 1
nn.times{ puts Benchmark.measure { kt.times {a,b,c=1,2,3 } } }
nn.times{ puts Benchmark.measure { kt.times {a,b,c=1,2,3; nil} } }
nn.times{ puts Benchmark.measure { kt.times {a = 1; b = 2; c = 3 } } }

jruby 1.5.3 (ruby 1.8.7 patchlevel 249) (2010-09-28 7ca06d7)
(Java HotSpot(TM) Client VM 1.6.0_14) [x86-java]
3.038000 0.000000 3.038000 ( 3.007000)
1.561000 0.000000 1.561000 ( 1.562000)
1.558000 0.000000 1.558000 ( 1.558000)

ruby 1.9.1p430 (2010-08-16 revision 2899) [i386-mingw32]
3.962000 0.000000 3.962000 ( 3.963000)
1.747000 0.000000 1.747000 ( 1.741000)
2.106000 0.000000 2.106000 ( 2.105000)

 
 
Charles Oliver Nutter
 
      01-15-2011
On Fri, Jan 14, 2011 at 2:45 PM, Colin Bartlett <(E-Mail Removed)> wrote:
> Now the only thing that's puzzling me is why the MRI 1.9.1 "cheating
> honestly" version of parallel assignment seems to be slightly but
> clearly faster than the MRI 1.9.1 single assignment!


Yes, that is a bit baffling! I have no explanation for that. As you
can see in JRuby, the times for the non-expression parallel assignment
and the normal assignment are roughly the same.

> require "benchmark"
> kt = 10_000_000
> nn = 1
> nn.times{ puts Benchmark.measure { kt.times {a,b,c=1,2,3 } } }
> nn.times{ puts Benchmark.measure { kt.times {a,b,c=1,2,3; nil} } }
> nn.times{ puts Benchmark.measure { kt.times {a = 1; b = 2; c = 3 } } }
>
> jruby 1.5.3 (ruby 1.8.7 patchlevel 249) (2010-09-28 7ca06d7)
>  (Java HotSpot(TM) Client VM 1.6.0_14) [x86-java]
>  3.038000   0.000000   3.038000 (  3.007000)
>  1.561000   0.000000   1.561000 (  1.562000)
>  1.558000   0.000000   1.558000 (  1.558000)


FWIW, you'd get better results here if you ran a couple iterations,
and of course if you specified --server it's significantly better...

~/projects/jruby ➔ jruby -v passign.rb
jruby 1.6.0.RC1 (ruby 1.8.7 patchlevel 330) (2011-01-14 da2bb9d) (Java
HotSpot(TM) Client VM 1.6.0_22) [darwin-i386-java]
1.952000 0.000000 1.952000 ( 1.839000)
1.784000 0.000000 1.784000 ( 1.784000)
1.796000 0.000000 1.796000 ( 1.796000)
0.919000 0.000000 0.919000 ( 0.919000)
0.865000 0.000000 0.865000 ( 0.865000)
0.856000 0.000000 0.856000 ( 0.856000)
0.961000 0.000000 0.961000 ( 0.961000)
0.924000 0.000000 0.924000 ( 0.924000)
0.880000 0.000000 0.880000 ( 0.880000)

~/projects/jruby ➔ jruby --server -v passign.rb
jruby 1.6.0.RC1 (ruby 1.8.7 patchlevel 330) (2011-01-14 da2bb9d) (Java
HotSpot(TM) Server VM 1.6.0_22) [darwin-i386-java]
1.388000 0.000000 1.388000 ( 1.324000)
1.086000 0.000000 1.086000 ( 1.086000)
1.034000 0.000000 1.034000 ( 1.034000)
0.522000 0.000000 0.522000 ( 0.522000)
0.500000 0.000000 0.500000 ( 0.500000)
0.491000 0.000000 0.491000 ( 0.491000)
0.517000 0.000000 0.517000 ( 0.517000)
0.485000 0.000000 0.485000 ( 0.485000)
0.496000 0.000000 0.496000 ( 0.496000)

 
 
Joseph Lenton
 
      01-15-2011
Josh Cheek wrote in post #974019:
> On Tue, Jan 11, 2011 at 8:29 AM, Kedar Mhaswade
> <(E-Mail Removed)> wrote:
> I don't use it very often, but when I do, it usually makes an elegant
> solution. I think part of the reason it doesn't seem that way is
> because you are playing with it in too sterile of an environment. For
> example, you rate "x, (y, (z, a))=[1, [2, [3, 4]]]" as lowest,
> suggesting it is equivalent to "x=1;y=2;z=3;a=4" but this is not true.
> If you are actually assigning with literals, you would, of course, use
> the equivalent way, but if your data comes in as nested arrays, then
> you can't assign like that, instead you have to do something like this:
>
> def parallel(values)
>   x, (y, (z, a)) = values
>   [x, y, z, a]
> end


Instead you could also do:

x, y, z, a = values.flatten

Which avoids making you look like you're writing in LISP.

For small stuff parallel assignment makes it look elegant, but like any
language feature it can be abused. Although I don't think I've ever
actually found a case where I've wanted to use this. Especially when
most of the time my 'x, y, z and a' variables are unrelated so I don't
want to use them together in the same statement.

--
Posted via http://www.ruby-forum.com/.

 
 
Adam Prescott
 
      01-17-2011

>
> > def parallel(values)
> > x, (y, (z, a))=values
> > [x,y,z,a]
> > end

>
> Instead you could also do:
>
> x, y, z, a = values.flatten
>
> Which avoids making you look like you're writing in LISP.



These two bits of code are not the same thing.

a, (b, c) = [1, [2, [3, 4]]]

a #=> 1
b #=> 2
c #=> [3, 4]

a, b, c = [1, [2, [3, 4]]].flatten

a #=> 1
b #=> 2
c #=> 3

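A quick sketch tying the two together: matching the nesting exactly also recovers every leaf without flatten, and the two differ only when you want an inner array kept whole:

```ruby
values = [1, [2, [3, 4]]]

# Fully nested destructuring mirrors the shape of the data,
# binding one variable per leaf.
x, (y, (z, a)) = values
p [x, y, z, a]  # => [1, 2, 3, 4]

# flatten yields the same leaves here, but only because the data is
# a pure nesting of arrays; it cannot keep [3, 4] together the way
# a partial pattern like "a, (b, c) = values" can.
x2, y2, z2, a2 = values.flatten
p [x2, y2, z2, a2]  # => [1, 2, 3, 4]
```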


 
 
 
 