
Re: Pass data to a subprocess

 
 
Laszlo Nagy
08-01-2012
>
> As I wrote "I found many nice things (Pipe, Manager and so on), but
> actually even
> this seems to work:" yes I did read the documentation.

Sorry, I did not want to be offensive.
>
> I was just surprised that it worked better than I expected even
> without Pipes and Queues, but now I understand why..
>
> Anyway now I would like to be able to detach subprocesses to avoid the
> nasty code reloading that I was talking about in another thread, but
> things get more tricky, because I can't use queues and pipes to
> communicate with a running process that is not my child, correct?
>

Yes, I think that is correct. Instead of detaching a child process, you
can create independent processes and use other frameworks for IPC. For
example, Pyro. It is not as effective as multiprocessing.Queue, but in
return, you will have the option to run your service across multiple
servers.
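
Just to give an idea, a Pyro-based setup might look something like this
(a rough, untested sketch with Pyro4; the class and method names are
only placeholders):

import Pyro4

@Pyro4.expose
class Worker(object):
    # Any method exposed here can be called from another, unrelated process.
    def process(self, data):
        return data.upper()

# --- server process ---
daemon = Pyro4.Daemon()                    # listens on a network socket
uri = daemon.register(Worker)              # register the object, get its URI
print("worker is available at %s" % uri)   # hand this URI to the client somehow
daemon.requestLoop()

# --- client process (completely independent, possibly on another machine) ---
# worker = Pyro4.Proxy(uri)
# print(worker.process("hello"))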

The most effective IPC is usually through shared memory. But there is no
OS independent standard Python module that can communicate over shared
memory. Except multiprocessing of course, but AFAIK it can only be used
to communicate between fork()-ed processes.
 
Roy Smith
08-01-2012
Laszlo Nagy wrote:

> Yes, I think that is correct. Instead of detaching a child process, you
> can create independent processes and use other frameworks for IPC. For
> example, Pyro. It is not as effective as multiprocessing.Queue, but in
> return, you will have the option to run your service across multiple
> servers.


You might want to look at beanstalk (http://kr.github.com/beanstalkd/).
We've been using it in production for the better part of two years. At
a 30,000 foot level, it's an implementation of queues over named pipes
over TCP, but it takes care of a zillion little details for you.

Setup is trivial, and there are clients for all sorts of languages. For a
Python client, go with beanstalkc (pybeanstalk appears to be
abandonware).
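
To give you a feel for it, the producer/consumer side boils down to
something like this with beanstalkc (untested sketch, assuming a
beanstalkd server on the default localhost:11300):

import beanstalkc

# producer: connect and push a job onto the default tube
queue = beanstalkc.Connection(host='localhost', port=11300)
queue.put('process this, please')

# consumer (normally a separate process): block until a job is
# available, handle it, then delete it so it isn't handed out again
job = queue.reserve()
print(job.body)
job.delete()
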
>
> The most effective IPC is usually through shared memory. But there is no
> OS independent standard Python module that can communicate over shared
> memory.


It's true that shared memory is faster than serializing objects over a
TCP connection. On the other hand, it's hard to imagine anything
written in Python where you would notice the difference.
 
Laszlo Nagy
08-01-2012

>> The most effective IPC is usually through shared memory. But there is no
>> OS independent standard Python module that can communicate over shared
>> memory.

> It's true that shared memory is faster than serializing objects over a
> TCP connection. On the other hand, it's hard to imagine anything
> written in Python where you would notice the difference.

Well, except in response times.

The TCP stack likes to buffer small writes after you call send() on a
socket (Nagle's algorithm). Yes, you can use setsockopt/TCP_NODELAY, but
my experience is that response times with TCP can be long, especially
when you have to do many request-response pairs.

It also depends on the protocol design - if you can reduce the number of
request-response pairs then it helps a lot.
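
For what it's worth, disabling Nagle's algorithm looks like this (just a
sketch; the host and port are made up):

import socket

sock = socket.create_connection(('localhost', 9000))
# Send small packets immediately instead of waiting to coalesce them.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

sock.sendall(b'PING\n')   # goes out right away
reply = sock.recv(1024)
sock.close()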
 
Laszlo Nagy
08-01-2012
On 2012-08-01 12:59, Roy Smith wrote:
> Laszlo Nagy wrote:
>
>> Yes, I think that is correct. Instead of detaching a child process, you
>> can create independent processes and use other frameworks for IPC. For
>> example, Pyro. It is not as effective as multiprocessing.Queue, but in
>> return, you will have the option to run your service across multiple
>> servers.

> You might want to look at beanstalk (http://kr.github.com/beanstalkd/).
> We've been using it in production for the better part of two years. At
> a 30,000 foot level, it's an implementation of queues over named pipes
> over TCP, but it takes care of a zillion little details for you.

Looks very simple to use. Too bad that it doesn't work on Windows systems.
 
andrea crotti
08-01-2012
2012/8/1 Roy Smith:
> Laszlo Nagy wrote:
>
>> Yes, I think that is correct. Instead of detaching a child process, you
>> can create independent processes and use other frameworks for IPC. For
>> example, Pyro. It is not as effective as multiprocessing.Queue, but in
>> return, you will have the option to run your service across multiple
>> servers.

>
> You might want to look at beanstalk (http://kr.github.com/beanstalkd/).
> We've been using it in production for the better part of two years. At
> a 30,000 foot level, it's an implementation of queues over named pipes
> over TCP, but it takes care of a zillion little details for you.
>
> Setup is trivial, and there's clients for all sorts of languages. For a
> Python client, go with beanstalkc (pybeanstalk appears to be
> abandonware).



That does look nice, and I would like to have something like that.
But since I have to convince my boss of another external dependency, I
think it might be worth trying out zeromq instead, which can do similar
things and looks more powerful. What do you think?
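
Something like this is what I have in mind with pyzmq (untested sketch,
the port number is made up):

import zmq

context = zmq.Context()

# 'server' side
rep = context.socket(zmq.REP)
rep.bind('tcp://*:5555')

# 'client' side, normally in a completely separate process
req = context.socket(zmq.REQ)
req.connect('tcp://localhost:5555')

req.send(b'ping')
print(rep.recv())   # 'ping'
rep.send(b'pong')
print(req.recv())   # 'pong'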
 
Grant Edwards
08-01-2012
On 2012-08-01, Laszlo Nagy wrote:
>>
>> As I wrote "I found many nice things (Pipe, Manager and so on), but
>> actually even
>> this seems to work:" yes I did read the documentation.

> Sorry, I did not want to be offensive.
>>
>> I was just surprised that it worked better than I expected even
>> without Pipes and Queues, but now I understand why..
>>
>> Anyway now I would like to be able to detach subprocesses to avoid the
>> nasty code reloading that I was talking about in another thread, but
>> things get more tricky, because I can't use queues and pipes to
>> communicate with a running process that is not my child, correct?
>>

> Yes, I think that is correct.


I don't understand why detaching a child process on Linux/Unix would
make IPC stop working. Can somebody explain?

--
Grant Edwards    grant.b.edwards at gmail.com    Yow! My vaseline is RUNNING...
 
Laszlo Nagy
08-01-2012

>>> things get more tricky, because I can't use queues and pipes to
>>> communicate with a running process that is not my child, correct?
>>>

>> Yes, I think that is correct.

> I don't understand why detaching a child process on Linux/Unix would
> make IPC stop working. Can somebody explain?
>

It is implemented with shared memory. I think (although I'm not 100%
sure) that shared memory is created *and freed up* (shm_unlink() system
call) by the parent process. It makes sense, because the child processes
will surely die with the parent. If you detach a child process, then it
won't be killed with its original parent. But the shared memory will be
freed by the original parent process anyway. I suspect that the child
that has mapped that shared memory segment will try to access a freed-up
resource and get a segfault or something similar.
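
For reference, this is the kind of parent/child shared memory I mean (a
minimal sketch): a multiprocessing.Value lives in a shared memory
segment that both the parent and the fork()-ed child can see.

from multiprocessing import Process, Value

def worker(counter):
    # Runs in the child; increments the shared counter in place.
    for _ in range(1000):
        with counter.get_lock():
            counter.value += 1

if __name__ == '__main__':
    counter = Value('i', 0)   # 'i' = signed int, backed by shared memory
    child = Process(target=worker, args=(counter,))
    child.start()
    child.join()
    print(counter.value)      # 1000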
 
Laszlo Nagy
08-01-2012

>>> Yes, I think that is correct.

>> I don't understand why detaching a child process on Linux/Unix would
>> make IPC stop working. Can somebody explain?
>>

> It is implemented with shared memory. I think (although I'm not 100%
> sure) that shared memory is created *and freed up* (shm_unlink()
> system call) by the parent process. It makes sense, because the child
> processes will surely die with the parent. If you detach a child
> process, then it won't be killed with its original parent. But the
> shared memory will be freed by the original parent process anyway. I
> suspect that the child that has mapped that shared memory segment will
> try to access a freed up resource, do a segfault or something similar.

So detaching the child process will not make IPC stop working. But
exiting from the original parent process will. (And why else would you
detach the child?)

 
andrea crotti
08-01-2012
2012/8/1 Laszlo Nagy:
>
> So detaching the child process will not make IPC stop working. But exiting
> from the original parent process will. (And why else would you detach the
> child?)
>



Well, it makes perfect sense to me that it stops working, so either:
- I use zeromq or something similar to communicate, or
- I make every process independent, without the need to communicate
further with the parent.
 
Roy Smith
08-01-2012
On Aug 1, 2012, at 9:25 AM, andrea crotti wrote:

> [beanstalk] does look nice, and I would like to have something like that.
> But since I have to convince my boss of another external dependency, I
> think it might be worth trying out zeromq instead, which can do similar
> things and looks more powerful. What do you think?


I'm afraid I have no experience with zeromq, so I can't offer an opinion.

--
Roy Smith



 