Communication across Perl scripts

 
 
Peter Makholm
10-12-2010
"(E-Mail Removed)" <(E-Mail Removed)> writes:

> That may be easiest, but I don't think it's the dumbest. And if
> you use this approach, I highly recommend using the "Storable" module
> (it's a standard module so you should already have it).


As long as you just use it on a single host for very temporary files,
Storable is fine. But I have been bitten by Storable not being
compatible between versions or different installations one time too
many to call it 'highly recommended'.

If you need support for every possible Perl structure, then Storable
is probably the only (almost) viable solution. But if simple trees of
hashrefs and arrayrefs are good enough, then I consider JSON::XS a
better choice.


But it all depends on the exact needs, and the original poster might
never hit the situations where Storable shows its nasty sides, and may
not need the extra speed of JSON::XS or its more future-proof and
portable format.
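
A minimal sketch of the JSON::XS hand-off between the two scripts (the
file name is made up):

    use strict;
    use warnings;
    use JSON::XS;

    # Script 1: serialize a simple tree of hashrefs and arrayrefs.
    my $data = { status => 'done', results => [ 1, 2, 3 ] };
    open my $out, '>', '/tmp/handoff.json' or die "open: $!";
    print {$out} encode_json($data);
    close $out;

    # Script 2: read it back.
    open my $in, '<', '/tmp/handoff.json' or die "open: $!";
    my $copy = decode_json(do { local $/; <$in> });
    close $in;
    print "@{ $copy->{results} }\n";    # prints: 1 2 3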

//Makholm
 
Dr.Ruud
10-12-2010
On 2010-10-11 18:25, Jean wrote:

> I have two scripts; Script 1 generates some data. I want my
> script two to be able to access that information. The easiest/dumbest
> way is to write the data generated by script 1 as a file and read it
> later using script 2. Is there any other way than this ?


I normally use a database for that. Script-1 can normally be scaled up
by making it do things in parallel (by splitting the input into chunks
that don't depend on one another).

Script-2 can also just be a phase in script-1. Once all children are
done processing, there normally is a reporting phase.
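
A sketch of that shape with plain fork(); the chunking itself is made
up, the point is that the chunks are independent and the reporting
phase runs once all children are done:

    use strict;
    use warnings;

    my @chunks = ( [ 1 .. 100 ], [ 101 .. 200 ], [ 201 .. 300 ] );

    my @pids;
    for my $i (0 .. $#chunks) {
        my $pid = fork() // die "fork: $!";
        if ($pid == 0) {                    # child: process one chunk
            process_chunk($i, $chunks[$i]); # e.g. write results to the DB
            exit 0;
        }
        push @pids, $pid;
    }
    waitpid($_, 0) for @pids;               # all children done

    report();                               # "Script-2" as a final phase

    sub process_chunk {
        my ($i, $c) = @_;
        print "chunk $i: ", scalar(@$c), " items\n";
    }
    sub report { print "reporting phase\n" }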


> There is no guarantee that Script 2 will be run after Script 1. So
> there should be some way to free that memory using a watchdog timer.


When the intermediate data is in temporary database tables, they
disappear automatically with the close of the connection.
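
A sketch of that pattern with DBI; the DSN, credentials and schema are
made up, but engines like PostgreSQL and MySQL drop TEMPORARY tables
when the session ends:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=work', 'user', 'secret',
                           { RaiseError => 1 });

    # The table exists only for the lifetime of this connection.
    $dbh->do('CREATE TEMPORARY TABLE results (id integer, payload text)');

    my $ins = $dbh->prepare('INSERT INTO results (id, payload) VALUES (?, ?)');
    $ins->execute($_, "chunk $_") for 1 .. 10;

    # Reporting phase.
    my $rows = $dbh->selectall_arrayref('SELECT id, payload FROM results');
    print "@$_\n" for @$rows;

    $dbh->disconnect;   # the temporary table disappears here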

--
Ruud
 
Martijn Lievaart
10-12-2010
On Tue, 12 Oct 2010 10:05:57 +0200, Peter Makholm wrote:

> "(E-Mail Removed)" <(E-Mail Removed)> writes:
>
>> That may be easiest, but I don't think it's the dumbest. And if
>> you use this approach, I highly recommend using the "Storable" module
>> (it's a standard module so you should already have it).

>
> As long as you just use it on a single host for very temporary files,
> Storable is fine. But I have been bitten by Storable not being
> compatible between versions or different installations one time too many
> to call it 'highly recommended'.


Another way might be Data::Dumper.
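
A minimal sketch of that hand-off (file name made up). Note that
reading the dump back means executing it as Perl, so only do this with
data you trust:

    use strict;
    use warnings;
    use Data::Dumper;

    # Script 1: dump the structure as Perl source.
    my $data = { jobs => [ 'a', 'b' ] };
    open my $out, '>', '/tmp/handoff.pl' or die "open: $!";
    print {$out} Dumper($data);        # writes: $VAR1 = { ... };
    close $out;

    # Script 2: run the file; do() returns the $VAR1 hashref.
    my $copy = do '/tmp/handoff.pl';
    die "couldn't read dump: $@ $!" unless defined $copy;
    print "@{ $copy->{jobs} }\n";      # prints: a b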

M4
 
Randal L. Schwartz
10-12-2010
>>>>> "Jean" == Jean <(E-Mail Removed)> writes:

Jean> I am searching for efficient ways of communication across two Perl
Jean> scripts. I have two scripts; Script 1 generates some data. I want my
Jean> script two to be able to access that information.

Look at DBM::Deep for a trivial way to store structured data, including
having transactions so the data will change "atomically".

And despite the name... DBM::Deep has no XS components... so it can even
be installed in a hosted setup with limited ("no") access to compilers.

Disclaimer: Stonehenge paid for part of the development of DBM::Deep,
because yes, it's *that* useful.
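
A minimal sketch of what that looks like (file name made up; if I read
the docs right, num_txns > 1 is what enables transactions):

    use strict;
    use warnings;
    use DBM::Deep;

    my $db = DBM::Deep->new(
        file     => '/tmp/handoff.db',
        num_txns => 2,                  # > 1 turns transactions on
    );

    # Script 1: change the structure atomically.
    $db->begin_work;
    $db->{results} = { count => 42, items => [ 'a', 'b' ] };
    $db->commit;

    # Script 2 (a separate process works too): read it back.
    print $db->{results}{count}, "\n";  # prints: 42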

print "Just another Perl hacker,"; # the original

--
Randal L. Schwartz - Stonehenge Consulting Services, Inc. - +1 503 777 0095
<(E-Mail Removed)> <URL:http://www.stonehenge.com/merlyn/>
Smalltalk/Perl/Unix consulting, Technical writing, Comedy, etc. etc.
See http://methodsandmessages.posterous.com/ for Smalltalk discussion
 
jl_post@hotmail.com
10-12-2010
On Oct 12, 2:05 am, Peter Makholm <(E-Mail Removed)> wrote:
>
> As long as you just use it on a single host for very temporary files,
> Storable is fine. But I have been bitten by Storable not being
> compatible between versions or different installations one time too
> many to call it 'highly recommended'.



    I was under the impression that Storable::nstore() was
cross-platform compatible (as opposed to Storable::store(), which
isn't). "perldoc Storable" has this to say about it:

> You can also store data in network order to allow easy
> sharing across multiple platforms, or when storing on a
> socket known to be remotely connected. The routines to
> call have an initial "n" prefix for *network*, as in
> "nstore" and "nstore_fd".


Unfortunately, it doesn't really specify the extent of what was
meant by "multiple platforms". I always thought that meant any
platform could read data written out by nstore(), but since I've never
tested it, I can't really be sure.
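
For reference, the network-order calls are drop-in replacements, and
retrieve() detects the format by itself; a minimal sketch (file name
made up):

    use strict;
    use warnings;
    use Storable qw(nstore retrieve);

    # Script 1: write in network (big-endian) byte order.
    my $data = { counts => [ 1, 2, 3 ] };
    nstore($data, '/tmp/handoff.sto');

    # Script 2: read it back, possibly on a different platform.
    my $copy = retrieve('/tmp/handoff.sto');
    print "@{ $copy->{counts} }\n";     # prints: 1 2 3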

When you said you were "bitten" by Storable, were you using
Storable::store(), or Storable::nstore()?

-- Jean-Luc
 
Peter J. Holzer
10-12-2010
On 2010-10-12 00:59, Ted Zlatanov <(E-Mail Removed)> wrote:
> On Mon, 11 Oct 2010 14:25:59 -0700 (PDT) "C.DeRykus" <(E-Mail Removed)> wrote:
> CD> With a named pipe though, each script just deals with the named file
> CD> for reading or writing while the OS takes care of the messy IPC
> CD> details for you. The 2nd script will just block until data is
> CD> available so running order isn't a concern. As long as the two
> CD> scripts are running more or less concurrently, I would guess memory
> CD> use will be manageable too since the reader will be draining the
> CD> pipe as the data arrives.
>
> The only warning I have there is that pipes are pretty slow and have
> small buffers by default in the Linux kernel (assuming Linux).


Hmm. On my system (a 1.86 GHz Core2 - not ancient, but not the latest
and greatest, either) I can transfer about 800 MB/s through a pipe at
32 kB buffer size. For larger buffers it gets a bit slower, but a buffer
size of 1MB is still quite ok.
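
For anyone who wants to reproduce that kind of number, a rough sketch
of the measurement (throughput will of course vary by machine):

    use strict;
    use warnings;
    use Time::HiRes qw(time);

    my $bufsize = 32 * 1024;
    my $buf     = 'x' x $bufsize;
    my $total   = 256 * 1024 * 1024;     # push 256 MB through the pipe

    pipe(my $rd, my $wr) or die "pipe: $!";
    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                     # child: drain the pipe
        close $wr;
        my $sink;
        1 while sysread($rd, $sink, $bufsize);
        exit 0;
    }

    close $rd;
    my $t0 = time;
    syswrite($wr, $buf) for 1 .. $total / $bufsize;
    close $wr;
    waitpid($pid, 0);
    printf "%.0f MB/s\n", $total / 2**20 / (time - $t0);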

You may be confusing that with other systems. Windows pipes have a
reputation for being slow. Traditionally, Unix pipes were restricted to
a rather small buffer (8 or 10 kB). I do think Linux pipes become
synchronous for large writes, though.

> I forget exactly why, I think it's due to terminal disciplines or
> something, I didn't dig too much.


Unix pipes have nothing to do with terminals. Originally they were
implemented as files; BSD 4.x reimplemented them on top of Unix sockets.
I don't know how Linux implements them, but I'm quite sure that no
terminals are involved, and certainly no terminal disciplines.
Are you confusing them with ptys, perhaps?

> I ran into this earlier this year.


Can you dig up the details?

hp

 
Bart Lateur
10-13-2010
Randal L. Schwartz wrote:

>Look at DBM::Deep for a trivial way to store structured data, including
>having transactions so the data will change "atomically".
>
>And despite the name... DBM::Deep has no XS components... so it can even
>be installed in a hosted setup with limited ("no") access to compilers.
>
>Disclaimer: Stonehenge paid for part of the development of DBM::Deep,
>because yes, it's *that* useful.


Ouch. DBM::Deep is buggy, in my experience.

I don't know the exact circumstances, but when using it to cache the XML
contents of user home nodes on Perlmonks, I regularly get crashes in it.
It has something to do with the size of the data changing, IIRC from
larger than 8k to below 8k. But I could have gotten these details wrong,
as it has been a long time since I last tried it.

--
Bart.
 
paul
10-13-2010
On Oct 13, 2:07 pm, Bart Lateur <(E-Mail Removed)> wrote:
> Randal L. Schwartz wrote:
> >Look at DBM::Deep for a trivial way to store structured data, including
> >having transactions so the data will change "atomically".
>
> >And despite the name... DBM::Deep has no XS components... so it can even
> >be installed in a hosted setup with limited ("no") access to compilers.
>
> >Disclaimer: Stonehenge paid for part of the development of DBM::Deep,
> >because yes, it's *that* useful.
>
> Ouch. DBM::Deep is buggy, in my experience.
>
> I don't know the exact circumstances, but when using it to cache the XML
> contents of user home nodes on Perlmonks, I regularly get crashes in it.
> It has something to do with the size of the data changing, IIRC from
> larger than 8k to below 8k. But I could have gotten these details wrong,
> as it has been a long time since I last tried it.
>
> --
>        Bart.


You can try named pipes, a special type of file that allows for
interprocess communication. By using the "mknod" command you can create
a named pipe file, which one process can open for reading and another
for writing.
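
A minimal sketch of both ends from Perl (path made up; POSIX::mkfifo
does the same job as the shell command). The fork here only stands in
for what would really be two separate scripts:

    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    my $path = '/tmp/chan.fifo';
    unless (-p $path) {
        mkfifo($path, 0700) or die "mkfifo: $!";
    }

    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                     # "Script 1": the writer
        open my $w, '>', $path or die "open for write: $!";
        print {$w} "line $_\n" for 1 .. 3;
        exit 0;
    }

    open my $r, '<', $path or die "open for read: $!";  # "Script 2"
    print "got: $_" while <$r>;          # blocks until data arrives
    waitpid($pid, 0);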
 
Ted Zlatanov
10-14-2010
On Tue, 12 Oct 2010 22:24:47 +0200 "Peter J. Holzer" <(E-Mail Removed)> wrote:

PJH> On 2010-10-12 00:59, Ted Zlatanov <(E-Mail Removed)> wrote:
>> On Mon, 11 Oct 2010 14:25:59 -0700 (PDT) "C.DeRykus" <(E-Mail Removed)> wrote:

CD> With a named pipe though, each script just deals with the named file
CD> for reading or writing while the OS takes care of the messy IPC
CD> details for you. The 2nd script will just block until data is
CD> available so running order isn't a concern. As long as the two
CD> scripts are running more or less concurrently, I would guess memory
CD> use will be manageable too since the reader will be draining the
CD> pipe as the data arrives.
>>
>> The only warning I have there is that pipes are pretty slow and have
>> small buffers by default in the Linux kernel (assuming Linux).


PJH> Hmm. On my system (a 1.86 GHz Core2 - not ancient, but not the latest
PJH> and greatest, either) I can transfer about 800 MB/s through a pipe at
PJH> 32 kB buffer size. For larger buffers it gets a bit slower, but a buffer
PJH> size of 1MB is still quite ok.

Hmm, sorry for stating that badly.

The biggest problem is that pipes *block* normally. So even if your
reader is slow only once in a while, as long as you're using the default
buffer (which is small), your writer will block too. In my situation
(the writer was receiving data from TIBCO) that was deadly.

I meant to say that but somehow it turned into "pipes are slow" between
brain and keyboard. Sorry.
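
For what it's worth, a sketch of how the writer can notice that
instead of blocking (paths and record format made up; a non-blocking
open of a FIFO fails with ENXIO until a reader attaches):

    use strict;
    use warnings;
    use Fcntl qw(O_WRONLY O_NONBLOCK);
    use Errno qw(EAGAIN);

    sysopen(my $fifo, '/tmp/feed.fifo', O_WRONLY | O_NONBLOCK)
        or die "no reader on the FIFO yet: $!";

    sub send_record {
        my ($rec) = @_;
        my $n = syswrite($fifo, $rec);     # partial writes ignored here
        if (!defined $n && $! == EAGAIN) { # pipe buffer full, reader slow
            # Spill to a file drop instead of blocking the feed.
            open my $spool, '>>', '/tmp/feed.spool' or die "spool: $!";
            print {$spool} $rec;
            close $spool;
        }
    }

    send_record("tick $_\n") for 1 .. 5;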

PJH> You may be confusing that with other systems. Windows pipes have a
PJH> reputation for being slow.

Yes, on Windows we had even more trouble, for many reasons. But I was
only talking about Linux, so I won't take that bailout.

>> I forget exactly why, I think it's due to terminal disciplines or
>> something, I didn't dig too much.


PJH> Unix pipes have nothing to do with terminals. Originally they were
PJH> implemented as files, BSD 4.x reimplemented them on top of Unix sockets.
PJH> I don't know how Linux implements them, but I'm quite sure that no
PJH> terminals are involved, and certainly no terminal disciplines.
PJH> Are you confusing them with ptys, perhaps?

Probably. I was on a tight deadline and the pipe approach simply did
not work, so I couldn't investigate in more detail. There's a lot more
resiliency in a file drop approach, too: if either side dies, the other
one is not affected. There is no leftover mess like with shared memory,
either. So I've been pretty happy with the file drop.
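
A sketch of the file-drop pattern with an atomic rename(), so the
reader never sees a half-written file (paths made up):

    use strict;
    use warnings;
    use File::Temp qw(tempfile);

    # Write to a temp file on the same filesystem first...
    my ($fh, $tmp) = tempfile('dropXXXX', DIR => '/var/spool/feed');
    print {$fh} "payload\n";
    close $fh or die "close: $!";

    # ...then rename() into place. On POSIX systems this is atomic, so
    # the reader sees either no file or the complete file, never half.
    rename $tmp, '/var/spool/feed/batch-001.dat' or die "rename: $!";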

Ted
 