
Velocity Reviews > Newsgroups > Programming > Perl > Perl Misc > file locking


file locking

 
 
Tim Neukum
05-09-2005
I have three programs that run concurrently: call them daemon, user1, and
user2, on a Unix machine (SunOS 5.9).

daemon is long-running, as the name implies. It reads and writes to a file
that represents a queue, something like this:

open (QFILE, "+> $queue_file") or die "daemon ERROR: open: $! died";
flock (QFILE, LOCK_EX) or die "daemon ERROR: flock: $! died";
seek (QFILE, 0, 0) or die "daemon ERROR: seek: $! died";
# process file
truncate(QFILE, 0) or die "daemon ERROR: truncate: $! died";
seek (QFILE, 0, 0) or die "daemon ERROR: seek: $! died";
# rewrite file
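A minimal runnable sketch of that read-modify-rewrite cycle, assuming the
LOCK_* constants from Fcntl and a throwaway placeholder path
(/tmp/demo_queue.txt). Note that "+<" opens read-write WITHOUT truncating,
whereas "+>" clobbers the file before the lock is even taken:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $queue_file = "/tmp/demo_queue.txt";   # placeholder path for the sketch

# seed the queue so the demo is self-contained
open(my $seed, ">", $queue_file) or die "seed: $!";
print {$seed} "job1\njob2\n";
close($seed);

# "+<" = read-write, no truncation, so existing jobs survive until we
# hold the lock; "+>" would empty the file before flock ever ran.
open(my $qfh, "+<", $queue_file) or die "daemon ERROR: open: $!";
flock($qfh, LOCK_EX)             or die "daemon ERROR: flock: $!";
seek($qfh, 0, 0)                 or die "daemon ERROR: seek: $!";
my @jobs = <$qfh>;                        # read the whole queue
# ... process @jobs here ...
truncate($qfh, 0)                or die "daemon ERROR: truncate: $!";
seek($qfh, 0, 0)                 or die "daemon ERROR: seek: $!";
print {$qfh} @jobs;                       # rewrite the (possibly edited) queue
close($qfh)                      or die "daemon ERROR: close: $!";  # close releases the lock
```

Closing the handle releases the flock, so readers are only ever shut out for
the duration of one read-rewrite cycle.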

At the prompt: addToQueue job
Adding things to the queue file is implemented with exclusive locking; this
functionality works fine and has for some time.

Here's my problem. user1 is a process started in the background by addToQueue
that monitors the job while it is still in the queue. user2 is another process
started in the background by addToQueue that does the same.

Both read the queue file, and I want a stable state of the file when reading,
but I get "Bad file number" when I try to flock.

user1 does something like this:
while (inQueue()) {
#dosomething here
}

user2 does something like this:
while (inQueue()) {
#do something different here
}

inQueue is the same for both user1 and user2:

sub inQueue {
open (INFILE, "< $queue_file") or die "user# ERROR: open: $! died";
flock (INFILE, LOCK_EX) or die "user# ERROR: flock: $! died";
<<-----------this is the culprit
seek (INFILE, 0, 0) or die "user# ERROR: seek: $! died";

#check for job

close (INFILE) or die "user# ERROR: close: $! died";
return $found_job;
}
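For comparison, a sketch of inQueue that avoids the "Bad file number" failure
on systems that refuse LOCK_EX on a read-only handle: a pure reader only needs
LOCK_SH, which "<" handles accept everywhere. The path and job name are
placeholders, and the job-matching logic is illustrative only:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $queue_file = "/tmp/demo_queue.txt";   # placeholder path
my $job        = "job1";                  # placeholder job name

sub inQueue {
    # A shared lock is enough for a reader, and read-only handles take it
    # everywhere; LOCK_EX on a read-only handle fails with EBADF on some OSes.
    open(my $in, "<", $queue_file) or die "user# ERROR: open: $!";
    flock($in, LOCK_SH)            or die "user# ERROR: flock: $!";
    my $found_job = grep { /^\Q$job\E$/ } <$in>;   # count of matching lines
    close($in)                     or die "user# ERROR: close: $!";  # drops the lock
    return $found_job;
}
```

If you genuinely need LOCK_EX in the readers, open with "+<" instead of "<";
an exclusive lock is only grantable on a handle the OS considers writable.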

QUESTIONS:
I've seen that sysopen may be preferred to open for writing. Why? E.g.
sysopen (QFILE, $queue_file, O_RDWR) or die ...

Why can't I request a LOCK_EX in sub inQueue?

If I change inQueue to LOCK_SH... if I have two processes using LOCK_SH, will
they attempt to read at the same time?

Will LOCK_EX ever get a chance? I.e., will shared locks always succeed while
another process holds a shared lock, even if a third process is waiting for an
exclusive lock?
Scenario timeline:
time 0: user1 requests LOCK_SH
time 1: user1 gets LOCK_SH
time 2: daemon requests LOCK_EX
time 3: daemon waits for the OS to grant the lock
time 4: user2 requests LOCK_SH
time 5: ... does user2 get the lock, or does it get in line behind daemon?

If the latter, what the $$$$ is the difference between LOCK_SH and LOCK_EX?

If the former (i.e. user2 gets LOCK_SH without waiting), then how can you be
certain of the file position and of interactions within the file stream,
considering user1 may be reading at the same time? Very carefully, I guess?
Does each process get a different pointer into the file?

I know this is a lot, but I haven't found anything that sufficiently answers
these questions.

Thanks in advance,
Tim


 
xhoster@gmail.com
05-09-2005
"Tim Neukum" <(E-Mail Removed)> wrote:
> i have a three programs that run concurrently: call them daemon, user1,
> user2 on a unix machine SunOS 5.9
>
> daemon is long running as the name implies
> it reads and writes to a file that represents a queue
> something like this:
>
> open (QFILE, "+> $queue_file") or die "daemon ERROR: open: $! died";


You've just blown away your file. Is that what you want to do?

> flock (QFILE, LOCK_EX) or die "daemon ERROR: flock: $! died";
> seek (QFILE, 0, 0) or die "daemon ERROR: seek: $! died";
> # process file
> truncate(QFILE, 0) or die "daemon ERROR: truncate: $! died";
> seek (QFILE, 0, 0) or die "daemon ERROR: seek: $! died";
> # rewrite file


Nowhere here do you give up your lock. If you don't care what is in the file
at startup, and don't let anyone else access it while you are running, what is
the point of holding the lock at all? If your code does give up the lock
somewhere, then you have failed to show us something important.

> at prompt> addToQueue job
> adding things to the queue file are implemented with exclusive locking
> this functionality works fine and has for some time with
>
> here's my problem.
> user1 is a process started in the background by addToQueue that monitors
> the job while it is still in the queue
> user2 is another process started in the background by addToQueue that
> monitors the job while it is still in the queue
>
> both read the queue file but i want to have a stable state of the file
> when reading, but i get bad file number when i try to flock

...
> sub inQueue {
> open (INFILE, "< $queue_file") or die "user# ERROR: open: $! died";
> flock (INFILE, LOCK_EX) or die "user# ERROR: flock: $! died";
> <<-----------this is the culprit


Some OSes don't let you take an exclusive lock on a file opened
only for reading. Sad, but true.


> QUESTIONS:
> i've seen that sysopen may be prefered to open for writing, Why? eg
> sysopen (QFILE "$queue_file" O_RDWR) or die...


The main reason I see for using it is to avoid race conditions when you
want to be assured the file does not (or does) exist before (re)creating
it. I don't see that that applies to your current situation.
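To illustrate the race Xho means: with open, "check that the file exists, then
create it" can interleave with another process doing the same check. sysopen
with O_CREAT | O_EXCL performs the check and the creation in a single atomic
syscall. A sketch with a placeholder path:

```perl
use strict;
use warnings;
use Fcntl qw(O_RDWR O_CREAT O_EXCL);

my $file = "/tmp/demo_excl.txt";   # placeholder path
unlink $file;                      # start clean so the create path runs

# O_CREAT|O_EXCL: create the file, but fail (EEXIST) if it already exists.
# Existence check and creation happen in one atomic syscall, so no other
# process can sneak in between them.
if (sysopen(my $fh, $file, O_RDWR | O_CREAT | O_EXCL)) {
    print {$fh} "created by $$\n";
    close($fh);
} else {
    warn "someone else created $file first: $!\n";
}
```

With plain open there is no way to express "create only if absent", which is
why sysopen earns its keep for lock files and similar create-once patterns.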

>
> why can't i request a LOCK_EX in sub inQueue??


Because your OS won't let you do that on an opened-for-read file.


> if i change inQueue to LOCK_SH....
> if i have two processes using LOCK_SH will they attempt to read at the
> same time?


Yes. Do you (think you) have a problem with that? If so, why?

> will LOCK_EX ever get a chance??


That depends on the details of your code, which we don't see.

> ie will shared locks always succeed if
> another process has a shared lock even if another process is was waiting
> for an exclusive lock


That is entirely up to your OS, not up to Perl. I think the answer is "yes"
on most or all OSes/filesystems, but I don't know. (For what it is worth, in
MySQL, at least the older versions I am used to, it is not the case; i.e. by
default a pending exclusive request will block incoming shared requests.)


> scenario timeline:
> time 0 user1 requests LOCK_SH
> time 1 user1 gets LOCK_SH
> time 2 daemon requests LOCK_EX
> time 3 daemon waits for os to give lock
> time 4 user2 requests LOCK_SH
> time5 ....does user2 get lock? or does it get in line behind daemon.


Turn the above into Perl, and test it on your OS, using the filesystem
you want to use. See what it does.
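One low-effort way to do that test without juggling three terminals: hold a
lock in a forked child and probe from the parent with LOCK_NB, which returns
immediately instead of blocking. The sketch below only demonstrates the
probing technique (EX is refused while SH is held; a second SH is granted);
testing whether a *waiting* EX blocks later SH requests would need a second
child requesting LOCK_EX in blocking mode, and that answer is OS-dependent:

```perl
use strict;
use warnings;
use Fcntl qw(:flock);

my $file = "/tmp/demo_lock.txt";   # placeholder path
open(my $fh, ">", $file) or die "open: $!";
close($fh);

my $pid = fork() // die "fork: $!";
if ($pid == 0) {                   # child plays user1: hold LOCK_SH briefly
    open(my $c, "<", $file) or die "child open: $!";
    flock($c, LOCK_SH)      or die "child flock: $!";
    sleep 2;
    exit 0;
}

sleep 1;                           # give the child time to take its lock
open(my $p, "+<", $file) or die "parent open: $!";

# LOCK_NB: fail immediately instead of blocking, so we can observe the state.
my $got_ex = flock($p, LOCK_EX | LOCK_NB);   # daemon's view: blocked by SH
my $got_sh = flock($p, LOCK_SH | LOCK_NB);   # user2's view: SH coexists with SH

printf "LOCK_EX while user1 holds SH: %s\n", $got_ex ? "granted" : "blocked";
printf "LOCK_SH while user1 holds SH: %s\n", $got_sh ? "granted" : "blocked";
waitpid($pid, 0);
```

Swapping which lock each side requests lets you walk through the whole
timeline from the original post on your own filesystem.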

>
> if the latter, what the $$$$ is the difference between LOCK_SH and
> LOCK_EX????


One is shared and the other is exclusive. I don't know what part of that
you don't understand, so I don't know how to explain it to you.


> if the former (ie user2 gets LOCK_SH without waiting) then how can you be
> certain of the position, interaction within file stream considering user1
> may be reading?


The system (or perl, or something, anyway) does that for you.

> does each process get a
> different pointer into the file position?


That is certainly the case on the systems I use.
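Easy to verify: two separate open()s of the same file, even inside one
process, get independent file positions. (Descriptors shared via dup, or
inherited across fork without a fresh open, are the exception; those share
one position.)

```perl
use strict;
use warnings;

my $file = "/tmp/demo_pos.txt";    # placeholder path
open(my $w, ">", $file) or die "open: $!";
print {$w} "line1\nline2\nline3\n";
close($w);

# Each open() creates its own file description, hence its own position.
open(my $a, "<", $file) or die "open: $!";
open(my $b, "<", $file) or die "open: $!";

my $first_a = <$a>;    # advances only $a's position
my $first_b = <$b>;    # $b still starts at the beginning

print $first_a;        # line1
print $first_b;        # line1 -- $b was not moved by $a's read
close($a);
close($b);
```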

Xho

 
