A quota based lock

 
 
Tom Anderson
 
      08-09-2011
On Mon, 8 Aug 2011, Robert Stark wrote:

> I want to write a lock to control access to a resource. There are
> different kinds of jobs using this resource, say jobs A, B, C, [...] My idea
> is to input a map of percentages you want to assign to each job, and
> provide a simple lock API.
>
> Input A=>20, B=>70, C=>10 means A=>20%, B=>70%, C=>10%. If there are no A
> jobs pending on the lock, then its quota is divided evenly among the
> other pending jobs, i.e. B=>80%, C=>20%.


So there's a single resource, right? And at any time, only one thread can
hold the lock on the resource? But you want to share the lock between the
threads in a fair (or at least controlled) way over time? Do you want to
be fair in terms of the time for which the lock is held, or the number of
lock acquisitions?

> Then I got stuck; the only way I can think of is to introduce an
> extra dispatch thread and several queues. Can someone give me a hint?


You will need a separate queue for each kind of thread. I don't think you
need another thread to handle the dispatching.

When a thread tries to acquire the lock, if the lock is available, it
takes it (of course). If the lock is not available, it joins the right
queue.

When a thread releases the lock, if there are no threads waiting, it does
nothing (of course). If there are threads waiting, it decides which of the
kinds of thread should get the lock next, and hands the lock to the next
thread in that queue.

The only problem is how you decide which kind of thread should get the
lock next.

The easiest would be to keep counters for each kind of thread, counting
how many times threads of that kind have acquired the lock (or how long in
total they have held it for, if you prefer). You can then easily calculate
the total number of acquisitions (or the total time of holding), and so
the share of the lock which each kind of thread has had so far. At any
point in time, you can compare that historical share to your desired
share, and put the kinds of threads in order, from the most short-changed
to the most overprivileged, and award the lock to a thread of the most
needy kind which currently has threads waiting.
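
In code, a minimal sketch of that might look like the following (the class
and method names are invented here, and per-kind wait counts plus
notifyAll() stand in for real per-kind FIFO queues just to keep it short;
kinds are assumed to be registered up front):

import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a quota-based lock: each job kind has a target share, and on
 * release the lock is handed off to a waiting kind that is furthest below
 * its share so far.
 */
public class QuotaLock {
    private final Map<String, Double>  targetShare;   // kind -> desired fraction
    private final Map<String, Long>    acquisitions = new HashMap<String, Long>();
    private final Map<String, Integer> waiting      = new HashMap<String, Integer>();
    private String owner;      // kind currently holding the lock, or null
    private String nextKind;   // kind chosen to receive the lock on hand-off

    public QuotaLock(Map<String, Double> targetShare) {
        this.targetShare = new HashMap<String, Double>(targetShare);
        for (String kind : targetShare.keySet()) {
            acquisitions.put(kind, 0L);
            waiting.put(kind, 0);
        }
    }

    /** kind must be one of the kinds passed to the constructor. */
    public synchronized void acquire(String kind) throws InterruptedException {
        waiting.put(kind, waiting.get(kind) + 1);
        try {
            // Wait until the lock is free and either no hand-off is pending
            // or the hand-off is addressed to our kind.
            while (owner != null || (nextKind != null && !nextKind.equals(kind))) {
                wait();
            }
        } finally {
            waiting.put(kind, waiting.get(kind) - 1);
        }
        nextKind = null;
        owner = kind;
        acquisitions.put(kind, acquisitions.get(kind) + 1);
    }

    public synchronized void release() {
        owner = null;
        nextKind = chooseMostShortChanged();
        notifyAll();   // simple but wasteful; a Condition per kind would avoid the herd
    }

    /** Pick the waiting kind whose historical share lags its target the most. */
    private String chooseMostShortChanged() {
        long total = 0;
        for (long n : acquisitions.values()) {
            total += n;
        }
        String best = null;
        double bestDeficit = Double.NEGATIVE_INFINITY;
        for (String kind : targetShare.keySet()) {
            if (waiting.get(kind) == 0) {
                continue;                          // nobody of this kind is waiting
            }
            double actual = total == 0 ? 0.0 : (double) acquisitions.get(kind) / total;
            double deficit = targetShare.get(kind) - actual;
            if (deficit > bestDeficit) {
                bestDeficit = deficit;
                best = kind;
            }
        }
        return best;                               // null means nobody is waiting
    }
}

Fairness here is over acquisition counts; replacing the acquisition counter
with held-time accounting gives the time-based variant.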

This approach will give you a fair apportionment over time.

However, it means that if a kind of thread does not make full use of its
allowance at some point in time (perhaps because there are not many of
that kind of thread running), then it effectively builds up a 'share
credit' which will allow it to make greater demands on the resource later
on. You might consider that unfair. You might not.

If you do consider it unfair, you could address it by decaying the share
counts over time - it would be a simple use of timestamps and Math.exp()
to apply an exponential decay immediately before updating the counters.
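
For instance, something along these lines, meant to live inside the lock
class and be called immediately before a counter is updated (the half-life
is a made-up tuning value, and the counters would then be doubles rather
than whole counts):

// Age all counters by the time elapsed since the last update, so that
// old usage gradually stops counting against a job kind.
private static final double HALF_LIFE_MS = 60000.0;   // assumed tuning constant
private long lastUpdateMillis = System.currentTimeMillis();

private void decayCounters(Map<String, Double> counters) {
    long now = System.currentTimeMillis();
    double elapsed = now - lastUpdateMillis;
    // exp(-ln 2 * elapsed / halfLife) halves every counter once per half-life
    double factor = Math.exp(-Math.log(2.0) * elapsed / HALF_LIFE_MS);
    for (Map.Entry<String, Double> entry : counters.entrySet()) {
        entry.setValue(entry.getValue() * factor);
    }
    lastUpdateMillis = now;
}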

tom

--
One chants out between two worlds Fire - Walk With Me.
 
 
Robert Klemme
 
      08-10-2011
On 09.08.2011 02:41, Eric Sosman wrote:
> On 8/8/2011 3:46 PM, Robert Klemme wrote:
>> On 08.08.2011 20:57, markspace wrote:
>>> On 8/8/2011 11:39 AM, Knute Johnson wrote:
>>>
>>>> No priority scheme will ever be truly fair. I'll bet you could get
>>>> pretty close without being too complicated. I'll think about it some
>>>> more.
>>>
>>>
>>> A simple priority system might involve multiple queues, where the high
>>> priority queues are serviced X times more than the lower ones.
>>>
>>> E.g., two queues. Queue A gets 10 jobs executed for each 1 job that
>>> queue B gets executed. But because queue B is always guaranteed to be
>>> serviced eventually, there is no starvation.
>>>
>>> This is a simple step up from round-robin service (which is what Eric
>>> proposed). There are many existing algorithms. Check out any text on OSs
>>> and job scheduling.
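
A rough sketch of that weighted servicing (the queue names, the use of
Runnable jobs and the 100 ms poll timeout are assumptions, not anything
from the posts above):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// 10:1 weighted servicing between two job queues; queue B is polled after
// every burst of queue A jobs, so B can never starve.
class WeightedDispatcher implements Runnable {
    private static final int A_JOBS_PER_B_JOB = 10;
    private final BlockingQueue<Runnable> queueA = new LinkedBlockingQueue<Runnable>();
    private final BlockingQueue<Runnable> queueB = new LinkedBlockingQueue<Runnable>();

    public void submitA(Runnable job) { queueA.add(job); }
    public void submitB(Runnable job) { queueB.add(job); }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                for (int i = 0; i < A_JOBS_PER_B_JOB; i++) {
                    Runnable job = queueA.poll();   // don't block if A is empty
                    if (job == null) break;
                    job.run();
                }
                Runnable job = queueB.poll();
                if (job == null) {
                    // Nothing in B; block briefly on A so the loop doesn't spin.
                    job = queueA.poll(100, TimeUnit.MILLISECONDS);
                }
                if (job != null) job.run();
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();     // preserve interrupt status
        }
    }
}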

>>
>> Another idea would be to measure the time a task has access to the
>> resource, sum it up per task category, and for the next task pick the
>> first one from the category which is furthest below its specified share
>> (percentage). Basically your approach measures executions and this
>> approach measures actual resource usage time.

>
> Yes, all these disciplines are plausible. My main piece of advice
> is KISS: Begin with the simplest possible solution, and elaborate it
> only when there's solid evidence it won't suffice.
>
> Sometimes the evidence can be gathered in advance: If you know
> things about arrival rates and hold times and latency requirements, you
> may be able to do a calculation that shows simple FIFO won't hack it.
> More often, given the inherent complexity and "brittleness" of software
> systems, you'll need to implement first and measure afterwards to learn
> about a solution's shortcomings. This can lead to discarding the first
> solution -- but, hey: It was the simplest one you could imagine, so you
> probably didn't expend inordinate effort on it, right? Much cheaper to
> jettison a simple approach than a complicated one.
>
> KISS.


Absolutely agree. And we still need the OP to state his problem /
requirements clearly. Robert, what is it that you really need? What is
your use case and your ultimate goal?

Kind regards

robert



--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
 
 
Robert Stark
 
      08-10-2011
Sorry, I use Google Groups and could not find my post until today.
Thank you all!
There is a single resource and I want more sophisticated control over
it. There are different types of jobs (client requests, back-end jobs)
running in my system. Every job holds the resource for 1ms~20ms, but
some back-end jobs run for several hours and continuously acquire this
resource; in this case, client requests suffer low throughput or even
starvation. That is why I came up with this idea, and I do want to keep
it as simple as possible. A client request may come once in a while
(10-50 per second), acquire the lock 1-2 times, and exit.

I keep a queue for each type of job and keep counters to make the
scheduling decision; the counters expire when their sum exceeds 100 or
when some job's counter exceeds its quota and no other jobs are pending.
But I still introduced an extra dispatch thread; I find it extremely hard
to write something like what Tom Anderson described...
I want to complete it and test how it goes.

On Aug 10, 3:36 pm, Robert Klemme <(E-Mail Removed)> wrote:
> [full quote of the previous message snipped]

 
 
Robert Klemme
 
      08-10-2011

Please do not top post.

On 10.08.2011 13:40, Robert Stark wrote:
> Sorry, I use Google Groups and could not find my post until today.
> Thank you all!
> There is a single resource and I want more sophisticated control over
> it. There are different types of jobs (client requests, back-end jobs)
> running in my system. Every job holds the resource for 1ms~20ms, but
> some back-end jobs run for several hours and continuously acquire this
> resource; in this case, client requests suffer low throughput or even
> starvation. That is why I came up with this idea, and I do want to keep
> it as simple as possible. A client request may come once in a while
> (10-50 per second), acquire the lock 1-2 times, and exit.


This can never work: if you have jobs that - by design - hold the
resource for hours then no amount of lock implementation smartness will
prevent starvation without preemption. You cannot have a resource with
exclusive access, long access times and responsiveness at the same time.
Doing preemption manually will be difficult. It's better to break up
long running tasks into smaller sub tasks which need exclusive resource
access. Whether that's possible or not depends of course on your
business logic (which we still haven't seen).

> I want to complete it and test how it goes.


Good idea.

Cheers

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
 
 
Martin Gregorie
 
      08-10-2011
On Wed, 10 Aug 2011 18:55:43 +0200, Robert Klemme wrote:

> This can never work: if you have jobs that - by design - hold the
> resource for hours then no amount of lock implementation smartness will
> prevent starvation without preemption. You cannot have a resource with
> exclusive access, long access times and responsiveness at the same time.
> Doing preemption manually will be difficult. It's better to break up
> long running tasks into smaller sub tasks which need exclusive resource
> access. Whether that's possible or not depends of course on your
> business logic (which we still haven't seen).
>

Total agreement.

If you have a long-running job that requires exclusive resource access,
then by definition no other task will ever get a look-in while it's
running.

If such long-running background jobs are doing housekeeping tasks on a
database, which they often are, it's usually possible to identify
relatively short processing cycles that amend the database and can be
split out into separate transactions that run in the same timescale as
online transactions. By splitting the task up this way it can take its
turn in obtaining the resource lock for each transaction, and your locking
mechanism may not need to be any more complex than a FIFO queue.

The long-running task may run a bit slower due to transaction commit
overheads and (possibly) a need to save running totals between
transactions, but often the running totals etc. can be collected in a
separate read-only, and hence non-locking, transaction and the results
committed in a separate, short update transaction.

On the bright side, if the long process is redesigned along these lines
and also keeps track of progress, it will recover *much* faster when a
system crash, exception or whatever occurs while it's running.
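
A rough sketch of that shape (the chunk size and type names are
placeholders, and a fair ReentrantLock plays the part of the FIFO queue):

import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// A long housekeeping job split into short, individually locked
// "transactions" so interactive requests can interleave between chunks.
class HousekeepingJob implements Runnable {
    interface WorkItem { }                        // placeholder item type

    private static final int CHUNK_SIZE = 100;    // placeholder tuning value
    private final ReentrantLock resourceLock;     // shared with client requests, fair mode
    private final List<WorkItem> items;

    HousekeepingJob(ReentrantLock resourceLock, List<WorkItem> items) {
        this.resourceLock = resourceLock;
        this.items = items;
    }

    public void run() {
        for (int start = 0; start < items.size(); start += CHUNK_SIZE) {
            int end = Math.min(start + CHUNK_SIZE, items.size());
            resourceLock.lock();                  // take our turn in the queue
            try {
                processChunk(items.subList(start, end));   // one short transaction
            } finally {
                resourceLock.unlock();            // let waiting clients in
            }
            saveProgress(end);                    // cheap restart point after a crash
        }
    }

    private void processChunk(List<WorkItem> chunk) { /* business logic elided */ }
    private void saveProgress(int itemsDone)        { /* business logic elided */ }
}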


--
martin@ | Martin Gregorie
gregorie. | Essex, UK
org |
 
 
Robert Stark
 
      08-11-2011
On Aug 11, 3:37 am, Patricia Shanahan <(E-Mail Removed)> wrote:
> On 8/10/2011 9:55 AM, Robert Klemme wrote:
> [nested quotes snipped]
>
> I interpreted the paragraph you quoted rather differently. My picture is
> that each back-end job runs for a long time, but during that time
> repeatedly acquires the contended resource, still only keeping it for
> order milliseconds at a time. I base this on "continuously acquire"
> rather than "continuously hold" the resource.
>
> If that picture is correct, simple FIFO lock handling may be sufficient
> to let the client requests get through fast enough.
>
> However, any time a lock becomes enough of an issue that it requires
> this sort of discussion, the first thing to do is to examine whether it
> can be refactored to reduce contention. Is the lock really covering only
> one resource? Can that resource be replicated? Can it be divided up?
>
> Patricia


You are right, Patricia.
"each back-end job runs for a long time, but during that time
repeatedly acquires the contended resource, still only keeping it for
order milliseconds at a time"
Sorry I failed to make it clear the first time.
I think I have to do more testing using a simple FIFO lock; maybe there's
something wrong in my code!
 
 
markspace
 
      08-11-2011
On 8/10/2011 6:30 PM, Robert Stark wrote:
> I think I have to do more testing using a simple FIFO lock; maybe there's
> something wrong in my code!



If this is the case, try encapsulating your lock, and recording the
timing of each lock and unlock. Print out any thread that holds the
lock for a long time (> 500 ms?). This might clue you in to where some
threads are being starved.

Also note the "fair" constructor for ReentrantLock:

<http://download.oracle.com/javase/6/docs/api/java/util/concurrent/locks/ReentrantLock.html>
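
Something along these lines would cover both suggestions (the 500 ms
threshold and the class name are just placeholders):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Wrap the resource lock so that every hold is timed, and holds longer
// than a threshold are reported. Fair mode (true) hands the lock out
// roughly in arrival order.
class TimedLock {
    private static final long REPORT_THRESHOLD_MS = 500;
    private final ReentrantLock lock = new ReentrantLock(true);
    private long acquiredAtNanos;

    public void lock() {
        lock.lock();
        acquiredAtNanos = System.nanoTime();   // only the current owner touches this
    }

    public void unlock() {
        long heldMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - acquiredAtNanos);
        if (heldMs > REPORT_THRESHOLD_MS) {
            System.err.println(Thread.currentThread().getName()
                    + " held the lock for " + heldMs + " ms");
        }
        lock.unlock();
    }
}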

 
 
Robert Klemme
 
      08-11-2011
On 11.08.2011 03:30, Robert Stark wrote:
> On Aug 11, 3:37 am, Patricia Shanahan<(E-Mail Removed)> wrote:
>> On 8/10/2011 9:55 AM, Robert Klemme wrote:
>> [nested quotes snipped]
>>
>> I interpreted the paragraph you quoted rather differently. My picture is
>> that each back-end job runs for a long time, but during that time
>> repeatedly acquires the contended resource, still only keeping it for
>> order milliseconds at a time. I base this on "continuously acquire"
>> rather than "continuously hold" the resource.


I considered that as well but figured that then the starvation issue
wouldn't be so dramatic.

>> If that picture is correct, simple FIFO lock handling may be sufficient
>> to let the client requests get through fast enough.


I would have assumed that this is what Robert tried first and hit the
starvation issue then.

>> However, any time a lock becomes enough of an issue that it requires
>> this sort of discussion, the first thing to do is to examine whether it
>> can be refactored to reduce contention. Is the lock really covering only
>> one resource? Can that resource be replicated? Can it be divided up?


Absolutely. Often refactoring helps a great deal more. Maybe there is
a solution where an immutable fraction of the state can be shared,
reducing the need for locking. Or state can be copied and merged after
manipulation. There are so many options...
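
As a generic sketch of the immutable-share idea (not tied to Robert's
actual state; the names are invented): readers take a consistent snapshot
without ever touching the lock, and a writer copies, modifies and
republishes.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Readers work on an immutable snapshot and never need the lock;
// a writer copies the snapshot, modifies the copy, and publishes it.
final class SharedState {
    private final AtomicReference<Map<String, Integer>> current =
            new AtomicReference<Map<String, Integer>>(
                    Collections.<String, Integer>emptyMap());

    Map<String, Integer> read() {                  // lock-free, always consistent
        return current.get();
    }

    synchronized void update(String key, Integer value) {   // one writer at a time
        Map<String, Integer> copy = new HashMap<String, Integer>(current.get());
        copy.put(key, value);
        current.set(Collections.unmodifiableMap(copy));     // publish the new snapshot
    }
}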

> You are right, Patricia.
> "each back-end job runs for a long time, but during that time
> repeatedly acquires the contended resource, still only keeping it for
> order milliseconds at a time"


Thanks for clarifying.

> Sorry I failed to make it clear the first time.


No prob, I might actually have stumbled over my non-native readerness.

> I think I have to do more testing using a simple FIFO lock; maybe there's
> something wrong in my code!


Please see Mark's suggestion of using fair mode as well. You do pay a
performance penalty for the fair mode. Whether that is relevant in your
case needs to be measured of course.

Kind regards

robert

--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/
 