global interpreter lock

 
 
Bryan Olson
09-02-2005
Mike Meyer wrote:
> Bryan Olson writes:
>>With Python threads/queues how do I wait for two queues (or
>>locks or semaphores) at one call? (I know some methods to
>>accomplish the same effect, but they suck.)

>
> By "not as good as", I meant the model they provide isn't as manageable
> as the one provided by Queue/Threading. Like async I/O,
> Queue/Threading provides a better model at the cost of
> generality.


I can't tell why you think that.

> Instead of making vague assertions, why don't you provide us
> with facts?


Yeah, I'll keep doing that. You're the one proclaiming a 'model'
to be more manageable with no evidence.

> I.e. - what are the things you think are obvious that turned
> out not to be true? Name some software that implements sophisticated
services that we can go look at. And so on...


Thought we went over that. Look at the popular relational-
database engines. Implementing such a service with one line of
execution and async I/O is theoretically possible, but I've not
heard of anyone who has managed to do it. MySQL, PostgreSQL,
IBPhoenix, MaxDB, all have multiple simultaneous threads and/or
processes (as do the competitive commercial database engines,
though you can't look under the hood so easily).
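For what it's worth, the usual workaround for the multi-queue wait I
mentioned is to funnel every source into one queue and block on that. It
works, but it costs a pump thread per source - which is the kind of method
I mean by "suck". A sketch in today's Python (names illustrative):

```python
import queue
import threading

def merge_queues(*sources):
    """Funnel several queues into one, so a single blocking get() serves all.
    Costs one pump thread per source queue."""
    merged = queue.Queue()
    for tag, src in enumerate(sources):
        def pump(src=src, tag=tag):
            while True:
                merged.put((tag, src.get()))
        threading.Thread(target=pump, daemon=True).start()
    return merged

a, b = queue.Queue(), queue.Queue()
merged = merge_queues(a, b)
a.put("from a")
b.put("from b")
got = sorted(merged.get() for _ in range(2))
# got is [(0, "from a"), (1, "from b")] once both pumps have run
```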


--
--Bryan
 
Mike Meyer
09-04-2005
Dennis Lee Bieber <(E-Mail Removed)> writes:
> On Wed, 31 Aug 2005 22:44:06 -0400, Mike Meyer <(E-Mail Removed)> declaimed
> the following in comp.lang.python:
>> I don't know what Ada offers. Java gives you pseudo-monitors. I'm

>
> From the days of MIL-STD-1815, Ada has supported "tasks" which
> communicate via "rendezvous"... The receiving task waits on an "accept"
> statement (simplified -- there is a means to wait on multiple different
> accepts, and/or time-out). The "sending" task calls the "entry" (looks
> like a regular procedure call with in and/or out parameters -- matches
> the signature of the waiting "accept"). As with "accept", there are
> selective entry calls, wherein whichever task is waiting on the
> matching accept will be invoked. During the rendezvous, the "sending"
> task blocks until the "receiving" task exits the "accept" block -- at
> which point both tasks may proceed concurrently.


Thank you for providing the description. That was sufficient context
that Google found the GNAT documentation, which was very detailed.

Based on that, it seems that entry/accept are just a synchronization
construct - with some RPC semantics thrown in.
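To make that concrete: the rendezvous can be roughly approximated in Python
with queues, where each "entry call" carries its arguments plus a private
reply queue, and the accepting task blocks on the entry queue. (Illustrative
sketch only - all names are made up.)

```python
import queue
import threading

entries = queue.Queue()   # plays the role of the task's "entry"

def server():
    for _ in range(2):               # "accept" two rendezvous, then finish
        args, reply = entries.get()  # block at the accept statement
        reply.put(args * 10)         # the "out" parameter; caller resumes

threading.Thread(target=server).start()

def entry_call(x):
    reply = queue.Queue(maxsize=1)
    entries.put((x, reply))
    return reply.get()               # caller blocks until the rendezvous ends

results = [entry_call(1), entry_call(2)]
# results == [10, 20]
```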

> As you might notice -- data can go both ways: in at the top of the
> rendezvous, and out at the end.
>
> Tasks are created by declaring them (there are also task types, so
> one can easily create a slew of identical tasks).
>
> procedure xyz is
>
>     a : task; -- not real Ada, again, simplified
>     b : task;
>
> begin -- the tasks begin execution here
>     -- do stuff in the procedure itself, maybe call task entries
> end;


The problem is that this doesn't really provide any extra protection
for the programmer. You get language facilities that will provide the
protection, but the programmer has to remember to use them in every
case. If you forget to declare a method as protected, then nothing
stops two tasks from entering it and screwing up the object's data with
unsynchronized access. This should be compared to SCOOP, where trying
to do something like that is impossible.
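The hazard is easy to show in Python terms (an illustrative sketch - the
class is made up): the language hands you `threading.Lock`, but nothing
forces an update through it, so the protection is a convention rather than
a guarantee.

```python
import threading

class Account:
    def __init__(self):
        self.balance = 0
        self.lock = threading.Lock()

    def deposit_safely(self, n):
        with self.lock:        # the programmer must remember this...
            self.balance += n

    def deposit_unsafely(self, n):
        self.balance += n      # ...because this also compiles and runs fine

acct = Account()
threads = [threading.Thread(
               target=lambda: [acct.deposit_safely(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# With every update under the lock, the count is exact: 4000.
```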

Thanks again,
<mike

--
Mike Meyer <(E-Mail Removed)> http://www.mired.org/home/mwm/
Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
 
Paul Rubin
09-10-2005
Michael Sparks <(E-Mail Removed)> writes:
> > But I think to do it on Erlang's scale, Python needs user-level
> > microthreads and not just OS threads.

>
> You've just described Kamaelia* BTW, except substitute micro-thread
> with generator. (Also, we call the queues outboxes and inboxes, and
> the combination of a generator in a class with inboxes and outboxes a
> component.)
> * http://kamaelia.sf.net/


I don't see how generators substitute for microthreads. In your example
from another post:

class encoder(component):
    def __init__(self, **args):
        self.encoder = unbreakable_encryption.encoder(**args)
    def main(self):
        while 1:
            if self.dataReady("inbox"):
                data = self.recv("inbox")
                encoded = self.encoder.encode(data)
                self.send(encoded, "outbox")
            yield 1

You've got the "main" method creating a generator that has its own
event loop that yields after each event it processes. Notice the kludge

if self.dataReady("inbox"):
    data = self.recv("inbox")

instead of just saying something like:

data = self.get_an_event("inbox")

where .get_an_event "blocks" (i.e. yields) if no event is pending.
The reason for that is that Python generators aren't really coroutines
and you can't yield except from the top level function in the generator.
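That restriction is easy to demonstrate: a yield inside a nested function
just makes *that* function a generator; it doesn't suspend the caller.
(Much later, Python 3.3's `yield from` added exactly this delegation, but
the sketch below shows the limitation as it stood.)

```python
def helper():
    yield "from helper"   # makes helper a generator; can't suspend its caller

def outer():
    helper()              # merely creates (and discards) a generator object
    yield "from outer"

values = list(outer())
# values == ["from outer"]: helper's yield never reaches outer's consumer,
# which is why yields must stay in the generator's top-level frame.
```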

In that particular example, the yield is only at the end, so the
generator isn't doing anything that an ordinary function closure
couldn't:

def main(self):
    def run_event():
        if self.dataReady("inbox"):
            data = self.recv("inbox")
            encoded = self.encoder.encode(data)
            self.send(encoded, "outbox")
    return run_event

Now instead of calling .next on a generator every time you want to let
your microthread run, just call the run_event function that main has
returned. However, I suppose there's times when you'd want to read an
event, do something with it, yield, read another event, and do
something different with it, before looping. In that case you can use
yields in different parts of that state machine. But it's not that
big a deal; you could just use multiple functions otherwise.

All in all, maybe I'm missing something but I don't see generators as
being that much help here. With first-class continuations like
Stackless used to have, the story would be different, of course.

 
Michael Sparks
09-14-2005
Paul Rubin wrote:
....
> I don't see how generators substitute for microthreads. In your example
> from another post:


I've done some digging and found what you mean by microthreads -
specifically I suspect you're referring to the microthreads package for
stackless? (I tend to view an activated generator as having a thread of
control, and since it's not a true thread, but is similar, I tend to view
that as a microthread. However, your term and mine don't coincide, and it
appears to cause confusion, so I'll switch my definition to match yours,
given the microthreads package, etc)

You're right, generators aren't a substitute for microthreads. However, I do
see them as being a useful alternative to microthreads. Indeed, I think the
fact that you're limited to a single stack frame has actually helped our
architecture.

The reason I say this is because it naturally encourages small components
which are highly focussed in what they do. For example, when I was
originally looking at how to wrap network handling up, it was logical to
want to do this:

[ writing something probably implementable using greenlets, but definitely
pseudocode ]

@Nestedgenerator
def runProtocol(...):
    while ...:
        data = get_data_from_connection( ... )

# Assume non-blocking socket
def get_data_from_connection(...):
    try:
        data = sock.recv()
        return data
    except ... :
        Yield(WaitSocketDataReady(sock))
    except ... :
        return failure

Or something - you get the idea (the above code is naff, but that's because
it's late here) - the operation that would normally block, you instead yield
inside until given a message.

The thing about this is that we wouldn't have ended up with the structure we
do have - which is to have components for dealing with connected sockets,
listening sockets and so on. We've been able to reuse the connected socket
code between systems much more cleanly than we would have done (I
suspect) if we'd been able to nest yields (as I once asked about here)
or had true co-routines.

At some point it would be interesting to rewrite our entire system based on
greenlets and see if that works out with more or less reuse. (And more or
less ability to make code more parallel or not.)


[re-arranging order slightly of comments ]
> class encoder(component):
>     def __init__(self, **args):
>         self.encoder = unbreakable_encryption.encoder(**args)
>     def main(self):
>         while 1:
>             if self.dataReady("inbox"):
>                 data = self.recv("inbox")
>                 encoded = self.encoder.encode(data)
>                 self.send(encoded, "outbox")
>             yield 1
>

....
> In that particular example, the yield is only at the end, so the
> generator isn't doing anything that an ordinary function closure
> couldn't:
>
> def main(self):
>     def run_event():
>         if self.dataReady("inbox"):
>             data = self.recv("inbox")
>             encoded = self.encoder.encode(data)
>             self.send(encoded, "outbox")
>     return run_event


Indeed, we can currently rewrite that particular example as:

class encoder(component):
    def __init__(self, **args):
        self.encoder = unbreakable_encryption.encoder(**args)
    def mainLoop(self):
        if self.dataReady("inbox"):
            data = self.recv("inbox")
            encoded = self.encoder.encode(data)
            self.send(encoded, "outbox")
        return 1

That's a bad example though. A more useful example is probably something
more like this:

class Multicast_sender(Axon.Component.component):
    def __init__(self, local_addr, local_port, remote_addr, remote_port):
        super(Multicast_sender, self).__init__()
        self.local_addr = local_addr
        self.local_port = local_port
        self.remote_addr = remote_addr
        self.remote_port = remote_port

    def main(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                             socket.IPPROTO_UDP)
        sock.bind((self.local_addr, self.local_port))
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 10)
        while 1:
            if self.dataReady("inbox"):
                data = self.recv()
                l = sock.sendto(data, (self.remote_addr, self.remote_port))
            yield 1

With a bit of fun with decorators, that can actually be collapsed into
something more like:

@component
def Multicast_sender(self, local_addr, local_port, remote_addr, remote_port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    sock.bind((local_addr, local_port))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 10)
    while 1:
        if self.dataReady("inbox"):
            data = self.recv()
            l = sock.sendto(data, (remote_addr, remote_port))
        yield 1





> You've got the "main" method creating a generator that has its own
> event loop that yields after each event it processes. Notice the kludge
>
> if self.dataReady("inbox"):
>     data = self.recv("inbox")
>
> instead of just saying something like:
>
> data = self.get_an_event("inbox")
>
> where .get_an_event "blocks" (i.e. yields) if no event is pending.
> The reason for that is that Python generators aren't really coroutines
> and you can't yield except from the top level function in the generator.
>




>
> Now instead of calling .next on a generator every time you want to let
> your microthread run, just call the run_event function that main has
> returned. However, I suppose there's times when you'd want to read an
> event, do something with it, yield, read another event, and do
> something different with it, before looping. In that case you can use
> yields in different parts of that state machine. But it's not that
> big a deal; you could just use multiple functions otherwise.
>
> All in all, maybe I'm missing something but I don't see generators as
> being that much help here. With first-class continuations like
> Stackless used to have, the story would be different, of course.


 
Michael Sparks
09-14-2005
arrgh... hit the wrong keystroke, which caused an early send before I'd
finished typing... (skip the message I'm replying to here for a minute,
please)


Michael.

 
Michael Sparks
09-15-2005
[ Second time lucky... ]
Paul Rubin wrote:
....
> I don't see how generators substitute for microthreads. In your example
> from another post:


I've done some digging and found what you mean by microthreads -
specifically I suspect you're referring to the microthreads package for
stackless? (I tend to view an activated generator as having a thread of
control, and since it's not a true thread, but is similar, I tend to view
that as a microthread. However, your term and mine don't coincide, and it
appears to cause confusion, so I'll switch my definition to match yours,
given the microthreads package, etc)

You're right that generators aren't a substitute for microthreads, though I
do see them as a useful alternative. The reason I say this is because it
naturally encourages small components
which are highly focussed in what they do. For example, when I was
originally looking at how to wrap network handling up, it was logical to
want to do this:

[ writing something probably implementable using greenlets, but definitely
pseudocode ]

@Nestedgenerator
def runProtocol(...):
    while ...:
        data = get_data_from_connection( ... )

# Assume non-blocking socket
def get_data_from_connection(...):
    try:
        data = sock.recv()
        return data
    except ... :
        Yield(WaitSocketDataReady(sock))
    except ... :
        return failure

Or something - you get the idea (the above code is naff, but that's because
it's late here) - the operation that would normally block, you instead yield
inside until given a message.

The thing about this is that we wouldn't have ended up with the structure we
do have - which is to have components for dealing with connected sockets,
listening sockets and so on. We've been able to reuse the connected socket
code between systems much more cleanly than we would have done (I
suspect) if we'd been able to nest yields (as I once asked about here)
or had true co-routines.

At some point it would be interesting to rewrite our entire system based on
greenlets and see if that works out with more or less reuse. (And more or
less ability to make code more parallel or not.)

[re-arranging order slightly of comments ]
> class encoder(component):
>     def __init__(self, **args):
>         self.encoder = unbreakable_encryption.encoder(**args)
>     def main(self):
>         while 1:
>             if self.dataReady("inbox"):
>                 data = self.recv("inbox")
>                 encoded = self.encoder.encode(data)
>                 self.send(encoded, "outbox")
>             yield 1
>

....
> In that particular example, the yield is only at the end, so the
> generator isn't doing anything that an ordinary function closure
> couldn't:
>
> def main(self):
>     def run_event():
>         if self.dataReady("inbox"):
>             data = self.recv("inbox")
>             encoded = self.encoder.encode(data)
>             self.send(encoded, "outbox")
>     return run_event



Indeed, we can currently rewrite that particular example as:

class encoder(component):
    def __init__(self, **args):
        self.encoder = unbreakable_encryption.encoder(**args)
    def mainLoop(self):
        if self.dataReady("inbox"):
            data = self.recv("inbox")
            encoded = self.encoder.encode(data)
            self.send(encoded, "outbox")
        return 1

And that will work today. (We have a 3-callback form available for people
who aren't very au fait with generators, or who are just more comfortable
with callbacks.)


That's a bad example though. A more useful example is probably something
more like this: (changed example from accidental early post)
....
center = list(self.rect.center)
self.image = self.original
current = self.image
scale = 1.0
angle = 1
pos = center
while 1:
    self.image = current
    if self.dataReady("imaging"):
        self.image = self.recv("imaging")
        current = self.image
    if self.dataReady("scaler"):
        # Scaling
        scale = self.recv("scaler")
    w, h = self.image.get_size()
    self.image = pygame.transform.scale(self.image, (w * scale, h * scale))
    if self.dataReady("rotator"):
        angle = self.recv("rotator")
    # Rotation
    self.image = pygame.transform.rotate(self.image, angle)
    if self.dataReady("translation"):
        # Translation
        pos = self.recv("translation")
    self.rect = self.image.get_rect()
    self.rect.center = pos
    yield 1


(this code is from Kamaelia.UI.Pygame.BasicSprite)

Can it be transformed to something event based? Yes of course. Is it clear
what's happening though? I would say yes. Currently we encourage the user
to look to see if data is ready before taking it, simply because it's the
simplest interface that we can guarantee consistency with.

For example, currently the exception based equivalent would be:
try:
    pos = self.recv("translation")
except IndexError:
    pass

Which isn't necessarily ideal, because we haven't really finalised the
implementation of inboxes and outboxes (e.g. will we always throw
IndexError?). We are certain, though, that the behaviour of
send/recv/dataReady can remain consistent until then.

(In some discussions, Twisted people have suggested Twisted's deferred
queues might be useful here, but I haven't had a chance to look at them in
detail.)


At the moment, one option that springs to mind is this:

    yield WaitDataAvailable("inbox")

(This is largely because we're looking at how to add syntactic sugar for
synchronous bidirectional messaging.) This would allow the scheduler to
suspend the generator until data is ready. It doesn't, however, work for the
example above.
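To make the idea concrete, here's a toy sketch of a scheduler that honours
such a request (this is not Kamaelia's actual scheduler - the names and
structure are illustrative): a generator that yields WaitDataAvailable is
parked until the named box is non-empty.

```python
class WaitDataAvailable:
    """Illustrative request object; the name comes from the post."""
    def __init__(self, boxname):
        self.boxname = boxname

def schedule(tasks):
    """Round-robin over (generator, boxes) pairs."""
    runnable = list(tasks)
    waiting = []                                  # (request, generator, boxes)
    while runnable or waiting:
        # Wake any parked generator whose awaited box now has data.
        still_parked = []
        for req, gen, boxes in waiting:
            if boxes[req.boxname]:
                runnable.append((gen, boxes))
            else:
                still_parked.append((req, gen, boxes))
        waiting = still_parked
        if not runnable:
            break                                 # toy deadlock handling
        gen, boxes = runnable.pop(0)
        try:
            result = next(gen)
        except StopIteration:
            continue
        if isinstance(result, WaitDataAvailable):
            waiting.append((result, gen, boxes))
        else:
            runnable.append((gen, boxes))

log = []
boxes = {"inbox": []}

def consumer(boxes):
    yield WaitDataAvailable("inbox")              # parked until data arrives
    log.append(boxes["inbox"].pop(0))

def producer(boxes):
    yield 1
    boxes["inbox"].append("hello")
    yield 1

schedule([(consumer(boxes), boxes), (producer(boxes), boxes)])
# log == ["hello"]: the consumer only ran once the producer delivered.
```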

Whereas currently the following:

    self.pause()
    yield 1

will prevent the component from being run until one of its inboxes has a
delivery made to it *or* a message is taken from one of its outboxes. (Very
coarse-grained.)

> Notice the kludge


FWIW, it's deliberate because we can maintain API consistency, until we
decide on better syntactic sugar.

> The reason for that is that Python generators aren't really coroutines
> and you can't yield except from the top level function in the generator.


Agreed - as noted above. We're finding this to be a strength though. (Though
to confirm/deny this properly would require a rewrite using greenlets or
similar)

> Now instead of calling .next on a generator every time you want to let
> your microthread run, just call the run_event function that main has
> returned.


But then you're building state machines. We're using generators because they
allow people to write code that looks completely single threaded, throw in
yields at key locations, abstract out input/output, and do all this in small
gradual steps. (I reference an example of this below.)

Relevant quote that might help explain where I'm coming from is this:

"Threads are for people who can't program state machines." -- Alan Cox

I'd agree, really, on some level, but I'm always left wondering - what about
the people who can't program state machines, but don't want to use threads
etc.? (For whatever reason - maybe the architecture they're running on has a
poor threads implementation.)

Initially co-routines struck me as the halfway house, but we decided to
stick with standard python and explore a generator based approach.

> However, I suppose there's times when you'd want to read an
> event, do something with it, yield, read another event, and do
> something different with it, before looping. In that case you can use
> yields in different parts of that state machine.


That's indeed what we do for a number of existing components. There's also
the viewpoint aspect - you can view the system as event based (receiving a
message as an event) or you can view it as dataflow. From our perspective,
we view it as a dataflow system.

> But it's not that
> big a deal; you could just use multiple functions otherwise.



Having a single function with yields peppering it provides a simpler path
from a single-threaded standalone program to something sitting inside a
larger system whilst remaining single threaded. We have a walk-through of
how to write a component here [1], based on the experience of writing
components for multicast handling.

[1]
http://kamaelia.sourceforge.net/cgi-...tid=1113495151

The components written are sufficient for the tasks we need them for at
present, but probably need work for the general case. However, the resulting
code remains close to looking single threaded - lowering the barrier to bug
finding. (I'm a firm believer that > 90% of the population can't write
bug-free code - me included.)

The final multicast transceiver may have some issues that jump out to
someone else that wouldn't necessarily jump out if I'd turned the code
inside out into separate state functions. I'm fairly certain it would've
been less clear (to someone coming along later) how to join the
sender/receiver code into a single transceiver.

The other thing is the alternatives to generators/coroutines are:
* threads/processes
* State machine style approaches

Having worked on a (very) large C++ project which was very state machine
based, I've come to have a natural dislike for them, and wondered at the
time whether a generator/coroutine approach would be easier to pick up and
more maintainable. It might be; it might not be.
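For a feel of the trade-off, here's the same tiny protocol written both ways
(an illustrative sketch; the generator version uses the send() that PEP 342
later added). In the class, the "where am I?" state is an attribute you
maintain by hand; in the generator, it's simply where the code is paused.

```python
# Explicit state machine: state lives in an attribute.
class HeaderSM:
    def __init__(self):
        self.state = "want_name"
        self.name = None
        self.pairs = []

    def feed(self, token):
        if self.state == "want_name":
            self.name = token
            self.state = "want_value"
        else:
            self.pairs.append((self.name, token))
            self.state = "want_name"

# Generator: the "state" is just the point of suspension.
def header_gen(pairs):
    while True:
        name = yield
        value = yield
        pairs.append((name, value))

tokens = ["Host", "example.org", "Accept", "*/*"]

sm = HeaderSM()
for tok in tokens:
    sm.feed(tok)

pairs = []
g = header_gen(pairs)
next(g)              # prime the generator to its first yield
for tok in tokens:
    g.send(tok)
# Both approaches produce [("Host", "example.org"), ("Accept", "*/*")].
```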

The idea behind our work is to have a go, build something and see if it
really is better or worse. If it's worse, that's life. If it's better,
hopefully other people will copy the approach or use the tools we release.

Until then (he says optimistically) other people do have GOOD systems,
like Twisted, which is one of the nicest systems of its kind. (Personally
I'd expect that if our stuff pans out we'd need to do a partial rewrite to
simplify the process for people to cherry-pick code into Twisted (or
whatever), if they want it.)

> All in all, maybe I'm missing something but I don't see generators as
> being that much help here. With first-class continuations like
> Stackless used to have, the story would be different, of course.


I suppose what I'm saying is that what you're losing isn't as large as you
think it is, and it brings benefits of its own along the way. This does
mean, though, that we now have the ability to compose interesting systems in
a unix pipeline approach using graphical pipeline editors that produce code
that looks like this:

pipeline( ReadFileAdaptor( filename = '/data/dirac-video/bar.drc',
                           readmode = 'bitrate', bitrate = 480000 ),
          SingleServer( ),
        ).activate()

pipeline( TCPClient( host = "127.0.0.1", port = 1601 ),
          DiracDecoder( ),
          RateLimit( messages_per_second = 15, buffer = 2 ),
          VideoOverlay( ),
        ).run()

.... which creates 2 pipelines - one represents a server sending data out
over a network socket, the other represents a client that connects, decodes
and displays the video.

The Tk integration was relatively quick to write, because it /couldn't/ be
complex. The Pygame integration was fairly simple, because it /couldn't/
be complex. (Which may be a fringe benefit of generators.) We haven't looked
at integrating gtk, wx or qt yet.

From our perspective the implementation of pipeline is the interesting part.
Currently this is simply a wrapper component, however it is responsible for
activating the components passed over, and /could/ run the generator
based components in different processes (and hence processors potentially).
Alternatively that could be left to the scheduler to do, but I suspect
something with a bit of control would be nice.

None of this is really special to generators as is probably obvious, but
that's where we started because we hypothesised that the resulting code
*might* be cleaner, whilst potentially able to be just as efficient as more
state machine based approaches. If greenlets had been available when we
started I suspect we would have used those.

We rejected stackless at the time because generators were available, and
whilst not as good from some perspectives /are/ part of the standard
language since 2.2.something. That decision has meant that we're able to
(and do) run on things like mobiles, and upwards without changes, except to
packaging.

At the end of the day, the only reason I'm talking about this stuff at
all is because we're finding it useful - perhaps more so than I expected
when I first realised the limitations of generators. If you don't find it
useful, then fair enough.

Best Regards,


Michael.

 
Stephen Thorne
09-15-2005
On 15/09/05, Michael Sparks <(E-Mail Removed)> wrote:
> At the moment, one option that springs to mind is this:
> yield WaitDataAvailable("inbox")


Twisted supports this.

help("twisted.internet.defer.waitForDeferred")

example usage is:

@deferredGenerator
def thingummy():
    thing = waitForDeferred(makeSomeRequestResultingInDeferred())
    yield thing
    thing = thing.getResult()
    print thing  # the result! hoorj!

With the new generator syntax, it becomes somewhat less clunky,
allowing for the syntax:

@defgen
def foo():
    somereturnvalue = yield SomeLongRunningOperation()
    print somereturnvalue

http://svn.twistedmatrix.com/cvs/san...rkup&rev=14348
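The trampoline behind that style is small. Here's an illustrative toy (the
names are made up, not Twisted's API): the driver performs each yielded
operation and sends the result back in as the value of the yield expression.

```python
def run(gen):
    """Drive a generator that yields zero-argument operations to perform."""
    try:
        op = next(gen)           # run to the first yield
        while True:
            op = gen.send(op())  # do the operation, resume with its result
    except StopIteration:
        pass

results = []

def some_long_running_operation():
    return "hoorj"

def foo():
    somereturnvalue = yield some_long_running_operation
    results.append(somereturnvalue)

run(foo())
# results == ["hoorj"]
```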

--
Stephen Thorne
Development Engineer
 
Michael Sparks
09-15-2005
Stephen Thorne wrote:

> On 15/09/05, Michael Sparks <(E-Mail Removed)> wrote:
>> At the moment, one option that springs to mind is this:
>> yield WaitDataAvailable("inbox")

>
> Twisted supports this.
>
> help("twisted.internet.defer.waitForDeferred")


Thanks for this. I'll take a look, and either we'll use that or we'll use
something that maps cleanly. (The reason for the pause is that running on
mobiles is important to us.) Thanks for the example too.

Best Regards,


Michael.
--
(E-Mail Removed), http://kamaelia.sourceforge.net/
British Broadcasting Corporation, Research and Development
Kingswood Warren, Surrey KT20 6NP

This message (and any attachments) may contain personal views
which are not the views of the BBC unless specifically stated.

 
Michele Simionato
09-15-2005
It looks like I am reinventing Twisted and/or Kamaelia.
This is code I wrote just today to simulate Python 2.5
generators in current Python:

import Queue

class coroutine(object):
    def __init__(self, *args, **kw):
        self.queue = Queue.Queue()
        self.it = self.__cor__(*args, **kw)
    def start(self):
        return self.it.next()
    def next(self):
        return self.send(None)
    def __iter__(self):
        return self
    def send(self, *args):
        self.queue.put(args)
        return self.it.next()
    def recv(self):
        return self.queue.get()
    @classmethod
    def generator(cls, gen):
        return type(gen.__name__, (cls,), dict(__cor__=gen))

@coroutine.generator
def consumer(self, N):
    for i in xrange(N):
        yield i
        cmd = self.recv()
        if cmd == "exit":
            break

c = consumer(100)
print c.start()
for cmd in ["", "", "", "", "exit"]:
    print c.send(cmd)


Michele Simionato

 
Michael Sparks
09-15-2005
Michele Simionato wrote:

> It looks like I am reinventing Twisted and/or Kamaelia.


If it's /fun/, is that a problem? (Interesting implementation, BTW.)

FWIW, about a year ago it wasn't clear whether we would be able to release
our stuff, so as part of a presentation I included a minimalistic
decorator-based version of our system that has some similarities to yours.
(The idea was that then at least the ideas had been shared, if not the main
code - which has since been approved.)

Posted below in case it's of interest:

import copy

def wrapgenerator(bases=object, **attrs):
    def decorate(func):
        class statefulgenerator(bases):
            __doc__ = func.__doc__
            def __init__(self, *args):
                super(statefulgenerator, self).__init__(*args)
                self.func = func(self, *args)
                for k in attrs.keys():
                    self.__dict__[k] = copy.deepcopy(attrs[k])
                self.next = self.__iter__().next
            def __iter__(self):
                return iter(self.func)
        return statefulgenerator
    return decorate

class com(object):
    def __init__(_, *args):
        # Default queues
        _.queues = {"inbox": [], "control": [],
                    "outbox": [], "signal": []}
    def send(_, box, obj): _.queues[box].append(obj)
    def dataReady(_, box): return len(_.queues[box]) > 0
    def recv(_, box):  # NB. Exceptions aren't caught
        X = _.queues[box][0]
        del _.queues[box][0]
        return X


A sample component written using this approach then looks like this:

@wrapgenerator(com)
def forwarder(self):
    "Simple data forwarding generator"
    while 1:
        if self.dataReady("inbox"):
            self.send("outbox", self.recv("inbox"))
        elif self.dataReady("control"):
            if self.recv("control") == "shutdown":
                break
        yield 1
    self.send("signal", "shutdown")
    yield 0

Since we're not actually using this approach, there are likely to be border
issues here. I'm not actually sure I like this particular approach, but it
was an interesting experiment.
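For what it's worth, here's a cut-down, runnable version of the same idea in
today's Python spelling (simplified from the code above; error handling and
the attrs machinery omitted), showing how such a component is driven:

```python
class com(object):
    """Minimal mailbox base class."""
    def __init__(self):
        self.queues = {"inbox": [], "control": [],
                       "outbox": [], "signal": []}
    def send(self, box, obj):
        self.queues[box].append(obj)
    def dataReady(self, box):
        return len(self.queues[box]) > 0
    def recv(self, box):
        return self.queues[box].pop(0)

def wrapgenerator(base):
    """Wrap a generator function into a component class with mailboxes."""
    def decorate(func):
        class statefulgenerator(base):
            def __init__(self, *args):
                super(statefulgenerator, self).__init__()
                self.gen = func(self, *args)
            def next(self):
                return next(self.gen)
        return statefulgenerator
    return decorate

@wrapgenerator(com)
def forwarder(self):
    "Simple data forwarding generator"
    while 1:
        if self.dataReady("inbox"):
            self.send("outbox", self.recv("inbox"))
        elif self.dataReady("control"):
            if self.recv("control") == "shutdown":
                break
        yield 1
    self.send("signal", "shutdown")
    yield 0

f = forwarder()
f.queues["inbox"].append("hello")
f.next()                          # one scheduler slice: the item is forwarded
f.queues["control"].append("shutdown")
f.next()                          # shutdown consumed, signal sent
# f.queues["outbox"] == ["hello"], f.queues["signal"] == ["shutdown"]
```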

Best Regards,


Michael.

 