Re: python thread scheduler?
On Wed, 07 Apr 2004 11:27:58 +0100, project2501 <email@example.com> wrote:
> to clarify, i'm seeing response times as low as 0.3 seconds when the
> client has 5 worker threads... rising to an average of about 8 seconds
> with 50 threads... and more with 100 threads.
The CPython interpreter utilizes a global lock which prevents more than one Python thread from running at any particular time. You will not see performance scale well with large numbers of threads (indeed, performance will probably be worse).
Threads which do not run Python code (for example, code in an extension module) can release the global interpreter lock to take advantage of multiple CPUs. An alternative is to use concurrency without using threads. It sounds like you are working on a network server, so I would recommend taking a look at Twisted:
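The event-driven model exarkun is pointing at (the one Twisted, and the stdlib asyncore, implement) can be sketched with nothing but `select`: one thread multiplexes many in-flight requests instead of dedicating a thread to each. This is only an illustrative sketch, not Twisted's API; the sequential echo server here is a hypothetical stand-in for the server being benchmarked.

```python
import select
import socket
import threading

N = 10

def echo_server(srv, n):
    # Hypothetical stand-in for the server under test:
    # sequentially accept n connections, echo one message each.
    for _ in range(n):
        conn, _ = srv.accept()
        conn.sendall(conn.recv(1024))
        conn.close()
    srv.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(N)
port = srv.getsockname()[1]
threading.Thread(target=echo_server, args=(srv, N), daemon=True).start()

# One client thread, N requests in flight: select() reports which
# sockets are ready, so no per-request worker thread is needed.
to_send = {}
for i in range(N):
    s = socket.socket()
    s.setblocking(False)
    try:
        s.connect(("127.0.0.1", port))
    except BlockingIOError:
        pass  # non-blocking connect in progress; select() says when it's done
    to_send[s] = f"ping-{i}".encode()

to_recv = set()
replies = []
while to_send or to_recv:
    readable, writable, _ = select.select(list(to_recv), list(to_send), [], 5)
    for s in writable:          # connect finished: send the request
        s.sendall(to_send.pop(s))
        to_recv.add(s)
    for s in readable:          # reply arrived: collect it
        replies.append(s.recv(1024))
        to_recv.discard(s)
        s.close()
```

Because a single thread drives all N sockets, adding more in-flight requests costs a dictionary entry rather than a thread stack and a scheduler slot, which is why this style scales to large client counts where one-thread-per-request does not.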
Re: python thread scheduler?
at the bottom is a reply i got on the comp.programming.threads group... it
suggests there is no programmatic way to improve client request rate on a
uni-processor machine... do you agree? i think i agree with the person
suggesting the only solution is more horsepower.
will twisted help even after this consideration? i had a look at twisted a
while back (re: asyncore) and it seemed overly complex...
On Wed, 07 Apr 2004 16:20:19 +0000, exarkun wrote:
> The CPython interpreter utilizes a global lock which prevents more than one Python thread from running at any particular time. You will not see performance scale well with large numbers of threads (indeed, performance will probably be worse).
Subject: Re: threads not switching fast enough on a 1-cpu system?
From: steve@nospam.Watt.COM (Steve Watt)
Date: Wed, 7 Apr 2004 19:31:31 GMT
In article <firstname.lastname@example.org>,
project2501 <email@example.com> wrote:
>i'm trying to benchmark a server software. simply throwing requests at it
>sequentially, however small the interval, is not stressing the server.
That means your server is able to serve whatever you're using as a test
client. Probably good news.
>forking() children to throw requests quickly fills up the memory and swap
>until the client machine breaks.
Why do you think making more processes will make more requests per second?
>threading (using python and also stackless python for now) lets me have
>plenty of threads (i've tried up to 100). however my response time graphs
>are flat (although more threads means a higher flat graph)...
Why do you think making more threads will make more requests per second?
>... which indicates that the bottleneck being measured is the thread
>switching... and that not enough threads are actually running "parallel"
No, it indicates that it takes no longer to service a request than it
does to generate it. If you've got one client machine and one server
machine, your server is probably faster than your client, or your
requests are easy for the server.
You need more client horsepower. It is not uncommon, when doing big
load testing, to use a dozen or more machines against a single server to
really exercise the overload behavior.
Creating threads or processes does not create computing power. In fact,
it ALWAYS* reduces the amount of CPU available to user code, because
of the increased state maintenance.
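Steve's point can be seen directly from CPython: splitting a pure-Python, CPU-bound task across threads changes nothing but the bookkeeping, because the global interpreter lock lets only one thread execute bytecode at a time. A minimal sketch (the split into four workers is arbitrary; timing it on one CPU typically shows the threaded version no faster, and often slightly slower):

```python
import threading

def sum_squares(lo, hi, out, idx):
    # Each worker computes one slice of the total into its own slot.
    out[idx] = sum(i * i for i in range(lo, hi))

N = 200_000

# Serial version: one thread does all the work.
serial = sum(i * i for i in range(N))

# "Parallel" version: four threads split the same work. Under the GIL
# they take turns on the single interpreter, so no CPU time is gained,
# only thread-creation and context-switch overhead is added.
parts = [0] * 4
workers = [
    threading.Thread(
        target=sum_squares,
        args=(k * N // 4, (k + 1) * N // 4, parts, k),
    )
    for k in range(4)
]
for t in workers:
    t.start()
for t in workers:
    t.join()
threaded = sum(parts)
```

Both paths produce the same answer; the threads redistribute the work without adding any computing power, which is exactly why piling on client threads flattens the benchmark instead of stressing the server harder.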
So you need to procure more CPU cycles from somewhere, or make requests
that take the server longer to complete.
* OK, SMP machines are allowed a thread/process per CPU.
Steve Watt KD6GGD PP-ASEL-IA ICBM: 121W 56' 57.8" / 37N 20' 14.9"
Internet: steve @ Watt.COM Whois: SW32
Free time? There's no such thing. It just comes in varying prices...