parallel FTP uploads and pool size
I have a Python script that uploads multiple files from the local machine to a remote server in parallel over FTP, using a process pool:
from multiprocessing import Pool
p = Pool(processes=x)
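For context, a minimal sketch of what such a script might look like. The host, credentials, and file names here are placeholders, not details from the original script; the key point is that each worker process opens its own FTP connection, since an ftplib connection cannot be shared across processes:

```python
import os
from ftplib import FTP
from multiprocessing import Pool

# Placeholder connection details -- substitute your own.
HOST, USER, PASS = "ftp.example.com", "user", "secret"

def upload_one(path):
    """Upload a single file over a fresh FTP connection owned by this worker."""
    ftp = FTP(HOST)
    ftp.login(USER, PASS)
    with open(path, "rb") as f:
        ftp.storbinary("STOR " + os.path.basename(path), f)
    ftp.quit()
    return path

def multiupload(paths, workers=8):
    """Upload all paths in parallel using a pool of worker processes."""
    pool = Pool(processes=workers)
    try:
        return pool.map(upload_one, paths)
    finally:
        pool.close()
        pool.join()

if __name__ == "__main__":
    multiupload(["a.bin", "b.bin"], workers=4)
```

With this structure, setting `workers` too high means that many FTP logins are attempted at nearly the same time.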
Now, as I increase the value of x, the overall upload time for all files drops, as expected. If I set x too high, however, an exception is thrown. The exact value at which this happens varies, but it is around 20:
Traceback (most recent call last):
  File "uploadFTP.py", line 59, in <module>
  File "uploadFTP.py", line 56, in multiupload
  File "/usr/lib64/python2.6/multiprocessing/pool.py", line 148, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/usr/lib64/python2.6/multiprocessing/pool.py", line 422, in get
Now, this is not a problem in practice - 20 processes is more than enough - but I'm trying to understand the mechanism involved, and why the exact number of processes at which this exception occurs seems to vary.
I guess it comes down to the current resources of the server itself, but any insight would be much appreciated!
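One way to investigate is to measure how many simultaneous control connections the server will accept before it starts refusing them, since many FTP servers cap concurrent clients (or concurrent clients per IP). A rough sketch, with placeholder host and credentials:

```python
from ftplib import FTP, all_errors

# Placeholder connection details -- substitute your own.
HOST, USER, PASS = "ftp.example.com", "user", "secret"

def probe_connection_limit(max_tries=50):
    """Open FTP connections one at a time until the server refuses;
    return how many were accepted (capped at max_tries)."""
    conns = []
    try:
        for _ in range(max_tries):
            ftp = FTP(HOST, timeout=10)
            ftp.login(USER, PASS)
            conns.append(ftp)
    except all_errors:
        pass  # server refused or dropped the next connection
    finally:
        for ftp in conns:
            try:
                ftp.quit()
            except all_errors:
                pass
    return len(conns)
```

If the number this reports moves around between runs (other clients connected, server load), that would explain why the pool size at which the script blows up is not constant.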