Is there any way to minimize str()/unicode() objects' memory usage [Python 2.6.4]?

 
 
dmtr
08-07-2010
I'm running into some performance / memory bottlenecks on large lists.
Is there any easy way to minimize/optimize memory usage?

Simple str() and unicode() objects [Python 2.6.4/Linux/x86]:
>>> sys.getsizeof('') 24 bytes
>>> sys.getsizeof('0') 25 bytes
>>> sys.getsizeof(u'') 28 bytes
>>> sys.getsizeof(u'0') 32 bytes


Lists of str() and unicode() objects (see ref. code below):
>>> [str(i) for i in xrange(0, 10000000)] 370 Mb (37 bytes/item)
>>> [unicode(i) for i in xrange(0, 10000000)] 613 Mb (63 bytes/item)


Well... 63 bytes per item for very short unicode strings... Is there
any way to do better than that? Perhaps some compact unicode objects?

-- Regards, Dmitry

----
import os, time, re

start = time.time()
l = [unicode(i) for i in xrange(0, 10000000)]
dt = time.time() - start

# Peak/current virtual memory size of this process, read from /proc.
vm = re.findall("(VmPeak.*|VmSize.*)",
                open('/proc/%d/status' % os.getpid()).read())
print "%d keys, %s, %f seconds, %f keys per second" % (
    len(l), vm, dt, len(l) / dt)
 
Steven D'Aprano
08-07-2010
On Fri, 06 Aug 2010 17:45:31 -0700, dmtr wrote:

> I'm running into some performance / memory bottlenecks on large lists.
> Is there any easy way to minimize/optimize memory usage?


Yes, lots of ways. For example, do you *need* large lists? Often a better
design is to use generators and iterators to lazily generate data when
you need it, rather than creating a large list all at once.

An optimization that sometimes may help is to intern strings, so that
there's only a single copy of common strings rather than multiple copies
of the same one.
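
For example (a rough, untested sketch; in Python 2 the builtin intern()
only accepts byte strings, not unicode):

a = ''.join(['sp', 'am'])        # build two distinct str objects
b = ''.join(['sp', 'am'])        # with equal contents
print a is b                     # False: two separate copies in memory
print intern(a) is intern(b)     # True: interning collapses them into one object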

Can you compress the data and use that? Without knowing what you are
trying to do, and why, it's really difficult to advise a better way to do
it (other than vague suggestions like "use generators instead of lists").

Very often, it is cheaper and faster to just put more memory in the
machine than to try optimizing memory use. Memory is cheap; your time and
effort are not.

[...]
> Well... 63 bytes per item for very short unicode strings... Is there
> any way to do better than that? Perhaps some compact unicode objects?


If you think that unicode objects are going to be *smaller* than byte
strings, I think you're badly informed about the nature of unicode.

Python is not a low-level language, and it trades off memory compactness
for ease of use. Python strings are high-level rich objects, not merely a
contiguous series of bytes. If all else fails, you might have to use
something like the array module, or even implement your own data type in
C.

But as a general rule, as I mentioned above, the best way to minimize the
memory used by a large list is to not use a large list. I can't emphasise
that enough -- look into generators and iterators, and lazily handle your
data whenever possible.
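
A rough sketch of the difference, using your own example (the loop body
is just a stand-in for whatever processing you do):

# data = [unicode(i) for i in xrange(10000000)]  # materializes everything at once

# A generator expression yields the strings one at a time instead, so
# only one of them needs to exist in memory at any moment:
for s in (unicode(i) for i in xrange(10000000)):
    pass  # process s here, then let it be collected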


--
Steven
 
Thomas Jollans
08-07-2010
On 08/07/2010 02:45 AM, dmtr wrote:
> I'm running into some performance / memory bottlenecks on large lists.
> Is there any easy way to minimize/optimize memory usage?
>
> Simple str() and unicode objects() [Python 2.6.4/Linux/x86]:
>>>> sys.getsizeof('') 24 bytes
>>>> sys.getsizeof('0') 25 bytes
>>>> sys.getsizeof(u'') 28 bytes
>>>> sys.getsizeof(u'0') 32 bytes

>
> Lists of str() and unicode() objects (see ref. code below):
>>>> [str(i) for i in xrange(0, 10000000)] 370 Mb (37 bytes/item)
>>>> [unicode(i) for i in xrange(0, 10000000)] 613 Mb (63 bytes/item)

>
> Well... 63 bytes per item for very short unicode strings... Is there
> any way to do better than that? Perhaps some compact unicode objects?


There is a certain price you pay for having full-feature Python objects.

What are you trying to accomplish anyway? Maybe the array module can be
of some help. Or numpy?
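
For example, if the values were plain integers, the array module keeps
them in a single C buffer instead of as separate objects (rough,
untested sketch):

from array import array
import sys

nums = array('l', xrange(1000000))   # one million C longs, one contiguous buffer
print sys.getsizeof(nums)            # roughly 8 bytes per element on a 64-bit build

ints = list(xrange(1000000))         # a list of one million separate int objects
print sys.getsizeof(ints)            # counts only the pointer table, not the
                                     # int objects it points to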



>
> -- Regards, Dmitry
>
> ----
> import os, time, re
> start = time.time()
> l = [unicode(i) for i in xrange(0, 10000000)]
> dt = time.time() - start
> vm = re.findall("(VmPeak.*|VmSize.*)", open('/proc/%d/status' %
> os.getpid()).read())
> print "%d keys, %s, %f seconds, %f keys per second" % (len(l), vm, dt,
> len(l) / dt)


 
dmtr
08-07-2010
Steven, thank you for answering. See my comments inline. Perhaps I
should have formulated my question a bit differently: Are there any
*compact* high performance containers for unicode()/str() objects in
Python? By *compact* I don't mean compression. Just optimized for
memory usage, rather than performance.

What I'm really looking for is a dict() that maps short unicode
strings to tuples of integers. But just having a *compact* list
container for unicode strings would help a lot (because I could add a
__dict__ and go from there).


> Yes, lots of ways. For example, do you *need* large lists? Often a better
> design is to use generators and iterators to lazily generate data when
> you need it, rather than creating a large list all at once.


Yes. I do need to be able to process large data sets.
No, there is no way I can use an iterator or lazily generate data when
I need it.


> An optimization that sometimes may help is to intern strings, so that
> there's only a single copy of common strings rather than multiple copies
> of the same one.


Unfortunately the strings are unique (think usernames on Facebook or
Wikipedia). And I can't afford to store them in a db/memcached/redis/
etc... Too slow.


> Can you compress the data and use that? Without knowing what you are
> trying to do, and why, it's really difficult to advise a better way to do
> it (other than vague suggestions like "use generators instead of lists").


Yes. I've tried. But I was unable to find a good, unobtrusive way to
do that. Every attempt either adds some unnecessary pesky code, or is
slow, or something like that. See more at: http://bugs.python.org/issue9520


> Very often, it is cheaper and faster to just put more memory in the
> machine than to try optimizing memory use. Memory is cheap, your time and
> effort is not.


Well... I'd really prefer to use, say, 16 bytes for 10-char strings and
fit the data into 8 GB, rather than paying an extra $1k for 32 GB.

> > Well... 63 bytes per item for very short unicode strings... Is there
> > any way to do better than that? Perhaps some compact unicode objects?

>
> If you think that unicode objects are going to be *smaller* than byte
> strings, I think you're badly informed about the nature of unicode.


I don't think that unicode objects are going to be *smaller*!
But AFAIK internally CPython uses UTF-8? No? And 63 bytes per item
seems a bit excessive.
My question was: is there any way to do better than that...


> Python is not a low-level language, and it trades off memory compactness
> for ease of use. Python strings are high-level rich objects, not merely a
> contiguous series of bytes. If all else fails, you might have to use
> something like the array module, or even implement your own data type in
> C.


Are there any *compact* high performance containers (with dict, list
interface) in Python?

-- Regards, Dmitry
 
dmtr
08-07-2010
> > Well... 63 bytes per item for very short unicode strings... Is there
> > any way to do better than that? Perhaps some compact unicode objects?

>
> There is a certain price you pay for having full-feature Python objects.


Are there any *compact* Python objects? Optimized for compactness?

> What are you trying to accomplish anyway? Maybe the array module can be
> of some help. Or numpy?


Ultimately a dict that can store ~20,000,000 entries: (u'short
string' : (int, int, int, int, int, int, int)).

-- Regards, Dmitry
 
Chris Rebert
08-07-2010
On Fri, Aug 6, 2010 at 6:39 PM, dmtr <(E-Mail Removed)> wrote:
<snip>
>> > Well... 63 bytes per item for very short unicode strings... Is there
>> > any way to do better than that? Perhaps some compact unicode objects?

>>
>> If you think that unicode objects are going to be *smaller* than byte
>> strings, I think you're badly informed about the nature of unicode.

>
> I don't think that that unicode objects are going to be *smaller*!
> But AFAIK internally CPython uses UTF-8?


Nope. unicode objects internally use UCS-2 or UCS-4, depending on how
CPython was ./configure-d; the former is the default.
See PEP 261.
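
You can check which build you're on via sys.maxunicode (it is 0xFFFF on
a narrow/UCS-2 build):

import sys

if sys.maxunicode == 0xFFFF:
    print "narrow (UCS-2) build: 2 bytes per code unit"
else:
    print "wide (UCS-4) build: 4 bytes per code unit"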

Cheers,
Chris
--
http://blog.rebertia.com
 
Neil Hodgson
08-07-2010
dmtr:

> What I'm really looking for is a dict() that maps short unicode
> strings into tuples with integers. But just having a *compact* list
> container for unicode strings would help a lot (because I could add a
> __dict__ and go from it).


Add them all into one string or array and use indexes into that string.
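
Something along these lines (untested sketch, names are only illustrative):

words = [u'foo', u'quux', u'ab']
blob = u''.join(words)                  # all strings packed into one object
starts = [0]
for w in words:
    starts.append(starts[-1] + len(w))  # starts[i]:starts[i+1] is word i

print blob[starts[1]:starts[2]]         # u'quux'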

Neil
 
Carl Banks
08-07-2010
On Aug 6, 6:56 pm, dmtr <(E-Mail Removed)> wrote:
> > > Well... 63 bytes per item for very short unicode strings... Is there
> > > any way to do better than that? Perhaps some compact unicode objects?

>
> > There is a certain price you pay for having full-feature Python objects.

>
> Are there any *compact* Python objects? Optimized for compactness?


Yes, but probably not in the way that'd be useful to you.

Look at the array module, and also consider the third-party numpy
library. They store compact arrays of numeric types (mostly) but they
have character type storage as well. That probably won't help you,
though, since you have variable-length strings.

I don't know of any third-party types that can do what you want, but
there might be some. Search PyPI.


> > What are you trying to accomplish anyway? Maybe the array module can be
> > of some help. Or numpy?

>
> Ultimately a dict that can store ~20,000,000 entries: (u'short
> string' : (int, int, int, int, int, int, int)).


My recommendation would be to use sqlite3. Only if you know for sure
that it's too slow (meaning that you've actually tried it and it was
too slow, and nothing else) should you bother with a custom in-memory
structure.
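
Untested sketch of what that might look like (the schema is only a guess
at your data):

import sqlite3

conn = sqlite3.connect(':memory:')    # or a file on disk
conn.execute("CREATE TABLE data (key TEXT PRIMARY KEY, "
             "a INT, b INT, c INT, d INT, e INT, f INT, g INT)")
conn.execute("INSERT INTO data VALUES (?,?,?,?,?,?,?,?)",
             (u'short string', 1, 2, 3, 4, 5, 6, 7))
print conn.execute("SELECT a,b,c,d,e,f,g FROM data WHERE key=?",
                   (u'short string',)).fetchone()   # (1, 2, 3, 4, 5, 6, 7)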

For a custom structure I'd probably go with a binary tree rather than a
hash. So you have a huge numpy character array that stores all 20
million short strings end-to-end (in lexical order, so that you can
look up the strings with a binary search), then you have a numpy
integer array that stores the indices into this string where the word
boundaries are, and then an Nx7 numpy integer array storing the int
return values. That's three compact arrays.
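
Roughly like this (untested, and all the names are mine):

import numpy as np

keys = sorted([u'apple', u'banana', u'cherry'])     # sorted short strings
blob = u''.join(keys)                               # all keys stored end-to-end
bounds = np.cumsum([0] + [len(k) for k in keys])    # word boundary indices
values = np.zeros((len(keys), 7), dtype=np.int32)   # the Nx7 int payload

def lookup(word):
    lo, hi = 0, len(bounds) - 1
    while lo < hi:                                  # binary search over the blob
        mid = (lo + hi) // 2
        candidate = blob[bounds[mid]:bounds[mid + 1]]
        if candidate < word:
            lo = mid + 1
        elif candidate > word:
            hi = mid
        else:
            return values[mid]                      # the 7-int row for this key
    raise KeyError(word)

print lookup(u'banana')                             # a row of 7 zeros here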


Carl Banks
 
Michael Torrie
08-07-2010
On 08/06/2010 07:56 PM, dmtr wrote:
> Ultimately a dict that can store ~20,000,000 entries: (u'short
> string' : (int, int, int, int, int, int, int)).


I think you really need a real database engine. With the proper
indexes, MySQL could be very fast storing and retrieving this
information for you. And it will use your RAM to cache as it sees fit.
Don't try to reinvent the wheel here.
 
dmtr
08-07-2010
On Aug 6, 10:56 pm, Michael Torrie <(E-Mail Removed)> wrote:
> On 08/06/2010 07:56 PM, dmtr wrote:
>
> > Ultimately a dict that can store ~20,000,000 entries: (u'short
> > string' : (int, int, int, int, int, int, int)).

>
> I think you really need a real database engine. With the proper
> indexes, MySQL could be very fast storing and retrieving this
> information for you. And it will use your RAM to cache as it sees fit.
> Don't try to reinvent the wheel here.


No, I've tried. DB solutions are not even close in terms of speed.
Processing would take weeks. Memcached or Redis sort of work, but they
are still a bit too slow to be a pleasure to work with. The standard
dict() container is *a lot* faster. It is also hassle free (accepting
unicode keys/etc). I just wish there was a more compact dict
container, optimized for large datasets and memory rather than for
speed. And with the default dict() I'm also running into some kind of
nonlinear performance degradation, apparently after
10,000,000-13,000,000 keys. But I can't recreate this with a solid
test case (see http://bugs.python.org/issue9520 )

-- Dmitry
 