Algorithmic complexity & STL

Discussion in 'Computer Information' started by Generic Usenet Account, Apr 11, 2005.

  1. Consider two entities A and B such that there is a 1:n association
    between them. I mean that associated with each instance of A there are
    up to n instances of B. Currently in our software we are using an STL
    map in which instances of A are the key and each value is a set (STL
    set) of instances of B.

    There is some thought now that we should instead have a "transpose" of
    this data structure. By this I mean that the key should be an instance
    of B and the value should be a "singular" instance of A. I have
    attempted some preliminary analysis, but I am not sure about the
    results. Any independent insight would be appreciated.
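
    For concreteness, here is a rough sketch of the two layouts (A and B
    stand in for our real classes, which are ordered somehow -- operator<
    here just to keep the sketch small):

        #include <map>
        #include <set>

        // Placeholder types -- the real A and B are more involved, but
        // both define an ordering just like these.
        struct A { int id; bool operator<(const A& o) const { return id < o.id; } };
        struct B { int id; bool operator<(const B& o) const { return id < o.id; } };

        // Current layout: each A keys the set of Bs associated with it.
        typedef std::map<A, std::set<B> > CurrentLayout;

        // Proposed "transpose": each B keys the single A it belongs to.
        typedef std::map<B, A> ProposedLayout;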


    Let us assume that we have m instances of A and m*n instances of B.
    With our current implementation (key is an instance of A and the value
    is an STL set of instances of B), the time complexity is O(log m) *
    O(log n). With the new proposal (key is an instance of B and the value
    is an instance of A), the time complexity is O(log mn).

    If we switch to hash_map instead of map (I know that hash_map is not
    part of the STL standard, but it seems to be widely supported now), the
    current implementation has a time complexity of O(1) * O(log n), i.e.
    O(log n). The new proposal has a time complexity of O(1).
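
    (Since hash_map is non-standard, the exact spelling varies by compiler;
    as a rough sketch, using the GCC extension and plain-int keys standing
    in for A and B, the two variants would look something like this:)

        #include <ext/hash_map>   // GCC extension; other compilers spell it differently
        #include <set>

        // Plain ints stand in for A and B so the default hash functor
        // applies; real code would need hash functions for the real types.
        typedef int AKey;
        typedef int BKey;

        // Current layout with hashing: expected O(1) to reach the set,
        // then O(log n) inside it.
        typedef __gnu_cxx::hash_map<AKey, std::set<BKey> > HashedCurrent;

        // Proposed layout with hashing: expected O(1) for the whole lookup.
        typedef __gnu_cxx::hash_map<BKey, AKey> HashedProposed;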


    It seems that the new proposal has better time complexity in either
    case, although with a hash_map the improvement in performance is more
    dramatic. Is my analysis correct? Should we go ahead and change our
    implementation?


    Thanks,
    Bhat
    Generic Usenet Account, Apr 11, 2005
    #1

  2. Artie Gold

    [followups directed to news:comp.programming]
    Generic Usenet Account wrote:
    > Consider two entities A and B such that there is a 1:n association
    > between them. I mean that associated with each instance of A there are
    > up to n instances of B. Currently in our software we are using an STL
    > map in which instances of A are the key and each value is a set (STL
    > set) of instances of B.
    >
    > There is some thought now that we should instead have a "transpose" of
    > this data structure. By this I mean that the key should be an instance
    > of B and the value should be a "singular" instance of A. I have
    > attempted some preliminary analysis, but I am not sure about the
    > results. Any independent insight would be appreciated.
    >
    >
    > Let us assume that we have m instances of A and m*n instances of B.
    > With our current implementation (key is an instance of A and the value
    > is an STL set of instances of B), the time complexity is O(log m) *
    > O(log n). With the new proposal (key is an instance of B and the value
    > is an instance of A), the time complexity is O(log mn).
    >
    > If we switch to hash_map instead of map (I know that hash_map is not
    > part of the STL standard, but it seems to be widely supported now), the
    > current implementation has a time complexity of O(1) * O(log n), i.e.
    > O(log n). The new proposal has a time complexity of O(1).
    >
    >
    > It seems that the new proposal has better time complexity in either
    > case, although with a hash_map the improvement in performance is more
    > dramatic. Is my analysis correct? Should we go ahead and change our
    > implementation?
    >
    >
    > Thanks,
    > Bhat
    >

    Your analysis is indeed correct; unfortunately, it may be meaningless.

    Actually, that's a little harsh. The problem is that there are *very*
    important questions -- or information -- you've left out. How large are
    `m' and `n' likely to be? What is the cost of comparison or hashing for
    these kinds of instances?
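
    For instance (purely hypothetical -- I have no idea what your A and B
    actually look like), suppose the key happens to be a long string:

        #include <map>
        #include <string>

        // Each of the O(log N) comparisons performed by find() is itself
        // O(L) in the length of the key, so a lookup really costs on the
        // order of O(L * log N) -- the "constant factor" is anything but.
        std::map<std::string, int> byName;

        bool present(const std::string& key)
        {
            return byName.find(key) != byName.end();
        }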

    Only armed with that information can a determination be made.

    HTH,
    --ag

    --
    Artie Gold -- Austin, Texas
    http://it-matters.blogspot.com (new post 12/5)
    http://www.cafepress.com/goldsays
    Artie Gold, Apr 11, 2005
    #2

  3. CBFalconer

    Generic Usenet Account wrote:
    >
    > Consider two entities A and B such that there is a 1:n association
    > between them. I mean that associated with each instance of A there
    > are up to n instances of B. Currently in our software we are using
    > an STL map in which instances of A are the key and each value is a
    > set (STL set) of instances of B.
    >
    > There is some thought now that we should instead have a "transpose"
    > of this data structure. By this I mean that the key should be an
    > instance of B and the value should be a "singular" instance of A.
    > I have attempted some preliminary analysis, but I am not sure about
    > the results. Any independent insight would be appreciated.


    This is nonsense. The critical thing is 'what kind of accesses are
    required'. The database design should reflect that. Think about
    how you find all the Bs associated with an A value.
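
    With the proposed layout, that query degenerates into a scan of every
    entry -- roughly this (the types are placeholders):

        #include <map>
        #include <set>
        #include <vector>

        typedef int A;   // stand-ins for the real types
        typedef int B;

        // map<B, A>: finding every B associated with a given A means
        // visiting all m*n entries.
        std::vector<B> bs_for(const std::map<B, A>& transposed, const A& a)
        {
            std::vector<B> result;
            for (std::map<B, A>::const_iterator it = transposed.begin();
                 it != transposed.end(); ++it)
                if (it->second == a)
                    result.push_back(it->first);
            return result;
        }

        // With the current map<A, set<B> >, the same question is a single
        // O(log m) lookup: find(a) hands back the whole set.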

    --
    "If you want to post a followup via groups.google.com, don't use
    the broken "Reply" link at the bottom of the article. Click on
    "show options" at the top of the article, then click on the
    "Reply" at the bottom of the article headers." - Keith Thompson
    CBFalconer, Apr 11, 2005
    #3
  4. Mark P

    Generic Usenet Account wrote:
    > Consider two entities A and B such that there is a 1:n association
    > between them. I mean that associated with each instance of A there are
    > up to n instances of B. Currently in our software we are using an STL
    > map in which instances of A are the key and each value is a set (STL
    > set) of instances of B.
    >
    > There is some thought now that we should instead have a "transpose" of
    > this data structure. By this I mean that the key should be an instance
    > of B and the value should be a "singular" instance of A. I have
    > attempted some preliminary analysis, but I am not sure about the
    > results. Any independent insight would be appreciated.
    >
    >
    > Let us assume that we have m instances of A and m*n instances of B.
    > With our current implementation (key is an instance of A and the value
    > is an STL set of instances of B), the time complexity is O(log m) *
    > O(log n). With the new proposal (key is an instance of B and the value
    > is an instance of A), the time complexity is O(log mn).


    The time complexity of what? Clearly some sort of lookup, but you
    haven't provided enough information to allow any meaningful analysis.
    Tell us what it is you want to do with this data structure.

    Mark
    Mark P, Apr 11, 2005
    #4
