Velocity Reviews > Multiple disjoint sample sets?

Multiple disjoint sample sets?

Roy Smith

 01-11-2013
I have a list of items. I need to generate n samples of k unique items
each. I not only want each sample set to have no repeats, but I also
want to make sure the sets are disjoint (i.e. no item repeated between
sets).

random.sample(items, k) will satisfy the first constraint, but not the
second. Should I just do random.sample(items, k*n), and then split the
resulting big list into n pieces? Or is there some more efficient way?

Typical values:

len(items) = 5,000,000
n = 10
k = 100,000
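
For reference, the sample-then-split approach asked about above might look like this (Python 3; `disjoint_samples` is a hypothetical helper name, not anything from the thread):

```python
import random

def disjoint_samples(items, n, k):
    """Draw n disjoint samples of k unique items each by taking one
    big sample of n*k items and splitting it into n chunks."""
    big = random.sample(items, n * k)
    return [big[i * k:(i + 1) * k] for i in range(n)]

# Small-scale usage; the thread's typical values would be n=10, k=100000.
groups = disjoint_samples(range(100), 4, 10)
```

Since `random.sample` already guarantees uniqueness within the big draw, the chunks are disjoint by construction.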

MRAB

 01-11-2013
On 2013-01-11 14:15, Roy Smith wrote:
> I have a list of items. I need to generate n samples of k unique items
> each. I not only want each sample set to have no repeats, but I also
> want to make sure the sets are disjoint (i.e. no item repeated between
> sets).
>
> random.sample(items, k) will satisfy the first constraint, but not the
> second. Should I just do random.sample(items, k*n), and then split the
> resulting big list into n pieces? Or is there some more efficient way?
>
> Typical values:
>
> len(items) = 5,000,000
> n = 10
> k = 100,000
>

I don't know how efficient it would be, but couldn't you shuffle the
list and then use slicing to get the samples?
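
A minimal sketch of that shuffle-and-slice idea (Python 3; values scaled down for illustration):

```python
import random

items = list(range(100))
n, k = 4, 10

random.shuffle(items)  # in-place Fisher-Yates shuffle of the whole list
samples = [items[i * k:(i + 1) * k] for i in range(n)]
```

The samples are disjoint because each comes from a different slice of the one shuffled list.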

Dave Angel

 01-11-2013
On 01/11/2013 09:36 AM, MRAB wrote:
> On 2013-01-11 14:15, Roy Smith wrote:
>> I have a list of items. I need to generate n samples of k unique items
>> each. I not only want each sample set to have no repeats, but I also
>> want to make sure the sets are disjoint (i.e. no item repeated between
>> sets).
>>
>> random.sample(items, k) will satisfy the first constraint, but not the
>> second. Should I just do random.sample(items, k*n), and then split the
>> resulting big list into n pieces? Or is there some more efficient way?
>>
>> Typical values:
>>
>> len(items) = 5,000,000
>> n = 10
>> k = 100,000
>>

> I don't know how efficient it would be, but couldn't you shuffle the
> list and then use slicing to get the samples?

I like that answer best, but just to offer another choice...

After taking your first sample, you could subtract those items from the
list, and use the smaller list for the next sample.

One way is to convert list to set, subtract, then convert back to list.
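
One possible sketch of that approach (Python 3; `successive_samples` is a hypothetical name -- note that `random.sample` requires a sequence, so the set is converted back to a list before each draw):

```python
import random

def successive_samples(items, n, k):
    """After each sample, subtract the chosen items from the pool,
    so the next sample is drawn only from what remains."""
    pool = set(items)
    result = []
    for _ in range(n):
        sample = random.sample(list(pool), k)  # sample() needs a sequence
        result.append(sample)
        pool -= set(sample)
    return result

groups = successive_samples(range(100), 4, 10)
```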

--

DaveA

Peter Otten

 01-13-2013
Roy Smith wrote:

> I have a list of items. I need to generate n samples of k unique items
> each. I not only want each sample set to have no repeats, but I also
> want to make sure the sets are disjoint (i.e. no item repeated between
> sets).
>
> random.sample(items, k) will satisfy the first constraint, but not the
> second. Should I just do random.sample(items, k*n), and then split the
> resulting big list into n pieces? Or is there some more efficient way?
>
> Typical values:
>
> len(items) = 5,000,000
> n = 10
> k = 100,000

I would expect that your simple approach is more efficient than shuffling
the whole list.

Assuming there is a sample_iter(population) that generates unique items from
the population (which has no repetitions itself) you can create the samples
with

g = sample_iter(items)
samples = [list(itertools.islice(g, k)) for _ in xrange(n)]

My ideas for such a sample_iter():

def sample_iter_mark(items):
    n = len(items)
    while True:
        i = int(random()*n)
        v = items[i]
        if v is not None:
            yield v
            items[i] = None

This is destructive and will degrade badly as the number of None items
increases. For your typical values it seems to be OK though. You can make
this non-destructive by adding a bit array or a set (random.Random.sample()
has code that uses a set) to keep track of the seen items.
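
A sketch of that non-destructive variant (Python 3, using a set of seen indices; `sample_iter_set` is a hypothetical name, loosely modeled on what `random.Random.sample()` does internally):

```python
import itertools
import random

def sample_iter_set(items):
    """Non-destructive variant of sample_iter_mark: track already-used
    indices in a set instead of overwriting list entries with None."""
    n = len(items)
    seen = set()
    while len(seen) < n:
        i = int(random.random() * n)
        if i not in seen:
            seen.add(i)
            yield items[i]

# Usage, mirroring the samples construction above:
g = sample_iter_set(list(range(1000)))
samples = [list(itertools.islice(g, 100)) for _ in range(5)]
```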

Another sample_iter() (which is also part of the random.Random.sample()
implementation):

def sample_iter_replace(items):
    n = len(items)
    for k in xrange(n):
        i = int(random()*(n-k))
        yield items[i]
        items[i] = items[n-k-1]

You can micro-optimise that a bit to avoid the index calculation. Also,
instead of overwriting items you could swap them, so that no values would be
lost, only their initial order.
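
That swap variant might look like this (Python 3; `sample_iter_swap` is a hypothetical name). Swapping instead of overwriting means the list keeps all its values, just in a different order:

```python
import itertools
import random

def sample_iter_swap(items):
    """Variant of sample_iter_replace that swaps the chosen item into
    the shrinking tail instead of overwriting it, so no value is lost."""
    n = len(items)
    for k in range(n):
        i = int(random.random() * (n - k))
        items[i], items[n-k-1] = items[n-k-1], items[i]
        yield items[n-k-1]

pool = list(range(1000))
g = sample_iter_swap(pool)
samples = [list(itertools.islice(g, 100)) for _ in range(5)]
```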