
Python: generating random numbers in sorted order

I want to generate random numbers in sorted order.

I wrote the code below:

void CreateSortedNode(pNode head)
{
    int size = 10, last = 0;
    pNode temp;

    while (size-- > 0) {
        temp = (pNode)malloc(sizeof(struct node));
        last += (rand() % 10);  /* add a random non-negative gap */
        temp->data = last;      /* so the numbers are generated already sorted */
        list_add(temp);
    }
}

[EDIT:]

I expect the numbers to be generated in increasing (or decreasing) order, e.g. {2, 5, 9, 23, 45, 68}.

#include <stdio.h>
#include <stdlib.h>

int main()
{
    int size = 10, last = 0;

    while (size-- > 0) {
        last += (rand() % 10);  /* cumulative sum of random gaps */
        printf("%4d", last);
    }
    return 0;
}
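For comparison, the same cumulative-gap idea in Python (a sketch only; note that each value is a running sum of uniform gaps, which is not the same thing as a uniform sample from a fixed range):

```python
import random

def sorted_randoms(size=10, max_gap=10):
    """Mirror the C loop above: accumulate random gaps so the
    numbers come out already in non-decreasing order."""
    last = 0
    out = []
    for _ in range(size):
        last += random.randrange(max_gap)  # gap in [0, max_gap)
        out.append(last)
    return out
```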

Any better idea?

Solution

Without any information about sample size or sample universe, it's not easy to know if the following is interesting but irrelevant or a solution, but since it is in any case interesting, here goes.

The problem:

In O(1) space, produce an unbiased ordered random sample of size n from an ordered set S of size N, such that the elements in the sample appear in the same order as they do in the ordered set.

The solution:

1. With probability n/|S|, add S1 (the first element of S) to the sample and decrement n.

2. Remove S1 from S.

3. Repeat steps 1 and 2, each time with the new first element (and size) of S, until n is 0, at which point the sample will have the desired number of elements.

The solution in python:

from random import randrange

# select n random integers in order from range(N)
def sample(n, N):
    # insist that 0 <= n <= N
    for i in range(N):
        if randrange(N - i) < n:
            yield i
            n -= 1
            if n <= 0:
                break
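A quick empirical check that the sampler is unbiased, i.e. that every element of range(N) is selected with probability n/N (the generator is repeated here so the snippet runs standalone):

```python
from collections import Counter
from random import randrange

def sample(n, N):
    # select n random integers in order from range(N); 0 <= n <= N
    for i in range(N):
        if randrange(N - i) < n:
            yield i
            n -= 1
            if n <= 0:
                break

counts = Counter()
for _ in range(20000):
    counts.update(sample(2, 5))

# each of the 5 elements should be chosen about 2/5 of the time (~8000)
print(sorted(counts.items()))
```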

The problem with the solution:

It takes O(N) time. We'd really like to take O(n) time, since n is likely to be much smaller than N. On the other hand, we'd like to retain the O(1) space, in case n is also quite large.

A better solution (outline only)

(The following is adapted from a 1987 paper by Jeffrey Scott Vitter, "An Efficient Algorithm for Sequential Random Sampling", which, thanks to the generosity of Dr. Vitter, is freely available from the ACM digital library; see Dr. Vitter's publications page. Please read the paper for the details.)

Instead of incrementing i and drawing a random number at every step, as in the Python code above, it would be cool if we could draw, directly from the appropriate distribution, the number of times i will be incremented before the next element is yielded (the "skip"). All we need is that distribution (which will obviously depend on the current values of n and N).

Of course, we can derive the distribution precisely from an examination of the algorithm. That doesn't help much, though, because the resulting formula requires a lot of time to compute accurately, and the end result is still O(N).

However, we don't always have to compute it accurately. Suppose we have some easily computable, reasonably good approximation that consistently underestimates the probabilities (with the consequence that it will sometimes fail to make a prediction). If the approximation works, we use it; if not, we fall back to the accurate computation. If that happens sufficiently rarely, we can achieve O(n) on average. And indeed, Dr. Vitter's paper shows how to do this. (With code.)
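To make the skip idea concrete, here is a sketch of the simpler variant (the paper's "Method A"): instead of deciding element by element, draw the number of records to skip by inverting P(skip >= s) = prod_{t<s} (N-n-t)/(N-t). This still costs O(N) total work in the worst case; the paper's Method D is what replaces the inner loop below with a draw from a cheap approximate distribution, falling back to the exact computation only rarely. Function names here are mine, not the paper's:

```python
import random

def skip(n, N):
    """Draw how many leading records to pass over before the next
    selection, by inverting P(skip >= s) = prod_{t<s} (N-n-t)/(N-t)."""
    V = random.random()
    S = 0
    quot = (N - n) / N
    while quot > V:
        S += 1
        quot *= (N - n - S) / (N - S)
    return S

def sample_skipping(n, N):
    """Ordered sample of n items from range(N), built from skips."""
    i = 0
    while n > 0:
        s = skip(n, N)
        i += s          # pass over s records...
        yield i         # ...and select the next one
        i += 1
        N -= s + 1      # universe shrinks by the skipped + selected records
        n -= 1
```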