On Sun, Jul 18, 2010 at 11:07, bob gailer <bgai...@gmail.com> wrote:
> Check this out:
>
> import random, time
> s = time.time()
> cycles = 1000
> d = "0123456789"*100
> f = open("numbers.txt", "w")
> for i in xrange(n):
>     l = []
>     l.extend(random.sample(d, 1000))
>     f.write(''.join(l))
> f.close()
> print time.time() - s
>
> 1 million in ~1.25 seconds
>
> Therefore 1 billion in ~21 minutes. 3 GHz processor, 2 GB RAM.
>
> Changing length up or down seems to increase time.
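One subtlety worth noting about the quoted code: d is exactly 1000 characters long, so random.sample(d, 1000) draws every character, i.e. it returns a full shuffle of d. Each 1000-digit block written to the file therefore contains each digit exactly 100 times. A quick check (my own sketch, not part of Bob's code):

```python
import random
from collections import Counter

d = "0123456789" * 100                   # 1000 characters, 100 of each digit
block = ''.join(random.sample(d, 1000))  # sample size == len(d): a full shuffle of d

counts = Counter(block)
# Every digit appears exactly 100 times in every block, so the output
# is more uniform than truly independent random digits would be.
assert all(counts[digit] == 100 for digit in "0123456789")
```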
Putting "cycles" where you have "n", I used

import random, time
s = time.time()
cycles = 1000000
d = "0123456789"*100
f = open("/p31working/Data/1billion_digits_rand_num_Gailor.txt", "w")
for i in range(cycles):
    l = []
    l.extend(random.sample(d, 1000))
    f.write(''.join(l))
f.close()
print(time.time() - s)

to get 1 billion random digits into a file in 932 seconds (15.6 minutes).

<http://tutoree7.pastebin.com/Ldh6SX3q>, which uses Steve D'Aprano's random_digits function, took 201 seconds (3.35 minutes). Still, I understand yours, and not his (the return line). I'd never noticed random.sample() before, nor tried out extend() on a list. So thanks, Bob.

Dick

_______________________________________________
Tutor maillist  -  Tutor@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor
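In case the pastebin link above goes stale: I can't vouch for what Steve D'Aprano's random_digits actually contains, but a function of that general shape — one whose return line does all the work by building and joining the digits in a single expression — might look like this (my own guess, not his code):

```python
import random

def random_digits(n):
    # Hypothetical sketch: pick n digit characters independently and
    # join them all at once in the return line, instead of extending a
    # list and writing it piece by piece inside a loop.
    return ''.join(random.choice("0123456789") for _ in range(n))

print(random_digits(20))
```

The point of a one-line return like this is that the generator expression feeds ''.join() directly, so no intermediate list needs to be built by hand.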