I'll poke around
at the internals and see where the extra memory is going, since I'm
still curious about it. Is that just the
overhead of allocating a full object for each value (i.e. rather than
just a double[] or whatever)?
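That per-value boxing overhead is easy to measure in CPython terms (as an analogy only; R's SEXP headers differ): a list holds a pointer per element plus a separate heap object per float, while array('d') stores the raw 8-byte doubles contiguously. A minimal sketch:

```python
import sys
from array import array

n = 1000
values = [float(i) for i in range(n)]

# A Python list stores one pointer per element, and each float is a
# separate heap object with its own header (~24 bytes on 64-bit CPython).
boxed_bytes = sys.getsizeof(values) + sum(sys.getsizeof(v) for v in values)

# array('d') stores the raw 8-byte doubles contiguously, no per-value box.
packed = array('d', values)
packed_bytes = sys.getsizeof(packed)

print(boxed_bytes, packed_bytes)
```

The boxed representation comes out several times larger, which is the same flavor of overhead being asked about here.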
--
Evan Klitzke :wq
> object.size(sample(c("a pretty long string", "another pretty long
> string"), 1000, replace=TRUE))
8184 bytes
> object.size(factor(sample(c("a pretty long string", "another pretty long
> string"), 1000, replace=TRUE)))
4560 bytes
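The saving above comes from factor's representation: one small integer code per element plus a single stored copy of each distinct level. A rough Python sketch of that encoding (illustrative only; it ignores that R also shares string storage through its global string cache):

```python
import sys
from array import array

levels = ["a pretty long string", "another pretty long string"]
data = [levels[i % 2] for i in range(1000)]

# Factor-style encoding: a 4-byte integer code per element plus one
# stored copy of each distinct level string.
codes = array('i', (levels.index(s) for s in data))
encoded_bytes = sys.getsizeof(codes) + sum(sys.getsizeof(s) for s in levels)

# What storing a separate string object per element would cost instead:
unencoded_bytes = sum(sys.getsizeof(s) for s in data)

print(encoded_bytes, unencoded_bytes)
```

With only two distinct levels across 1000 elements, the coded form is far smaller, mirroring the 4560 vs 8184 byte figures in the transcript.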
______________________________________________
R-help@r-project.org mailing list
(in an interpreter like this you'd typically be allocating on
power-of-two boundaries, i.e. sizes like sizeof(obj) << n; this is
roughly how Python lists work internally).
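For what it's worth, CPython's lists actually over-allocate geometrically (roughly an extra eighth per resize) rather than on strict power-of-two boundaries, but the jumpy growth of the backing buffer is easy to observe either way:

```python
import sys

lst = []
sizes = set()
for _ in range(1000):
    lst.append(None)
    sizes.add(sys.getsizeof(lst))

# The backing buffer is resized only occasionally, so 1000 appends
# pass through only a handful of distinct allocation sizes.
print(len(sizes), sorted(sizes)[:5])
```

Each resize leaves slack capacity, so most appends are cheap and the reported size stays flat between jumps.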
Is it possible that R is counting its memory usage naively, e.g. just
adding up the size of all of the constituent objects, rather than the
amount of memory actually allocated?
        servlets.append(s)
        timestamps.append(float(t))
        elapsed.append(float(e))
    show_mem()

if __name__ == '__main__':
    show_mem()
    read_data('/home/evan/20090708.tab')