On Wed, 2011-10-12 at 14:58 +0100, Ian Campbell wrote:
> On Wed, 2011-10-12 at 14:11 +0100, Ben Hutchings wrote:
> > On Wed, 2011-10-12 at 08:26 +0100, Ian Campbell wrote:
> > > On Wed, 2011-10-12 at 08:46 +0300, Dmitry Musatov wrote:
> > > > The config option XEN_MAX_DOMAIN_MEMORY controls how much memory a
> > > > Xen instance sees. The default for 64-bit is 32GB, which is the
> > > > reason that m2.4xlarge Amazon EC2 instances only report this amount
> > > > of memory. Please set this limit to 70GB, as there is a known
> > > > restriction for t1.micro instances at about 80GB. A similar bug
> > > > exists in Ubuntu, where it has already been fixed
> > > > (https://bugs.launchpad.net/ubuntu/+source/linux-ec2/+bug/667796).
> > >
> > > Is this the sort of change we can consider making in a stable update?
> > > I'm not at all sure, although my gut feeling is that it would be safe.
> > [...]
> >
> > I think so. But what is the trade-off? There must be some reason why
> > this isn't set to however many TB the kernel can support.
>
> It affects the amount of space set aside for the P2M table (the mapping
> of physical to machine addresses). In the kernel in Squeeze this space
> is statically reserved in BSS, so increasing it will waste some more
> memory; according to the Kconfig comment it is 1 page per GB.
>
> In a more up-to-date kernel the space comes from BRK and is reclaimed if
> it is not used; MAX_DOMAIN_MEMORY was bumped to default to 128G in the
> same change.
How intrusive is the change? Could we reasonably backport it?

Ben.

--
Ben Hutchings
Quantity is no substitute for quality, but it's the only one we've got.