Ryan Underwood <[EMAIL PROTECTED]> wrote:
>
> On Tue, Apr 25, 2006 at 12:38:11AM -0700, Andrew Morton wrote:
> >
> > We already have infrastructure for managing and allocating large
> > physically-contiguous hunks of memory: arch/*/mm/hugetlb.c.  On x86 that
> > operates in 4MB hunks (non-PAE) and 2MB hunks (PAE).
> >
> > Perhaps we could hook into there to get the needed memory?
>
> Could the upper limit of hugetlb be increased, or is it a function of
> page size?  Also, I found that the size given to bigphysarea is in
> pages, not in KB.  So what is being asked for is actually an 8MB buffer
> (I guess it is a double-buffered operation).  2MB is close but not quite
> enough, it appears.  I did Cc: the mjpegtools developer so maybe he can
> elaborate.
>
HPAGE_SIZE is set at compile time.  It's typically 2MB or 4MB.  The size is
determined by the CPU's large-page addressing capability.

It'd be best if this driver could be taught to work with >=2MB pages.

If that can't be done, it might be possible to teach the hugepage allocator
to work with (say) 4MB or 8MB pages, and give it the capability to split or
coalesce them down to 2MB or 4MB units.  That'd be a relatively simple
generalisation on top of the hugepage allocator.

Alternatively, there has been some work done (by Mel Gorman) to teach the
page allocator to avoid fragmentation of large physical regions.  Mel's
patches _should_ permit this driver to have a pretty good success rate at
simply calling the main page allocator when it needs a large
physically-contiguous region.

Mel's patches are complex, but if someone is able to test them in a
real-world situation and can confirm that they adequately meet this
requirement, then that would be a bonus in the evaluation of Mel's patches.
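For illustration only (this is not the existing bigphysarea code, nor part
of Mel's patches): a rough sketch of how the driver could assemble the 8MB
buffer Ryan mentions from HPAGE_SIZE-sized, physically contiguous chunks
taken straight from the main page allocator.  All the zr_* names are made
up, and HPAGE_SIZE here assumes CONFIG_HUGETLB_PAGE on x86.

/*
 * Sketch: build the capture buffer from HPAGE_SIZE-sized chunks.  With
 * anti-fragmentation in place, these order-9/order-10 allocations
 * should usually succeed.
 */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <asm/page.h>	/* HPAGE_SIZE */

#define ZR_BUF_BYTES	(8 * 1024 * 1024)	/* double-buffered capture */
#define ZR_NR_CHUNKS	(ZR_BUF_BYTES / HPAGE_SIZE)

static struct page *zr_chunk[ZR_NR_CHUNKS];

static int zr_alloc_buffer(void)
{
	unsigned int order = get_order(HPAGE_SIZE);
	int i;

	for (i = 0; i < ZR_NR_CHUNKS; i++) {
		/* each chunk is HPAGE_SIZE bytes, physically contiguous */
		zr_chunk[i] = alloc_pages(GFP_KERNEL, order);
		if (!zr_chunk[i])
			goto out_free;
	}
	return 0;

out_free:
	while (--i >= 0)
		__free_pages(zr_chunk[i], order);
	return -ENOMEM;
}

static void zr_free_buffer(void)
{
	unsigned int order = get_order(HPAGE_SIZE);
	int i;

	for (i = 0; i < ZR_NR_CHUNKS; i++)
		if (zr_chunk[i])
			__free_pages(zr_chunk[i], order);
}

The DMA side would then program a short scatter list of ZR_NR_CHUNKS
entries rather than one 8MB region, which is roughly what "teaching the
driver to work with >=2MB pages" amounts to.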