Wei Wang wrote:
> Michael, could we merge patch 3-5 first?
No! I have repeatedly asked you to propose only the VIRTIO_BALLOON_F_SG changes.
Please don't ignore me.
Patch 4 depends on patch 2, so back to patch 2.
Your patch is trying to switch between tell_host_sgs() and tell_host() based on
VIRTIO_BALLOON_F_SG.
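For context, the switch in question would look roughly like this. A minimal
sketch only: tell_host_sgs() and the vb->min_pfn/vb->max_pfn fields are part of
the proposed patch, not merged code, while virtio_has_feature() and tell_host()
are the existing driver API:

	/* Dispatch on the negotiated feature bit. */
	static void tell_host_one_vq(struct virtio_balloon *vb,
				     struct virtqueue *vq)
	{
		if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_SG))
			tell_host_sgs(vb, vq, vb->min_pfn, vb->max_pfn);
		else
			tell_host(vb, vq);	/* existing PFN-array path */
	}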
Wei Wang wrote:
> - enable OOM to free inflated pages maintained in the local temporary
> list.
I do want to see it before applying this patch.
Please carefully check how the xbitmap implementation works, and you will
find that you are adding a lot of redundant operations, as well as a bug.
Wei Wang wrote:
> On 01/03/2018 10:29 AM, Tetsuo Handa wrote:
> > Matthew Wilcox wrote:
> >> The radix tree convention is objectively awful, which is why I'm working
> >> to change it. Specifying the GFP flags at radix tree initialisation time
> >> rather than allocation time leads to all kinds of confusion.
Matthew Wilcox wrote:
> The radix tree convention is objectively awful, which is why I'm working
> to change it. Specifying the GFP flags at radix tree initialisation time
> rather than allocation time leads to all kinds of confusion. The preload
> API is a pretty awful workaround, and it will go
Wei Wang wrote:
> On 12/26/2017 06:38 PM, Tetsuo Handa wrote:
> > Wei Wang wrote:
> >> On 12/25/2017 10:51 PM, Tetsuo Handa wrote:
> >>> Wei Wang wrote:
> >>>
> >> What we are doing here is to free the pages that were just allocated in
>
Wei Wang wrote:
> On 12/25/2017 10:51 PM, Tetsuo Handa wrote:
> > Wei Wang wrote:
> >>>>>> @@ -173,8 +292,15 @@ static unsigned fill_balloon(struct
> >>>>>> virtio_balloon *vb, size_t num)
> >>>>>> while ((page = balloon_page_pop(&pages))) {
Wei Wang wrote:
> @@ -173,8 +292,15 @@ static unsigned fill_balloon(struct
> virtio_balloon *vb, size_t num)
>  	while ((page = balloon_page_pop(&pages))) {
>  		balloon_page_enqueue(&vb->vb_dev_info, page);
> +		if (use_sg) {
> +			if (xb_
Matthew Wilcox wrote:
> > +	unsigned long pfn = page_to_pfn(page);
> > +	int ret;
> > +
> > +	*pfn_min = min(pfn, *pfn_min);
> > +	*pfn_max = max(pfn, *pfn_max);
> > +
> > +	do {
> > +		if (xb_preload(GFP_NOWAIT | __GFP_NOWARN) < 0)
> > +			return -ENOMEM;
> > +
>
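The helper is cut off above; judging from the fragments quoted elsewhere in
this thread, it presumably continues along these lines (a reconstruction
sketch, not the applied patch):

	do {
		if (xb_preload(GFP_NOWAIT | __GFP_NOWARN) < 0)
			return -ENOMEM;

		ret = xb_set_bit(&vb->page_xb, pfn);
		xb_preload_end();
	} while (unlikely(ret == -EAGAIN));

	return ret;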
Matthew Wilcox wrote:
> On Sat, Dec 23, 2017 at 11:59:54AM +0900, Tetsuo Handa wrote:
> > Matthew Wilcox wrote:
> > > + bit %= IDA_BITMAP_BITS;
> > > + radix_tree_iter_init(&iter, index);
> > > + slot = idr_get_free_cmn(root, &iter, GFP_NOWAIT | _
Matthew Wilcox wrote:
> +/**
> + * xb_set_bit() - Set a bit in the XBitmap.
> + * @xb: The XBitmap.
> + * @bit: Index of the bit to set.
> + *
> + * This function is used to set a bit in the xbitmap.
> + *
> + * Return: 0 on success. -ENOMEM if memory could not be allocated.
> + */
> +int xb_set_bit(struct xb *xb, unsigned long bit)
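Going by the documented semantics, a caller pairs the bit operation with the
preload API roughly as follows (a sketch, assuming the int-returning
xb_preload() variant quoted elsewhere in this thread):

	if (xb_preload(GFP_KERNEL) < 0)	/* preallocate; disables preemption */
		return -ENOMEM;
	err = xb_set_bit(&xb, bit);	/* 0 on success, -ENOMEM on failure */
	xb_preload_end();		/* re-enables preemption */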
Matthew Wilcox wrote:
> > +/**
> > + * xb_find_set - find the next set bit in a range of bits
> > + * @xb: the xbitmap to search from
> > + * @offset: the offset in the range to start searching
> > + * @size: the size of the range
> > + *
> > + * Returns: the found bit, or @size if no set bit is found
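With that return convention (@size meaning "not found"), iterating every set
bit in [offset, size) looks roughly like this (sketch):

	unsigned long bit = offset;

	while ((bit = xb_find_set(&xb, size, bit)) < size) {
		/* process set bit `bit` */
		bit++;	/* resume the search just past the found bit */
	}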
Wei Wang wrote:
> Thanks for the effort. That's actually caused by the previous "!node"
> path, which incorrectly changed "index = (index | RADIX_TREE_MAP_MASK) +
> 1". With the change below, it will run pretty well with the test cases.
>
> 	if (!node && !bitmap)
> 		return size;
>
> Would yo
Matthew Wilcox wrote:
> > I think xb_find_set() has a bug in !node path.
>
> Don't think. Write a test-case. Please. If it shows a bug, then great,
+unsigned long xb_find_set(struct xb *xb, unsigned long size,
+			  unsigned long offset)
+{
+	struct radix_tree_root *r
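A test case along the lines Matthew asks for, exercising the empty-tree
(!node) path, might look like this (hypothetical, written against the
signature above; xb_init() is assumed):

	struct xb xb;

	xb_init(&xb);
	/* Empty bitmap: the search must report "not found", i.e. @size. */
	assert(xb_find_set(&xb, ULONG_MAX, 0) == ULONG_MAX);

	xb_preload(GFP_KERNEL);
	xb_set_bit(&xb, 700);
	xb_preload_end();
	assert(xb_find_set(&xb, ULONG_MAX, 0) == 700);
	assert(xb_find_set(&xb, ULONG_MAX, 701) == ULONG_MAX);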
Wei Wang wrote:
> ChangeLog:
> v19->v20:
> 1) patch 1: xbitmap
> - add __rcu to "void **slot";
> - remove the exceptional path.
> 2) patch 3: xbitmap
> - DeveloperNotes: add an item to comment that the current bit range
> related APIs operating on extremely large ranges (e
Wang, Wei W wrote:
> > Wei Wang wrote:
> > > > But passing GFP_NOWAIT means that we can handle allocation failure.
> > > > There is no need to use preload approach when we can handle allocation
> > > > failure.
> > >
> > > I think the reason we need xb_preload is that radix tree insertion
> > >
Wei Wang wrote:
> > But passing GFP_NOWAIT means that we can handle allocation failure. There is
> > no need to use preload approach when we can handle allocation failure.
>
> I think the reason we need xb_preload is that radix tree insertion
> needs the memory to be preallocated already (it c
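The convention being mirrored here is the classic radix-tree preload pattern,
where node allocation happens while sleeping is still allowed (real radix-tree
API, shown only to illustrate the argument; `lock`, `tree`, `index` and `item`
are placeholders):

	if (radix_tree_preload(GFP_KERNEL))	/* may sleep; preallocates nodes */
		return -ENOMEM;
	spin_lock(&lock);			/* atomic from here on */
	radix_tree_insert(&tree, index, item);	/* consumes preloaded nodes */
	spin_unlock(&lock);
	radix_tree_preload_end();		/* re-enables preemption */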
Wei Wang wrote:
> On 12/16/2017 02:42 AM, Matthew Wilcox wrote:
> > On Tue, Dec 12, 2017 at 07:55:55PM +0800, Wei Wang wrote:
> >> +int xb_preload_and_set_bit(struct xb *xb, unsigned long bit, gfp_t gfp);
> > I'm struggling to understand when one would use this. The xb_ API
> > requires you to han
Matthew Wilcox wrote:
> On Sat, Dec 16, 2017 at 01:31:24PM +0900, Tetsuo Handa wrote:
> > Michael S. Tsirkin wrote:
> > > On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> > > > My understanding is that virtio-balloon wants to handle sparsely
> >
Michael S. Tsirkin wrote:
> On Sat, Dec 16, 2017 at 01:21:52AM +0900, Tetsuo Handa wrote:
> > My understanding is that virtio-balloon wants to handle sparsely spread
> > unsigned long values (which is PATCH 4/7) and wants to find all chunks of
> > consecutive "1"
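Concretely, that "chunks of consecutive 1s" usage would scan with paired
find-set/find-zero searches, roughly as below (a sketch; xb_find_zero() is
assumed to mirror xb_find_set(), and send_one_sg() is a hypothetical helper):

	unsigned long start = pfn_min, end;

	while ((start = xb_find_set(&vb->page_xb, pfn_max + 1, start)) <= pfn_max) {
		end = xb_find_zero(&vb->page_xb, pfn_max + 1, start);
		/* [start, end) is one run of consecutive inflated PFNs */
		send_one_sg(vb, start, end - start);
		start = end;
	}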
Matthew Wilcox wrote:
> On Fri, Dec 15, 2017 at 01:29:45AM +0900, Tetsuo Handa wrote:
> > > > Also, one more thing you need to check. Have you checked how long does
> > > > xb_find_next_set_bit(xb, 0, ULONG_MAX) on an empty xbitmap takes?
> > > > If it ca
Wei Wang wrote:
> I used the example of xb_clear_bit_range(), and xb_find_next_bit() is
> fundamentally the same. Please let me know if anything still looks fuzzy.
I don't think it is the same for xb_find_next_bit() with set == 0.
+ if (radix_tree_exception(bmap)) {
+
Wei Wang wrote:
> On 12/12/2017 09:20 PM, Tetsuo Handa wrote:
> > Wei Wang wrote:
> >> +void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long
> >> end)
> >> +{
> >> + struct radix_tree_root *root = &xb->xbrt;
>
Wei Wang wrote:
> +void xb_clear_bit_range(struct xb *xb, unsigned long start, unsigned long
> end)
> +{
> +	struct radix_tree_root *root = &xb->xbrt;
> +	struct radix_tree_node *node;
> +	void **slot;
> +	struct ida_bitmap *bitmap;
> +	unsigned int nbits;
> +
> +	for (; st
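The loop is cut off above; from the header quoted later in this thread, each
iteration handles one IDA_BITMAP_BITS-sized chunk, roughly as in this
reconstruction sketch (the empty-bitmap cleanup is elided):

	for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1) {
		unsigned long index = start / IDA_BITMAP_BITS;
		unsigned long bit = start % IDA_BITMAP_BITS;

		bitmap = __radix_tree_lookup(root, index, &node, &slot);
		if (!bitmap)
			continue;
		nbits = min(end - start + 1,
			    (unsigned long)(IDA_BITMAP_BITS - bit));
		bitmap_clear(bitmap->bitmap, bit, nbits);
		/* if the ida_bitmap went empty, free it and clear the slot */
	}

Note that Handa's question below about start == end targets exactly this
"start < end" test, given that @end is documented as inclusive.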
Matthew Wilcox wrote:
> On Fri, Dec 01, 2017 at 03:09:08PM +, Wang, Wei W wrote:
> > On Friday, December 1, 2017 9:02 PM, Tetsuo Handa wrote:
> > > If start == end is legal,
> > >
> > >	for (; start < end; start = (start | (IDA_BITMAP_BITS - 1)) + 1)
Matthew Wilcox wrote:
> On Thu, Nov 30, 2017 at 10:35:03PM +0900, Tetsuo Handa wrote:
> > According to xb_set_bit(), it seems to me that we are trying to avoid
> > memory allocation for "struct ida_bitmap" when all set bits within a
> > 1024-bit bitmap resid
Wei Wang wrote:
> On 11/30/2017 06:34 PM, Tetsuo Handa wrote:
> > Wei Wang wrote:
> >> + * @start: the start of the bit range, inclusive
> >> + * @end: the end of the bit range, inclusive
> >> + *
> >> + * This function is used to clear a bit
Tetsuo Handa wrote:
> > +
> > + if (ebit >= BITS_PER_LONG)
> > + continue;
>
> (I don't understand how the radix tree works, but generally this patchset
> looks fuzzy to me about boundary cases. Thus, I want to c
Wei Wang wrote:
> +static inline int xb_set_page(struct virtio_balloon *vb,
> +			      struct page *page,
> +			      unsigned long *pfn_min,
> +			      unsigned long *pfn_max)
> +{
> +	unsigned long pfn = page_to_pfn(page);
> +	int
Wei Wang wrote:
> /**
> + * xb_clear_bit - clear a range of bits in the xbitmap
Name mismatch.
> + * @start: the start of the bit range, inclusive
> + * @end: the end of the bit range, inclusive
> + *
> + * This function is used to clear a bit in the xbitmap. If all the bits of
> the
> + * bitm
Wei Wang wrote:
> On 11/03/2017 07:25 PM, Tetsuo Handa wrote:
> >> @@ -184,8 +307,12 @@ static unsigned fill_balloon(struct virtio_balloon
> >> *vb, size_t num)
> >>
> >>num_allocated_pages = vb->num_pfns;
> >>
Wei Wang wrote:
> @@ -164,6 +284,8 @@ static unsigned fill_balloon(struct virtio_balloon *vb,
> size_t num)
>  			break;
>  		}
>
> +		if (use_sg && xb_set_page(vb, page, &pfn_min, &pfn_max) < 0)
Isn't this leaking "page"?
> +			break;
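One minimal way to address the leak being pointed out (a sketch; whether to
free the page or push it back onto the local list is a design choice the
thread doesn't settle here):

	if (use_sg && xb_set_page(vb, page, &pfn_min, &pfn_max) < 0) {
		__free_page(page);	/* don't leak the freshly allocated page */
		break;
	}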
Wei Wang wrote:
> Here's a detailed analysis of the deadlock by Tetsuo Handa:
>
> In leak_balloon(), mutex_lock(&vb->balloon_lock) is called in order to
> serialize against fill_balloon(). But in fill_balloon(),
> alloc_page(GFP_HIGHUSER[_MOVABLE] | __GFP_NOMEMALLOC |
I'm commenting without understanding the logic.
Wei Wang wrote:
> +
> +bool xb_preload(gfp_t gfp);
> +
This wants a __must_check annotation, since __radix_tree_preload() is marked
with __must_check. Mistakenly failing to check the result of xb_preload()
will leave preemption disabled unexpectedly.
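That is, the declaration being asked for would read (sketch):

	bool __must_check xb_preload(gfp_t gfp);

With the annotation, the compiler warns (via -Wunused-result) whenever a
caller ignores the return value.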
Wei Wang wrote:
>On 10/10/2017 09:09 PM, Tetsuo Handa wrote:
>> Wei Wang wrote:
>>>> And even if we could remove balloon_lock, you still cannot use
>>>> __GFP_DIRECT_RECLAIM at xb_set_page(). I think you will need to use
>>>> "whether it is s
Wei Wang wrote:
> > And even if we could remove balloon_lock, you still cannot use
> > __GFP_DIRECT_RECLAIM at xb_set_page(). I think you will need to use
> > "whether it is safe to wait" flag from
> > "[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()" .
>
> Without the lock b
Wei Wang wrote:
> On 10/09/2017 11:20 PM, Michael S. Tsirkin wrote:
> > On Sat, Sep 30, 2017 at 12:05:52PM +0800, Wei Wang wrote:
> >> +static inline void xb_set_page(struct virtio_balloon *vb,
> >> +			       struct page *page,
> >> +			       unsigned long *pfn_min,
On 2017/09/30 13:05, Wei Wang wrote:
> /**
> + * xb_preload - preload for xb_set_bit()
> + * @gfp_mask: allocation mask to use for preloading
> + *
> + * Preallocate memory to use for the next call to xb_set_bit(). This function
> + * returns with preemption disabled. It will be enabled by xb_preload_end().