On Tue, Mar 24, 2026 at 12:26 PM Andres Freund <[email protected]> wrote:
> > > > This is preparatory work for an upcoming commit that will need xs_blk
> > > > to manage buffer pin transfers between the scan and the executor slot.
> > >
> > > A subsequent commit adds an earlier ExecClearTuple(slot) to make the 
> > > buffer
> > > refcount decrement cheaper (due to hitting the one-element cache in
> > > bufmgr.c). I wonder if it's worth pulling that into this commit? Mostly to
> > > make that larger commit smaller.
> >
> > Is it really worth doing that without also doing the
> > xs_lastinblock/ExecStorePinnedBufferHeapTuple stuff? We need batches
> > to do the latter.
>
> I see perf benefits from it alone, yes.

Does that mean it should be in a separate commit?

The upcoming v18 will significantly change the patch set's structure
by breaking up the big patch so that the slot interface changes are in
their own commit, along with pushing the VM accesses down into heapam.

That feels like a big enough change on its own. There is bound to be a
v19, so to me it makes sense to defer a decision on this kind of thing
until you see v18.
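
To illustrate the effect being discussed (cheaper buffer refcount
decrements when the unpin immediately follows the last use): here's a
minimal self-contained model, not the actual bufmgr.c code. Releasing
the most recently touched buffer hits a one-element cache and skips
the slower refcount lookup path. All names below are illustrative.

```c
#include <assert.h>

#define NBUFFERS 16

static int  refcounts[NBUFFERS];   /* stand-in for per-backend refcounts */
static int  cached_buf = -1;       /* one-element cache: last buffer touched */
static int  slow_lookups = 0;      /* counts trips to the slow lookup path */

/* Find (and cache) the refcount slot for a buffer. */
static int *
get_refcount_entry(int buf)
{
    if (buf != cached_buf)
    {
        slow_lookups++;            /* models the hash/array search */
        cached_buf = buf;
    }
    return &refcounts[buf];
}

static void
pin_buffer(int buf)
{
    (*get_refcount_entry(buf))++;
}

static void
unpin_buffer(int buf)
{
    (*get_refcount_entry(buf))--;
}
```

The point of the earlier ExecClearTuple(slot) is to make the unpin
happen while the buffer is still the cached one, so the decrement
takes the cheap path.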

> > > The commit message doesn't mention that this affects ammarkpos/amrestrpos.
> >
> > It does not. But FWIW there's a prominent "Note" about it in the SGML docs.
>
> Approximately nobody looking at the commit to see what they need to change
> will see that...

In any case v17 updated the commit message to point out the removal of
ammarkpos/amrestrpos.

> > No, I didn't try. I doubt saving space is worthwhile, since we'll need
> > a relatively large allocation for batches used during index-only scans
> > regardless.
>
> Seems fine to not care for now. But, FWIW, the motivating reason wouldn't be
> to really save memory, it'd be to make it more likely the data fits into a
> higher level of the cache.

Understood.

> > We're now advertising that indexam_util_batch_unlock is optional, and
> > that index AMs can go their own way when needed.
>
> Fair enough.

Cool.

> > > > +
> > > > +     /*
> > > > +      * heap blocks fetched counts (incremented by index_getnext_slot 
> > > > calls
> > > > +      * within table AMs, though only during index-only scans)
> > > > +      */
> > > > +     uint64          nheapfetches;
> > > >  } IndexScanInstrumentation;
> > >
> > > s/heap/table/ for anything new imo.
> >
> > Usually I'd agree, but here we're using the same name as the one
> > presented in EXPLAIN ANALYZE. IMV this should match that. (Maybe the
> > EXPLAIN ANALYZE output should change, and this field name along with
> > it, but that's another discussion entirely.)
>
> I think if we add it into more and more places it'll get harder and harder to
> eventually fix...

Okay, I'll make this change for v18.
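
To be concrete about the s/heap/table/ change: something along these
lines, using a simplified stand-in for IndexScanInstrumentation rather
than the real definition (field and type names here are illustrative):

```c
#include <assert.h>
#include <stdint.h>

typedef struct IndexScanInstrumentationSketch
{
    uint64_t    nsearches;      /* existing counter, unchanged */

    /*
     * table blocks fetched (incremented from within table AMs, though
     * only during index-only scans)
     */
    uint64_t    ntablefetches;  /* was: nheapfetches */
} IndexScanInstrumentationSketch;

/* Table AM bumps the counter when it fetches a table block. */
static void
count_table_fetch(IndexScanInstrumentationSketch *instr)
{
    instr->ntablefetches++;
}
```

Whether the EXPLAIN ANALYZE label changes to match is still a separate
discussion, as noted above.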

> Code reuse and making it easier for other AMs to adapt this...

Of course, that was what I meant.

> > > I also suspect it'd be worth creating a new heapam.c file for this new
> > > code. heapam_index.c or such.
> >
> > I had thought about that myself. How would that be structured, in
> > terms of the commits?
>
> I'm imagining something like heapam_iscan.c or heapam_indexfetch.c or such.

> I'd introduce it by adding it in a commit that moves heap_hot_search_buffer(),
> and heapam_index_fetch_{begin,reset,end}() into it.
>
> Moving heap_hot_search_buffer() into the same file will be nice because it'll
> allow partial inlining of it into some really performance sensitive functions.

I'm not opposed, of course. But let's leave that question until after
I post v18.
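
For anyone following along, the inlining argument is just the usual
translation-unit one: a static helper defined in the same file as its
hot caller is eligible for (partial) inlining, which an extern
function in another file isn't, absent LTO. A toy sketch, with
illustrative names rather than the real heapam functions:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Same-file helper: because it is static and visible to the compiler
 * alongside its callers, it can be inlined into them.
 */
static bool
hot_chain_search(const int *chain, int len, int target, int *off)
{
    for (int i = 0; i < len; i++)
    {
        if (chain[i] == target)
        {
            *off = i;
            return true;
        }
    }
    return false;
}

/* Hot caller in the same translation unit. */
static int
index_fetch_offset(const int *chain, int len, int target)
{
    int         off;

    if (hot_chain_search(chain, len, target, &off))
        return off;
    return -1;
}
```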

> I think we should move the seq/tid scan stuff into its own file too, but
> that's obviously a separate thread.

Makes sense.

> > > Any reason this isn't in index_beginscan_internal(), given both
> > > index_beginscan() and index_beginscan_parallel() need it?  I realize you'd
> > > need to add arguments to index_beginscan_internal(), but I don't see a 
> > > problem
> > > with that.  Alternatively a helper for this seems like a possibility too.
> >
> > Fixed, by moving much more of the initialization done by each variant
> > (index_beginscan, index_beginscan_bitmap, index_beginscan_parallel)
> > into index_beginscan_internal itself.
>
> Nice.  Haven't checked out your new version yet - are you doing that as a
> separate commit?

Maybe. Again, let's see how things shake out in v18, and then revisit.

> I don't think we necessarily need the coverage of your full torture test suite
> in core, but I feel some basic sanity tests really ought to be in the core
> tests. There's very little, from what I can tell.  Even just making sure we
> have coverage for index [only] scans going forward and backward, and the
> same for merge & nestloop joins would be quite a win.

We probably have that level of coverage. What we lack is coverage for
things that are actually tricky.

For example, the regression tests lack coverage for an nbtree scan
that uses a scrollable cursor plus an SAOP scan key and backs up
across a page boundary, relying directly on a call to the new
amposreset routine for correct behavior. Nor do we have the equivalent
SAOP merge join test that restores a mark across a page/batch boundary
(note that we can safely skip the amposreset call in the restore's
"scanBatch == markBatch" happy path).

The underlying difficulty is that these things tend to "accidentally
fail to fail". Some of these cases were rather difficult to write any
kind of test for, even for my own purposes.


> > IOW, when we're called through index_batchscan_end (specifically, when
> > we're called through index_batchscan_reset when it is called by
> > index_batchscan_end) we don't want to use the cache. It seemed to make
> > sense to implement this in a way that didn't require any special
> > handling from within indexam_util_batch_release (since it's not a
> > concern of index AMs).
>
> I am not really following.  Right now there's two close copies of this code:

> Yes, one of them has the additional "if (scan->xs_heapfetch)" condition, but
> that hardly seems like a real problem preventing sharing the code.

I'll try to address this for v18.
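
Probably along these lines: one helper taking the varying condition as
a parameter, instead of two near-copies. All names here are
hypothetical placeholders, not the patch's actual code:

```c
#include <assert.h>
#include <stdbool.h>

static int  nreleased = 0;         /* batches freed outright */
static int  ncached = 0;           /* batches stashed for reuse */

/*
 * Release one batch.  'allow_cache' is false on the scan-end path,
 * where caching a batch for a future fetch makes no sense.
 */
static void
batch_release(bool allow_cache)
{
    if (allow_cache)
        ncached++;                 /* keep the batch around for reuse */
    else
        nreleased++;               /* free it outright */
}

/* Reset path: another fetch may follow, so keep the cache warm. */
static void
batchscan_reset(void)
{
    batch_release(true);
}

/* End-of-scan path: no future fetches, so bypass the cache. */
static void
batchscan_end(void)
{
    batch_release(false);
}
```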

Thanks
-- 
Peter Geoghegan