On Thu, Mar 26, 2026 at 6:24 PM Amit Langote <[email protected]> wrote:
> On Wed, Mar 25, 2026 at 4:39 PM Amit Langote <[email protected]> wrote:
> > On Fri, Mar 20, 2026 at 2:20 AM Amit Langote <[email protected]> 
> > wrote:
> > > On Mon, Mar 9, 2026 at 1:41 PM Amit Langote <[email protected]> 
> > > wrote:
> > > Stepping back -- the core question is whether running executor logic
> > > (pruning) inside GetCachedPlan() is acceptable at all. The plan cache
> > > and executor have always had a clean boundary: plan cache locks
> > > everything, executor runs. This optimization necessarily crosses that
> > > line, because the information needed to decide which locks to skip
> > > (pruning results) can only come from executor machinery.
> > >
> > > The proposed approach has GetCachedPlan() call ExecutorPrep() to do a
> > > limited subset of executor work (range table init, permissions,
> > > pruning), carry the results out through CachedPlanPrepData, and leave
> > > the CachedPlan itself untouched. The executor already has a multi-step
> > > protocol: start/run/end. prep/start/run/end is just a finer
> > > decomposition of what InitPlan() was already doing inside
> > > ExecutorStart().
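> > >
> > > A caller's-eye sketch of that decomposition (a rough illustration;
> > > the exact signatures in the patch may differ):
> > >
> > >     /* During GetCachedPlan(): run initial pruning, lock only the
> > >      * surviving relations, and carry the results out through
> > >      * CachedPlanPrepData for the executor to consume later. */
> > >     ExecutorPrep(plannedstmt, params, &prep);
> > >
> > >     /* At execution time: start picks up the prep results instead
> > >      * of redoing range table init, permissions, and pruning. */
> > >     ExecutorStart(queryDesc, eflags);
> > >     ExecutorRun(queryDesc, ForwardScanDirection, 0);
> > >     ExecutorEnd(queryDesc);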
> > >
> > > Of the attached patches, I'm targeting 0001-0003 for commit. 0004 (SQL
> > > function support) and 0005 (parallel worker reuse) are useful
> > > follow-ons but not essential.  The optimization works without them in
> > > most cases, and they can be reviewed and committed separately.
> > >
> > > If there's a cleaner way to avoid locking pruned partitions without
> > > the plumbing this patch adds, I haven't found it in the year since the
> > > revert.  I'd welcome a pointer if you see one.  Failing that, I think
> > > this is the right trade-off, but it's a judgment call about where to
> > > hold your nose.
> > >
> > > Tom, I'd value your opinion on whether this approach is something
> > > you'd be comfortable seeing in the tree.
> >
> > Attached is an updated set with some cleanup after another pass.
> >
> > - Removed ExecCreatePartitionPruneStates() from 0001. In 0001-0003,
> > ExecDoInitialPruning() handles both setup and pruning internally; the
> > split isn't needed yet.
> >
> > - Tightened commit messages to describe what each commit does now, not
> > what later commits will use it for. In particular, 0002 is upfront
> > that the portal/SPI/EXPLAIN plumbing is scaffolding that 0003 lights
> > up.
> >
> > - Updated setrefs.c comment for firstResultRels to drop a blanket
> > claim about one ModifyTable per query level.
> >
> > As before, 0001-0003 are the focus, plus perhaps 0004, which teaches
> > the new pruning-aware GetCachedPlan() contract to its relatively new
> > caller in functions.c.
>
> While reviewing the patch more carefully, I realized there's a
> correctness issue when rule rewriting causes a single statement to
> expand into multiple PlannedStmts in one CachedPlan.
>
> PortalRunMulti() executes those statements sequentially, with
> CommandCounterIncrement() between them, so Q2's ExecutorStart()
> normally sees the effects of Q1.
>
> With the patch, though, AcquireExecutorLocksUnpruned() runs
> ExecutorPrep() on all PlannedStmts in one pass during GetCachedPlan(),
> before any statement executes. If a later statement has
> initial-pruning expressions that read data modified by an earlier one,
> pruning can see stale results.
>
> There's also a memory lifetime issue: PortalRunMulti() calls
> MemoryContextDeleteChildren(portalContext) between statements, which
> destroys EStates prepared for later statements.
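>
> Schematically, PortalRunMulti()'s per-statement loop looks like this
> (a simplification of the pquery.c code, not the literal source):
>
>     foreach(lc, portal->stmts)
>     {
>         PlannedStmt *pstmt = lfirst_node(PlannedStmt, lc);
>
>         ProcessQuery(pstmt, ...);     /* execute this statement */
>         CommandCounterIncrement();    /* make its effects visible */
>
>         /* frees per-statement memory, including any EState that
>          * was prepped up front for statements not yet executed */
>         MemoryContextDeleteChildren(portal->portalContext);
>     }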
>
> Here's a concrete case demonstrating the semantic issue:
>
>   create table multistmt_pt (a int, b int) partition by list (a);
>   create table multistmt_pt_1 partition of multistmt_pt for values in (1);
>   create table multistmt_pt_2 partition of multistmt_pt for values in (2);
>   insert into multistmt_pt values (1, 0), (2, 0);
>
>   create table prune_config (val int);
>   insert into prune_config values (1);
>
>   create function get_prune_val() returns int as $$
>     select val from prune_config;
>   $$ language sql stable;
>
>   -- rule action runs first, updating prune_config before the
>   -- original statement's pruning would normally be evaluated
>   create rule config_upd_rule as on update to multistmt_pt
>     do also update prune_config set val = 2;
>
>   set plan_cache_mode to force_generic_plan;
>   prepare multi_q as
>     update multistmt_pt set b = b + 1 where a = get_prune_val();
>   execute multi_q;  -- creates the generic plan
>
>   -- reset for the real test; the multistmt_pt update fires the rule
>   -- and sets val back to 2, so prune_config must be reset last
>   update multistmt_pt set b = 0;
>   update prune_config set val = 1;
>
>   -- second execute reuses the plan
>   execute multi_q;
>   select * from multistmt_pt order by a;
>
> Without the patch: the rule action updates prune_config to val=2
> first, then after CCI the original statement's initial pruning calls
> get_prune_val(), gets 2, prunes to multistmt_pt_2, and updates it
> correctly: (1, 0), (2, 1).
>
> With the patch as it stood: both statements' pruning runs during
> GetCachedPlan() before either executes. The original statement's
> pruning sees val=1 and prunes away multistmt_pt_2; by the time the
> statement runs, get_prune_val() returns 2, which matches nothing in
> the surviving multistmt_pt_1, so multistmt_pt_2, the partition that
> should have been updated, is never touched.
>
> The fix is to skip pruning-aware locking for CachedPlans containing
> multiple PlannedStmts, falling back to locking all partitions.
> Single-statement plans are unchanged.
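>
> In code terms the guard can be quite small; something along these
> lines (illustrative only; AcquireExecutorLocks() is the pre-existing
> lock-everything path, and the pruning-aware call's signature here is
> a guess):
>
>     if (list_length(plan->stmt_list) == 1)
>         AcquireExecutorLocksUnpruned(plan, &prep);   /* prune, lock survivors */
>     else
>         AcquireExecutorLocks(plan->stmt_list, true); /* lock all partitions */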

For good measure, I also verified that Tom's test case from last May
[1] that prompted the revert of the previous commit works correctly
with this patch. When the DO ALSO rule is created mid-execution, the
plan gets invalidated and rebuilt as a multi-statement CachedPlan,
which triggers the fallback to locking all partitions. No assertions,
no crashes.

-- 
Thanks, Amit Langote

[1] https://postgr.es/m/[email protected]

