I am having ongoing trouble with a pair of tables, the design of which is
beyond my control.
There is a 'primary' table with hundreds of millions of rows. There is then
a 'subclass' table, roughly 10% the size of the primary, which has additional
fields. The tables logically share a primary key field (although tha
Unfortunately I'm not free to share the specific schema or the query plans.
They derive from an upstream vendor that is 'protective' of their data
model. To get to a proper example I'll need to recreate the behavior with
generic data in a generified schema.
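For what it's worth, a generified sketch of the shape I'm describing (all
table and column names here are invented, not the vendor's):

    CREATE TABLE primary_entity (
        entity_id  bigint PRIMARY KEY,
        payload    text   -- stands in for many vendor-specific columns
    );

    -- Roughly 10% of rows have a 'subclass' record carrying additional
    -- fields; the key is logically shared with primary_entity.
    CREATE TABLE entity_subclass (
        entity_id  bigint PRIMARY KEY,
        extra      text
    );

    -- The problematic queries join the two on the shared key, e.g.:
    SELECT p.payload, s.extra
    FROM primary_entity p
    JOIN entity_subclass s USING (entity_id);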
I apologize for being frustratingly vague.
On Fri, Jan 15, 2021 at 12:27 PM Michael Lewis wrote:
> On Fri, Jan 15, 2021 at 10:22 AM Alexander Stoddard <
> alexander.stodd...@gmail.com> wrote:
>
>> The 'fast plans' use parallel seq scans. The 'slow plans' use index
>> scans. It appears
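One way to capture and compare the two plans (against a generified query,
since the real one cannot be shared) would be something along these lines:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT p.payload, s.extra
    FROM primary_entity p
    JOIN entity_subclass s USING (entity_id)
    WHERE p.payload IS NOT NULL;   -- hypothetical filter

In that output the 'fast' runs show a Parallel Seq Scan on the primary
table, while the 'slow' runs use index scans instead.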
On Tue, Jan 19, 2021 at 2:47 PM Michael Lewis wrote:
> On Fri, Jan 15, 2021 at 3:27 PM Alexander Stoddard <
> alexander.stodd...@gmail.com> wrote:
>
>> I am not doing anything to direct the optimizer. Do I have configurable
>> options in that regard? I was unaware
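There are planner settings that can be toggled per session to experiment
with this kind of plan choice. A sketch (not something to leave enabled in
production), reusing the invented schema above:

    -- Make index scans unattractive for this session only, to see whether
    -- the planner falls back to the parallel seq scan plan:
    SET enable_indexscan = off;
    SET enable_indexonlyscan = off;
    SET enable_bitmapscan = off;

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT p.payload, s.extra
    FROM primary_entity p
    JOIN entity_subclass s USING (entity_id);

    -- Cost and parallelism settings such as random_page_cost,
    -- effective_cache_size and max_parallel_workers_per_gather also
    -- influence which plan wins.
    RESET enable_indexscan;
    RESET enable_indexonlyscan;
    RESET enable_bitmapscan;

These affect only the current session and are useful for diagnosis rather
than as a permanent fix.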
On Tue, Dec 19, 2017 at 10:39 AM, Stephen Frost wrote:
> Greetings,
>
> * James Keener (j...@jimkeener.com) wrote:
> > Would a storage block-level incremental backup like ZFS work?
>
> This really depends on what you want out of your backups and exactly
> how the ZFS filesystem is set up. Remember
If a table is set to unlogged, is it inherently non-durable? That is, _must_
any crash or unsafe shutdown result in truncation upon recovery?
I can imagine that a table bulk loaded in a warehousing scenario and then
sitting statically could be safe, but maybe the question becomes how
could the s
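A sketch of the pattern I have in mind (table name and file path are
hypothetical): bulk load into an unlogged table for speed, then flip it to
logged once it is sitting statically, so a later crash no longer truncates it:

    -- UNLOGGED: no WAL is written for this table, so loads are fast,
    -- but the table is truncated during crash recovery.
    CREATE UNLOGGED TABLE warehouse_facts (
        fact_id   bigint PRIMARY KEY,
        payload   text
    );

    COPY warehouse_facts (fact_id, payload)
        FROM '/tmp/facts.csv' WITH (FORMAT csv);   -- hypothetical path

    -- After the bulk load, make the table durable.  This writes the
    -- table contents into WAL (not free), but from then on crash
    -- recovery preserves it instead of truncating it.
    ALTER TABLE warehouse_facts SET LOGGED;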