rmance then you might be able to solve that problem with an
index.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 28 February 2018 at 11:11, Andres Freund wrote:
> I'm fairly certain that only vacuum and table rewrites like cluster sets
> all-visible,
I don't think the pages are set all visible again after a rewrite.
--
David Rowley http://www.2ndQuadrant.com/
Postgr
o:
-- get all animals that are persons
select ... from animal where tableoid in (select
get_inherited_tables('person'::regclass));
-- get all animals that are not persons
select ... from animal where tableoid not in (select
get_inherited_tables('person'::regclass));
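Assuming get_inherited_tables() is meant to return a table and all of its
descendants, something like this (untested) recursive lookup of pg_inherits
would do; the definition here is only a sketch:

create or replace function get_inherited_tables(parent regclass)
returns setof oid
stable language sql as
$$
  with recursive rels as (
    select inhrelid as relid from pg_inherits where inhparent = parent
    union all
    select i.inhrelid from pg_inherits i
    join rels r on i.inhparent = r.relid
  )
  select parent::oid
  union all
  select relid from rels;
$$;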
Just be carefu
e sense to create a primary key AND a unique
> index based on the same columns?
> Is PostgreSQL smart enough to use the unique index created for the primary
> key.
Doing this makes no sense in PostgreSQL. I'm struggling to imagine
why it would in MySQL.
--
David Rowley
_statement =
'ddl' would have kept a record of the same thing.
Perhaps the best fix would be a documentation improvement to mention
this fact, and to note that it's best not to use plain-text passwords in
CREATE/ALTER ROLE. Passwords can be supplied already md5-encrypted.
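For example, something like this (the role name is made up here). The stored
format is 'md5' followed by md5(password || rolename), which can be computed
on the client side and supplied pre-hashed:

-- <hash> = md5 of the password concatenated with the role name
create role app_user login password 'md5<hash>';

That way the plain-text password never appears in the statement that gets
logged.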
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
LL. A record already deleted won't appear in the 2nd branch of
the UNION ALL result. However, that still might not fix your issue
with the index not being used, but you may find the anti-joined
version of the query is faster anyway.
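Roughly something like this, with made-up table and column names since the
original query isn't shown here:

select o.*
from orders o
where not exists (select 1
                  from archived_orders a
                  where a.order_id = o.order_id);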
--
David Rowley http://www.2ndQuadrant.com
show you the value; however, I think a problem here is unlikely
since that would just control the likelihood of an index-only scan vs
an index scan. You're getting a seq scan, which I imagine is going to
be quite a bit more expensive than even an index scan.
--
David Rowley
wo
arguments? I'm unsure why you're considering
'blah_history.original_id' not to be an argument here.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
> Can anyone help? Thank you so much!!
You might find https://www.postgresql.org/docs/10/static/triggers.html useful.
There's a complete example of a trigger function written in C there.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
er estimates 500
rows will match the status=1 query. So it thinks ceil(1 * (6860 / 500.0)
* 10) = 138 rows will need to be looked at in the seqscan plan. That's
obviously more costly than 17 rows. So the index scan begins to look
more favourable.
The planner always assumes the rows are evenly distribu
ou analyze the table again.
If those stats didn't exist, the planner would have multiplied the
selectivity estimates of each item in the WHERE clause individually.
So if about 10% of records had year=2018, and 0.01% had '2018-06-28',
then the selectivity would have been 0.1 * 0.0
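For reference, the kind of extended statistics object that gives the planner
the combined estimate is created with something like this (table and column
names made up here):

create statistics logs_year_date_stats (dependencies) on year, created_date
  from logs;
analyze logs;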
w all 2017 timestamps and only 2017
timestamps.
You've no need to consider precision of the type and how many 9's you
add to anything here.
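i.e. something like this, with made-up table and column names:

select *
from events
where ts >= '2017-01-01'
and ts < '2018-01-01';

The half-open range picks up every 2017 timestamp regardless of how precise
the type is.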
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
k.
Alvaro managed to simplify the problem and allow foreign keys to be
defined on partitioned tables and get that into PG11.
So it was a case of 50% being better than 0%, which I very much agree with.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
educing my memory settings:
>
> maintenance_work_mem = 80MB
> work_mem = 5MB
> shared_buffers = 200MB
You may also get it to work by reducing shared_buffers further.
work_mem won't have any effect, and neither will maintenance_work_mem.
Failing that, the suggestions of more RAM and/or swa
ause.
#2 might not be a great option since it may require building groups
that don't get used, but it would likely be the best option if you
didn't have a LIMIT clause, or the LIMIT was a larger percentage of
the total records.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
d loop is
the only join method that supports a non-equijoin.
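For example, a join condition like the following (made-up tables) is a
non-equijoin, so only a nested loop can be used for it:

select *
from readings r
join buckets b on r.value between b.lo and b.hi;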
I'm unsure why you didn't get a parallel plan. Parallelism in pg10 supports a
few more plan shapes than 9.6 did, but I'm unsure which version you're using.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
6yxr+w...@mail.gmail.com
[2]
https://www.postgresql.org/message-id/flat/20140317185255.20724.49675%40wrigleys.postgresql.org
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
to prevent transaction ID
wraparound. See Section 24.1.5 for more information."
https://www.postgresql.org/docs/10/static/runtime-config-autovacuum.html
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
". Has that table
been analyzed recently? or is there some reason that auto-vacuum is
not getting to it?
There's a bit more reading of what I'm talking about in
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L141
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
t=0.00..0.01 rows=1 width=0)
(2 rows)
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
roduct which similarly creates multiple rows on the right for each row on
> the left side.
postgres=# select proretset from pg_proc where proname = 'regexp_matches';
 proretset
-----------
 t
 t
(2 rows)
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL
On 28 August 2018 at 09:58, Ken Tanzer wrote:
> On Tue, Aug 14, 2018 at 5:24 PM Ken Tanzer wrote:
>> Basically, I'm wondering whether materialized views are likely to ever
>> support row-level security.
>
> Hi. Just wanted to follow up on my own email and see if anyone could answer
> this.
Yo
en you've previously
upgraded. For example, see [1].
[1] https://www.postgresql.org/docs/9.6/static/release-9-6-2.html
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
to suggest
the reasons why you might not be getting an answer.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
esql.git;a=blob;f=src/backend/partitioning/partbounds.c#l2056
[2]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/insert.sql#l241
[3]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/hash_part.sql#l10
--
David Rowley
hen you might get
somewhere, but that does not look possible since it appears to be
throwing away some bits.
It may not have been a great choice to decide to have
compute_partition_hash_value() do this unnecessary combine, but it's
likely a few months too late to change that now.
--
David Row
il there is some other
reason that is worthy of breaking it. The bar is likely set pretty
high for that.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
M ts) = 1` then the current code in
PartConstraintImpliedByRelConstraint() is not smart enough to know
that `extract(minute FROM ts) = 1` is strict and cannot match nulls.
Perhaps that could be improved, but that's how it is today.
Likely you'll have better luck with a check constraint that explicitly
checks
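Something like the following, using the ts column from above (the table and
constraint names are made up), where the NULL case is spelled out explicitly
rather than relying on the strictness of the expression:

alter table my_part add constraint my_part_minute_check
check (ts is not null and extract(minute from ts) = 1);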
max_s1_date
from schedule
where schedule.ok = True
and s.product_id = request.product_id) s;
In this case, the subquery will be executed once per output row, so if
you have some restrictive WHERE clause on the outer query then the
subquery will be executed fewer times.
a parameter from
the outer query. LEAD(id_list) OVER (ORDER BY id) is never going to
return anything since those are both just effectively scalar values,
to which there is no "next" value.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
ique constraint on that table includes 3 columns and you're just
filtering on 2 of them, then it would only take some rows in there
with the same mfg_key and project_key values but a different
vendor_key to get the same error from that part of the query. If that
shouldn't happen, then perhaps your U
would be more than just executing the query without any
partition pruning. I don't see any good way to know in advance if it
would be worthwhile or not.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
ill the end of the current transaction"
Neither affects other sessions.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
"sub-partitioning". If
you're still uncertain after that, then ask here.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Tue, 10 Mar 2020 at 02:16, Sterpu Victor wrote:
> I'm testing on Postgresql 12.1 and I have a index like this:
> "check_dates_gist" EXCLUDE USING gist (id_test1 WITH =, id_test2 WITH =,
> tsrange(valid_from::timestamp without time zone, valid_to::timestamp without
> time zone) WITH &&)
>
> Wh
On Fri, 17 Apr 2020 at 20:13, Kiwoong Choi wrote:
> Is there a script and sample data to test the performance of postgresql?
There is pgbench. https://www.postgresql.org/docs/current/pgbench.html
David
On Tue, 5 May 2020 at 13:05, pinker wrote:
> I'm running standard pgbench and what's kinda strange copy pgbench_accounts
> from stdin is blocking my other query which is \dt+.
> Does copy hold any exclusive lock or there is something wrong with my
> system?
COPY does not. However, pgbench does TR
On Tue, 5 May 2020 at 14:50, pinker wrote:
> So it would need to run inside single transaction to cause lock, right? do
> you know if pgbench is opening transaction?
The TRUNCATE and COPY are done in the same transaction. You can see
the code in
https://github.com/postgres/postgres/blob/REL_11_
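A rough sketch of the locking picture:

-- session 1 (effectively what pgbench's initialization does):
begin;
truncate pgbench_accounts; -- takes ACCESS EXCLUSIVE, held until commit
copy pgbench_accounts from stdin;
-- session 2: anything needing even an ACCESS SHARE lock on the table,
-- such as the size lookups behind \dt+, waits until session 1 commits:
select pg_table_size('pgbench_accounts');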
On Fri, 8 May 2020 at 10:00, David G. Johnston
wrote:
>
> On Thu, May 7, 2020 at 11:07 AM Amarendra Konda wrote:
>>
>> EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS) SELECT pa.process_activity_id AS
>> pa_process_activity_id FROM process_activity pa WHERE pa.app_id =
>> '126502930200650' AND pa.c
On Fri, 8 May 2020 at 09:18, github kran wrote:
> 1) We haven't changed anything related to autovacuum except a work_mem
> parameter which was increased to 4 GB which I believe is not related to
> autovacuum
You might want to look into increasing vacuum_cost_limit to something
well above 200 or
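For example, something like this (the value is only an illustration):

alter system set vacuum_cost_limit = 2000;
select pg_reload_conf();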
On Fri, 8 May 2020 at 13:51, github kran wrote:
> I can't either DROP or ALTER any other tables ( REMOVE Inheritance for
> any of old tables where the WRITES are not getting written to). Any of the
> ALTER TABLE OR DROP TABLE DDL's arer not getting exeucted even I WAITED FOR
> SEVERAL MI
On Tue, 26 May 2020 at 19:23, Frank Millman wrote:
> The table sizes are -
> my_table : 167 rows
> table_1 : 21 rows
> table_2 : 11 rows
> table_3 : 3 rows
> table_4 : 16 rows
>
> Therefore for each tran_date in my_table there are potentially
> 21x11x3x16 = 11088 rows. Mos
On Tue, 26 May 2020 at 22:31, Frank Millman wrote:
> Thank you David. I tried that and it produced the correct result in
> 53ms, which is what I am looking for.
>
> It will take me some time to understand it fully, so I have some
> homework to do!
The main problem with your previous query was th
On Tue, 26 May 2020 at 23:41, Vik Fearing wrote:
>
> On 5/26/20 12:04 PM, David Rowley wrote:
> > Since "my_table" is small, you'd likely be much better doing a manual
> > rewrite of the query to join a subquery containing the required
> > details from
On Fri, 5 Jun 2020 at 14:41, Paul van der Linden
wrote:
> If I have a query like:
>
> SELECT * FROM (
> SELECT
> CASE
> WHEN field='value1' THEN 1
> WHEN field='value2' THEN 2
> END AS category
> FROM table1
> ) AS foo
> WHERE category=1
>
> doesn't use the index on field, while te
On Thu, 4 Jun 2020 at 17:59, Tim Dawborn wrote:
> tmp=> \timing on
> Timing is on.
> tmp=> BEGIN;
> BEGIN
> Time: 1.333 ms
> tmp=> ALTER TABLE foo ADD COLUMN d integer NULL;
> ALTER TABLE
> Time: 1.581 ms
> tmp=> CREATE UNIQUE INDEX myindex ON foo (a, b, c) where d = 2;
> CREATE INDEX
> Time: 3775
On Tue, 9 Jun 2020 at 01:07, Ron wrote:
>
> On 6/8/20 3:40 AM, Oleksandr Shulgin wrote:
> [snip]
>
> I've found the original commit adding this feature in version 11:
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=1aba8e651ac3e37e1d2d875842de1e0ed22a651e
> It says:
>
> "Hash par
On Tue, 9 Jun 2020 at 00:42, Laura Smith
wrote:
> What'st the current state of play with indexes and ON CONFLICT ? The docs
> seem to vaguely suggest it is possible, but this SO question
> (https://stackoverflow.com/questions/38945027/) seems to suggest it is not.
>
> I've got a unique named in
On Wed, 10 Jun 2020 at 07:31, Michael Lewis wrote:
>
> On Tue, Jun 9, 2020 at 12:34 PM Sebastian Dressler
> wrote:
>>
>> - Add an index on top of the whole PK
>> - Add indexes onto other columns trying to help the JOIN
>> - Add additional statistics on two related columns
>>
>> Another idea I ha
On Wed, 10 Jun 2020 at 09:05, Michael Lewis wrote:
>>
>> the join selectivity functions have yet to learn about extended statistics.
>
>
> That is very interesting to me. So, extended statistics would help to
> properly estimate the result set coming out of a single table when comparing
> each o
On Fri, 19 Jun 2020 at 17:42, Srinivasa T N wrote:
>After seeing the below, I feel partitioning is not working properly or it
> maybe case that my understanding is wrong. Can somebody explain me what is
> happening?
> postgres=# select * from busbar_version6;
> objectid | ver_id
> ---
On Mon, 22 Jun 2020 at 16:44, Sankar P wrote:
> select distinct record ->> 'destinationServiceName' from fluent;
> This query takes a lot of time, and does full table scan. The query planning
> is:
> I see that none of the indexes are used. I want to do a few
> aggregations, like "what are the
On Wed, 24 Jun 2020 at 20:41, Jaime Soler wrote:
>
> Hi, does anybody know what is wrong with pgbench in this case ?. Here is a
> simple query to generate a random date in a interval time.sql:
>
> (select timestamp '2005-09-01' + random() * ( timestamp '2006-03-01
> 00:00:00' - timestamp '2005
On Tue, 7 Jul 2020 at 00:43, Simon Riggs wrote:
>
> On Mon, 6 Jul 2020 at 12:37, Robins Tharakan wrote:
>
>>
>> When an SQL needs to UNION constants on either side, it should be possible to
>> implicitly apply a LIMIT 1 and get good speed up. Is this an incorrect
>> understanding,
>> or somethin
On Tue, 7 Jul 2020 at 09:03, Simon Riggs wrote:
>
> On Mon, 6 Jul 2020 at 21:49, David Rowley wrote:
>>
>> On Tue, 7 Jul 2020 at 00:43, Simon Riggs wrote:
>> >
>> > On Mon, 6 Jul 2020 at 12:37, Robins Tharakan wrote:
>> >> (SELECT 1 FROM pg
On Sat, 11 Jul 2020 at 02:48, Vishwa Kalyankar
wrote:
> Below is the output of the query explain and analyze result.
You may find better help here if you follow the advice given in
https://wiki.postgresql.org/wiki/Slow_Query_Questions
David
On Sun, 12 Jul 2020 at 06:59, wrote:
>
> I'm sorry for the bad example.
>
> Here is another, with some data on PG:
> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=ccfd1c4fa291e74a6db9db1772e2b5ac
> and Oracle:
> https://dbfiddle.uk/?rdbms=oracle_18&fiddle=21a98f499065ad4e2c35ff4bd1487e14.
I
On Wed, 15 Jul 2020 at 04:01, Marc Millas wrote:
> your answer helps me understand my first problem.
> so, I rewrote a simple loop so as to avoid the "volatile" behaviour.
Not sure what you're trying to do with the plpgsql, but you can just
avoid the multiple evaluations by putting the volatile
On Thu, 16 Jul 2020 at 09:50, Thomas Munro wrote:
> I'd look at EXPLAIN (BUFFERS) or auto_explain.log_buffers to get more
> information on cache hits and misses. I'd look for settings
> differences with EXPLAIN (SETTINGS) to see if there's anything
> accidentally set differently (maybe JIT or par
On Fri, 24 Jul 2020 at 16:01, Christophe Pettus wrote:
> I realize I've never quite known this; where does the planner get the row
> estimates for an empty table? Example:
We just assume there are 10 pages if the relation has not yet been
vacuumed or analyzed. The row estimates you see are the
On Sat, 25 Jul 2020 at 08:46, Scott Ribe wrote:
> Given the magnitude of the miss in using JIT here, I am wondering: is it
> possible that the planner does not properly take into account the cost of
> JIT'ing a function for multiple partitions? Or is it that the planner doesn't
> have enough in
On Sat, 25 Jul 2020 at 10:37, Tom Lane wrote:
>
> David Rowley writes:
> > However, for now, you might just want to try raising various jit
> > thresholds so that it only is enabled for more expensive plans.
>
> Yeah. I'm fairly convinced that the v12 defaults are
On Sat, 25 Jul 2020 at 10:42, David Rowley wrote:
>
> On Sat, 25 Jul 2020 at 10:37, Tom Lane wrote:
> >
> > David Rowley writes:
> > > However, for now, you might just want to try raising various jit
> > > thresholds so that it only is enabled for more expens
On Sun, 26 Jul 2020 at 02:17, Tom Lane wrote:
>
> David Rowley writes:
> > On Sat, 25 Jul 2020 at 10:42, David Rowley wrote:
> >> I think plan cost overestimation is a common cause of unwanted jit too.
> >> It would be good to see the EXPLAIN ANALYZE so we knew
On Sun, 26 Jul 2020 at 02:23, Tom Lane wrote:
>
> Andres Freund writes:
> > On 2020-07-24 18:37:02 -0400, Tom Lane wrote:
> >> Yeah. I'm fairly convinced that the v12 defaults are far too low,
> >> because we are constantly seeing complaints of this sort.
>
> > I think the issue is more that we
On Sun, 26 Jul 2020 at 02:54, Tom Lane wrote:
>
> David Rowley writes:
> > ... nested at the bottom level join, about 6 joins deep. The lack of
> > any row being found results in upper level joins not having to do
> > anything, and the majority of the plan is (never exec
On Tue, 28 Jul 2020 at 11:00, Andres Freund wrote:
>
> On 2020-07-25 10:54:18 -0400, Tom Lane wrote:
> > David Rowley writes:
> > > ... nested at the bottom level join, about 6 joins deep. The lack of
> > > any row being found results in upper level joins not ha
On Tue, 28 Jul 2020 at 04:18, Scott Ribe wrote:
>
> > On Jul 25, 2020, at 8:02 AM, Christoph Moench-Tegeder
> > wrote:
> > pg_jit_available() boolean is JIT compilation available in this session
> >
> > https://www.postgresql.org/docs/12/functions-info.html
>
> Thanks, that seems to be exactly
On Tue, 28 Jul 2020 at 15:33, Tom Lane wrote:
>
> David Rowley writes:
> > Maybe this would be better?
>
> > "returns true if jit is enabled and JIT compilation is available in
> > this session (see Chapter 31)."
>
> The general, non-hacker meaning of
On Tue, 28 Jul 2020 at 15:55, David Rowley wrote:
>
> On Tue, 28 Jul 2020 at 15:33, Tom Lane wrote:
> >
> > David Rowley writes:
> > > Maybe this would be better?
> >
> > > "returns true if jit is enabled and JIT compilation is available in
On Wed, 29 Jul 2020 at 03:45, alex maslakov wrote:
> int i = -1;
> while ((i = bms_next_member(pkattnos , i)) >= 0) {
> /* do stuff with i */
> /* you'll need to use i - FirstLowInvalidHeapAttributeNumber to
> get the pg_attribute.attnum */
>
>
> elog(INFO, "bm
On Wed, 29 Jul 2020 at 00:26, Scott Ribe wrote:
> But does compilation with JIT enable and LLVM dev tools mean that all the
> LLVM compilation/optimization is built into the PG binaries, or does it
> require LLVM presence on the machine where deployed? And if so, does the
> function take that i
On Fri, 31 Jul 2020 at 05:21, wrote:
> does the planner do estimates about heap fetches on index only scans and
> takes them into account?
> At least in Pg 13? If so, is it possible to lower those costs?
> random_page_costs seems not to have any influence.
> Looks like they cause bad planner de
On Wed, 29 Jul 2020 at 09:07, Andres Freund wrote:
> On 2020-07-28 11:54:53 +1200, David Rowley wrote:
> > Is there some reason that we can't consider jitting on a more granular
> > basis?
>
> There's a substantial "constant" overhead of doing JIT. And
On Mon, 3 Aug 2020 at 21:26, alex m wrote:
> I'm writting a function/extention in C for a trigger. Inside a trigger, in C,
> I want to get the name of the current database. However, not via SPI_exec(),
> SPI_prepare() and the like, but more directly, in a more faster way.
You can use MyDatabase
On Wed, 29 Jul 2020 at 09:28, Andres Freund wrote:
> FWIW, I created a demo workload for this, and repro'ed the issue with
> that. Those improvements does make a very significant difference:
> Before:
> Timing: Generation 335.345 ms, Inlining 51.025 ms, Optimization 11967.776
> ms, Emission
On Wed, 5 Aug 2020 at 08:36, Raj Gandhi wrote:
> The following alter table with default set to very large text used to work in
> Postgres 10 but fails in Postgres 12 with ERROR: row is too big: size 12960,
> maximum size 8160
I didn't go to the trouble of debugging this, but I imagine this is
On Thu, 20 Aug 2020 at 09:55, Philip Semanchuk
wrote:
> I could use some help interpreting EXPLAIN ANALYZE output.
>
> -> Index Scan using ix_foo on t1 (cost=0.69..68.57 rows=3283 width=105)
> (actual time=0.006..0.918 rows=3760 loops=94)
>
> The actual rows returned by this plan node ~= 3760 *
On Fri, 21 Aug 2020 at 03:21, Philip Semanchuk
wrote:
>
>
>
> > On Aug 19, 2020, at 6:24 PM, David Rowley wrote:
> >
> > On Thu, 20 Aug 2020 at 09:55, Philip Semanchuk
> > wrote:
> >> -> Parallel Index Scan using pk_xyz on xyz (cost=0.29..2354.67
On Sat, 22 Aug 2020 at 00:35, iulian dragos
wrote:
> I am trying to understand why the query planner insists on using a hash join,
> and how to make it choose the better option, which in this case would be a
> nested loop.
> | -> Index Scan using
> test_result_module
On Tue, 25 Aug 2020 at 22:10, iulian dragos
wrote:
> Thanks for the tip! Indeed, `n_distinct` isn't right. I found it in pg_stats
> set at 131736.0, but the actual number is much higher: 210104361. I tried to
> set it manually, but the plan is still the same (both the actual number and a
> perc
On Fri, 4 Sep 2020 at 20:21, Yang, Rong wrote:
> in OSS 10, how to make the top plan of the plan tree to GatherMerge with
> ‘except all’ in sql?
(I guess you're talking about PostgreSQL 10. I'm not sure what OSS 10 is.)
The ability for Gather Merge to work with Init Plan values was only
added
On Thu, 15 Oct 2020 at 03:53, Andreas Kretschmer
wrote:
> it seems to me a bug. i have a partitioned table:
I've just pushed a fix [1] for this to master only (PG14+)
The problem was that we only added the required information to allow
the executor to perform run-time pruning to the Append/Merge
On Sun, 14 Feb 2021 at 13:15, Seamus Abshere
wrote:
> The comment from Robert says: (src/backend/optimizer/path/allpaths.c)
>
> /*
> * If the use of parallel append is permitted, always request
> at least
> * log2(# of children) workers.
>
> In my
On Mon, 15 Feb 2021 at 10:16, Gavin Flower
wrote:
> Just wondering why there is a hard coded limit.
I don't see where the hardcoded limit is. The total number is limited
to max_parallel_workers_per_gather, but there's nothing hardcoded
about the value of that.
> While I agree it might be good t
On Tue, 16 Feb 2021 at 02:12, Fabio Pardi wrote:
>
> On 14/02/2021 22:16, Gavin Flower wrote:
> > While I agree it might be good to be able specify the number of workers,
> > sure it would be possible to derive a suitable default based on the number
> > of effective processors available?
>
> I h
On Fri, 26 Feb 2021 at 02:06, Alexander Farber
wrote:
> However the deletion still takes forever and I have to ctrl-c it:
>
> # delete from words_games where created < now() - interval '12 month';
>
> Do you please have any further suggestions?
>
> When I try to prepend "explain analyze" to the ab
On Tue, 2 Mar 2021 at 21:53, Brajendra Pratap Singh
wrote:
> We are getting the below issue while executing the backup through pgbackrest.
> 2021-03-02 02:10:01.622 P00 ERROR: [056]: unable to find primary cluster -
> cannot proceed
That's not an error that's coming from PostgreSQL.
You might
On Wed, 23 Mar 2022 at 04:03, PALAYRET Jacques
wrote:
> For example, in Oracle it is not possible to have a SELECT statement without
> a FROM clause (using DUAL table), so maybe " ambiguous " columns are not
> handled in the same way.
But in your original complaint, the column was not ambiguous
On Mon, 4 Apr 2022 at 23:33, Sebastien Flaesch
wrote:
> Any arguments I could give to programmers? Maybe:
> "You better replay all SQL of the whole transaction... (and make them as
> short as possible!)"
> ?
There are SAVEPOINTs [1].
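A minimal sketch (the table name is made up):

begin;
insert into t values (1);
savepoint s1;
insert into t values (2); -- suppose this statement errors out
rollback to savepoint s1; -- undoes only the work done after the savepoint
commit; -- the first insert still commits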
David
[1] https://www.postgresql.org/docs/current/sql-save
On Tue, 5 Apr 2022 at 01:21, J. Roeleveld wrote:
> Personally, I think NULL should be treated as a seperate value and not lead to
> strange behaviour.
I think the rationale behind IN and NOT IN is that c IN (1,2,3) is
equivalent to writing: c = 1 OR c = 2 OR c = 3, whereas NOT IN (1,2,3)
would be
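To illustrate the NULL wrinkle with NOT IN:

select 5 not in (1, 2, null); -- yields NULL rather than true

because the expansion ends with 5 <> NULL, which is NULL, so the whole
predicate can never be true when the list contains a NULL.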
On Wed, 1 Jun 2022 at 08:04, Tim Kelly wrote:
> -> Seq Scan on data (cost=0.00..47132.93 rows=125 width=135)
> (actual time=0.542..182952.708 rows=1167810 loops=1)
> Filter: (data ~~ '%some text%'::text)
The problem is coming from the 125 row estimate in the above plan
On Mon, 27 Jun 2022 at 23:49, Bos, Fred wrote:
> "Settings: effective_cache_size = '128GB', force_parallel_mode = 'on',
> max_parallel_workers = '40',
You shouldn't be using force_parallel_mode. It does not do what you
think. See the documentation for that GUC, or read [1]
> I expected the quer
On Wed, 29 Jun 2022 at 00:45, Bos, Fred wrote:
> Finally, is there a way to force postgres to do the partial hash aggregate,
> either by changing a setting or by influencing the expected amount of output
> groups for each query?
You could do something like:
ALTER TABLE bhload_nohyp_noin ALTER CO
On Thu, 7 Jul 2022 at 04:07, DAVID ROTH wrote:
> I understand the planner can use multiple indexes to get the best plan.
> Can someone point me a paper that explains how this works.
I don't know of a paper, but if you're talking about using multiple
indexes to scan a single relation in order to s
On Mon, 15 Aug 2022 at 20:30, Marc Mamin wrote:
> in the example below, we can see that the view test_ab prevents the usage of
> the index to retrieve the top last rows.
> This is a special case, as the where clause excludes data from the second
> table, and the explain output do not references
On Tue, 16 Aug 2022 at 21:15, Walter Dörwald wrote:
> select count(*) over (), e.* from email.email e;
Depending on the complexity of the query, putting the count(*) as a
subquery in the SELECT clause might execute more quickly. i.e. select
(select count(*) from email.email) c, * from email.email
On Thu, 18 Aug 2022 at 15:32, Tom Lane wrote:
> The 12.5 plan looks like it thinks that the join condition is not
> hashable --- and probably not mergeable as well, else it would have
> done a mergejoin. This is odd if we assume that the lower()
> outputs are just text. But you haven't said anyt
On Fri, 26 Aug 2022 at 03:08, James Vanns wrote:
> CREATE TABLE foobar(
> id BIGINT NOT NULL PRIMARY KEY,
> baz VARCHAR NULL DEFAULT NULL
> ) PARTITION BY HASH(my_func(id));
>
> Error: primary key constraints cannot be used when partition keys
> include expressions.
> I couldn't find much
On Fri, 26 Aug 2022 at 03:08, James Vanns wrote:
> Also, is there a chance that this
> limitation will be relaxed in the future?
(forgot to answer this part)
Certainly not in the near future, I'm afraid. It would require
allowing a single index to exist over multiple tables. There has been
disc