archiving.
If something then happens to the primary, your recovery point will
be limited.
Regards,
--
-David
da...@pgmasters.net
On 2/27/19 4:48 PM, Achilleas Mantzios wrote:
On 27/2/19 4:16 p.m., David Steele wrote:
On 2/27/19 2:31 PM, Achilleas Mantzios wrote:
On 27/2/19 1:58 p.m., rich...@simkorp.com.br wrote:
Just to notice, I do use backup from standby and WAL archive from
standby. It is possible. But you have to
On 2/28/19 9:21 AM, Achilleas Mantzios wrote:
On 28/2/19 1:08 a.m., Ahmed, Nawaz wrote:
Hi,
I believe the "file copy" method (listed in the table) in pgbackrest
is based on pg_basebackup, so I think it should be "pg_basebackup over
ssh" as pgbackrest internally calls
/user-guide.html#installation
You may have files in other locations as well since I don't believe
/bin/pgbackrest has ever been in our install directions.
Regards,
--
-David
da...@pgmasters.net
`stop-auto` option since it is not
relevant to PG10.
https://github.com/pgbackrest/pgbackrest/commit/6ce3310f8a2900d1af717da8d4c3345a9016933b
Thanks!
--
-David
da...@pgmasters.net
't be fixed for 12. However, nothing is entirely certain that far
out.
It's probably best to reduce the number of partitions.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com]
Sent: Wednesday, March 13, 2019 10:24 AM
To: Johann Spies ; pgsql-gene...@postgresql.org
Subject: Re: Permission to refresh materialized view
On 3/13/19 6:27 AM, Johann Spies wrote:
> We did run this query:
>
> /
part of our disaster recovery process, so if we find this
problem during restore it will mean more downtime.
PG version 9.5.14. I’m attempting to find out now if it’s an issue in more
recent versions also.
TIA
David Wheeler
Software developer
E dwhe
s,
of course, not always possible, but useful to keep in mind when
deciding the order to have your columns in the table.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
roving TupleDesc so
it cached the first attnum without a NOT NULL constraint so that we
knew we could use the cached offset before that attnum even if the
tuple has NULLs later in the tuple. However, that puts a lot of
dependency on the NOT NULL constraint never being wrong.
--
David Rowley
manipulation of
the PostgreSQL logs.
Regards,
--
-David
da...@pgmasters.net
On 3/31/19 8:01 PM, Durgamahesh Manne wrote:
On Saturday, March 30, 2019, David Steele wrote:
On 3/29/19 3:32 PM, Durgamahesh Manne wrote:
I could not find parameter related to pgaudit log_directory .
pgAudit does not suppor
efresh, otherwise, you might need to have a look at ungranted locks
in pg_locks and see if the number of locks spikes during the refresh.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi,
Is PostgreSQL 10+ supported on Windows 2019?
If not, is there a rough time frame for when it will be supported on this OS?
Thanks,
David
?
Thanks,
David
From: Andres Freund
Sent: Wednesday 3 April 2019 20:12
To: Tom Lane
Cc: david moloney; pgsql-gene...@postgresql.org
Subject: Re: PostgreSQL Windows 2019 support ?
Hi,
On 2019-04-03 12:43:25 -0400, Tom Lane wrote:
> david moloney writes:
>
On Thu, 4 Apr 2019 at 22:15, david moloney wrote:
> Windows 2019 isn’t listed as a supported platform on
> https://www.postgresql.org/download/windows/
>
> I’m presuming there’s a qualification process before it’s listed here? Is
> this in progress ?
That process is having a b
hink it is. Remember that variable length attributes can be
toasted and stored in the relation's toast table.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On Mon, 8 Apr 2019 at 14:19, Igal Sapir wrote:
>
> On Sun, Apr 7, 2019 at 6:20 PM David Rowley
> wrote:
>>
>> On Mon, 8 Apr 2019 at 10:09, Igal Sapir wrote:
>> >
>> > I have a table for which pg_relation_size() shows only 31MB, but
>> >
rge. A normal VACUUM will turn space used
by dead tuples back into free space, so if done often enough there
won't be a need to vacuum full.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
ded or a bug?
Overlooked for PG11. You'll see them in PG12 per [1]. It's not really
possible to backpatch a fix for that since these views are created
during initdb.
[1]
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f94cec64476f2752e91b10d7928a2fcd105e9fc3
--
David Rowley
ion time.
The closest thing we have to that is run-time partition pruning in
PG11, but that can't help you either since the partition key cannot
contain values from other tables. You'd still need to normalise the
record table. With that and a partitioned table, there might be
further advanta
. If for example, you can get it to crash with LIMIT 1, can you
send us the psql output of \d public.aclappliedtopep and also some
anonymised version of the row it crashes on? (ensuring it still
crashes once anonymised). You may be able to narrow it down by "binary
searching" through various L
to assume that though.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
easurement_automatic"
> DETAIL: Specified lower bound (1) is greater than or equal to upper
> bound (1).
>
> OOPS, so lets cheat:
Or you could partition by LIST.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
.
If your go-live date is near the end of the year or beyond, then it
might be a good idea to start testing with PG12 right away. The
release date for that will likely be around the middle of October.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
. It'll be pretty much the same as
if you were querying a normal table.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
partition and ln
it back to its original location.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi:
I'm going to be requesting a PG instance supported by an IT team in a large
corp. They will be creating the server as a VM. We will be loading the DB
using scripts (perl/dbi) on linux, possibly using bulk loading techniques
if that's required. Queries will come from both linux and the web,
>>Home-rolled application, or third party?
Are you asking about how they do VMs?
They already provide PG v9.6.7, so I gather they're not averse to
supporting PG DBs.
On Tue, May 7, 2019 at 4:05 PM Ron wrote:
> On 5/7/19 1:52 PM, David Gauthier wrote:
>
> Hi:
>
> I&
y libpq is not installed or
> cannot be found on this machine.
If pgBackRest was built from source here's the complete dependency list
for RHEL7:
yum install perl perl-Time-HiRes perl-Digest-SHA perl-DBD-Pg perl-JSON-PP
Regards,
--
-David
da...@pgmasters.net
On 5/9/19 10:57 AM, Prakash Ramakrishnan wrote:
>
> David am trying to restore the backup in dev server please find the
> below details ,
libpq should be installed with Postgres. Have you installed Postgres yet?
--
-David
da...@pgmasters.net
On 5/9/19 11:12 AM, Prakash Ramakrishnan wrote:
> Yes, I did.
OK, then I would say something is wrong with your package configuration,
but I don't know what it might be.
Regards,
--
-David
da...@pgmasters.net
but I thought libpq was installed with the server as well.
Perhaps not.
--
-David
da...@pgmasters.net
On 5/9/19 12:58 PM, Adrian Klaver wrote:
> On 5/9/19 9:54 AM, David Steele wrote:
>> On 5/9/19 12:51 PM, Prakash Ramakrishnan wrote:
>>> Hi Adrian,
>>>
>>> I don't know about that client libraries can you please explain or share
>>> me the
r.
The custom installer installs stuff in non-standard locations so likely
won't work without a lot of tweaking. I don't have any experience with
the EDB packages.
Devrim?
--
-David
da...@pgmasters.net
nd up either NULL or be set to the value of the
column's DEFAULT clause, if it has one.
You might also want to look at INSERT ON CONFLICT DO NOTHING if you
want that INSERT to work in concurrent environments. Docs in
https://www.postgresql.org/docs/current/sql-insert.html
--
David Row
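A minimal sketch of the ON CONFLICT suggestion above (the table and column names here are hypothetical, not from the thread):

```sql
-- Hypothetical table: id is the conflict target (primary key).
CREATE TABLE item (
    id   integer PRIMARY KEY,
    name text
);

-- With ON CONFLICT DO NOTHING, a concurrent insert of the same id
-- makes this statement a no-op instead of raising a unique violation.
INSERT INTO item (id, name)
VALUES (1, 'widget')
ON CONFLICT (id) DO NOTHING;
```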
's in "s". "s" will be an empty relation if
"test" does not have any row matching WHERE area = 'test5'.
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
urrent
sessions running this at the same time. You could protect against
that by doing some advisory locking (
https://www.postgresql.org/docs/11/explicit-locking.html#ADVISORY-LOCKS
), or taking an access exclusive lock on "test".
--
David Rowley http://www.2ndQu
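The advisory-lock option might look like the following sketch; the lock key (42) is an arbitrary application-chosen number, not anything from the thread:

```sql
-- Serialise the critical section across sessions with a transaction-level
-- advisory lock on an agreed key.
BEGIN;
SELECT pg_advisory_xact_lock(42);
-- ... the statements that must not run concurrently against "test" ...
COMMIT;  -- the advisory lock is released automatically at commit/rollback
```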
up_ins is always correctly set.
> However, I've noted that this behavior applies up to february
Did you perhaps reset the stats or failover to a standby around Feb?
What does: select stats_Reset from pg_stat_database where datname =
current_database(); say?
--
David Rowley
f performing a
sub-query, LEFT JOIN to a query similar to your subquery but after
removing the std_stk_key = stock.stk_key condition and adding a GROUP
BY std_stk_key. However, that may perform worse if there are many more
std_stk_key groups than there are matching rows in stock.stk_key.
Experimentatio
efit to users in the same release.
You may already be aware, but another use case for such variable-width
identifiers was with indirect indexes as discussed in [1]
[1]
https://www.postgresql.org/message-id/20161018182843.xczrxsa2yd47pnru%40alvherre.pgsql
--
David Rowley
psql (9.6.7, server 9.5.2) on linux.
I have 2 DBs, one for dev the other is live. I want to recreate several
tables in the dev db using the same metadata found in the live db. But I'm
too lazy to manually transcribe everything and that's prone to error anyway.
In the past, I would just run pg_d
psql (9.6.7, server 9.5.2)
I created a schema in my DB called "dvm". Then I created a table a-la...
create table dvm.foo (col1 text); I see the schema with \dnS+. But I
can't see my table using \d. I can see the table with \d dvm.foo, so it's
in there. The first column of the \d output is
Hi:
I've been poking around
https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools to
see if there is anything that can read PG metadata in and display it
graphically in a gui. You know, the kind of picture that is 1 block per
table with 1->many arrows connecting up the primary/
All very good ideas. Thanks to all for the input.
I think I'm leaning toward DBeaver. But they're all good.
Thanks !
On Tue, Jun 25, 2019 at 5:02 PM Kevin Brannen wrote:
> *From:* David Gauthier
>
> > I've been poking around
> ht
I need to create a constraint on a column of a table such that it's value
is found in another table but may not be unique in that other table.
Example...
Let's say the DB is about students and the grades they got for 4
subjects... Math, English, Science, History. But instead of creating 4
records
> in the updated record is not lower-cased.
You'll need a BEFORE trigger, not an AFTER trigger.
https://www.postgresql.org/docs/current/sql-createtrigger.html
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
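A minimal sketch of the BEFORE-trigger approach, with hypothetical table and column names:

```sql
-- Hypothetical table with a column we want stored lower-cased.
CREATE TABLE account (email text);

CREATE FUNCTION lowercase_email() RETURNS trigger AS $$
BEGIN
    NEW.email := lower(NEW.email);  -- modify the row before it is written
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Must be BEFORE: an AFTER trigger sees the row only after it has been
-- stored, so assigning to NEW there has no effect.
CREATE TRIGGER account_lowercase
    BEFORE INSERT OR UPDATE ON account
    FOR EACH ROW EXECUTE FUNCTION lowercase_email();
    -- (EXECUTE FUNCTION is PostgreSQL 11+; older versions spell it
    --  EXECUTE PROCEDURE)
```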
Anyone with a test scripts or link to an example of bi-directional master using
pglogical?
I am led to believe from the documentation that this should be possible. (?)
"Limited multi-master support with conflict resolution exists, but mutual
replication connections must be added individually"
W
> On 25 Apr 2023, at 1:47 am, David G. Johnston
> wrote:
>
> There isn't anything special about a failed transaction compared to any other
> transaction that you leave open.
Now I’m curious. Does it have the same impact on performance that an idle in
transaction connec
1) Can I create a trigger on a view?
2) Do triggers cascade?
Say I have an insert trigger on a table.
And, I have an insert trigger on a view that references this table
If I do an insert on the view, will both triggers fire?
> Hi. With an integer identity primary key table,
> we fetch a number of rows with WHERE id = ANY($1),
> with $1 an int[] array. The API using that query must return
> rows in the input int[] array order, and uses a client-side
> mapping to achieve that currently.
>
> Is it possible to maintain
>> It was only used for small arrays but never noticed any performance issues
>
> Hmmm, sounds like this would be quadratic though...
True, but it’s CPU time, not I/O, which tends to be orders of magnitude slower
> I wonder whether the int[] can be turned into a pseudo table with a ROWNUM
> ext
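One common way to get rows back in input-array order, which matches the "pseudo table with a ROWNUM" idea above, is unnest WITH ORDINALITY (table and column names here are hypothetical):

```sql
-- $1 is the int[] parameter from the original query; the ordinality
-- column records each element's position in the array, so sorting by it
-- returns rows in the input order.
SELECT t.*
FROM unnest($1::int[]) WITH ORDINALITY AS u(id, ord)
JOIN t ON t.id = u.id
ORDER BY u.ord;
```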
we have TidRange scans, to
have pg_dump split larger tables into chunks so that they can be
restored in parallel.
David
nificant when
multiple processes are concurrently loading data into the same table."
The current text of "PostgreSQL 16 can also improve the performance of
concurrent bulk loading of data using COPY up to 300%." does lead me
to believe that nothing has been done to improve things when only a
single backend is involved.
David
On Wed, 31 May 2023 at 14:11, Bruce Momjian wrote:
>
> On Wed, May 31, 2023 at 02:05:10PM +1200, David Rowley wrote:
> > "Relation extensions have been improved allowing faster bulk loading
> > of data using COPY. These improvements are more significant when
>
red.
However, if you do need to do some transformation before loading, then
you might want to do that in PostgreSQL. For that, something like:
psql -c "copy (select col1,col2,col3+col4 from your_table) to stdout"
would allow you to run a query, which you could maybe do your
transformations in before importing into Snowflake
David
relation at the same time. The
release announcement did seem to confirm that there had to be
concurrency, so it might be good to not lead anyone else down into
thinking that only concurrent cases are faster. I certainly understand
that's where the big wins are.
I'm fine with your proposed wording.
David
ause the planner
thinks the join will only produce 1 row.
Unfortunately, extended statistics only help for base relation
estimations and do nothing for join estimations, so your best bet
might be to just:
SET enable_nestloop TO off;
for this query.
David
r, that's all
speculation until you provide more details.
https://wiki.postgresql.org/wiki/Guide_to_reporting_problems
David
On Wed, 21 Jun 2023 at 08:34, Marc Millas wrote:
>
> On Tue, Jun 20, 2023 at 10:14 PM David Rowley wrote:
>>
>> On Wed, 21 Jun 2023 at 07:42, Marc Millas wrote:
>> > But if I do the same with clause one OR clause 2, I have to kill the
>> > request after an
On Wed, 21 Jun 2023 at 10:26, Marc Millas wrote:
> link to the plan with both clauses ORed (the one not finishing)
> https://explain.depesz.com/s/jHO2
I'd go with the UNION or UNION ALL idea I mentioned earlier.
David
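The OR-to-UNION rewrite suggested above has this general shape (the table and clause names are schematic, not from the reported query):

```sql
-- Each branch can get its own (often indexed) plan, instead of one plan
-- that must satisfy  clause1 OR clause2  at once.
SELECT * FROM big_table WHERE clause1
UNION
SELECT * FROM big_table WHERE clause2;
-- UNION removes duplicates of rows matching both clauses; UNION ALL is
-- cheaper but only safe when the clauses are mutually exclusive.
```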
On Mon, 26 Jun 2023 at 03:02, Marc Millas wrote:
> When I ask this list, David Rowley suggest to rewrite the SQL, replacing the
> OR by a union.
>
> Fine, this do work, even if a bit complex as the original SQL was a set of
> intricate joins.
>
>
> So, either this behavi
_page_cost and effective_cache_size are the main settings which
will influence plan A vs plan B. Larger values of
effective_cache_size will have the planner apply more seq_page_costs
to the index scan. Lower values of effective_cache_size will mean
more pages will be assumed to cost random_page_cost.
David
Is it ok to use physical replication between different 15.x minor releases
(on Ubuntu 22.04)? I haven't been able to find a definitive answer. Thanks.
den the filter to ensure it includes all possible
"ladate" values, run the query, then run the query again without the
date range filter. I would guess it'll only save you a few percent,
but I'm open to being proven wrong.
David
In a trigger function, is there a way to get a list of all of the columns in
the triggering table?
I want to be able to use a single trigger function with multiple tables.
Thanks
phe Pettus wrote:
>
>
> > On Jul 10, 2023, at 11:20, DAVID ROTH wrote:
> >
> > In a trigger function, is there a way to get a list of all of the columns
> > in the triggering table?
>
> You can get the table that the trigger fired on with TG_TABLE_SCHEMA a
Thanks for the example. I have a test trigger now that does that but my
application needs all of the columns.
> On 07/10/2023 2:31 PM EDT Christophe Pettus wrote:
>
>
> > On Jul 10, 2023, at 11:29, DAVID ROTH wrote:
> >
> > I want to use a single trigger function
I was hoping that NEW could be treated as a record or as an array similar to
pg_argv.
> On 07/10/2023 2:31 PM EDT Christophe Pettus wrote:
>
>
> > On Jul 10, 2023, at 11:29, DAVID ROTH wrote:
> >
> > I want to use a single trigger function to log multiple tabl
I saw your message about a "few" columns and missed the new.* notation.
Is there a way to get new.* into a jsonb column?
> On 07/10/2023 2:38 PM EDT Christophe Pettus wrote:
>
>
> > On Jul 10, 2023, at 11:37, DAVID ROTH wrote:
> >
> > Thanks for the ex
I am a newbe at my current job.
They have a separate log table for every table because they want all the
columns.
I have been looking for a way to go to a common log table without requiring
major changes to the application.
With your help, I have a proof of principle that I can demonstrate now.
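A sketch of the common-log-table idea discussed above, using to_jsonb(NEW) so one trigger function serves every audited table (all names here are hypothetical):

```sql
-- One row per change; the whole changed row is captured as jsonb, so the
-- function does not need to know any table's columns.
CREATE TABLE audit_log (
    logged_at  timestamptz DEFAULT now(),
    table_name text,
    row_data   jsonb
);

CREATE FUNCTION audit_row() RETURNS trigger AS $$
BEGIN
    -- to_jsonb(NEW) converts the entire row, whatever its columns
    INSERT INTO audit_log (table_name, row_data)
    VALUES (TG_TABLE_NAME, to_jsonb(NEW));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```

The same trigger function can then be attached to each table that needs logging.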
'REPLACE INTO aggregate (channel_id, type, timestamp, value, count)
Perhaps that's something like PostgreSQL's INSERT ON CONFLICT [1]
David
[1] https://www.postgresql.org/docs/current/sql-insert.html
d PostgreSQL 16 will allow more efficient
execution of DISTINCT and ORDER BY aggregates by allowing indexes to
provide pre-sorted input. In the meantime, the query above will
probably help you.
David
o do?
It sounds to me like it would just create a load of needless WAL from
the deletes and the vacuum that cleans up the dead rows each of which
is more likely to cause lag problems on the replica servers, which the
OP is trying to avoid.
David
ndex on MS_CD is
unlikely to be a good idea as that would likely require random access
to the heap. It's likely better to perform a table scan and then just
filter out the 1% of rows that don't match.
Try executing the query after having done:
SET enable_seqscan TO off;
What plan does it use now?
Is that plan faster or slower than the seq scan plan?
David
ok
like ms_cd is the primary key of TBL_SHA. If it is then it's very
weird that you have 320 rows for MS_CD = 'MLD009'. You have some
data corruption if that's the case. I suspect you've just not
accurately described the table definition, however.
David
the default since
HDDs were common. SSDs are common now and, comparatively to sequential
I/O, their random I/O is faster than that of an HDD, so you may get
better results by lowering random_page_cost.
David
gre version 7.4. Version 10 works on port 5434 though the server list
tree spins eternally.
Any solution or assistance would be appreciated.
Best regards,
David Beck
is can stuff occur. The slowdown isn't linear.
I've no idea if this is happening for the reported case. I'm just
saying that it can happen. The OP should really post the results of:
SET track_io_timing = ON; EXPLAIN (ANALYZE, BUFFERS) for both queries
running independently then again when they run concurrently.
David
David
rom the kernel's cache or from disk. You'll probably want to
calculate the average time it took to get 1 buffer for each query to
make sense of that.
David
estion is: is this a valid bug? I am not sure if I should expect
the view version to find a way to materialize and produce a comparable
query plan. Also, making a minimal test case is going to take a bit
and I don't want to start unless this smells like a genuine bug.
--
David Gilman
:DG<
have both the desired composability and
also good performance? I hope I've explained this well enough. I asked
this question a while back on StackOverflow
<https://stackoverflow.com/questions/76525562/how-to-optimize-postgresql-row-security-policies-that-involve-related-tables>
but
got no interest. I also have sample code in a public GitHub repository here
<https://github.com/dventimihasura/hasura-projects/tree/master/rls-optimization-1>
that
illustrates the setup. Any advice would be greatly appreciated. Thank you!
Kind regards,
David A. Ventimiglia
Sorry, I had a typo in my Row Policy. Rather, it should look like this:
create policy album_rls_policy on "Track" for select to public
using (
exists (select * from "Album" where "Album"."AlbumId" =
"Track"."AlbumId"
I'm on PostgreSQL 15 with essentially a stock configuration.
On Tue, Aug 15, 2023 at 8:58 AM Ron wrote:
>
> On 8/14/23 09:54, David Gilman wrote:
> > I have a query that was originally written as a handful of CTEs out of
> > convenience. It is producing a reasonable que
t, there is some work going on by Richard Guo [1] where
he aims to implement this. It is quite a tricky thing to do without
causing needless pruning work in cases where no partitions can be
pruned. If you have an interest, you can follow the thread there to
see the discussion about the difficulties with i
uestioned him about this and as
it turned out, some version of Oracle once didn't optimise these very
well and when he learned this, he took that knowledge and seemingly
applied it to all versions of all RDBMSs in the universe. Rather
bizarre, but perhaps that's what's going on here too.
David
at the column is NOT NULL so can do
count(*) instead of counting the non-nulls from that column. That may
allow it to Index Only Scan? I'm not versed in reading Oracle plans so
don't know if it's possible to tell.
David
the
explain analyze.
I think all that's going on is that each tuple is on a different page
and the heap accesses are just causing many buffers to be accessed. It
seems to me that an IOS would likely fix that problem which is why I
suggested count(*) instead of
count()
David
I have not tried this in a while but I think a SELECT with a "hint" will return
rows in the order of the index in the hint. This does NOT work for distributed
queries.
> On 09/12/2023 9:10 AM EDT Daniel Gustafsson wrote:
>
>
> > On 12 Sep 2023, at 14:26, Matthias Apitz wrote:
> >
> > El
you're using a test
instance that's not needed by anyone else).
David
ws)
analyze tbl;
explain analyze select a,b,c,d,h from tbl where a = '10' and b
= '10' and c = '10' and d = '10' and e =
'10' and h = 1;
Index Only Scan using tbl_pkey on tbl (cost=0.55..4.58 rows=1
width=48) (actual time=0.071..0.072 rows=1 loops=1)
Index Cond: ((a = '10'::text) AND (b = '10'::text)
AND (c = '10'::text) AND (d = '10'::text) AND (e =
'10'::text) AND (h
= 1))
Heap Fetches: 0
Planning Time: 0.146 ms
Execution Time: 0.087 ms
(5 rows)
David
h aims to improve the
situation in [1].
Perhaps, if you're in a position to, you could help review that.
David
[1]
https://postgr.es/m/flat/CA%2Bq6zcWtUbT_Sxj0V6HY6EZ89uv5wuG5aefpe_9n0Jr3VwntFg%40mail.gmail.com
On Tue, 3 Oct 2023 at 22:03, wrote:
> So one last question, should I expect the patch to land in version 17 only or
> is there chance that it will also be in lower versions right away?
It wouldn't ever be put into anything earlier than 17.
David
Hello PostgreSQL Community,
I am writing to propose an alternative compilation of PostgreSQL that
allows for a 256-byte identifier length limit, alongside the existing
64-byte version.
*Problem:*
The current limit of 63 bytes can be quite restrictive, especially for
databases that use multi-byte
v11.5 on linux (I know, I know, IT is working on the upgrade to 15.2)
I have a table with millions of records and there are a lot of "almost"
duplicates that I want to get rid of in an efficient way. Best to
illustrate with a simple example...
We'll talk about deleting leaves on a tree where eac
oundaries
CHECK(not is_sold and purchase_time >= '-infinity' and purchase_time <
'infinity');
David
er of operations that can be
performed within 1 second? My initial guess would be ~1, but then
perhaps I'm overlooking something. I expect a more reliable figure would
be obtained through testing, but I'm looking for an *a priori*
back-of-the-envelope estimate. Thanks!
Best,
David
s per operation * number of connections = 1000 ms /
101 ms * 1000 = ~1
- something else
- impossible to determine without more information
Best,
David
On Mon, Oct 30, 2023 at 8:46 AM Laurenz Albe
wrote:
> On Mon, 2023-10-30 at 08:05 -0700, David Ventimiglia wrote:
> > Can s
n EXCEPT queries, unfortunately. There won't
be any bug fixes to improve this. It may, however, be improved in some
future version of PostgreSQL.
David
I'm asking about the possibility of indexing portions of a column's value
where the column has a static field format. Example, a char(8) which
contains all hex values (basically a hex number that's always 8 chars wide,
leading zeros if needed). Someone might want to select all recs where the
first
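For a fixed-format column like this, an expression index on the relevant portion is the usual approach (the names below are hypothetical):

```sql
-- Index only the leading characters of the fixed-width hex value.
CREATE INDEX rec_hex_prefix_idx ON recs (substr(hexcol, 1, 2));

-- Queries must use the same expression for the planner to use the index:
SELECT * FROM recs WHERE substr(hexcol, 1, 2) = '0A';
```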