Re: Logical decoding on standby

2018-03-12 Thread Andreas Kretschmer
On 12 March 2018 21:18:22 CET, Andreas Joseph Krogh  wrote:
>Anybody knows if $subject will make it into v11?
> 
>--
>Andreas Joseph Krogh

Why do you think you need this?

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Logical decoding on standby

2018-03-12 Thread Andreas Kretschmer
On 13 March 2018 00:58:27 CET, Andreas Kretschmer  
wrote:
>On 12 March 2018 21:18:22 CET, Andreas Joseph Krogh
> wrote:
>>Anybody knows if $subject will make it into v11?
>> 
>>--
>>Andreas Joseph Krogh
>
>Why do you think you need this?
>
>Regards, Andreas

Let me explain my question. One of the key aspects of logical replication is 
that you can define what to replicate. That wouldn't work in this case, which 
is why I'm asking.

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Logical decoding on standby

2018-03-13 Thread Andreas Kretschmer



Am 13.03.2018 um 02:40 schrieb Andres Freund:


The subject said logical decoding, not replication. There are a lot of
change-data-capture type workloads where decoding from the standby is
quite useful. And the design definitely would work for that; we
explicitly took that into consideration.

Greetings,

Andres Freund



Ah, thx, didn't know that.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Circle and box intersect

2018-03-16 Thread Andreas Kretschmer



Am 16.03.2018 um 11:00 schrieb Martin Moore:

PG10

Is there an operator to determine if a box and circle intersect?
I can only see box && box and can use centre+rad and distance to calculate
circle:circle.

Thanks.


Please don't hijack other mail threads by replying and changing the 
subject; your mail still contains references to "SELECT .. FOR UPDATE: 
find out who locked a row".

Create a new mail for a new thread!


To your question: consider PostGIS and its function ST_Intersects().
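
A minimal sketch with PostGIS (untested; the coordinates, sizes and the
buffer approximation of the circle are made-up examples):

    CREATE EXTENSION postgis;

    -- does a 10x10 box intersect a circle of radius 3 around (5,5)?
    SELECT ST_Intersects(
        ST_MakeEnvelope(0, 0, 10, 10),               -- the box
        ST_Buffer(ST_MakePoint(5, 5)::geometry, 3)   -- the "circle"
    );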

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Andreas Kretschmer
On 19 March 2018 17:31:20 CET, Jimmy Augustine  wrote:
>Dear Friends,
>
>I am newbie to postgresql.
>I have 162 GB on my database but when I check size of all tables, I
>approximately obtain 80 GB.


Indexes?


>I also see that I have 68GB of temporary files however I only found

Where can you see that?


>2.4MB
>at postgres/data/base/pgsql_tmp.
>
>Could you tell me what are those temporary files and where are they at?
>Can
>I delete some of them?


No, never delete files in the data directory!

>
>All values come from pgAdmin 4 and checked by my own SQL
>queries(postgresql-9.6).
>I already run vacuum full and there is few dead tuples.

A few dead tuples aren't a real problem.


>
>Best regards,
>Jimmy AUGUSTINE


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: PostgreSQL 9.6 Temporary files

2018-03-19 Thread Andreas Kretschmer
On 19 March 2018 18:21:42 CET, Jimmy Augustine  wrote:
>2018-03-19 18:15 GMT+01:00 Adrian Klaver :
>
>> On 03/19/2018 10:12 AM, Jimmy Augustine wrote:
>>
>>
>>>  On 03/19/2018 09:31 AM, Jimmy Augustine wrote:
>>>
>>>  Dear Friends,
>>>
>>>  I am newbie to postgresql.
>>>  I have 162 GB on my database but when I check size
>of all
>>>  tables, I approximately obtain 80 GB.
>>>  I also see that I have 68GB of temporary files
>however
>>> I only
>>>  found 2.4MB at postgres/data/base/pgsql_tmp.
>>>
>>>
>>>  Exactly how did you determine this?
>>>
>>> I used this command and sum result for all database :
>>> SELECT pg_size_pretty(pg_total_relation_size('table_name'));
>>>
>>> And this for complete database :
>>> SELECT pg_size_pretty(pg_database_size('Database Name'));
>>>
>>>
>>> So where did the 68GB number for temporary files come from?
>>>
>>> I don't measure this value by my own. I was disappointed by the gap
>>> between the two queries, so I checked pgAdmin 4 and I saw this
>value.
>>>
>>
>> In what section of pgAdmin4?
>>
>In section "Statistics" when I click on my database.
>
>Or do you know what query it used?
>>
>I have found this but not sure
>
>SELECT temp_files AS "Temporary files"
>   , temp_bytes AS "Size of temporary files"FROM   pg_stat_database db;


Those values are aggregated since the last stats reset, not current values.

Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: case and accent insensitive

2018-03-24 Thread Andreas Kretschmer



Am 23.03.2018 um 23:04 schrieb MOISES ESPINOSA:
I don't know how I could reproduce case-insensitive and accent-insensitive 
behavior.


Maybe you can use lower() or the citext type for case insensitivity 
(https://www.postgresql.org/docs/10/static/citext.html), and the 
unaccent extension for accent insensitivity 
(https://www.postgresql.org/docs/10/static/unaccent.html).


*untested*
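
A minimal sketch of the combination (table and column names are made up):

    CREATE EXTENSION IF NOT EXISTS unaccent;

    -- case- and accent-insensitive match
    SELECT *
    FROM persons
    WHERE lower(unaccent(name)) = lower(unaccent('José'));

With citext as the column type the lower() calls become unnecessary,
since citext already compares case-insensitively.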


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: pg_dump -Fd -j2 on standby in 9.6.6

2018-03-27 Thread Andreas Kretschmer
On 28 March 2018 07:02:30 CEST, Stefan Petrea  wrote:
>Hi,
>
>
>I wonder why synchronized snapshots are not supported on standby
>servers. 
>
>
>
If you want to take backups using -Fd on a standby, you should pause the 
replay process. In that case you don't need synchronized snapshots.
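
A sketch of the sequence (9.6 function names; version 10+ calls them
pg_wal_replay_pause()/pg_wal_replay_resume(); paths and dbname are examples):

    -- on the standby:
    SELECT pg_xlog_replay_pause();

    -- then run the dump from the shell:
    --   pg_dump -Fd -j2 -f /backup/dumpdir dbname

    -- afterwards:
    SELECT pg_xlog_replay_resume();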

Regards, Andreas.


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: [EXTERNAL]Re: pg_dump -Fd -j2 on standby in 9.6.6

2018-03-28 Thread Andreas Kretschmer



Am 28.03.2018 um 09:46 schrieb Stefan Petrea:

Hi Andreas,

Thank you for your suggestion about pausing/resuming the standby!

I've tried running on the standby
SELECT pg_xlog_replay_pause()
Then running the dump, and it threw the same error I had seen before.


Run pg_dump with --no-synchronized-snapshots; it is safe in this scenario.

> This e-mail message, including any attachments,

This is a public mailing list ...


Regards, Andreas
--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Fixed chars

2018-03-28 Thread Andreas Kretschmer



Am 28.03.2018 um 11:11 schrieb Enrico Pirozzi:

Hi , I've seen this strange thing.

sitedb=# create table test_tb(codice char(7));
CREATE TABLE
sitedb=# insert into test_tb values('pippo');
INSERT 0 1
sitedb=# select codice || 'a'::char(1),length(codice) from test_tb
 ?column? | length
----------+--------
 pippoa   |      5
(1 row)



Where is the problem? length() returns the number of characters in the 
string, and the string in codice is 5 characters long.





From the official documentation:

    character(n), char(n)    fixed-length, blank-padded

https://www.postgresql.org/docs/10/static/datatype-character.html


test=*# select length('12345'::char(10)), pg_column_size('12345'::char(10));
 length | pg_column_size
--------+----------------
      5 |             14
(1 row)



Does that help? length() ignores the trailing blanks of a char(n) value.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: pg_basebackup restore a single table

2018-04-11 Thread Andreas Kretschmer



Am 11.04.2018 um 15:53 schrieb camarillo:

Can I do a restore of a single table or single database using the archive
generated by the basebackup, without having to delete the filesystem
(/var/lib/pgsql/9.5/*)?


No, but you can use a spare machine to restore the whole database 
(point-in-time recovery) and then extract the table from it using pg_dump 
or COPY.
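
A rough sketch of that workflow on the spare machine (all names and paths
are examples):

    # after restoring the base backup and recovering to the desired
    # point in time, extract just the one table:
    pg_dump -h sparehost -t mytable mydb > mytable.sql

    # or, from psql:
    #   \copy mytable TO '/tmp/mytable.csv' CSV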



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Doubts about replication..

2018-04-19 Thread Andreas Kretschmer



Am 19.04.2018 um 19:57 schrieb Edmundo Robles:

I will use replication as simple backup.


Please keep in mind that replication is not a backup. All logical errors on 
the master (a DELETE FROM a table with a forgotten WHERE condition, say) will 
be replicated to the standby.



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Same condition in the CTE and in the subsequent JOIN using it

2018-04-25 Thread Andreas Kretschmer



Am 25.04.2018 um 17:45 schrieb Alexander Farber:
Thank you for any hints, I apologize if my question is too specific 
and difficult to answer...


I haven't checked the whole query, but WHERE conditions from the outer 
query are not pushed down into the CTE. First the whole CTE is 
materialized, then the outer query is executed.

That said, it is better to put the WHERE condition inside the CTE.

https://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/
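
A minimal sketch of the difference (table and column names are made up):

    -- the fence: the whole CTE is materialized first
    WITH c AS (SELECT * FROM big_table)
    SELECT * FROM c WHERE id = 42;

    -- better: filter inside the CTE
    WITH c AS (SELECT * FROM big_table WHERE id = 42)
    SELECT * FROM c;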

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Recommended way to copy database files on Windows OS (to perform file system level backup)

2018-05-14 Thread Andreas Kretschmer

>An easy way to perform backup of your data is to use PostgreSQL's very
>own
>pg_dump utility.

That's not a file-level backup but a logical backup. Maybe he wants to build 
a standby for streaming replication; for that you need a different kind of 
backup. See the answer from Christoph.

Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company



Re: binaries for 11 beta compiled with --with-llvm?

2018-05-29 Thread Andreas Kretschmer
On 29 May 2018 13:12:33 CEST, Paul Linehan  wrote:
>Hi all,
>
>I have a problem that I just can't seem to solve:
>

Please create a new thread for a new question.


Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Setting up replication from 9.4 to 10.4

2018-06-05 Thread Andreas Kretschmer




Am 06.06.2018 um 08:16 schrieb Lionel Tressens:

Hello,

We are running a single PG 9.4 database node we are switching to PG 
10.4 (which will run as master + setup of a slave)


To minimize downtime when switching from the 9.4 server to the 10.4 
one, I would like to set up a replication stream instead of stopping 
production, dumping 9.4 and restoring into 10.4, which may take a lot 
of time.


The next steps would be to reinstall the 9.4 server from scratch with 
10.4 and setup this host as a slave, ready to take the master role if 
needed (hot standby).


Can such a replication be established between 9.4 and 10.4 and what 
replication tool/schema to use ? I have been through the documentation 
on replication in 9.4 and 10.4 but things are very confusing to me 
about this point.




You can't use the built-in streaming replication between different major 
versions, but you can use trigger-based replication, for instance 
Londiste or Slony.


You can also use our extension pglogical, see 
https://www.2ndquadrant.com/en/resources/pglogical/. If I were you I 
would use the latter ;-)



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Performance problem postgresql 9.5

2018-06-09 Thread Andreas Kretschmer




Am 08.06.2018 um 22:09 schrieb Alvaro Herrera:

On 2018-Jun-08, Miguel Angel Sanchez Sandoval wrote:


Hi guys, we migrated from 8.4 to 9.5, all OK except that after 2-3 days the
database experiences slowness. I run the Linux top command and it shows
me a postgres user process executing a strange command (2yhdgrfrt63788)
that consumes a lot of CPU. Looking at the active queries I find select
fun('./2yhdgrfrt63788'). This did not happen in version 8.4; any help
would be appreciated.

Hmm, has your database been compromised?  You may have an intruder there --
beware.


https://www.imperva.com/blog/2018/03/deep-dive-database-attacks-scarlett-johanssons-picture-used-for-crypto-mining-on-postgre-database/

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Catching unique_violation exception on specific column/index

2018-06-11 Thread Andreas Kretschmer




Am 11.06.2018 um 12:58 schrieb Alexey Dokuchaev:

What's wrong with:

   INSERT ...
   ON CONFLICT (foo_key) DO NOTHING

Nothing I guess, except that it is available since 9.5 (right?), and I try
to stay compatible with 9.3.  Sorry for not saying this in the first place.

./danfe



... 9.3 will be out of support soon.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Merging two database dumps

2018-06-13 Thread Andreas Kretschmer




Am 13.06.2018 um 13:17 schrieb Alex O'Ree:
I have a situation with multiple postgres servers running all with the 
same databases and table structure. I need to periodically export the 
data from each of them and then merge it all into a single server. On 
occasion, it's feasible for the same record (primary key) to be stored 
in two or more servers.


what should happen in this case?



I was using pg_dump without the --inserts option; however, I just noticed 
that pg_restore will stop inserting into a table when a conflict 
occurs, leaving me with an incomplete set.




Another solution:

* create the tables on the destination server without the PK, or with 
another, new PK (maybe SERIAL)
* use logical replication to replicate the tables from all your 
source DBs to the destination table; see more here: 
https://www.2ndquadrant.com/en/resources/pglogical/


Your problem looks like a typical task for logical replication to me. You 
need at least 9.4.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Append only replication over intermittent links (with local only delete?)

2018-06-15 Thread Andreas Kretschmer




Am 14.06.2018 um 14:04 schrieb Uri Braun:

Hi,

I'm looking to run Postgres -- flexible on exact version -- on some 
devices installed in cars and replicated to a central server over cell 
phone modems. I expect dropped connections due to: lack of coverage 
(remote areas), dead spots, tunnels, cars parked in garages, device 
power disconnected, etc. I expect all data to be generated on the car 
devices and want it to eventually be copied over to the central 
server. I would also like to be able to delete old data -- say months 
old -- on the car equipment and NOT have those deletes propagate to 
the central server.


To be clear, the car device will surely add data -- append rows -- and 
may very occasionally add a new table. I would expect the only case 
where a delete may occur -- other than culling old data -- is during 
recovery of a partial write or transaction rollbacks. The time 
requirements are loose, but I would like replication consistency 
within hours (once connected).


I'm wondering what replication scheme is appropriate for this use case 
and how to configure it appropriately.




I think pglogical can fit your requirements; please take a look:

https://www.2ndquadrant.com/en/resources/pglogical/


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: question on streaming replication

2018-06-15 Thread Andreas Kretschmer
On 14 June 2018 07:28:53 CEST, Atul Kumar  wrote:
>Hi,
>
>I have postgres edb 9.6 version, i have below query to solve it out.
>
>i have configured streaming replication having master and slave node
>on same  server just to test it.
>
>All worked fine but when i made slave service stop, and create some
>test databases in master, after then i made slave service start, slave
>didn't pick the changes.
>
>The replication was on async state.
>
>Then after doing some search on google i tried to make it sync state
>but even making changes in postgresql.conf file I am neither getting
>sync state nor getting any changes on slave server.
>
>Please suggest the needful.
>
>
>Regards,
>Atul

Sync replication isn't useful with only one standby. 

I think that while the standby was stopped, the master recycled WAL files the 
standby still needed. You can prevent that by increasing wal_keep_segments or 
by using replication slots. Please use Google; there are tons of docs about 
all of that.
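
A minimal sketch of the slot approach (the slot name is made up):

    -- on the master:
    SELECT pg_create_physical_replication_slot('standby1');

    -- on the standby, in recovery.conf:
    --   primary_slot_name = 'standby1'

Keep in mind that a slot retains WAL for as long as the standby is down,
so the master's disk can fill up if the standby never comes back.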


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Suggestion about logging only every n-th statement

2018-06-20 Thread Andreas Kretschmer
On 20 June 2018 13:51:25 CEST, Janning Vygen  wrote:
>Back in 2009 I made a suggestion which is not implemented yet but would
>
>still be very valuable and easy to implement in my opinion (not for me 
>as I am not speaking C):
>
>https://www.postgresql.org/message-id/flat/200903161426.56662.vygen%40kicktipp.de#200903161426.56662.vy...@kicktipp.de
>
>We still can't afford to log each and every statement as we have too 
>many and we don't want to afford more disk performance/space. Therefore
>
>we log only statements slower than 50ms.
>
>But for analyzing usage patterns it would be very nice to have this 
>combined with a sample_rate for logging.
>
>   logging_sample_rate = n
>
>So each n-th statement will get logged regardless of execution time.
>
>What do you think?
>
>regards
>Janning

I would suggest using pg_stat_statements for that.
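
A minimal setup sketch (column names as of the 9.x/10-era releases):

    # postgresql.conf (restart required):
    shared_preload_libraries = 'pg_stat_statements'

    -- then, in the database:
    CREATE EXTENSION pg_stat_statements;

    -- the top 10 statements by total time:
    SELECT query, calls, total_time
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;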

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Problem Postgres

2018-06-26 Thread Andreas Kretschmer




Am 26.06.2018 um 10:05 schrieb Emanuele Musella:

Good morning,

we have the following error:

2018-06-26 09:48:44 CEST ERROR:  could not stat file 
"base/16395/19739338": Permission denied
2018-06-26 09:48:44 CEST STATEMENT:  select 
p.datname,pg_database_size(p.datname) from pg_database p





Have you manually manipulated files below the Postgres data directory?
Which OS are you using?


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Andreas Kretschmer




Am 26.06.2018 um 12:19 schrieb amandeep singh:
We have been observing our postgres database for the past few days. We 
found a few queries running three times simultaneously with the same 
parameters. I would like to track back why a query is running multiple 
times.


They are independent of each other. You can run those queries with EXPLAIN 
ANALYSE to see the execution plan.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Andreas Kretschmer
On 26 June 2018 12:32:44 CEST, Andreas Kretschmer  
wrote:
>
>
>Am 26.06.2018 um 12:19 schrieb amandeep singh:
>> We have been observing our postgres database from past few days,We 
>> found few queries running three times simultaneously with same 
>> parameters.I would like to back track how a query is running multiple
>
>> times.
>
>they are independend each other. You can run those queries with EXPLAIN
>
>ANALYSE to see the execution plan.
>
>Regards, Andreas

Seems like I misunderstood your question. Are you sure that the parameter $1 is 
equal across all the different connections?

Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Split daterange into sub periods

2018-07-05 Thread Andreas Kretschmer



On 05.07.2018 15:49, hmidi slim wrote:

Hi,
I'm looking for splitting a daterange into many subperiods following 
this example:


Base Date: [2018-01-01, 2018-01-31]
overlapped_periods:
1- [ 2018-01-04, 2018-01-06]
2- [ 2018-01-09, 2018-01-12]
3- [ 2018-01-18, 2018-01-19]

I try to get such a result:
1- [ 2018-01-01, 2018-01-03]
2- [ 2018-01-07, 2018-01-08]
3- [ 2018-01-13, 2018-01-17]
4- [ 2018-01-20, 2018-01-31]

The operator '-' does not support this :
SELECT daterange('2018-01-01', '2018-01-31', '[]') - 
daterange('2018-01-04', '2018-01-06', '[]');

I got this error:
ERROR: result of range difference would not be contiguous
Is there any operators to make the split of daterange?

andreas@[local]:5432/test# \d hmidi
              Table "public.hmidi"
 Column |   Type    | Collation | Nullable | Default
--------+-----------+-----------+----------+---------
 id     | integer   |           | not null |
 d      | daterange |           |          |
Indexes:
    "hmidi_pkey" PRIMARY KEY, btree (id)

andreas@[local]:5432/test# insert into hmidi values (1,'[2018-01-04,2018-01-06]');
INSERT 0 1
andreas@[local]:5432/test# insert into hmidi values (2,'[2018-01-09,2018-01-12]');
INSERT 0 1
andreas@[local]:5432/test# insert into hmidi values (3,'[2018-01-18,2018-01-19]');
INSERT 0 1
andreas@[local]:5432/test# with month as (
    select s::date
    from generate_series('2018-01-01'::date, '2018-01-31'::date, '1day'::interval) s
), tmp as (
    select month.s,
           case when hmidi.d @> month.s then 1 else NULL end as covered
    from month
    left join hmidi on month.s <@ hmidi.d
), tmp2 as (
    select *,
           coalesce((sum(case when covered = 1 then 1 else NULL end) over (order by s)) + 1, 1) as p
    from tmp
)
select p, min(s), max(s)
from tmp2
where covered is null
group by p
order by p;

  p |    min     |    max
----+------------+------------
  1 | 2018-01-01 | 2018-01-03
  4 | 2018-01-07 | 2018-01-08
  8 | 2018-01-13 | 2018-01-17
 10 | 2018-01-20 | 2018-01-31
(4 rows)


Regards, Andreas
--
2ndQuadrant Deutschland


Re: Stored function | Grayed out option when I try modify the return type

2018-07-12 Thread Andreas Kretschmer




Am 12.07.2018 um 10:25 schrieb Abelardo León González:


Hello world,

Here a newbie in PgAdmin.

When I create a stored function and I try to modify it to change the 
return type (or anything else), I can’t modify it.


I am not sure if it is a bug… and I am not sure if this message should 
be on the bugs mailing list. Sorry if I am wrong.




It's not a bug. If you want to change the return type or input 
parameters you have to recreate the function (drop the current one and 
create a new one), or create an additional one.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Stored function | Grayed out option when I try modify the return type

2018-07-12 Thread Andreas Kretschmer




Am 12.07.2018 um 10:56 schrieb Abelardo León González:

Thanks Andreas for your response.

This is an annoying procedure to modify it.
Why did the pgAdmin staff decide to do it this way?  It's a waste of time!

No. Two functions with the same name but different return types (for 
instance) are valid and are two different functions.
You can define a function and change its body, that's 
not a problem, but with different return types you will have two different 
functions.


Please don't top-post here, thx.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Stored function | Grayed out option when I try modify the return type

2018-07-12 Thread Andreas Kretschmer




Am 12.07.2018 um 11:28 schrieb Abelardo León González:

Sorry for top-posting here. :S I won't do it anymore.

The idea is to modify the original function in  order to meet the requirements.


Yeah, I understand.



It's not about creating two functions, it's about modifying the original one.
Internally, pgAdmin should invoke the "alter function" command so that this 
modification doesn't create another function.


It's not a limitation of pgAdmin; ALTER FUNCTION cannot change the return type either.


If I have created funcA with return type smallint and I would like to change it 
to integer, pgAdmin shouldn't create another one but let us directly modify the 
original funcA's return type.


You can pull out the whole function definition, for instance within psql 
with \ef , save it into a file, drop the function, edit 
the file and create the new function.

It seems a bit strange, but that's the usual way.


Sorry for my English (I am not a native English speaker).


The same for me ;-)


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Database Refresh confusion

2018-07-13 Thread Andreas Kretschmer




Am 13.07.2018 um 13:10 schrieb Rijo Roy:

Hello Experts,

Today, my colleague asked me if there was any way to check the 
progress of recovery (backup-recovery) in Postgresql. I told him to 
check the postgresql logs and look out for keywords such as recovery. 
He was refreshing the database by restoring a database backup but the 
method he tried surprised me.

The below commands were used:
pg_basebackup -h remoteservername -p Port number -D /path -X s -c fast 
-n -P -v

And he just started the postgresql cluster using pg_ctl

I told him that he missed creating a recovery.conf with a recovery 
command,


That's not necessary; without a recovery.conf, Postgres will just start up as 
a normal server.



but he told me that he always does it this way.  I told him that the 
process is wrong, that it will do a crash recovery, and that there are 
high chances of data corruption.


No, this way is safe: the backup taken with -X s already contains all the WAL needed to reach a consistent state.


He replied saying he never faced an issue following his process of 
doing it without recovery.conf


2 questions:
1. Is this a good practice for refreshing data into a new server?
2. Is there any other mechanism to track the restoration process, other 
than checking the postgresql logs?


pg_basebackup has a progress feature, -P; see your command above.




The PostgreSQL version used is 10.0 on Linux 6.9


The latest version is 10.4 and contains a lot of bugfixes; consider an 
update soon.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Shared buffers increased but cache hit ratio is still 85%

2018-07-18 Thread Andreas Kretschmer




Am 18.07.2018 um 10:26 schrieb Hans Schou:

Am I doing something wrong or should some history be cleared?


Reset the stats for that database. You can check the date of last reset 
with:


select stats_reset from pg_stat_database where datname = 'database_name';

and reset it with:

pg_stat_reset()

Reset all statistics counters for the current database to zero (requires 
superuser privileges by default, but EXECUTE for this function can be 
granted to others).



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Improving pg_dump performance

2018-07-23 Thread Andreas Kretschmer




Am 23.07.2018 um 09:23 schrieb Ron:

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database 
that needs to be migrated to a new data center and then restored to 
v9.6.9. 


You can use the pg_dump from the newer version (9.6) to dump the old 
database over the network. That way you can also use parallel backups 
(directory format).
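
A sketch of such a remote parallel dump (host, job count and paths are
examples). Note that against a pre-9.2 server a parallel dump needs
--no-synchronized-snapshots, which is only safe if nothing modifies the
database during the dump:

    pg_dump -h old-server -Fd -j 4 --no-synchronized-snapshots \
        -f /backup/olddb_dir olddb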



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Improving pg_dump performance

2018-07-23 Thread Andreas Kretschmer




Am 23.07.2018 um 15:06 schrieb Ron:

On 07/23/2018 02:32 AM, Andreas Kretschmer wrote:



Am 23.07.2018 um 09:23 schrieb Ron:

Hi,

We've got an old (v8.4.17, thus no parallel backups) 2.9TB database 
that needs to be migrated to a new data center and then restored to 
v9.6.9. 


you can use the pg_dump from the newer version (9.6) to dump the old 
database, over the net. In this way you can also use parallel backups 
(directory format)


That DC circuit is too slow, and also used by lots of other production 
data.




Install 9.6 on the old server as well, or use another server in the 
same DC.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Connections on cluster not being logged

2018-07-23 Thread Andreas Kretschmer




Am 23.07.2018 um 17:14 schrieb Sandy Becker:
I have postgresql 9.4 on a cluster, hardware based.  I need to be able 
to see which users are connecting to which database and when to be in 
compliance with our security policies.


I have set the following in the postgresql.conf and did a pg_ctl reload:

 log_connections = on
 log_line_prefix = '%t [%p]:[%u]:[%h]-[%d] [%1-1]'


It should work; can you check whether those settings are in effect?

select name, setting from pg_settings where name in 
('log_connections','log_line_prefix');



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Connections on cluster not being logged

2018-07-23 Thread Andreas Kretschmer




Am 23.07.2018 um 17:25 schrieb Sandy Becker:

Yes, they are in effect.


Strange. Is logging working at all? Can you see other, current entries in 
the logfile?


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: DB Backup from WAL Slave

2018-07-24 Thread Andreas Kretschmer
On 24 July 2018 14:44:45 CEST, basti  wrote:
>Hello,
>
>we have a db master and a slave.

-
>
>How can I do an backup with pg_dumpall from slave?

Set hot_standby_feedback to on.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Multi client in subscription?

2018-07-30 Thread Andreas Kretschmer
On 30 July 2018 15:29:27 CEST, Adrian Klaver  wrote:
>On 07/29/2018 11:53 PM, xOChilpili wrote:
>> Hi everyone,
>> 
>> I have 2 virtual machines, one that i use at home and the other one
>that 
>> i use at work, so, in the server i have one publication and for each 
>> client i have the same subscription, but every time i go out from
>home 
>> to work, i have to stop postgresql service in order to use
>subscription 
>> at work, and viceversa, is it possible to use both at the same time ?
>

Sounds like you are looking for a multi-master solution. If so: BDR.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: WAL-dir filling up with wal_level = logical

2018-08-03 Thread Andreas Kretschmer
On 3 August 2018 12:12:33 CEST, Andreas Joseph Krogh  wrote:
>Version: PG-10.4
> 
>I don't (yet) have any replication-slots configured, and hence no
>standbys are 
>subscribed, but I have wal_level = logical configured to be able to add
>
>subscribers later. I'm seeing that WAL-dir is filling up with WAL-files
>(now 
>17GB and not declining), is this expected behaviour?
> 

No. Maybe a non-working archive_command?


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Sv: WAL-dir filling up with wal_level = logical

2018-08-03 Thread Andreas Kretschmer
On 3 August 2018 12:33:26 CEST, Andreas Joseph Krogh  wrote:
>On Friday 3 August 2018 at 12:12:33, Andreas Joseph Krogh <
>andr...@visena.com > wrote:
>Version: PG-10.4
> 
>I don't (yet) have any replication-slots configured, and hence no
>standbys are 
>subscribed, but I have wal_level = logical configured to be able to add
>
>subscribers later. I'm seeing that WAL-dir is filling up with WAL-files
>(now 
>17GB and not declining), is this expected behaviour?
> 
>Thanks.
> 
>Hm, a co-worker noticed that we have max_wal_size = 16GB, that might be
>the 
>reason:-)
> 

Yes ;-)


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: How to avoid dead rows in tables.

2018-08-06 Thread Andreas Kretschmer
On 6 August 2018 09:04:45 CEST, Raghavendra Rao J S V 
 wrote:
>Hi All,
>
>
>auto vacuum is enabled in our database. But few of the tables contains
>the
>dead tuples more than 5,000 records. Number of dead rows are keep on
>increasing day by day if we didn’t perform the vacuum full. Monthly
>once we
>are performing vacuum full by stopping application server process.
>Later we
>are restarting the application server process.
>
>
>
>How to avoid accumulating the dead tuples for those tables. Is there
>any
>other approach to remove the dead tuple’s without vacuum full/down
>time.
>
>
>
>Note:- We are using the postgres version 9.2

You can tweak the autovacuum settings; by default autovacuum will run once a 
table contains more than 20% + 50 dead tuples.
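
The thresholds can also be lowered per table, a sketch (table name and
values are examples):

    -- fire autovacuum at ~5% dead tuples instead of the default 20%
    ALTER TABLE busy_table SET (
        autovacuum_vacuum_scale_factor = 0.05,
        autovacuum_vacuum_threshold = 100
    );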

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Copy over large data Postgresql 9.5

2018-08-16 Thread Andreas Kretschmer
On 16 August 2018 15:41:31 CEST, Vikas Sharma  wrote:
>Hello Experts,
>
>I need to transfer pgsql 9.5 data of 90GB from one Cloud provider to
>other.  We have plenty of downtime to do this. I will be copying over
>data
>directory after shutting down pgsql services on the source.

Why not use streaming replication, without any downtime?

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: CTE with JOIN of two tables is much faster than a regular query

2018-08-18 Thread Andreas Kretschmer




Am 18.08.2018 um 11:36 schrieb kpi6...@gmail.com:
What can I do to improve the performance of the regular query without 
using a CTE? 


Try to rewrite it to a subselect:

select ... from ... join (select ... from ... where ...) x on ...
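
For example, something like this (all names are made up):

    SELECT a.*, x.total
    FROM accounts a
    JOIN (SELECT account_id, sum(amount) AS total
          FROM payments
          WHERE paid_at >= '2018-01-01'
          GROUP BY account_id) x
      ON x.account_id = a.id;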


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Upgrade/Downgrade

2018-08-23 Thread Andreas Kretschmer
On 23 August 2018 12:51:08 CEST, Sonam Sharma  wrote:
>Hello,
>
>My postgres version is 10.4 and I want to downgrade it to 9.5.
>and one is at 9.2 and need to upgrade it to 9.5.
>Can someone please help how to do this.
>Unable to find anything online

Why downgrade? Dump and restore should do the job, have you tried it?

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Which background task looks for pg_xlog in 10?

2018-08-28 Thread Andreas Kretschmer




Am 28.08.2018 um 08:32 schrieb Johann Spies:

I see this in /var/log/postgresql/postgresql-10-main.log:

postgres postgres@template1 ERROR:  could not open directory
"pg_xlog": No such file or directory


Wild guess: some outdated monitoring software. This directory was renamed 
to pg_wal in version 10.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: WAL replay issue from 9.6.8 to 9.6.10

2018-08-28 Thread Andreas Kretschmer
On 29 August 2018 06:02:45 CEST, Dave Peticolas  wrote:
>Hello, I'm seeing some issues with WAL replay on a test server running
>9.6.10 using WAL archived from a 9.6.8 primary server. It reliably
>

Can you tell us the wal_level?

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: WAL replay issue from 9.6.8 to 9.6.10

2018-08-28 Thread Andreas Kretschmer
On 29 August 2018 06:26:06 CEST, Dave Peticolas  wrote:
>On Tue, Aug 28, 2018 at 9:21 PM Andreas Kretschmer
>
>wrote:
>
>> On 29 August 2018 06:02:45 CEST, Dave Peticolas 
>wrote:
>> >Hello, I'm seeing some issues with WAL replay on a test server
>running
>> >9.6.10 using WAL archived from a 9.6.8 primary server. It reliably
>> >
>>
>> Can you tell us the wal-level?
>>
>
>Yes, it is "hot_standby".

OK, this was just to rule out a problem that we discovered with logical 
decoding, but that isn't the case here.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Ways to deal with large amount of columns;

2018-08-30 Thread Andreas Kretschmer




Am 30.08.2018 um 11:13 schrieb a:

Therefore, the column number would be 1000+.


Just as an additional note: there is a limit; a table can contain no 
more than 250-1600 columns, depending on column types.

https://wiki.postgresql.org/wiki/FAQ

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Ways to deal with large amount of columns;

2018-08-30 Thread Andreas Kretschmer




Am 30.08.2018 um 15:15 schrieb Robert Zenz:

As David said, you'd be better off having a table that looks like this (in terms
of columns):

  * MONTH
  * AGENT
  * CASHFLOW

So your query to get the sum of a single agent would be looking like:

 select
     sum(CASHFLOW)
 where
 AGENT = 'Agent'
 and MONTH between values;

It might be a little more work to create a GUI for that (or map this model to
the existing GUI) but it is much simpler to maintain and work with.


Agreed. It is also possible to use partitioning.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: TR: redundant constraint_schema

2018-09-01 Thread Andreas Kretschmer




Am 01.09.2018 um 17:50 schrieb Olivier Leprêtre:
I notice that a new constraint "table1_col2_fkeyxxx" is created each 
time the previous ALTER TABLE ADD COLUMN is called


Smells like a bug.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: TR: redundant constraint_schema

2018-09-01 Thread Andreas Kretschmer




Am 01.09.2018 um 18:44 schrieb Adrian Klaver:
Forgot to include Postgres version, 10.5. 


Also on 9.6 and 11beta1.

Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: TR: redundant constraint_schema

2018-09-01 Thread Andreas Kretschmer




Am 01.09.2018 um 18:52 schrieb Adrian Klaver:

On 09/01/2018 09:47 AM, Olivier Leprêtre wrote:

Mine is 9.6


I would submit a bug report here:

https://www.postgresql.org/account/login/?next=/account/submitbug/


Olivier, please do that!


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Max number of WAL files in pg_xlog directory for Postgres 9.2 version

2018-09-05 Thread Andreas Kretschmer




Am 05.09.2018 um 19:39 schrieb Raghavendra Rao J S V:

Hi All,

We are using postgres 9.2 verstion database.


9.2 is out of support. Please consider an upgrade, soon!



Please let me know: what is the maximum number of WAL files in the pg_xlog directory?


It depends on the workload and on several settings. There is no rule of thumb.


Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Which is the most stable PostgreSQL version yet present for CentOS 7?

2018-09-19 Thread Andreas Kretschmer




Am 19.09.2018 um 16:08 schrieb Raghavendra Rao J S V:

Hi All,

Which is the most stable PostgreSQL version yet present for CentOS 7?



The latest supported minor version.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-19 Thread Andreas Kretschmer




Am 19.09.2018 um 15:51 schrieb Durgamahesh Manne:
I have created BRIN index on few columns of the table without any 
issues. But i am unable to create BRIN index on one column of the 
table as i got error listed below



[local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei using 
brin ("dFetch");
ERROR:  data type boolean has no default operator class for access 
method "brin"
HINT:  You must specify an operator class for the index or define a 
default operator class for the data type.


Honestly, a BRIN index on a bool column doesn't make much sense. What do 
you want to achieve? Maybe a partial index with a WHERE condition on that 
column makes much more sense.
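
A sketch (the indexed column is a placeholder, pick one your queries
actually filter on):

    CREATE INDEX CONCURRENTLY idx_huawei_unfetched
        ON huawei (some_column)
        WHERE "dFetch" = false;

Such an index contains only the rows with "dFetch" = false, so it stays
small and matches queries using the same condition.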



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-19 Thread Andreas Kretschmer




Am 19.09.2018 um 16:43 schrieb Durgamahesh Manne:



On Wed, Sep 19, 2018 at 8:02 PM Andreas Kretschmer wrote:




Am 19.09.2018 um 15:51 schrieb Durgamahesh Manne:
> I have created BRIN index on few columns of the table without any
> issues. But i am unable to create BRIN index on one column of the
> table as i got error listed below
>
>
> [local]:6263 postgres@huawei=# CREATE INDEX brin_idx on huawei
using
> brin ("dFetch");
> ERROR:  data type boolean has no default operator class for access
> method "brin"
> HINT:  You must specify an operator class for the index or define a
> default operator class for the data type.

honestly, a BRIN-Index on a bool-column doesn't make much sense.
What do
you want to achive? Maybe a partial index with a where-condition
on that
column makes much more sense.


Regards, Andreas

-- 
2ndQuadrant - The PostgreSQL Support Company.

www.2ndQuadrant.com



Hi


I want to execute the distinct query in the least possible time.

For that reason I have already tried BTREE and HASH indexes on the 
required columns, but the distinct query execution time was not reduced.


select distinct sub_head."vchSubmittersCode", rec."vchFileName",
       rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice",
       sub_head."vchValuationDate", rec."vchAdvisorLabel"
from "table1" rec
join "table2" sub_head
  on rec."vchSubmittersCode" = sub_head."vchSubmittersCode"
where rec."bFetch" = false and sub_head."bFetch" = false;


The query took around 7 minutes to execute, with BTREE and HASH indexes 
on the required columns.




try an index like

create index idx_t1 on table1 ("vchSubmittersCode") where "bFetch"=false
and
create index idx_t2 on table2 ("vchSubmittersCode") where "bFetch"=false

and check if the plan changed and the indexes are in use. You can use 
CREATE INDEX CONCURRENTLY to prevent locking.




Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Andreas Kretschmer

Hi,


The problem is here:


Am 20.09.2018 um 11:48 schrieb Durgamahesh Manne:
 Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=3700000 loops=1)
   ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=40500000 loops=1)
         Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
         Sort Method: external merge  Disk: 3923224kB


Please check the execution time without DISTINCT.

Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-20 Thread Andreas Kretschmer




Am 20.09.2018 um 13:11 schrieb Durgamahesh Manne:

The query executed in much less time without distinct,

while with distinct it was taking around 7 minutes to complete.


 select   distinct  sub_head."vchSubmittersCode" ,rec."vchFileName" , 
rec."vchCusipFundIDSubFundID" , rec."vchFundUnitPrice"  ,  
sub_head."vchValuationDate" ,rec."vchAdvisorLabel" from  table1 rec 
join table2 sub_head on 
rec."vchSubmittersCode"=sub_head."vchSubmittersCode" where 
rec."bFetch"=false and sub_head."bFetch"=false ;


I need the above distinct query to execute in less time; it was 
taking a long time even though I have created indexes on the 
required columns of the tables.




>  Unique  (cost=5871873.64..6580630.73 rows=7400074 width=89) (actual time=326397.551..389515.863 rows=3700000 loops=1)
>    ->  Sort  (cost=5871873.64..5973124.65 rows=40500405 width=89) (actual time=326397.550..372470.846 rows=40500000 loops=1)
>          Sort Key: sub_head."vchSubmittersCode", rec."vchFileName", rec."vchCusipFundIDSubFundID", rec."vchFundUnitPrice", sub_head."vchValuationDate", rec."vchAdvisorLabel"
>          Sort Method: external merge  Disk: 3923224kB



As you can see, there are 40,500,000 rows to sort in order to filter out 
duplicate rows, and the result contains 'only' 3,700,000 rows. For this 
step the database needs nearly 4 GB on disk. That will, of course, take 
some time.


If you have enough RAM you can try to set work_mem to 5 or 6 GB to 
change the plan to an in-memory sort. But keep in mind, this is 
dangerous! If the machine doesn't have enough free RAM, the kernel's 
out-of-memory killer may start killing processes.


What kind of disks do you have? Maybe you can use a separate fast SSD as 
temp_tablespaces?
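
A sketch of both options (the tablespace name is made up and must exist):

    -- session-local; assumes the box really has that much free RAM
    SET work_mem = '6GB';

    -- put temp files for sorts on a fast SSD
    SET temp_tablespaces = 'ssdtemp';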



Regards, Andreas
--

2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Andreas Kretschmer




Am 21.09.2018 um 17:13 schrieb Durgamahesh Manne:

query is below


The query and plan are still not readable. Store them into a text file and 
attach it here.



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: *Regarding brin_index on required column of the table

2018-09-21 Thread Andreas Kretschmer




Am 21.09.2018 um 17:49 schrieb Durgamahesh Manne:

Please find below attached query plan file



The query and plan are still hard to read :-(

Query:

SELECT distinct
  Max(v."vchSubmittersCode") as vchSubmittersCode
  , Max(v."vchRecordType") as  vchRecordType
  , Max(v."vchSequenceNumber") as vchSequenceNumber
  , v."vchContractNumber"
  ,"vchContractPartyRoleQualifier"
  ,"vchPartyRole"
  ,Max("vchPartyNatural_Non_NaturalEntity") as 
vchPartyNatural_Non_NaturalEntity

  , Max("vchPartyLastName") as vchPartyLastName
  ,Max("vchPartyFirstName") as vchPartyFirstName
  ,Max("vchPartyMiddleName") as vchPartyMiddleName
  , Max("vchPartyPrefix") as  vchPartyPrefix
  ,Max("vchPartySuffix") as vchPartySuffix
  , NULL "vchContractEntityE_mailAddress"
  , "vchPartyID"
  , Max("vchPartyIDQualifier") as vchPartyIDQualifier
  ,Max("vchTrustRevocabilityIndicator") as vchTrustRevocabilityIndicator
  ,NULL  "vchContractEntityPhoneNumber"
  ,NULL "vchContractEntityPhoneExtension"
  ,Max(v."vchFiller1") as vchFiller1
  ,Max(v."vchRejectCode") as vchRejectCode
  , Max("vchContractEntityAddressLine1") as vchContractEntityAddressLine1
  ,   Max("vchContractEntityAddressLine2") as vchContractEntityAddressLine2
  , Max("vchContractEntityCity") as vchContractEntityCity
  ,  Max("vchContractEntityState") as vchContractEntityState
  ,  Max("vchContractEntityZip") as vchContractEntityZip
  ,  Max("vchContractEntityAddressLine3") as vchContractEntityAddressLine3
  , Max("vchContractEntityAddressLine4") as vchContractEntityAddressLine4
  , Max("vchContractEntityAddressLine5") as vchContractEntityAddressLine5
  ,Max("vchPartyDateofBirth") as vchPartyDateofBirth
  ,  Max("vchPartyAddressLine1") as vchPartyAddressLine1
  ,  Max("vchContractStatus") as vchContractStatus
  , string_agg(distinct trim(s."vchAgentTaxID"),',') as vchAgentTaxID
  , "vchPartyRole"
  ,Max(v."vchAdvisorLabel") as vchAdvisorLabel
  ,v."vchFileName"
  ,Max("vchpartycity") as vchpartycity
  ,Max("vchpartystate") as vchpartystate
  ,Max("vchpartypostalcode") as vchpartypostalcode
  ,string_agg(distinct trim(s."vchAgentFirstName")||' 
'||trim(s."vchAgentMiddleName")||' '||trim(s."vchAgentLastName"),',') as 
"AgentName"

FROM
  TABLE1 as v
  join"DTCC".TABLE2 AS s on v."vchContractNumber" = s."vchContractNumber"
where v."bFetch" = false
GROUP BY
  "vchPartyRole"
  ,v."vchFileName"
  ,"vchPartyID"
  ,"vchPartyRole"
  ,"vchContractPartyRoleQualifier"
  , v."vchContractNumber"

UNION SELECT distinct
  max(j."vchSubmittersCode")  as  vchSubmittersCode
  ,max(j."vchRecordType")  as  vchRecordType
  ,max(j."vchSequenceNumber") as vchSequenceNumber
  , j."vchContractNumber"
  , max("vchContractEntityTypeCode") as vchContractEntityTypeCode
  ,"vchContractEntityRole"
  ,max("vchContractEntityNatural_Non_NaturalNameIndicator") as 
vchContractEntityNatural_Non_NaturalNameIndicator

  ,max("vchContractEntityLastName") as vchContractEntityLastName
  ,  max("vchContractEntityFirstName") as vchContractEntityFirstName
  , max("vchContractEntityMiddleName") as vchContractEntityMiddleName
  , max("vchContractEntityPrefix") as vchContractEntityPrefix
  ,  max("vchContractEntitySuffix") as vchContractEntitySuffix
  ,   max("vchContractEntityE_mailAddress") as 
vchContractEntityE_mailAddress

  , "vchContractEntityPersonalIdentifier"
  ,  max("vchContractEntityPersonalQualifier") as 
vchContractEntityPersonalQualifier

  , max("vchTrustRevocabilityIndicator") as vchTrustRevocabilityIndicator
  ,  max("vchContractEntityPhoneNumber") as vchContractEntityPhoneNumber
  , max("vchContractEntityPhoneExtension") as 
vchContractEntityPhoneExtension

  ,  max(j."vchFiller1") as vchFiller1
  , max(j."vchRejectCode") as vchRejectCode
  , max("vchcontractentityaddressline1") as vchcontractentityaddressline1
  ,max("vchcontractentityaddressline2") as vchcontractentityaddressline2
  ,max("vchcontractentitycity") as vchcontractentitycity
  , max("vchcontractentitystate") as vchcontractentitystate
  ,max("vchcontractentityzip") as vchcontractentityzip
  , max("vchcontractentityaddressline3") as vchcontractentityaddressline3
  ,max("vchcontractentityaddressline4") as vchcontractentityaddressline4
  ,max("vchcontractentityaddressline5") as vchcontractentityaddressline5
  , NULL "vchPartyDateofBirth"
  ,  NULL "vchPartyAddressLine1"
  ,  NULL "vchContractStatus"
  , string_agg(distinct trim(j."vchagenttaxid"),',') as vchagenttaxid
  , "vchContractEntityRole"
  , max(j."vchAdvisorLabel") as vchAdvisorLabel
  ,j."vchFileName"
  ,NULL "vchpartycity"
  , NULL "vchpartystate"
  ,NULL "vchpartypostalcode"
  ,  trim(max(k."vchAgentFirstName"))||' 
'||trim(max(k."vchAgentMiddleName"))||' 
'||trim(max(k."vchAgentLastName")) as "AgentName"

FROM TABLE3 as j
  join  TABLE2 AS k on j."vchagenttaxid" = k."vchAgentTaxID"
  where j."bFetch" = false
GROUP BY
  j."vchFileName"
  ,"vchContractEntityRole"
  , "vchContractEntityRole"
  ,j."vchContractNumber"
  ,"vchContractEntityPersonalIdentifier"

;


I can see a lot of max(str

Re: heads up on large text fields.

2018-09-22 Thread Andreas Kretschmer




Am 22.09.2018 um 02:28 schrieb Rob Sargent:

However, I get into deep dodo when I try redirecting psql output such as

select ld from gt.ld\g /tmp/regen.file



Works for me if I start psql with -t -A -o /path/to/file 
(PG 10.5, but psql from 11beta3).


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Why my query not using index to sort?

2018-09-28 Thread Andreas Kretschmer




Am 28.09.2018 um 16:49 schrieb Arup Rakshit:

 Planning time: 1.867 ms
 Execution time: 0.252 ms
(6 rows)

Why it is showing *6 rows*? Also it seems less than what I had before:



The "(6 rows)" counts the lines of the EXPLAIN output itself ;-)


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Why my query not using index to sort?

2018-09-28 Thread Andreas Kretschmer




Am 28.09.2018 um 18:03 schrieb Arup Rakshit:
Also, I meant that the execution time is less than what I had before with 
enable_sort = off. 


Yeah, but not by that much. It's a different plan.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Price Request MXO-PPQ-101164

2018-10-01 Thread Andreas Kretschmer




Am 01.10.2018 um 20:02 schrieb Pichardo, Ari:


Hello team,

My name is Ari and I work for SoftwareOne, an international software 
consulting and reseller company.


We have an end user interested in:

* 1 2ndQPostgres Platinum

For their work environment in Mexico.

Do you work with resellers?

Can we get a price quote?





Hi,

I'm working for 2ndQ and I will forward your mail to our headquarters.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: regarding bdr extension

2018-10-02 Thread Andreas Kretschmer




Am 02.10.2018 um 15:29 schrieb Adrian Klaver:




Is this PGDG repository useful for configuring BDR v3?


BDR is a third party extension from 2nd Quadrant, they would be the 
ones to ask about configuring. 


I said it already: BDR3 is not public, it is only for our customers. You 
will need your own support contract.



Durgamahesh Manne, please contact us if you are interested in BDR version 3.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: We are facing "PANIC: could not read from control file:Success error while starting the database.

2018-10-04 Thread Andreas Kretschmer




Am 04.10.2018 um 17:29 schrieb Raghavendra Rao J S V:

Hi All,

archive_mode is unfortunately turned on in my Postgres 9.2 database.

Due to that, disk space was 100% full. We have removed a few old xlog 
files. Now space is available, but we are still facing the problem below 
when we try to start the database.




Don't delete WAL files!

If you don't want archive_mode you can set it to off (restart 
required), or set archive_command to, for instance, "cd ." or 
"/bin/true", which requires only a reload.
After some time (a checkpoint) the files will disappear and the space 
should come back.
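
In other words (reload with pg_reload_conf() or pg_ctl reload):

    # postgresql.conf
    archive_command = '/bin/true'    # or: 'cd .'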




Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Slot issues

2018-10-14 Thread Andreas Kretschmer




Am 14.10.2018 um 20:45 schrieb bhargav kamineni:

I am getting this ERROR while starting my second slave server

PANIC:  too many replication slots active before shutdown
HINT:  Increase max_replication_slots and try again.

max_replication_slots on my master is 2 and one of them is already 
active for another slave.
Do I need to increase this parameter for the second slave to work? If 
so, what's the reason?


Please check on the master how many slots are in use:

select * from pg_replication_slots ;

Please also tell us the exact version you are using.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: [External] Slot issues

2018-10-14 Thread Andreas Kretschmer




Am 14.10.2018 um 21:02 schrieb Vijaykumar Jain:

from the doc, it says defaults are 10,



That's only valid for version 10, not for 9.x, where the default was 0.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Slot issues

2018-10-14 Thread Andreas Kretschmer




Am 14.10.2018 um 21:07 schrieb bhargav kamineni:
Postgres version is 9.5.14; currently only one slot is there and it 
is active.


That's strange. Please check whether this setting is actually active (select 
from pg_settings). Please also check max_wal_senders.


No further ideas at the moment :-(

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Which index is used in the index scan.

2018-10-16 Thread Andreas Kretschmer
On 17 October 2018 07:39:48 CEST, "Sakai, Teppei"  
wrote:
>Hi
>
>We have question about index scan.
>
>We created table and indexes and executed SQL, but the index used by
>day is different.
>One day the index of c1 was used, and on another day the index of c2
>was used.
 - CREATE TABLE tbl1 (c1 varchar, c2 varchar, c3 timestamptz);
 - CREATE INDEX tbl1_idx_c1 ON tbl1(c1);
 - CREATE INDEX tbl1_idx_c2 ON tbl1(c2);
 - SELECT * FROM tbl1 WHERE c1 = 'XXX' AND C2 = 'YYY' AND C3 >=
'yyyy/mm/dd' AND C3 <= 'yyyy/mm/dd';
>
>We calculated cost from the source code using pg_class and
>pg_statistic, but the cost values at each index were the same.
>We did not rebuild or update indexes.
>
>What are the reasons why the index has been changed besides the
>following factors?
> - cost values
> - Index creation order


There are a lot of things the planner has to consider, for instance data 
distribution, the age of the statistics, and table and index bloat. You should 
compare the plans (using EXPLAIN (ANALYSE, BUFFERS)). A combined index over 
all three columns would be better for this query.
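
For instance (untested sketch):

    CREATE INDEX tbl1_idx_c1_c2_c3 ON tbl1 (c1, c2, c3);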


>
>PostgreSQL version : 9.5.2

There are several minor updates available.


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Replication question

2018-10-22 Thread Andreas Kretschmer




Am 22.10.2018 um 15:53 schrieb Scot Kreienkamp:
I thought I read somewhere that in 9.6, as long as the WAL log is 
available on disk or in the archive the replication server will 
provide that to the replication client, and my archive NFS mount 
didn’t have to be available to all replication clients.  It doesn’t 
seem to be operating that way though.  Did I completely remember that 
wrong or did I misunderstand something?


No, but you can define a restore_command within your recovery.conf.

https://www.postgresql.org/docs/current/static/continuous-archiving.html
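
A minimal sketch for recovery.conf on the standby (the archive path and
conninfo are examples):

    standby_mode = 'on'
    primary_conninfo = 'host=master port=5432 user=replicator'
    restore_command = 'cp /mnt/archive/%f "%p"'

With that, the standby falls back to the archive whenever a WAL segment
is no longer available via streaming.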


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: BDR: moving a node

2019-08-25 Thread Andreas Kretschmer




Am 25.08.19 um 18:11 schrieb E:
What is the process to update the DSN? I assume I'll have to relay the 
changes in my pg_hba.conf, but do not understand, and don't want to 
tinker, with BDR before obtaining some educated advice.


I apologize if my question comes across as dumb. I understand I have 
to update bdr.bdr_connections, but how, and what do I restart? What 
about the server that was moved, any updates needed there?


The safest way would be to part that node from the cluster and rejoin it 
afterwards.


You have (at least) to check all replication slots; please also check 
pg_replication_identifier (use pg_replication_identifier_drop() to drop 
the node), bdr.bdr_nodes and bdr.bdr_connections. Please keep in mind 
that BDR version 1 will be out of support soon, and PG 9.4 as well. 
Current development of BDR is BDR 3.6.x; it works with PG 10+.




Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_receivexlog or archive_command

2019-09-23 Thread Andreas Kretschmer




Am 23.09.19 um 10:25 schrieb Vikas Sharma:

Hi,

I am wondering which one is the best way to archive the xlogs for 
Backup and Recovery - pg_receivexlog or archive_command.


pg_receivexlog seems best suited because the file is streamed and 
archived as it is being written, while archive_command only copies a 
WAL segment once it has been fully written.




You can use both of them, and you should also consider Barman.
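
A sketch of the streaming variant (host, user, slot and path are examples;
create the slot first with pg_create_physical_replication_slot(), and note
that from version 10 on the tool is called pg_receivewal):

    pg_receivexlog -h primary.example.com -U replicator \
        -D /archive/wal --slot=archiveslot

Using a replication slot makes sure the primary keeps any WAL the
receiver has not fetched yet.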


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_receivexlog or archive_command

2019-09-23 Thread Andreas Kretschmer




Am 23.09.19 um 13:44 schrieb Luca Ferrari:

On Mon, Sep 23, 2019 at 10:55 AM Andreas Kretschmer
 wrote:

you can use both of them, and you should consider "Barman".

If I remember well Barman uses pg_receivexlog when streaming, and
archive_command when doing a "normal" backup.


Barman before version 2 supports only archive_command; version 2 and higher supports both (see the sketch below).
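
For the streaming mode, the relevant part of barman.conf looks roughly like 
this (the server name is an example):

[mydb]
streaming_archiver = on
slot_name = barman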


Also pgbackrest is another good tool for backup.
The idea here should be not reinventing the wheel.

ack.

Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Postgres 12: backend crashes when creating non-deterministic collation

2019-10-04 Thread Andreas Kretschmer




Am 04.10.19 um 12:13 schrieb Thomas Kellerer:
I was trying to learn how the new non-deterministic collations in v12 
work, but the following makes the backend crash:


CREATE COLLATION de_ci (provider = icu, locale = 'de-x-icu', 
deterministic = false);


Which leads to:

2019-10-04 11:54:23 CEST   LOG:  server process (PID 7540) was 
terminated by exception 0xC005

2019-10-04 11:54:23 CEST   DETAIL:  Failed process was running:
CREATE COLLATION de_ci (provider = icu, locale = 'de-x-icu', 
deterministic = false)
2019-10-04 11:54:23 CEST   HINT:  See C include file "ntstatus.h" for 
a description of the hexadecimal value.
2019-10-04 11:54:23 CEST   LOG:  terminating any other active server 
processes
2019-10-04 11:54:23 CEST   WARNING:  terminating connection because of 
crash of another server process
2019-10-04 11:54:23 CEST   DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared 
memory.


This is on Windows 10 with the Postgres 12 binaries from EDB.
Exact Postgres version is: PostgreSQL 12.0, compiled by Visual C++ 
build 1914, 64-bit

The database was pg_upgraded if that makes any difference




works for me, with:

psql (12rc1 (Ubuntu 12~rc1-1.pgdg18.04+1))
Type "help" for help.

test=# CREATE COLLATION de_ci (provider = icu, locale = 'de-x-icu', deterministic = false);
CREATE COLLATION
test=*# commit;
COMMIT
test=#


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: GPS coordinates problem

2019-10-08 Thread Andreas Kretschmer




Am 08.10.19 um 12:50 schrieb Timmy Siu:
Now, I need Global Position System coordinates as a data type. How do 
I define it in Postgresql 11 or 12?


consider PostGIS.
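
A minimal sketch with PostGIS, storing GPS (WGS84) points:

CREATE EXTENSION postgis;
CREATE TABLE poi (id serial PRIMARY KEY, geom geometry(Point, 4326));
INSERT INTO poi (geom) VALUES (ST_SetSRID(ST_MakePoint(13.4, 52.5), 4326));

ST_MakePoint() takes longitude first, then latitude; SRID 4326 is the usual 
coordinate system for GPS data.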


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Postgres Point in time Recovery (PITR),

2019-10-18 Thread Andreas Kretschmer
On 18 October 2019 07:59:21 CEST, Daulat Ram  wrote:
>Hello All,
>Can you please share some ideas and scenarios how we can do the PITR in
>case of disaster.
>
>
>Thanks,


Consider Barman.
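
A sketch of a point-in-time restore with Barman (server name, backup id and 
target directory are examples):

barman recover --target-time '2019-10-17 12:00:00' mydb 20191016T230000 /var/lib/postgresql/11/main

Barman copies the base backup plus the required WALs and prepares the 
recovery settings for you.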


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: Can you please suggest how to configure hot_standby_feedback?

2019-10-31 Thread Andreas Kretschmer




Am 31.10.19 um 06:21 schrieb M Tarkeshwar Rao:


Can you please suggest how to configure hot_standby_feedback?




Turn it on if you want to execute long-running queries on the standby, but 
keep in mind it can lead to more bloat on the master.
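
A minimal sketch, executed on the standby:

ALTER SYSTEM SET hot_standby_feedback = on;
SELECT pg_reload_conf();

The standby then tells the master which old row versions its queries still 
need, so VACUUM on the master will not remove them.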



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: security on user for replication

2019-11-11 Thread Andreas Kretschmer




Am 11.11.19 um 14:26 schrieb PegoraroF10:
How can I hide that info from users which are connected to my replica 
server



You can use a .pgpass file; see the documentation.
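
A sketch of a ~/.pgpass entry (the fields are host:port:database:user:password; 
the values here are examples):

replica.example.com:5432:replication:replicator:s3cret

The file must be readable only by its owner (chmod 0600 ~/.pgpass); the 
password can then be left out of the connection string.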


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Upgrade PostgreSQL 9.6 to 10.6

2019-12-09 Thread Andreas Kretschmer




Am 09.12.19 um 16:43 schrieb github kran:

Hello PostgreSQL Team,
I would like to know what would be the best way to do Database 
migration from PostgreSQL 9.6 engine to 10.6 by creating a new cluster 
in 10.6 and then copy data.


Size of the cluster is 3.8 TB.

1) It would be a new cluster we want to create on 10.6 and then copy 
data form 9.6 and shutdown 9.6
2) Cluster today contains 1 master instance and another instance for 
reader
3) Can this be done without a downtime ?  what are the options I have 
to do this without making this complicated . We have about 30 -40 
tables to do the migration.





Yes, you can use pglogical from us (2ndQuadrant); a rough sketch of the steps 
follows. Please visit our homepage, where you can find packages for 9.6. 
There are also some blog postings about all that in our blog.
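
A rough sketch, with assumed node names and DSNs. On the 9.6 provider:

CREATE EXTENSION pglogical;
SELECT pglogical.create_node(node_name := 'provider', dsn := 'host=oldhost dbname=mydb');
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

and on the new cluster, after restoring the schema:

CREATE EXTENSION pglogical;
SELECT pglogical.create_node(node_name := 'subscriber', dsn := 'host=newhost dbname=mydb');
SELECT pglogical.create_subscription(subscription_name := 'upgrade', provider_dsn := 'host=oldhost dbname=mydb');

Once the subscriber has caught up you can switch the application over and 
shut down 9.6.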



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Upgrade PostgreSQL 9.6 to 10.6

2019-12-10 Thread Andreas Kretschmer




Am 09.12.19 um 23:37 schrieb github kran:
Great, thanks Andreas, So this seems to be a good feature using the 
core concept of replication. Can I use this extension and do the major 
upgrade without paying ?.


yes, this extension is free.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Backup and Restore

2019-12-12 Thread Andreas Kretschmer




Am 12.12.19 um 16:12 schrieb Dor Ben Dov:

What is the most common used back and restore solution for postgres ?


most of our customers are using Barman, which is not a surprise since it 
is developed by us ;-)
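
The day-to-day commands are small, e.g. (the server name is an example):

barman check mydb
barman backup mydb
barman list-backup mydb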



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Backup and Restore

2019-12-25 Thread Andreas Kretschmer




Am 25.12.19 um 14:34 schrieb Dor Ben Dov:


Hi All,

What Is the best recommended / used tool for backup and restore that 
you suggest or work with postgres ?



Depends on your needs; most of our customers are using Barman.

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_stat_statements extension

2020-01-16 Thread Andreas Kretschmer
On 13 January 2020 20:15:21 CET, Rushikesh socha  wrote:
>HI, Is there any good link that shows how to install pg_stat_statements
>extension
>I am getting below error
>
>postgres=# CREATE EXTENSION pg_stat_statements;
>ERROR:  could not open extension control file
>"/usr/pgsql-11/share/extension/pg_stat_statements.control": No such
>file or
>directory
>
>
>Thanks.



Have you installed the contrib package, and have you loaded the shared 
library via postgresql.conf? A sketch of the steps is below.
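
Assuming the PGDG packages on RHEL/CentOS (the path in the error message 
suggests that):

yum install postgresql11-contrib

then in postgresql.conf (a restart is required afterwards):

shared_preload_libraries = 'pg_stat_statements'

and finally, in the database:

CREATE EXTENSION pg_stat_statements;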

Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




Re: How to avoid UPDATE on same data in table ?

2020-02-02 Thread Andreas Kretschmer




Am 02.02.20 um 10:24 schrieb Condor:

CREATE TRIGGER last_changes
  BEFORE UPDATE ON status_table
  FOR EACH ROW
  WHEN (OLD.* IS DISTINCT FROM NEW.*) 


try to exclude the column lastchange from the comparison.


Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: How to avoid UPDATE on same data in table ?

2020-02-02 Thread Andreas Kretschmer




Am 02.02.20 um 14:37 schrieb Andreas Kretschmer:



Am 02.02.20 um 10:24 schrieb Condor:

CREATE TRIGGER last_changes
  BEFORE UPDATE ON status_table
  FOR EACH ROW
  WHEN (OLD.* IS DISTINCT FROM NEW.*) 


try to exclude the column lastchange from the comparison.




test=*# select ctid, * from status_table ;
 ctid  | rowid | status0 |      lastchage
-------+-------+---------+---------------------
 (0,3) |    11 |       1 |
 (0,5) |    12 |       4 | 2020-02-02 15:40:42
(2 rows)

test=*# UPDATE status_table SET status0 = 4 WHERE rowid = 12;
UPDATE 1
test=*# commit;
COMMIT
test=# select ctid, * from status_table ;
 ctid  | rowid | status0 |      lastchage
-------+-------+---------+---------------------
 (0,3) |    11 |       1 |
 (0,6) |    12 |       4 | 2020-02-02 15:40:42
(2 rows)

test=*# \d status_table
                        Table "public.status_table"
  Column   |              Type              | Collation | Nullable | Default
-----------+--------------------------------+-----------+----------+---------
 rowid     | integer                        |           |          |
 status0   | integer                        |           |          |
 lastchage | timestamp(0) without time zone |           |          |
Triggers:
    last_changes BEFORE UPDATE ON status_table FOR EACH ROW WHEN (old.rowid IS DISTINCT FROM new.rowid OR old.status0 IS DISTINCT FROM new.status0) EXECUTE FUNCTION log_last_changed()



Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: How to avoid UPDATE on same data in table ?

2020-02-02 Thread Andreas Kretschmer




Am 02.02.20 um 18:18 schrieb Tom Lane:


https://www.postgresql.org/docs/current/functions-trigger.html

regards, tom lane

cool.


Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Backup & Restore

2020-02-24 Thread Andreas Kretschmer




Am 24.02.20 um 09:18 schrieb Dor Ben Dov:


Hi All,

What is your backup and restore solution in production when working 
with Postgres ?




most of our customers using Barman: https://www.pgbarman.org/


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Who mades the inserts?

2020-03-09 Thread Andreas Kretschmer




Am 09.03.20 um 13:52 schrieb Durumdara:

Do you know any query which can show me the inserts per database?

And does it work like a TPS query, so that I need to take the difference 
between values measured at two points in time?


yes, you can use tup_inserted from pg_stat_database.
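
E.g.:

SELECT datname, tup_inserted FROM pg_stat_database;

The counters are cumulative since the last statistics reset, so yes: sample 
them at two points in time and take the difference to get inserts per interval.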


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Real application clustering in postgres.

2020-03-09 Thread Andreas Kretschmer




Am 05.03.20 um 13:07 schrieb Laurenz Albe:

There is a closed-source implementation that you can buy:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/

But multi-master replication is complicated to get right, and
an application that uses it has to be specifically designed for that.


Depends. We have an automation tool for setup (TPAexec), and as long as you 
are not using special features like CAMO (Commit At Most Once) you can 
use applications without changes.

(You have to set some parameters, for instance for sequences.)

It works well, and we have a lot of happy customers. Some of them use it 
for globally distributed databases, others for high availability with five 
nines.

Here you can find more information:
https://www.2ndquadrant.com/en/resources/postgres-bdr-2ndquadrant/


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: PostgreSQL native multi-master

2020-04-07 Thread Andreas Kretschmer




Am 07.04.20 um 13:39 schrieb Vano Beridze:

Hello,

What are the plans to support multi-master natively?
What solution would you recommend at this point? preferably free.


BDR3 works well for our customers, but it isn't free. You can ask us for 
more information.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg11: Query using index, but only for half the partitions

2020-04-23 Thread Andreas Kretschmer




Am 23.04.20 um 10:13 schrieb Stefan Knecht:
Seq Scan on snap_20200225 s  (cost=0.00..1.19 rows=1 width=12) 


The partition is very small, so it's cheaper to scan only the table (one 
block) than the index plus the table (1 + 1 blocks).



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg11: Query using index, but only for half the partitions

2020-04-23 Thread Andreas Kretschmer




Am 23.04.20 um 12:30 schrieb Stefan Knecht:
There's no question that this is more expensive than just reading the 
95 rows from the index directly and returning them


Not sure; you can play with

enable_seqscan = off

and compare the costs. What is the setting for random_page_cost?
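
E.g., in one session:

SET enable_seqscan = off;
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your query here
SHOW random_page_cost;

If the data sits on SSDs, a lower value like random_page_cost = 1.1 often 
yields more realistic index costs than the default of 4.0, which assumes 
spinning disks.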


Regards, Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Clarification related to BDR

2020-05-14 Thread Andreas Kretschmer




Am 14.05.20 um 06:37 schrieb Santhosh Kumar:
Can you please help me understand, why the following news is published 
in "postgresql" with an encouraging message acknowledging BDR as an 
open source? We invested time and effort to use BDR only to understand 
at a later point in time, that it is not. Kindly clarify, if I am 
missing anything. 


BDR version 1 was open source; versions 2 and 3 are not. Version 1 (a 
patched 9.4) and version 2 (community PG 9.6) are out of support now; the 
stable and supported version is 3 (PG 10 and 11, with 12 coming soon). 
You need a usage license, which is bundled with a Diamond support 
subscription.



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: pg_dump crashes

2020-05-22 Thread Andreas Kretschmer




Am 22.05.20 um 14:37 schrieb Nico De Ranter:

Postgres version: 9.5


Which minor version?

Can you check if the table has a TOAST table? Can you try to select all 
columns except the TOASTed ones?

Maybe the data corruption is only in the TOAST table; see the sketch below.
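
To find the TOAST table (the table name is an example):

SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'your_table';

If selecting only the non-TOASTed columns works while the full SELECT 
crashes, the corruption is most likely in the TOAST table, and you can 
narrow it down row by row with LIMIT/OFFSET.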


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Regarding creation of gin index on column that has varchar datatype

2020-05-23 Thread Andreas Kretschmer




Am 23.05.20 um 12:37 schrieb Durgamahesh Manne:

Hi

Respected to PGDG GLOBAL TEAM

I am getting this error( ERROR:  data type character varying has no 
default operator class for access method "gin"
HINT:  You must specify an operator class for the index or define a 
default operator class for the data type ) while i try to create gin 
index on vch_message column of slp01 table (CREATE INDEX ON slp01 
using gin(vch_message);)


vch_message column has lot of this info like 
{"requestBody":{"firstName":"SALVATORE","lastName":"NAPOLITANO","email":"remuda...@hotmail.com 
","personID":"C48268","ClientSSN":"153520698","advisorPersonID":["10214","15270","15271","15272"]},"header":{"cache-control":"no-cache","content-type":"application/json","authorization":"Basic 
Y3JlYXRlVXNlcnM6ZGFrdm5laXdvbjRpOWZqb3duY3VpMzRmdW4zOTQ4aGY=","accept":"application/json, 
text/json, text/x-json, text/javascript, application/xml, text/xml"






The string looks like JSON (with the correct end of the string missing). 
Please check: if you are storing JSON values, you should use the proper 
data type, JSON/JSONB. In this case you can create a GIN index on it, as 
sketched below.
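
A sketch, assuming all stored values are in fact valid JSON:

ALTER TABLE slp01 ALTER COLUMN vch_message TYPE jsonb USING vch_message::jsonb;
CREATE INDEX ON slp01 USING gin (vch_message);

If you only need substring search on the plain text instead, the pg_trgm 
extension provides a GIN operator class for varchar:

CREATE EXTENSION pg_trgm;
CREATE INDEX ON slp01 USING gin (vch_message gin_trgm_ops);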



Regards, Andreas
(and please don't email me private the same question)

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [HELP] query regarding replication

2020-06-09 Thread Andreas Kretschmer




Am 09.06.20 um 09:55 schrieb Praveen Kumar K S:
Can I achieve master/slave streaming replication by setting WAL_LEVEL 
to logical on master ? Are there any drawbacks of it ?


Yes, no problem. The WALs would be a bit larger, that's all.
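
E.g.:

ALTER SYSTEM SET wal_level = 'logical';

followed by a restart of the server.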


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: [HELP] query regarding replication

2020-06-09 Thread Andreas Kretschmer




Am 09.06.20 um 10:44 schrieb Praveen Kumar K S:

Thanks. Will this approach replicate DDL changes ?


sure.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Postgresql HA Cluster

2020-06-29 Thread Andreas Kretschmer




Am 29.06.20 um 09:33 schrieb Laurenz Albe:

That would not provode a multi-master solution, though.  There are some
commercial solutions for that, but be warned that it would require non-trivial
changes to your application.


not really with BDR3 ;-)


Andreas


--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com





Re: Wal_keep_segment value too high

2020-07-10 Thread Andreas Kretschmer
On 10 July 2020 10:26:25 CEST, Brajendra Pratap Singh 
 wrote:
>Hi,
>
>What will happen if the wal_keep_segments value is too high ,is this

Wasted disk space.
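
To put a number on it: with the default 16 MB segment size, 
wal_keep_segments = 1000 pins up to 1000 × 16 MB ≈ 16 GB in pg_wal 
(pg_xlog before version 10), whether the standbys still need it or not.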

What do you want to achieve?


Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company




bug in PG13?

2020-10-14 Thread Andreas Kretschmer

Hi all,

It seems to me to be a bug. I have a partitioned table:


test=*# select version();
                                                             version
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 13.0 (Ubuntu 13.0-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit
(1 row)

test=*# \d+ kunden
                                              Partitioned table "public.kunden"
   Column   |  Type   | Collation | Nullable |              Default               | Storage  | Stats target | Description
------------+---------+-----------+----------+------------------------------------+----------+--------------+-------------
 id         | integer |           | not null | nextval('kunden_id_seq'::regclass) | plain    |              |
 kundenname | text    |           | not null |                                    | extended |              |
 datum      | date    |           | not null |                                    | plain    |              |
Partition key: HASH (kundenname)
Indexes:
    "kunden_pkey" PRIMARY KEY, btree (id, kundenname, datum)
Partitions: kunden_0 FOR VALUES WITH (modulus 4, remainder 0), PARTITIONED,
            kunden_1 FOR VALUES WITH (modulus 4, remainder 1), PARTITIONED,
            kunden_2 FOR VALUES WITH (modulus 4, remainder 2), PARTITIONED,
            kunden_3 FOR VALUES WITH (modulus 4, remainder 3), PARTITIONED

test=*# \d+ kunden_0
                                             Partitioned table "public.kunden_0"
   Column   |  Type   | Collation | Nullable |              Default               | Storage  | Stats target | Description
------------+---------+-----------+----------+------------------------------------+----------+--------------+-------------
 id         | integer |           | not null | nextval('kunden_id_seq'::regclass) | plain    |              |
 kundenname | text    |           | not null |                                    | extended |              |
 datum      | date    |           | not null |                                    | plain    |              |
Partition of: kunden FOR VALUES WITH (modulus 4, remainder 0)
Partition constraint: satisfies_hash_partition('16574'::oid, 4, 0, kundenname)
Partition key: RANGE (datum)
Indexes:
    "kunden_0_pkey" PRIMARY KEY, btree (id, kundenname, datum)
Partitions: kunden_0_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            kunden_0_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            kunden_0_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            kunden_0_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            kunden_0_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            kunden_0_default DEFAULT

test=*#


this plan seems okay, partition pruning is working as expected:


test=*# explain analyse select * from kunden where kundenname = 'Kunde 11' and datum = current_date;
                                                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.15..5030.91 rows=6 width=40) (actual time=10.068..14.326 rows=0 loops=1)
   Workers Planned: 1
   Workers Launched: 1
   ->  Parallel Append  (cost=0.15..4030.31 rows=6 width=40) (actual time=0.004..0.005 rows=0 loops=2)
         Subplans Removed: 5
         ->  Parallel Index Only Scan using kunden_0_default_pkey on kunden_0_default kunden_1  (cost=0.15..20.16 rows=1 width=40) (actual time=0.008..0.008 rows=0 loops=1)
               Index Cond: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Heap Fetches: 0
 Planning Time: 0.303 ms
 Execution Time: 14.364 ms
(10 rows)



But if I switch ``parallel_leader_participation`` to off, the plan changes 
in a bad way:



test=*# set parallel_leader_participation to off;
SET
test=*# explain analyse select * from kunden where kundenname = 'Kunde 11' and datum = current_date;
                                                           QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------
 Gather  (cost=1000.00..4833.46 rows=6 width=21) (actual time=37.188..40.386 rows=0 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Parallel Append  (cost=0.00..3832.86 rows=2 width=21) (actual time=23.559..23.562 rows=0 loops=2)
         ->  Seq Scan on kunden_0_2016 kunden_2  (cost=0.00..1446.92 rows=1 width=17) (actual time=12.094..12.094 rows=0 loops=1)
               Filter: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Rows Removed by Filter: 60624
         ->  Seq Scan on kunden_0_2015 kunden_1  (cost=0.00..1445.22 rows=1 width=17) (actual time=10.313..10.313 rows=0 loops=1)
               Filter: ((kundenname = 'Kunde 11'::text) AND (datum = CURRENT_DATE))
               Rows Removed by Filter: 60527
         ->  Seq Scan on kunden_0_2
