Re: Implementing pgaudit extension on Microsoft Windows

2019-02-12 Thread Pavel Stehule
On Tue, 12 Feb 2019 at 8:57, Niels Jespersen  wrote:

> Thanks Tom
>
> alter system set pgaudit.log = 'all'; -- Works, it results in 2019-02-12
> 08:51:49.109 CET [13560] LOG:  parameter "pgaudit.log" changed to "all"
> after select pg_reload_conf();
> alter system set pgaudit.logx = 'all'; -- Notice spelling error logx:
> Fails, it results immediately in 2019-02-12 08:53:04.412 CET [12856]
> ERROR:  unrecognized configuration parameter "pgaudit.logx" 2019-02-12
> 08:53:04.412 CET [12856] STATEMENT:  alter system set pgaudit.logx = 'all';
>
> pgaudit is loaded, as shown by:
>
> localhost postgres@postgres#show shared_preload_libraries;
>  shared_preload_libraries
> --
>  pgaudit
> (1 row)
>
> localhost postgres@postgres#
>
> pgaudit.dll is locked by postgres.exe if I try to remove it from the lib
> folder.
>
> All looks normal, except no logs from auditing.
>

and it doesn't work either if you set it in postgresql.conf?

Pavel


> Regards Niels
>
>
> -Original message-
> From: Tom Lane 
> Sent: 11 February 2019 15:44
> To: Niels Jespersen 
> Cc: 'pgsql-general@lists.postgresql.org' <
> pgsql-general@lists.postgresql.org>
> Subject: Re: Implementing pgaudit extension on Microsoft Windows
>
> Niels Jespersen  writes:
> > Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
> > Copied pgaudit.control and pgaudit--1.3.sql to C:\Program Files\PostgreSQL\11\share\extension
> > Set shared_preload_libraries = 'pgaudit'
> > Restart PostgreSQL
> > Run "CREATE EXTENSION pgaudit;"
> > alter system set pgaudit.log = 'all';
> > select pg_reload_conf();
>
> > Nothing enters the log-file from pgaudit. Creating tables (relations,
> sorry). Selecting from tables, nothing in the log.
>
> Hm, what you describe above looks right.
>
> > I can see that alter system set pgaudit.logx = 'all';  complains about
> "unrecognized configuration parameter", so something is known about pgaudit.
>
> No, that means the postmaster *doesn't* have pgaudit loaded.
>
> The fact that CREATE EXTENSION worked implies that you've got the library
> correctly built, so I think this must boil down to the
> "shared_preload_libraries" setting not having taken.  You could cross-check
> that by seeing what "SHOW shared_preload_libraries"
> prints.
>
> I wonder if you forgot to remove the comment marker (#) on the
> shared_preload_libraries line?
>
> regards, tom lane
>
>


Re: Implementing pgaudit extension on Microsoft Windows

2019-02-12 Thread Pavel Stehule
On Tue, 12 Feb 2019 at 8:59, Pavel Stehule  wrote:

>
>
> On Tue, 12 Feb 2019 at 8:57, Niels Jespersen  wrote:
>
>> Thanks Tom
>>
>> alter system set pgaudit.log = 'all'; -- Works, it results in 2019-02-12
>> 08:51:49.109 CET [13560] LOG:  parameter "pgaudit.log" changed to "all"
>> after select pg_reload_conf();
>> alter system set pgaudit.logx = 'all'; -- Notice spelling error logx:
>> Fails, it results immediately in 2019-02-12 08:53:04.412 CET [12856]
>> ERROR:  unrecognized configuration parameter "pgaudit.logx" 2019-02-12
>> 08:53:04.412 CET [12856] STATEMENT:  alter system set pgaudit.logx = 'all';
>>
>> pgaudit is loaded, as shown by:
>>
>> localhost postgres@postgres#show shared_preload_libraries;
>>  shared_preload_libraries
>> --
>>  pgaudit
>> (1 row)
>>
>> localhost postgres@postgres#
>>
>> pgaudit.dll is locked by postgres.exe if I try to remove it from the lib
>> folder.
>>
>> All looks normal, except no logs from auditing.
>>
>
> and it doesn't work either if you set it in postgresql.conf?
>

don't forget to reload:

select pg_reload_conf();
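
As an aside, a reload is only enough for pgaudit.log itself; the
shared_preload_libraries setting takes effect only at a full server restart.
A quick sanity check from SQL might look like this (a minimal sketch):

show shared_preload_libraries;   -- should list pgaudit
show pgaudit.log;                -- should return 'all' after the reload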



> Pavel
>
>
>> Regards Niels
>>
>>
>> -Original message-
>> From: Tom Lane 
>> Sent: 11 February 2019 15:44
>> To: Niels Jespersen 
>> Cc: 'pgsql-general@lists.postgresql.org' <
>> pgsql-general@lists.postgresql.org>
>> Subject: Re: Implementing pgaudit extension on Microsoft Windows
>>
>> Niels Jespersen  writes:
>> > Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
>> > Copied pgaudit.control and pgaudit--1.3.sql to C:\Program Files\PostgreSQL\11\share\extension
>> > Set shared_preload_libraries = 'pgaudit'
>> > Restart PostgreSQL
>> > Run "CREATE EXTENSION pgaudit;"
>> > alter system set pgaudit.log = 'all';
>> > select pg_reload_conf();
>>
>> > Nothing enters the log-file from pgaudit. Creating tables (relations,
>> sorry). Selecting from tables, nothing in the log.
>>
>> Hm, what you describe above looks right.
>>
>> > I can see that alter system set pgaudit.logx = 'all';  complains about
>> "unrecognized configuration parameter", so something is known about pgaudit.
>>
>> No, that means the postmaster *doesn't* have pgaudit loaded.
>>
>> The fact that CREATE EXTENSION worked implies that you've got the library
>> correctly built, so I think this must boil down to the
>> "shared_preload_libraries" setting not having taken.  You could cross-check
>> that by seeing what "SHOW shared_preload_libraries"
>> prints.
>>
>> I wonder if you forgot to remove the comment marker (#) on the
>> shared_preload_libraries line?
>>
>> regards, tom lane
>>
>>


Re: Blank, nullable date column rejected by psql

2019-02-12 Thread Laurenz Albe
Rich Shepard wrote:
> Now I know to replace no dates with null I'll do so but I'm curious why this
> is needed.

NULL is a special "unknown" value in SQL.  You can use it for all
data types to signal that a value is unknown or not available.

If you insert a string into a "date" column, PostgreSQL will try
to convert the string to a date with the type input function.
The type input function fails if it cannot parse the string into
a valid "date" value.  This also applies to the empty string.

But NULL is always a possible value (unless the column definition
excludes it).

Used properly, NULL solves many problems.

Imagine you want to know how far in the future the date is.
If you use "next_contact - current_timestamp", and "next_contact"
is NULL, then the result of the operation will automatically be
NULL (unknown) as well.  That is much better than any "zero" value
which would lead to an undesired result.
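
For illustration, a minimal sketch with a throwaway table (the column name
is borrowed from this thread):

CREATE TEMP TABLE t (next_contact date);
INSERT INTO t VALUES (NULL);                 -- fine: NULL is always accepted
INSERT INTO t VALUES ('');                   -- fails: '' is not a valid date
SELECT next_contact - current_date FROM t;   -- NULL in, NULL out, no error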

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com




SV: Implementing pgaudit extension on Microsoft Windows

2019-02-12 Thread Niels Jespersen
Same result from

pgaudit.log = 'all'

in postgresql.conf, both after select pg_reload_conf(); and after a service
restart.

No entries in the log from audit.

Regards Niels





From: Pavel Stehule 
Sent: 12 February 2019 09:01
To: Niels Jespersen 
Cc: Tom Lane ; pgsql-general@lists.postgresql.org
Subject: Re: Implementing pgaudit extension on Microsoft Windows



On Tue, 12 Feb 2019 at 8:59, Pavel Stehule <pavel.steh...@gmail.com> wrote:


On Tue, 12 Feb 2019 at 8:57, Niels Jespersen <n...@dst.dk> wrote:
Thanks Tom

alter system set pgaudit.log = 'all'; -- Works, it results in 2019-02-12 
08:51:49.109 CET [13560] LOG:  parameter "pgaudit.log" changed to "all" after 
select pg_reload_conf();
alter system set pgaudit.logx = 'all'; -- Notice spelling error logx: Fails, it 
results immediately in 2019-02-12 08:53:04.412 CET [12856] ERROR:  unrecognized 
configuration parameter "pgaudit.logx" 2019-02-12 08:53:04.412 CET [12856] 
STATEMENT:  alter system set pgaudit.logx = 'all';

pgaudit is loaded, as shown by:

localhost postgres@postgres#show shared_preload_libraries;
 shared_preload_libraries
--
 pgaudit
(1 row)

localhost postgres@postgres#

pgaudit.dll is locked by postgres.exe if I try to remove it from the lib
folder.

All looks normal, except no logs from auditing.

and it doesn't work either if you set it in postgresql.conf?

don't forget to reload:

select pg_reload_conf();



Pavel


Regards Niels


-Original message-
From: Tom Lane <t...@sss.pgh.pa.us>
Sent: 11 February 2019 15:44
To: Niels Jespersen <n...@dst.dk>
Cc: 'pgsql-general@lists.postgresql.org' <pgsql-general@lists.postgresql.org>
Subject: Re: Implementing pgaudit extension on Microsoft Windows

Niels Jespersen mailto:n...@dst.dk>> writes:
> Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
> Copied pgaudit.control and pgaudit--1.3.sql to C:\Program Files\PostgreSQL\11\share\extension
> Set shared_preload_libraries = 'pgaudit'
> Restart PostgreSQL
> Run "CREATE EXTENSION pgaudit;"
> alter system set pgaudit.log = 'all';
> select pg_reload_conf();

> Nothing enters the log-file from pgaudit. Creating tables (relations, sorry). 
> Selecting from tables, nothing in the log.

Hm, what you describe above looks right.

> I can see that alter system set pgaudit.logx = 'all';  complains about 
> "unrecognized configuration parameter", so something is known about pgaudit.

No, that means the postmaster *doesn't* have pgaudit loaded.

The fact that CREATE EXTENSION worked implies that you've got the library 
correctly built, so I think this must boil down to the 
"shared_preload_libraries" setting not having taken.  You could cross-check 
that by seeing what "SHOW shared_preload_libraries"
prints.

I wonder if you forgot to remove the comment marker (#) on the 
shared_preload_libraries line?

regards, tom lane


Re: Implementing pgaudit extension on Microsoft Windows

2019-02-12 Thread Pavel Stehule
On Tue, 12 Feb 2019 at 10:34, Niels Jespersen  wrote:

> Same result from
>
>
>
> pgaudit.log = 'all'
>
>
>
> in postgresql.conf and after both select pg_reload_conf(); and after
> service restart.
>
>
>
> No entries in the log from audit.
>

That is strange - can you try the same setup on Linux?



>
> Regards Niels
>
>
>
>
>
>
>
>
>
>
>
> *From:* Pavel Stehule 
> *Sent:* 12 February 2019 09:01
> *To:* Niels Jespersen 
> *Cc:* Tom Lane ; pgsql-general@lists.postgresql.org
> *Subject:* Re: Implementing pgaudit extension on Microsoft Windows
>
>
>
>
>
>
>
> On Tue, 12 Feb 2019 at 8:59, Pavel Stehule  wrote:
>
>
>
>
>
> On Tue, 12 Feb 2019 at 8:57, Niels Jespersen  wrote:
>
> Thanks Tom
>
> alter system set pgaudit.log = 'all'; -- Works, it results in 2019-02-12
> 08:51:49.109 CET [13560] LOG:  parameter "pgaudit.log" changed to "all"
> after select pg_reload_conf();
> alter system set pgaudit.logx = 'all'; -- Notice spelling error logx:
> Fails, it results immediately in 2019-02-12 08:53:04.412 CET [12856]
> ERROR:  unrecognized configuration parameter "pgaudit.logx" 2019-02-12
> 08:53:04.412 CET [12856] STATEMENT:  alter system set pgaudit.logx = 'all';
>
> pgaudit is loaded, as shown by:
>
> localhost postgres@postgres#show shared_preload_libraries;
>  shared_preload_libraries
> --
>  pgaudit
> (1 row)
>
> localhost postgres@postgres#
>
> pgaudit.dll is locked by postgres.exe if I try to remove it from the lib
> folder.
>
> All looks normal, except no logs from auditing.
>
>
>
> and it doesn't work either if you set it in postgresql.conf?
>
>
>
> don't forget to reload:
>
>
>
> select pg_reload_conf();
>
>
>
>
>
>
>
> Pavel
>
>
>
>
> Regards Niels
>
>
> -Original message-
> From: Tom Lane 
> Sent: 11 February 2019 15:44
> To: Niels Jespersen 
> Cc: 'pgsql-general@lists.postgresql.org' <
> pgsql-general@lists.postgresql.org>
> Subject: Re: Implementing pgaudit extension on Microsoft Windows
>
> Niels Jespersen  writes:
> > Copied pgaudit.dll to C:\Program Files\PostgreSQL\11\lib
> > Copied pgaudit.control and pgaudit--1.3.sql to C:\Program Files\PostgreSQL\11\share\extension
> > Set shared_preload_libraries = 'pgaudit'
> > Restart PostgreSQL
> > Run "CREATE EXTENSION pgaudit;"
> > alter system set pgaudit.log = 'all';
> > select pg_reload_conf();
>
> > Nothing enters the log-file from pgaudit. Creating tables (relations,
> sorry). Selecting from tables, nothing in the log.
>
> Hm, what you describe above looks right.
>
> > I can see that alter system set pgaudit.logx = 'all';  complains about
> "unrecognized configuration parameter", so something is known about pgaudit.
>
> No, that means the postmaster *doesn't* have pgaudit loaded.
>
> The fact that CREATE EXTENSION worked implies that you've got the library
> correctly built, so I think this must boil down to the
> "shared_preload_libraries" setting not having taken.  You could cross-check
> that by seeing what "SHOW shared_preload_libraries"
> prints.
>
> I wonder if you forgot to remove the comment marker (#) on the
> shared_preload_libraries line?
>
> regards, tom lane
>
>


Re: Aurora Postgresql RDS DB Latency

2019-02-12 Thread github kran
On Mon, Feb 11, 2019 at 6:00 PM github kran  wrote:

>
>
> On Mon, Feb 11, 2019 at 3:29 PM Michael Lewis  wrote:
>
>> Are default statistics target the same on both prod and AWS? Have you
>> analyzed all tables being used in this query to ensure stats are up to date?
>> If the optimizer is choosing a different plan, then the stats must be
>> different IMO.
>>
>>
>> *Michael Lewis |  Software Engineer*
>> *Entrata*
>>
>
>
> Thanks for your reply. I have verified a few of the tables and their default
> statistics target and they seem to be the same, but is there anything in
> particular you want me to look at to differentiate the Prod and Non-prod
> databases? (Also, the DB instance size is the same, but there is a little
> more data in the Non-prod Aurora RDS instance compared to the Prod instance.)
>
> Query used.
>>select * from pg_stats where tablename = 'tableName'
>>
>>
>> On Mon, Feb 11, 2019 at 2:15 PM github kran  wrote:
>>
>>> Hi Postgres Team,
>>> We are using the AWS Aurora RDS Postgres DB 9.6.8 engine. We recently
>>> deleted a few million rows from the database and ran into an issue in one
>>> of our dev accounts where the DB was not normal after this deletion. We
>>> re-indexed and vacuumed the entire database, but we couldn't bring it back
>>> to the same state as earlier. So as the next step we deleted the database
>>> and recreated it by copying the snapshot from a production instance, then
>>> vacuumed and re-indexed the database.
>>>
>>> After this the dev database seems to be in a better state than earlier,
>>> but a few of our DB calls are taking more than 1 minute when we fetch
>>> data. We observed this is because the query plan executes a hash join as
>>> part of the query, whereas a similar query on the prod instance does not
>>> do any hash join and returns faster.
>>>
>>> Also, we did not want to experiment by modifying the DB settings, e.g.
>>> setting enable_hash_join to off or random_page_cost to 1, as we don't
>>> have these settings in the Prod instance.
>>>
>>> Note:
>>> The partition table sizes we have here are between 40 GB and 75 GB, which
>>> is our normal size range; we have a new partition table for every 7 days.
>>>
>>> Appreciate your ideas on what we could be missing and what we can
>>> correct here to reduce the query latency.
>>>
>>> Thanks
>>> githubKran
>>>
>>


Re: Blank, nullable date column rejected by psql

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, Laurenz Albe wrote:


If you insert a string into a "date" column, PostgreSQL will try to
convert the string to a date with the type input function. The type input
function fails if it cannot parse the string into a valid "date" value.
This also applies to the empty string.


Laurenz,

All my previous databases with date columns were required to have an entry
because each row contained spatio-temporal sampling data. The table in this
business-oriented application is not required to have a next_contact date,
and (the crucial point) I conflated date values with string values.
As you wrote, a date column is converted from the entered string to a
non-string date type and requires an explicit null when there is no value to
be stored for that column.

Thanks very much,

Rich



Re: SV: Implementing pgaudit extension on Microsoft Windows

2019-02-12 Thread Tom Lane
Niels Jespersen  writes:
> Same result from
> pgaudit.log = 'all'
> in postgresql.conf and after both select pg_reload_conf(); and after service 
> restart.
> No entries in the log from audit.

Hm.  I don't know much about pgaudit, but just from scanning its
documentation, it doesn't seem like there's anything else that
has to be set.

I wonder whether pgaudit actually works on Windows?  It might have
some weird dependency on children being spawned with fork not exec,
for instance.  You probably ought to contact the authors and ask.
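
One more sanity check that might be worth doing first: confirm the pgaudit
parameters are actually registered in the running server, for example (a
sketch):

SELECT name, setting, source FROM pg_settings WHERE name LIKE 'pgaudit.%';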

regards, tom lane



BDR 1.0: background worker wants to start that should not be there

2019-02-12 Thread Daniel Fink (PDF)
Hi all,



After I used bdr.bdr_part_by_node_names(p_nodes text[]) and removed the
nodes from the bdr.bdr_nodes table, I still get log errors about the
nonexistent pg_replication_slot:



< 2019-02-12 06:26:21.166 PST >LOG:  starting background worker process
"bdr (6208877715678412212,1,22576474,)->bdr (6449651545875285115"



How does postgresql deduce which workers need starting?

Is there a table I am missing?



Best Regards,




*DANIEL FINK*

*Senior Software Engineer*

*tel* (+49) 89.767062.20
*fax*(+49) 89.767062.11
email daniel.f...@pdf.com

*PDF Solutions GmbH*
* (**a PDF Solutions Company)*
Managing Director: Kimon Michaels
Schwanthalerstr. 10
D-80336 München, Germany

München HRB 87307
DE 128214899

*www.pdf.com *

-- 
This message may contain confidential and privileged information. If it has 
been sent to you in error, please reply to advise the sender of the error 
and then immediately permanently delete it and all attachments to it from 
your systems. If you are not the intended recipient, do not read, copy, 
disclose or otherwise use this message or any attachments to it. The sender 
disclaims any liability for such unauthorized use.  PLEASE NOTE that all 
incoming e-mails sent to PDF e-mail accounts will be archived and may be 
scanned by us and/or by external service providers to detect and prevent 
threats to our systems, investigate illegal or inappropriate behavior, 
and/or eliminate unsolicited promotional e-mails (“spam”).  If you have any 
concerns about this process, please contact us at legal.departm...@pdf.com 
.


Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?

2019-02-12 Thread Justin Pryzby
On Mon, Nov 26, 2018 at 07:00:35PM -0800, Andres Freund wrote:
> The fix is easy enough, just adding a
> v_hoff = LLVMBuildZExt(b, v_hoff, LLVMInt32Type(), "");
> fixes the issue for me.

On Tue, Jan 29, 2019 at 12:38:38AM -0800, pabloa98 wrote:
> And perhaps should I modify this too?
> If that is the case, I am not sure what kind of modification we should do.

Andres committed the fix in November, and it's included in postgres 11.2, which
is scheduled to be released Thursday.  So we'll both be able to re-enable JIT
on our wide tables again.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b23852766

Justin



Re: BDR 1.0: background worker wants to start that should not be there

2019-02-12 Thread Alvaro Aguayo Garcia-Rada
Hi.

You have deleted the node from the BDR setup, but you still have to drop its
replication slot from PostgreSQL logical replication:

SELECT * FROM pg_replication_slots;
SELECT pg_drop_replication_slot('YOURSLOT');

As a remark, based on my BDR experience, when your cluster has been damaged, 
your best option is to rebuild it.
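
If you only want to clean up what is left over for the departed node, it may
help to look for its slot first (the name pattern is an assumption; BDR slot
names usually start with "bdr"):

SELECT slot_name, plugin, active FROM pg_replication_slots WHERE slot_name LIKE 'bdr%';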

Saludos,

Alvaro Aguayo
Jefe de Operaciones
Open Comb Systems E.I.R.L.

(+51-1) 337-7813 Anexo 4002
www.ocs.pe

- Original Message -
From: "Daniel Fink, PDF" 
To: "pgsql-general" 
Sent: Tuesday, 12 February, 2019 09:52:09
Subject: BDR 1.0: background worker wants to start that should not be there

Hi all,



After I used bdr.bdr_part_by_node_names(p_nodes text[]) and removed the
nodes from the bdr.bdr_nodes table, I still get log errors about the
nonexistent pg_replication_slot:



< 2019-02-12 06:26:21.166 PST >LOG:  starting background worker process
"bdr (6208877715678412212,1,22576474,)->bdr (6449651545875285115"



How does postgresql deduce which workers need starting?

Is there a table I am missing?



Best Regards,




*DANIEL FINK*

*Senior Software Engineer*

*tel* (+49) 89.767062.20
*fax*(+49) 89.767062.11
email daniel.f...@pdf.com

*PDF Solutions GmbH*
* (**a PDF Solutions Company)*
Managing Director: Kimon Michaels
Schwanthalerstr. 10
D-80336 München, Germany

München HRB 87307
DE 128214899

*www.pdf.com *

-- 
This message may contain confidential and privileged information. If it has 
been sent to you in error, please reply to advise the sender of the error 
and then immediately permanently delete it and all attachments to it from 
your systems. If you are not the intended recipient, do not read, copy, 
disclose or otherwise use this message or any attachments to it. The sender 
disclaims any liability for such unauthorized use.  PLEASE NOTE that all 
incoming e-mails sent to PDF e-mail accounts will be archived and may be 
scanned by us and/or by external service providers to detect and prevent 
threats to our systems, investigate illegal or inappropriate behavior, 
and/or eliminate unsolicited promotional e-mails (“spam”).  If you have any 
concerns about this process, please contact us at legal.departm...@pdf.com 
.



pg_dump on a standby for a very active master

2019-02-12 Thread Arjun Ranade
I have a Production machine which is having objects
dropped/created/truncated at all hours of the day (Read: No zero activity
window).  I have multiple standbys (repmgr streaming replication) for this
machine including a cascading standby.  Each night, I am attempting to take
a logical backup on the standby databases via pg_dump of key schemas.

Recently, due to the activity on the primary, pg_dump is failing on the
standby usually with "ERROR:  could not obtain lock on relation."

I've had the following settings set in postgresql.conf which gave me
successful backups for a while:

hot_standby = on# "off" disallows queries during
recovery
max_standby_archive_delay = -1  # max delay before canceling queries
max_standby_streaming_delay = -1# max delay before canceling queries
hot_standby_feedback = on   # send info from standby to prevent
wal_receiver_timeout = 300s # time that receiver waits for

I have it set up this way because I don't mind any replication lag on the
standbys during the logical backup.  However, recently logical backups have
been failing due to tables being dropped or truncated on the master.

Also, I use pg_dump with the parallel option in directory format.  However,
even single threaded pg_dump fails when a table is truncated on the primary.

Is there any way to guarantee consistent logical backups on a standby
server with a master that has constant DDL/activity?

I am on Postgres 10.3; RHEL 7; 128gb RAM

Thanks,
Arjun


Out of memory: Kill process nnnn (postmaster) score nn or sacrifice child

2019-02-12 Thread Vikas Sharma
Hello All,

I have a 4 node PostgreSQL 9.6 cluster with streaming replication.  Today we
encountered an Out of Memory error on the master, which resulted in all
postgres processes being restarted, and the cluster recovered itself. Please
let me know the best way to diagnose this issue.



The error seen in the postgresql log:

2019-02-12 10:55:17 GMT LOG:  terminating any other active server processes
2019-02-12 10:55:17 GMT WARNING:  terminating connection because of crash
of another server process
2019-02-12 10:55:17 GMT DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2019-02-12 10:55:17 GMT HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2019-02-12 10:55:17 GMT WARNING:  terminating connection because of crash
of another server process
2019-02-12 10:55:17 GMT DETAIL:  The postmaster has commanded this server
process to roll back the current transaction and exit, because another
server process exited abnormally and possibly corrupted shared memory.
2019-02-12 10:55:17 GMT HINT:  In a moment you should be able to reconnect
to the database and repeat your command.
2019-02-12 10:55:17 GMT WARNING:  terminating connection because of crash
of another server process
-

Error from dmesg on linux:
---
[4331093.885622] Out of memory: Kill process n (postmaster) score nn or
sacrifice child
[4331093.890225] Killed process n (postmaster) total-vm:18905944kB,
anon-rss:1747460kB, file-rss:4kB, shmem-rss:838220kB

Thanks & Best Regards
Vikas Sharma


Re: postgresql v11.1 Segmentation fault: signal 11: by running SELECT... JIT Issue?

2019-02-12 Thread pabloa98
I tried. It works.
Thanks for the information.
P

On Mon, Jan 28, 2019, 7:28 PM Tom Lane  wrote:

> pabloa98  writes:
> > I just migrated our databases from PostgreSQL version 9.6 to version
> 11.1.
> > We got a segmentation fault while running this query:
>
> > SELECT f_2110 as x FROM baseline_denull
> > ORDER BY eid ASC
> > limit 500
> > OFFSET 131000;
>
> > the table baseline_denull has 1765 columns, mainly numbers, like:
>
> Hm, that sounds like it matches this recent bug fix:
>
> Author: Andres Freund 
> Branch: master [b23852766] 2018-11-27 10:07:03 -0800
> Branch: REL_11_STABLE [aee085bc0] 2018-11-27 10:07:43 -0800
>
> Fix jit compilation bug on wide tables.
>
> The function generated to perform JIT compiled tuple deforming failed
> when HeapTupleHeader's t_hoff was bigger than a signed int8. I'd
> failed to realize that LLVM's getelementptr would treat an int8 index
> argument as signed, rather than unsigned.  That means that a hoff
> larger than 127 would result in a negative offset being applied.  Fix
> that by widening the index to 32bit.
>
> Add a testcase with a wide table. Don't drop it, as it seems useful to
> verify other tools deal properly with wide tables.
>
> Thanks to Justin Pryzby for both reporting a bug and then reducing it
> to a reproducible testcase!
>
> Reported-By: Justin Pryzby
> Author: Andres Freund
> Discussion: https://postgr.es/m/20181115223959.gb10...@telsasoft.com
> Backpatch: 11, just as jit compilation was
>
>
> This would result in failures on wide rows that contain some null
> entries.  If your table is mostly-not-null, that would fit the
> observation that it only crashes on a few rows.
>
> Can you try REL_11_STABLE branch tip and see if it works for you?
>
> regards, tom lane
>


Re: Out of memory: Kill process nnnn (postmaster) score nn or sacrifice child

2019-02-12 Thread Adrian Klaver

On 2/12/19 8:20 AM, Vikas Sharma wrote:

Hello All,

I have a 4 node PostgreSQL 9.6 cluster with streaming replication.  Today we
encountered an Out of Memory error on the master, which resulted in all
postgres processes being restarted, and the cluster recovered itself. Please
let me know the best way to diagnose this issue.


For a start, look back further in the Postgres log than the below. What
is shown below is the effect of the OOM killer. What you need to look
for is the statement that caused Postgres memory to increase to the
point that the OOM killer was invoked.






The error seen in the postgresql log:

2019-02-12 10:55:17 GMT LOG:  terminating any other active server processes
2019-02-12 10:55:17 GMT WARNING:  terminating connection because of 
crash of another server process
2019-02-12 10:55:17 GMT DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared 
memory.
2019-02-12 10:55:17 GMT HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2019-02-12 10:55:17 GMT WARNING:  terminating connection because of 
crash of another server process
2019-02-12 10:55:17 GMT DETAIL:  The postmaster has commanded this 
server process to roll back the current transaction and exit, because 
another server process exited abnormally and possibly corrupted shared 
memory.
2019-02-12 10:55:17 GMT HINT:  In a moment you should be able to 
reconnect to the database and repeat your command.
2019-02-12 10:55:17 GMT WARNING:  terminating connection because of 
crash of another server process

-

Error from dmesg on linux:
---
[4331093.885622] Out of memory: Kill process n (postmaster) score nn 
or sacrifice child
[4331093.890225] Killed process n (postmaster) total-vm:18905944kB, 
anon-rss:1747460kB, file-rss:4kB, shmem-rss:838220kB


Thanks & Best Regards
Vikas Sharma



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Copy entire schema A to a different schema B

2019-02-12 Thread Tiffany Thang
Thanks Adrian!

Tiff

On Mon, Feb 11, 2019 at 11:53 AM Adrian Klaver 
wrote:

> On 2/11/19 8:30 AM, Tiffany Thang wrote:
> > Thanks Adrian and Ron. Sorry, I was not clear. What I'm trying to
> > achieve is to dump the schema quickly and be able to restore a single
> > object or a subset of objects from the dump. As far as I understand, the
> > only way of achieving that is to use the custom format and the -j option.
> > Is that correct? Are there any other alternatives?
>
> If you want to use -j then you need to use the -Fd output:
>
> https://www.postgresql.org/docs/10/app-pgdump.html
>
> "-j njobs
> --jobs=njobs
>
>  Run the dump in parallel by dumping njobs tables simultaneously.
> This option reduces the time of the dump but it also increases the load
> on the database server. You can only use this option with the directory
> output format because this is the only output format where multiple
> processes can write their data at the same time."
>
> If you need to grab just a subset of the schema then there are options
> to do that depending on the object. From above link as examples:
>
> "-n schema
> --schema=schema
>
>  Dump only schemas matching schema; this selects both the schema
> itself, and all its contained objects. ..."
>
>
> "-t table
> --table=table
>
>  Dump only tables with names matching table.  .."
>
>
> >
> > Thanks.
> >
> > Tiff
> >
> > On Mon, Feb 11, 2019 at 11:10 AM Ron  > > wrote:
> >
> > On 2/11/19 10:00 AM, Tiffany Thang wrote:
> >  > Hi,
> >  > To copy the source schema A to target schema B in the same
> > database in
> >  > PG10.3, I use psql to dump schema A and manually removes anything
> > specific
> >  > to the schema in the text dump file before importing into schema
> > B. How do
> >  > I achieve the same exporting from Schema A and importing into
> > schema B
> >  > using pg_dump with the -Fc option? Since the dump file generated
> is
> >  > binary, I could not make modifications to the file. Is the
> > procedure the
> >  > same in version 11?
> >
> > Why do you need to use "--format=custom" instead of "--format=plain"?
> >
> > For example:
> > $ pg_dump --format=plain --schema-only --schema=A
> >
> >
> > --
> > Angular momentum makes the world go 'round.
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


RE: pg_dump on a standby for a very active master

2019-02-12 Thread Scot Kreienkamp
How about pausing replication while you’re running the backup?  I have a mirror 
dedicated to backups, it pauses replication by cron job every night before the 
backup, then resumes midday after I’ve had enough time to find out if the 
backup was successful.
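
On a PostgreSQL 10 standby this can be driven from SQL, roughly like this (a
sketch; the scheduling around it is up to you):

SELECT pg_wal_replay_pause();    -- before starting pg_dump
SELECT pg_wal_replay_resume();   -- once the dump has finished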


Scot Kreienkamp |Senior Systems Engineer | La-Z-Boy Corporate
One La-Z-Boy Drive| Monroe, Michigan 48162 | Office: 734-384-6403 | | Mobile: 
7349151444 | Email: scot.kreienk...@la-z-boy.com
From: Arjun Ranade [mailto:ran...@nodalexchange.com]
Sent: Tuesday, February 12, 2019 11:33 AM
To: pgsql-general@lists.postgresql.org
Subject: pg_dump on a standby for a very active master


ATTENTION:   This email was sent to La-Z-Boy from an external source. Be 
vigilant when opening attachments or clicking links.
I have a Production machine which is having objects dropped/created/truncated 
at all hours of the day (Read: No zero activity window).  I have multiple 
standbys (repmgr streaming replication) for this machine including a cascading 
standby.  Each night, I am attempting to take a logical backup on the standby 
databases via pg_dump of key schemas.

Recently, due to the activity on the primary, pg_dump is failing on the standby 
usually with "ERROR:  could not obtain lock on relation."

I've had the following settings set in postgresql.conf which gave me successful 
backups for a while:

hot_standby = on# "off" disallows queries during 
recovery
max_standby_archive_delay = -1  # max delay before canceling queries
max_standby_streaming_delay = -1# max delay before canceling queries
hot_standby_feedback = on   # send info from standby to prevent
wal_receiver_timeout = 300s # time that receiver waits for
I have it set up this way because I don't mind any replication lag on the 
standbys during the logical backup.  However, recently logical backups have 
been failing either due to a table dropped/truncated on the master.

Also, I use pg_dump with the parallel option in directory format.  However, 
even single threaded pg_dump fails when a table is truncated on the primary.

Is there any way to guarantee consistent logical backups on a standby server 
with a master that has constant DDL/activity?

I am on Postgres 10.3; RHEL 7; 128gb RAM

Thanks,
Arjun


This message is intended only for the individual or entity to which it is 
addressed.  It may contain privileged, confidential information which is exempt 
from disclosure under applicable laws.  If you are not the intended recipient, 
you are strictly prohibited from disseminating or distributing this information 
(other than to the intended recipient) or copying this information.  If you 
have received this communication in error, please notify us immediately by 
e-mail or by telephone at the above number. Thank you.


Re: pg_dump on a standby for a very active master

2019-02-12 Thread Arjun Ranade
Yeah, that was one thing I was planning to try.  The other potential
solution is to use barman (we are using barman on all db servers including
standbys) to restore the latest backup to a VM and then take the pg_dump
from there.  But I was hoping there would be a way in the settings to
avoid needing such a workaround.



On Tue, Feb 12, 2019 at 12:36 PM Scot Kreienkamp <
scot.kreienk...@la-z-boy.com> wrote:

> How about pausing replication while you’re running the backup?  I have a
> mirror dedicated to backups, it pauses replication by cron job every night
> before the backup, then resumes midday after I’ve had enough time to find
> out if the backup was successful.
>
>
>
> *Scot Kreienkamp |Senior Systems Engineer | La-Z-Boy Corporate*
> One La-Z-Boy Drive| Monroe, Michigan 48162 | Office: 734-384-6403 | |
> Mobile: 7349151444 | Email: scot.kreienk...@la-z-boy.com
>
> *From:* Arjun Ranade [mailto:ran...@nodalexchange.com]
> *Sent:* Tuesday, February 12, 2019 11:33 AM
> *To:* pgsql-general@lists.postgresql.org
> *Subject:* pg_dump on a standby for a very active master
>
>
>
>
> *ATTENTION:   This email was sent to La-Z-Boy from an external source.
> Be vigilant when opening attachments or clicking links.*
>
> I have a Production machine which is having objects
> dropped/created/truncated at all hours of the day (Read: No zero activity
> window).  I have multiple standbys (repmgr streaming replication) for this
> machine including a cascading standby.  Each night, I am attempting to take
> a logical backup on the standby databases via pg_dump of key schemas.
>
>
>
> Recently, due to the activity on the primary, pg_dump is failing on the
> standby usually with "ERROR:  could not obtain lock on relation."
>
>
>
> I've had the following settings set in postgresql.conf which gave me
> successful backups for a while:
>
>
>
> hot_standby = on# "off" disallows queries during
> recovery
>
> max_standby_archive_delay = -1  # max delay before canceling
> queries
> max_standby_streaming_delay = -1# max delay before canceling
> queries
> hot_standby_feedback = on   # send info from standby to prevent
>
> wal_receiver_timeout = 300s # time that receiver waits for
>
> I have it set up this way because I don't mind any replication lag on the
> standbys during the logical backup.  However, recently logical backups have
> been failing either due to a table dropped/truncated on the master.
>
>
>
> Also, I use pg_dump with the parallel option in directory format.
> However, even single threaded pg_dump fails when a table is truncated on
> the primary.
>
>
>
> Is there any way to guarantee consistent logical backups on a standby
> server with a master that has constant DDL/activity?
>
>
>
> I am on Postgres 10.3; RHEL 7; 128gb RAM
>
>
>
> Thanks,
>
> Arjun
>
>
>
> This message is intended only for the individual or entity to which it is
> addressed.  It may contain privileged, confidential information which is
> exempt from disclosure under applicable laws.  If you are not the intended
> recipient, you are strictly prohibited from disseminating or distributing
> this information (other than to the intended recipient) or copying this
> information.  If you have received this communication in error, please
> notify us immediately by e-mail or by telephone at the above number.
> Thank you.
>


Re: pg_dump on a standby for a very active master

2019-02-12 Thread Stephen Frost
Greetings,

* Arjun Ranade (ran...@nodalexchange.com) wrote:
> Yeah, that was one thing I was planning to try.  The other potential
> solution is to use barman (we are using barman on all db servers including
> standbys) to restore the latest backup to a VM and then take the pg_dump
> from there.  But I was hoping there would be a way in the settings to
> prevent such a workaround.

Performing a file-level backup, restoring it, and then taking a pg_dump of
the restored cluster works quite well as a solution, in my experience.  Even
better is when you can do a delta restore on top of the prior restore,
updating just the files which are different, as that can be much faster.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: pg_dump on a standby for a very active master

2019-02-12 Thread Arjun Ranade
Will barman automatically do a delta restore assuming the postgres server
is stopped and the old cluster exists at the same location it's restoring
to?

On Tue, Feb 12, 2019 at 12:59 PM Stephen Frost  wrote:

> Greetings,
>
> * Arjun Ranade (ran...@nodalexchange.com) wrote:
> > Yeah, that was one thing I was planning to try.  The other potential
> > solution is to use barman (we are using barman on all db servers
> including
> > standbys) to restore the latest backup to a VM and then take the pg_dump
> > from there.  But I was hoping there would be a way in the settings to
> > prevent such a workaround.
>
> Performing a file-level backup and then restoring that and then
> taking a pg_dump of restored cluster works quite well as a solution, in
> my experience, even better is when you can do a delta restore over top
> of the prior restore, updating just the files which were different, as
> that can be much faster.
>
> Thanks!
>
> Stephen
>


Re: pg_dump on a standby for a very active master

2019-02-12 Thread Stephen Frost
Greetings,

* Arjun Ranade (ran...@nodalexchange.com) wrote:
> Will barman automatically do a delta restore assuming the postgres server
> is stopped and the old cluster exists at the same location it's restoring
> to?

I don't know if barman supports that today, it might.  I do know that
pgbackrest supports delta restores with the --delta option.

Thanks!

Stephen


signature.asc
Description: PGP signature


Postgresql RDS DB Latency Choosing Hash join Plan

2019-02-12 Thread github kran
On Tue, Feb 12, 2019 at 7:07 AM github kran  wrote:

>
>
> On Mon, Feb 11, 2019 at 6:00 PM github kran  wrote:
>
>>
>>
>> On Mon, Feb 11, 2019 at 3:29 PM Michael Lewis  wrote:
>>
>>> Are default statistics target the same on both prod and AWS? Have you
>>> analyzed all tables being used in this query to ensure stats are up to date?
>>> If the optimizer is choosing a different plan, then the stats must be
>>> different IMO.
>>>
>>>
>>> *Michael Lewis |  Software Engineer*
>>> *Entrata*
>>>
>>
>>
>> Thanks for your reply. I have verified a few of the tables and their
>> default statistics target and they seem to be the same, but is there
>> anything in particular you want me to look at to differentiate the Prod
>> and Non-prod databases? (Also, the DB instance size is the same, but there
>> is a little more data in the Non-prod Aurora RDS instance compared to the
>> Prod instance.)
>>
>> Query used.
>>>select * from pg_stats where tablename = 'tableName'
>>>
>>>
>>> On Mon, Feb 11, 2019 at 2:15 PM github kran 
>>> wrote:
>>>
 Hi Postgres Team,
 We are using the AWS Aurora RDS Postgres DB 9.6.8 engine. We recently
 deleted a few million rows from the database and ran into an issue in one of
 our dev accounts where the DB was not normal after this deletion. We
 re-indexed and vacuumed the entire database, but we couldn't bring it back
 to the same state as earlier. So as the next step we deleted the database
 and recreated it by copying the snapshot from a production instance, then
 vacuumed and re-indexed the database.

 After this the dev database seems to be in a better state than earlier, but
 a few of our DB calls are taking more than 1 minute when we fetch data. We
 observed this is because the query plan executes a hash join as part of the
 query, whereas a similar query on the prod instance does not do any hash
 join and returns faster.

 Also, we did not want to experiment by modifying the DB settings, e.g.
 setting enable_hash_join to off or random_page_cost to 1, as we don't have
 these settings in the Prod instance.

 Note:
 The partition table sizes we have here are between 40 GB and 75 GB, which is
 our normal size range; we have a new partition table for every 7 days.

 Appreciate your ideas on what we could be missing and what we can
 correct here to reduce the query latency.

 Thanks
 githubKran

>>>


Temp tables and replication identities

2019-02-12 Thread Michael Lewis
I am curious about receiving an error on updating/inserting into a temp
table when a publication FOR ALL TABLES has been created in PG 10.6. Given
temp tables are not replicated, it seems odd that an update fails unless a
replica identity is defined.

To reproduce, try the below code. Uncomment line 3 and the error is gone of
course. It just seems like the identity should not need to be defined on
a temp table since it won't be replicated anyway.

CREATE publication test1 FOR ALL TABLES;
CREATE TEMP TABLE testing123 ON COMMIT DROP AS ( SELECT 1 AS value );
/* ALTER TABLE pg_temp.testing123 REPLICA IDENTITY FULL; */
UPDATE testing123 SET value = 2;
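
A possible workaround sketch, if the replica identity requirement on temp
tables is a problem, is to publish only the permanent tables explicitly
instead of using FOR ALL TABLES (publication and table names below are
hypothetical):

CREATE PUBLICATION test2 FOR TABLE public.some_permanent_table;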


*Michael Lewis  |  Software Engineer*
*Entrata*
*c: **619.370.8697 <619-370-8697>*


Re: Postgresql RDS DB Latency Choosing Hash join Plan

2019-02-12 Thread Michael Lewis
Did you update the stats by running ANALYZE on the tables involved, or
perhaps the entire database on the 'Non prod Aurora RDS instance'? Can you
share the two execution plans?
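
For example, something along these lines (the table name is a placeholder):

ANALYZE VERBOSE some_schema.some_table;   -- refresh stats for one table
ANALYZE;                                  -- or the whole database
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM some_schema.some_table;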

*Michael Lewis  |  Software Engineer*
*Entrata*
*c: **619.370.8697 <619-370-8697>*


On Tue, Feb 12, 2019 at 11:27 AM github kran  wrote:

>
>
> On Tue, Feb 12, 2019 at 7:07 AM github kran  wrote:
>
>>
>>
>> On Mon, Feb 11, 2019 at 6:00 PM github kran  wrote:
>>
>>>
>>>
>>> On Mon, Feb 11, 2019 at 3:29 PM Michael Lewis 
>>> wrote:
>>>
 Are default statistics target the same on both prod and AWS? Have you
 analyzed all tables being used in this query to ensure stats are up to date?
 If the optimizer is choosing a different plan, then the stats must be
 different IMO.


 *Michael Lewis |  Software Engineer*
 *Entrata*

>>>
>>>
>>> Thanks for your reply. I have verified a few of the tables and their
>>> default statistics target and they seem to be the same, but is there
>>> anything in particular you want me to look at to differentiate the Prod
>>> and Non-prod databases? (Also, the DB instance size is the same, but
>>> there is a little more data in the Non-prod Aurora RDS instance compared
>>> to the Prod instance.)
>>>
>>> Query used.
select * from pg_stats where tablename = 'tableName'


 On Mon, Feb 11, 2019 at 2:15 PM github kran 
 wrote:

> Hi Postgres Team,
> We are using the AWS Aurora RDS Postgres DB 9.6.8 engine. We recently
> deleted a few million rows from the database and ran into an issue in one
> of our dev accounts where the DB was not normal after this deletion. We
> re-indexed and vacuumed the entire database, but we couldn't bring it back
> to the same state as earlier. So as the next step we deleted the database
> and recreated it by copying the snapshot from a production instance, then
> vacuumed and re-indexed the database.
>
> After this the dev database seems to be in a better state than earlier,
> but a few of our DB calls are taking more than 1 minute when we fetch
> data. We observed this is because the query plan executes a hash join as
> part of the query, whereas a similar query on the prod instance does not
> do any hash join and returns faster.
>
> Also, we did not want to experiment by modifying the DB settings, e.g.
> setting enable_hash_join to off or random_page_cost to 1, as we don't have
> these settings in the Prod instance.
>
> Note:
> The partition table sizes we have here are between 40 GB and 75 GB, which
> is our normal size range; we have a new partition table for every 7 days.
>
> Appreciate your ideas on what we could be missing and what we can
> correct here to reduce the query latency.
>
> Thanks
> githubKran
>



Subquery to select max(date) value

2019-02-12 Thread Rich Shepard

The query is to return the latest next_contact date for each person. Using
the max() aggregate function and modeling the example of lo_temp on page 13
of the rel. 10 manual I wrote this statement:

select (P.person_id, P.lname, P.fname, P.direct_phone, O.org_name, 
A.next_contact)
from People as P, Organizations as O, Activities as A
where P.org_id = O.org_id and P.person_id = A.person_id and
  A.next_contact = select (max(A.next_contact) from A)
group by A.next_contact, P.person_id;

The syntax error returned by psql is:

psql:next_contact_date.sql:7: ERROR:  syntax error at or near "select"
LINE 4:   A.next_contact = select (max(A.next_contact) from A)
   ^
and I fail to see what I've done incorrectly.

Do I need to insert DISTINCT ON in the main or sub query? If so, what is the
correct syntax to extract all desired columns from each selected row?

If this is covered in the manual please point me to the proper section; if
not, please educate me on the appropriate syntax to produce the desired
output.

TIA,

Rich



Re: Subquery to select max(date) value

2019-02-12 Thread David G. Johnston
On Tue, Feb 12, 2019 at 3:24 PM Rich Shepard  wrote:
> psql:next_contact_date.sql:7: ERROR:  syntax error at or near "select"
> LINE 4:   A.next_contact = select (max(A.next_contact) from A)
> ^
> and I fail to see what I've done incorrectly.

You put the open parenthesis after the word select instead of before.

A.next_contact = (SELECT max(A.next_contact) FROM A)

David J.



Re: Subquery to select max(date) value [RESOLVED]

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, David G. Johnston wrote:


You put the open parenthesis after the word select instead of before.
A.next_contact = (SELECT max(A.next_contact) FROM A)


David.

Color me suitably embarrassed.

Thank you,

Rich



Re: Subquery to select max(date) value

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, Jeff Ross wrote:


Try (select (max(A.next_contact) from A)


Thanks, Jeff.

The syntax accepted by psql is
A.next_contact = (select (max(A.next_contact)) from Activities as A)
but the date is not included in the output.

The revised statement is now:
select (P.person_id, P.lname, P.fname, P.direct_phone, O.org_name, 
A.next_contact)
from People as P, Organizations as O, Activities as A
where P.org_id = O.org_id and P.person_id = A.person_id and
  A.next_contact = (select (max(A.next_contact)) from Activities as A)
group by A.next_contact, O.org_id, P.person_id;

This produces the first 5 columns in the outer select but no next_contact
date. When I move A.next_contact to the head of the select list each row
begins with 'infinity'.

I'm still missing the requisite knowledge.

Rich



Re: Subquery to select max(date) value

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, Rich Shepard wrote:


 A.next_contact = (select (max(A.next_contact)) from Activities as A)


Errata:

The parentheses around the max aggregate are not necessary.

A.next_contact now displays at the end of each returned row as 'infinity'.

Rich



Re: Subquery to select max(date) value

2019-02-12 Thread Matt Zagrabelny
Hey Rich,

On Tue, Feb 12, 2019 at 4:24 PM Rich Shepard 
wrote:

> The query is to return the latest next_contact date for each person. Using
> the max() aggregate function and modeling the example of lo_temp on page 13
> of the rel. 10 manual I wrote this statement:
>

I use DISTINCT ON and ORDER BY to get the single latest value from a table
with multiple date entries:

https://stackoverflow.com/questions/9795660/postgresql-distinct-on-with-different-order-by
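
Applied to the tables in this thread, a minimal sketch might be:

SELECT DISTINCT ON (person_id)
       person_id, next_contact
FROM activities
ORDER BY person_id, next_contact DESC;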

HTH,

-m


Re: Subquery to select max(date) value

2019-02-12 Thread Adrian Klaver

On 2/12/19 2:48 PM, Rich Shepard wrote:

On Tue, 12 Feb 2019, Rich Shepard wrote:


 A.next_contact = (select (max(A.next_contact)) from Activities as A)


Errata:

The parentheses around the max aggregate are not necessary.

A.next_contact now displays at the end of each returned row as 'infinity'.


'infinity' is the max date, so this is what you want?



Rich





--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Subquery to select max(date) value

2019-02-12 Thread Ken Tanzer
On Tue, Feb 12, 2019 at 2:48 PM Rich Shepard 
wrote:

> On Tue, 12 Feb 2019, Rich Shepard wrote:
>
> >  A.next_contact = (select (max(A.next_contact)) from Activities as A)
>
> Errata:
>
> The parentheses around the max aggregate are not necessary.
>
> A.next_contact now displays at the end of each returned row as 'infinity'.
>
Your subquery isn't doing anything to match on person_id, so it's going to
match all the records with the highest next_contact in activities.

I think you want something more like:

A.next_contact = (select max(A2.next_contact) from Activities as A2 WHERE
A2.person_id=A.person_id)

Or, for that matter, since next_contact is all that you're drawing from
activities, you can also just put it in the select:

select P.person_id, P.lname, P.fname, P.direct_phone, O.org_name,
(select max(A.next_contact) from Activities as A WHERE
P.person_id=A.person_id)
FROM ...


Cheers,
Ken
-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


Re: Subquery to select max(date) value

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, Adrian Klaver wrote:


'infinity' is the max date, so this is what you want?


Adrian,

Nope. When I went to make a cup of coffee I realized that I need the other
date constraints (such as IS NOT NULL), too. I'm re-wording the statement to
put everything in the correct order. Will probably try DISTINCT ON, too, if
that makes it simpler or faster.

Thanks,

Rich



Re: Subquery to select max(date) value

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, Ken Tanzer wrote:


select (P.person_id, P.lname, P.fname, P.direct_phone, O.org_name,
(select max(A.next_contact) from Activities as A WHERE
p.person_id=A.person_id)
FROM ...



Ken,

Yes, cheers indeed. A bit of thinking and re-organizing resulted in a
working statement that's close to what I want:

select (P.person_id, P.lname, P.fname, P.direct_phone, O.org_name,
   (select max(A.next_contact)))
from People as P, Organizations as O, Activities as A
where P.org_id = O.org_id and P.person_id = A.person_id and
  /*A.next_contact = (select max(A.next_contact) from Activities as A) and 
*/
  A.next_contact <= 'today' and A.next_contact > '2018-12-31' and
  A.next_contact is not null
group by A.next_contact, O.org_id, P.person_id;

The two issues I now focus on resolving are the multiple rows per person
rather than only the most recent and the date displayed at the end of each
output row. DISTINCT ON will eliminate the first issue.

Thanks,

Rich





Getting wrong time using now()

2019-02-12 Thread Om Prakash Jaiswal
Create table service_record(id int, time timestamp without time zone default now()).
PostgreSQL version 9.6.6, RHEL 7.6. When I am executing the query, I am getting a
time 5:30 hours behind the current time. Please solve it.

Regards
Om Prakash
Bangalore, India

Sent from Yahoo Mail on Android

Re: Subquery to select max(date) value

2019-02-12 Thread Ken Tanzer
>
>
> Ken,
>
> Yes, cheers indeed. A bit of thinking and re-organizing resulted in a
> working statement that's close to what I want:
>
> select (P.person_id, P.lname, P.fname, P.direct_phone, O.org_name,
> (select max(A.next_contact)))
> from People as P, Organizations as O, Activities as A
> where P.org_id = O.org_id and P.person_id = A.person_id and
>/*A.next_contact = (select max(A.next_contact) from Activities as
> A) and */
>A.next_contact <= 'today' and A.next_contact > '2018-12-31' and
>A.next_contact is not null
> group by A.next_contact, O.org_id, P.person_id;
>
> The two issues I now focus on resolving are the multiple rows per person
> rather than only the most recent and the date displayed at the end of each
> output row. DISTINCT ON will eliminate the first issue.
>
>
If that's getting you what you want, then great and more power to you.  It
looks like you'll only get people who have a next_contact in your target
window there.  You might also consider something like this...

select
    p.person_id,
    p.lname,
    p.fname,
    p.direct_phone,
    o.org_name,
    a.next_contact
from
    people as p
    LEFT JOIN organizations o USING (org_id)
    LEFT JOIN (
        SELECT
            DISTINCT ON (person_id)
            person_id,
            next_contact
        FROM activities
        -- WHERE ???
        ORDER BY person_id, next_contact DESC
    ) a USING (person_id)
;

Cheers,
Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
http://agency-software.org/
https://demo.agency-software.org/client
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list to learn more about AGENCY or follow the discussion.


Re: Getting wrong time using now()

2019-02-12 Thread Adrian Klaver

On 2/12/19 3:36 PM, Om Prakash Jaiswal wrote:
Create table service_record(id int, time timestamp without time zone
default now()).
PostgreSQL version 9.6.6, RHEL 7.6. When I am executing the query, I am
getting a time 5:30 hours behind the current time. Please solve it.


Move clock back 5:30 hours:).

On serious note:

1) I would change the time field to timestamp with timezone.

2) Verify what timezone the server is running on:
show timezone;
  TimeZone
------------
 US/Pacific





Regards
Om Prakash
Bangalore, India

Sent from Yahoo Mail on Android 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Getting wrong time using now()

2019-02-12 Thread Tom Lane
Om Prakash Jaiswal  writes:
> Create table service_record(id int, time timestamp without time zone default now()).
> PostgreSQL version 9.6.6, RHEL 7.6. When I am executing the query, I am
> getting a time 5:30 hours behind the current time. Please solve it.

Well, you probably ought to be using LOCALTIMESTAMP, not now(),
as the initializer for a "timestamp without time zone" value,
if only to save a useless runtime datatype conversion.  See

https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT

However, if the results don't look right to you, it probably
means that you don't have the timezone parameter set correctly.
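
For example (the zone name below is an assumption; a 5:30 offset suggests
India Standard Time):

SHOW timezone;
SET timezone = 'Asia/Kolkata';
SELECT now(), localtimestamp;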

regards, tom lane



Re: Subquery to select max(date) value

2019-02-12 Thread Rich Shepard

On Tue, 12 Feb 2019, Ken Tanzer wrote:


If that's getting you what you want, then great and more power to you. It
looks like you'll only get people who have a next_contact in your target
window there. You might also consider something like this...


  

Ken,

I'll work with your example. This looks most promising.

What I want is a report of folks I need to contact. Some have next_contact
dates in the past when I did not act so I need to do so as long as there is
a next_contact date (no NULLs) and the prospect is active.

When I have a fully working statement I'll post it to the list so others can
read the accepted working solution.

Thanks again,

Rich



Re: Subquery to select max(date) value

2019-02-12 Thread Andrew Gierth
> "Rich" == Rich Shepard  writes:

 Rich> Will probably try DISTINCT ON, too, if that makes it simpler or
 Rich> faster.

You want LATERAL.
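
For instance, a sketch with the tables from this thread (untested):

SELECT p.person_id, p.lname, p.fname, p.direct_phone, o.org_name, a.next_contact
FROM people AS p
JOIN organizations AS o ON o.org_id = p.org_id
CROSS JOIN LATERAL (
    SELECT max(next_contact) AS next_contact
    FROM activities
    WHERE person_id = p.person_id
) AS a;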

-- 
Andrew (irc:RhodiumToad)