[unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}

2019-01-03 Thread Mihalidesová Jana
Hi,

I am trying to connect from Oracle 11.2.0.4 to Postgres, but I keep getting
the error below. Do you have any idea which configuration I should check?
What is wrong?

ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}
ORA-28511: lost RPC connection to heterogeneous remote agent using 
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=port))(CONNECT_DATA=(SID=sid)))
ORA-02063: preceding 3 lines from POSTGRES


Thx,
jana





Re: [unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}

2019-01-03 Thread Pavel Stehule
Hi

On Thu, Jan 3, 2019 at 12:00, Mihalidesová Jana <
jana.mihalides...@cetin.cz> wrote:

> Hi,
>
> I am trying to connect from Oracle 11.2.0.4 to Postgres, but I keep
> getting the error below. Do you have any idea which configuration I
> should check? What is wrong?
>
> ERROR at line 1:
> ORA-28500: connection from ORACLE to a non-Oracle system returned this
> message:
> [unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}
> ORA-28511: lost RPC connection to heterogeneous remote agent using
> SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=port))(CONNECT_DATA=(SID=sid)))
> ORA-02063: preceding 3 lines from POSTGRES

I checked the internet and found some reports relating this to the ODBC
driver. Maybe you do not have the correct (or a recent) PostgreSQL ODBC driver.

Regards

Pavel


>
>
> Thx,
>
> jana
>


RE: [unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}

2019-01-03 Thread Mihalidesová Jana
Could it be a problem with the ODBC driver even when isql works?


isql -v postgres
+---+
| Connected!|
|   |
| sql-statement |
| help [tablename]  |
| quit  |
|   |
+---+
SQL>

Thx a lot!

J

From: Pavel Stehule 
Sent: Thursday, January 3, 2019 12:26 PM
To: Mihalidesová Jana 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: [unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr 
= 209}

Hi

On Thu, Jan 3, 2019 at 12:00, Mihalidesová Jana
<jana.mihalides...@cetin.cz> wrote:
Hi,

I am trying to connect from Oracle 11.2.0.4 to Postgres, but I keep getting
the error below. Do you have any idea which configuration I should check?
What is wrong?

ERROR at line 1:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message:
[unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}
ORA-28511: lost RPC connection to heterogeneous remote agent using 
SID=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host)(PORT=port))(CONNECT_DATA=(SID=sid)))
ORA-02063: preceding 3 lines from POSTGRES


I checked the internet and found some reports relating this to the ODBC
driver. Maybe you do not have the correct (or a recent) PostgreSQL ODBC driver.

Regards

Pavel


Thx,
jana





Re: Memory exhaustion due to temporary tables?

2019-01-03 Thread Thomas Carroll
 
On Monday, December 10, 2018, 7:45:07 PM EST, Tom Lane  
wrote:  
 
Thomas Carroll  writes:
> On Monday, December 10, 2018, 5:50:22 PM EST, Tom Lane  wrote:
>> Also, as mentioned upthread, it'd be interesting to see if there's
>> a memory context dump showing up in your server log.

> There are many memory context dumps, and they are long (well over 100 lines 
> apiece).  To me these are unfamiliar and hard to read, but I tried to pick 
> one that referenced all the same tables used in the function.  I put one such 
> dump (188 lines) here:
> https://drive.google.com/file/d/1QI4nffdZByIehb_-GULOagI_dKpKElg_/view

Thanks.  Here's the smoking gun:

  CacheMemoryContext: 221241400 total in 38 blocks; 152192 free (90 chunks); 
221089208 used

This looks like something is leaking memory directly in CacheMemoryContext
(not in any of its child contexts).  Can you confirm that all the memory
context dumps in your logs are showing similarly very-large total space
values for CacheMemoryContext?

This doesn't seem to be related to temp table usage, at least not
directly.  (As a cross-check, I created and dropped a large number of temp
tables in a session, and the process's VM footprint didn't budge.)  My guess
is that something else you are doing is tickling the leak.

You said you'd been able to reproduce this problem outside production.
Any chance you could boil that down to a sharable test case?  I'd bet
long odds that it's not related to your data particularly, but rather
to your schema or something your function is doing, so you probably
could reproduce it with some dummy data (and, perhaps, not even very
much of that; repeating some DDL action many times is probably what
is making the leak grow to something noticeable).

If I had to guess at this point, I'd wonder if it's something to do
with partitioned tables.  That code is still relatively wet behind
the ears, and memory leaks are the kind of bug I'd be expecting to
still be shaking out of it now.  But you might not be using partitioned
tables at all.  Anyway, anything you're doing with new-in-v10 features,
particularly schema-related features, would be a good bet for the
source of the problem.

            regards, tom lane


Thanks for your response, and sorry for the delay in getting back to this.
Because of the holidays.
To refresh your memory, the issue I encountered is that a long-persistent
connection that periodically issues function calls consumes more and more
memory - eventually causing memory exhaustion.  Admittedly, it does take
weeks.

This email is about creating a shareable test case.

The test case is simple - so simple that I am surprised no one has encountered
this before, and therefore I rather fear that I am doing something
stupid in my effort to make it shareable and simple.  In creating this example
I have not run the code until it crashed the server; instead I used the
measuring methodology you will see below (monitoring the smaps file).

I have put needed files here:

https://drive.google.com/open?id=1ff074Qxn7Bx9Ud9GRaegDfuJn_Tf_NTi

Table of contents:

testbed.public.backup # database dump, not compressed, includes "create 
database"
pg_hba.conf # self-explanatory; allows local postgres to log in as trusted
postgresql.conf # self-explanatory; nothing weird
loop_query.bash # sends a query via named pipe to Postgres once per second

To recreate the issue, load the "testbed" database into a Postgres server
configured so that psql does not need to send a password for user postgres.
I created the server with this:

initdb $PGDATA --locale="en_US.UTF-8" -E "UTF-8" -k

Then run as postgres:

cd /my/directory
mkfifo qpipe # named pipe conduit to persistent Postgres connection
cat > qpipe & # keep pipe from closing and terminating the PG connection
psql -h localhost -d testbed -U postgres < qpipe & # listener connection
./loop_query.bash # Send the function call over and over, once per second
# Observe output; should be 78 rows per second

# Switch to a new terminal session
# Identify the PID of the psql process
# Monitor with this command, but plug in the real PID for the 1234
grep "^Private" /proc/1234/smaps | awk '{a+=$2}END{print a/1024.0}'

Give it a few minutes, and you will observe memory usage climbing steadily 
(if my measurement technique is good).  From last night to this morning I 
observed memory consumption grow from 3.5 to 46.6 MB.

Tested on Postgres version 10.5.
Again, thanks for any help.

Tom Carroll
  

Re: [unixODBC]Unrecognized key passed to SQLGetInfo. {S1C00,NativeErr = 209}

2019-01-03 Thread Pavel Stehule
On Thu, Jan 3, 2019 at 12:59, Mihalidesová Jana <
jana.mihalides...@cetin.cz> wrote:

> Could it be a problem with the ODBC driver even when isql works?
>
> isql -v postgres
> +---+
> | Connected!|
> |   |
> | sql-statement |
> | help [tablename]  |
> | quit  |
> |   |
> +---+
> SQL>
>
> Thx a lot!
>

The problem is with the SQLGetInfo function, so there can be two issues:
1. the driver, 2. the ODBC environment.
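
If it is the environment, one way to narrow it down is to confirm which
configuration files and drivers unixODBC is actually loading (standard
unixODBC commands; file locations vary by system):

odbcinst -j      # show which odbc.ini / odbcinst.ini unixODBC is reading
odbcinst -q -d   # list the registered driver sections
odbcinst -q -s   # list the defined DSNs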



>
> J
>
> [...]

Re: Relocatable Binaries (RPMs) : custom installation path for PostgreSQL

2019-01-03 Thread chiru r
Hi Kevin,

Thank you for the response.

I tried to install the RPMs with the --relocate option; however, it is not
working as expected and throws the error below.

[root@Server1dev:/root/PG11]#
#-> rpm -ivh --relocate /usr/pgsql-11/=/u01/postgres/pg11_relocate/ postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm
warning: postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing...                          ################################# [100%]
path /usr/pgsql-11 in package postgresql11-server-11.1-1PGDG.rhel7.x86_64 is not relocatable
[root@server1dev:/root/PG11]#
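
Note that --relocate only works for packages built with a relocatable
prefix; a quick way to check is to inspect the package header (illustrative
command, run against the same RPM):

rpm -qpi postgresql11-server-11.1-1PGDG.rhel7.x86_64.rpm | grep -i relocations
# Relocations : (not relocatable)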


Thanks,
Chiranjeevi


On Wed, Jan 2, 2019 at 1:25 PM Kevin Brannen  wrote:

> From: chiru r 
>
> > I have installed the community PostgreSQL RPMs and they go into
> > "/usr/pgsql-11/" by default.
> > Please let us know how to get PostgreSQL-11 installed in the above
> > custom paths using RPMs?
>
>
> I've never tried it, but look at the "--relocate" option for the rpm
> command. The "--prefix" option might do it too, but I think "--relocate" is
> what you need.
>
> HTH,
> Kevin
>
>


query with regular expression

2019-01-03 Thread Glenn Schultz
All,

I am writing a query of the form select * from ... where ... ~ '[regex]'.
An example of the string that I am matching is below:

FHLG16725

The first two alpha characters to match are FN, FH, or GN; any alpha
characters between those and the numeric part don't matter, as the first two
alpha characters plus the number form a unique key.

Reading the docs, I am pretty sure I need to use ~ with a bracket expression.

I tried '^[FN-FG-GN][0-9]' but it does not seem to work.  I have to admit I
am weak on regex - never quite seem to be able to get it through my coconut.

Any help would be appreciated,
Glenn


Re: query with regular expression

2019-01-03 Thread David G. Johnston
On Thursday, January 3, 2019, Glenn Schultz  wrote:

> All,
>
> I am writing a query of the form select * from ... where ... ~ '[regex]'.
> An example of the string that I am matching is below:
>
> FHLG16725
>
> The first two alpha characters to match are FN, FH, or GN; any alpha
> characters between those and the numeric part don't matter, as the first
> two alpha characters plus the number form a unique key.
>
> Reading the docs, I am pretty sure I need to use ~ with a bracket
> expression.
>
> I tried '^[FN-FG-GN][0-9]' but it does not seem to work.  I have to admit
> I am weak on regex - never quite seem to be able to get it through my
> coconut.
>

Yeah... that's not even close; I'm not exactly sure what it would match,
but it's only two characters: maybe one letter and one number.  You seem to
want capturing groups, though, so using the ~ operator isn't going to work;
you need to use a function such as regexp_matches().


>
> Any help would be appreciated,
> Glenn
>

~ '^(FN|FH|GN)[A-Z]*[0-9]+$'
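
A quick sanity check of that pattern against the sample value (illustrative
psql snippet; the substring() call shows how to pull out the two-letter
prefix if a captured piece is what is actually needed):

SELECT 'FHLG16725' ~ '^(FN|FH|GN)[A-Z]*[0-9]+$'  AS matches,
       substring('FHLG16725' from '^(FN|FH|GN)') AS prefix;
--  matches | prefix
-- ---------+--------
--  t       | FH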

David J.


Re: query with regular expression

2019-01-03 Thread Glenn Schultz
Thanks for the tip!

On Thu, Jan 3, 2019 at 12:58 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> [...]
>
> ~ '^(FN|FH|GN)[A-Z]*[0-9]+$'
>
> David J.
>


Re: Memory exhaustion due to temporary tables?

2019-01-03 Thread Tom Lane
[ please don't re-quote the entire thread when replying ]

Thomas Carroll  writes:
> On Monday, December 10, 2018, 7:45:07 PM EST, Tom Lane  wrote:
>> You said you'd been able to reproduce this problem outside production.
>> Any chance you could boil that down to a sharable test case?

> The test case is simple - so simple that I am surprised no one has
> encountered this before, and therefore I rather fear that I am doing
> something stupid in my effort to make it shareable and simple.  In
> creating this example I have not run the code until it crashed the
> server; instead I used the measuring methodology you will see below
> (monitoring the smaps file).

Hmm.  So what this test case is doing is repeatedly creating a temp table
with ON COMMIT DROP, querying that table, and then closing the transaction
(allowing the temp table to go away).  It does leak memory in
CacheMemoryContext, and as far as I can tell, the leakage consists
entirely of negative catcache entries for pg_statistic rows.  That happens
because while querying the temp table, the planner tries to look up
statistics for the table; but there are none, since you haven't ANALYZEd
it.  So negative catcache entries get made in hopes of saving the cost of
probing pg_statistic again later.  But, when we drop the table, those
catcache entries don't get cleared because they do not match any catalog
rows that get deleted during the drop.

In a lot of use-cases, this wouldn't matter too much, either because the
session isn't long-lived enough to accumulate huge numbers of negative
entries, or because other catalog activity causes the entries to get
flushed anyway.  But if you don't have much DDL going on other than
this temp table activity, then yeah it could build up.

Not sure about good ways to fix this.  I can think of various more-or-less
klugy fixes that are specific to the pg_statistic case.  There's been some
ongoing discussion about trying to limit accumulation of negative catcache
entries more generally, but that's not very close to being committable
I think.

In the meantime, you might think about switching over to some process
that doesn't create and drop the same table constantly.  Perhaps
along the lines of

create temp table if not exists tt_preTally (...) on commit delete rows;

if (tt_preTally contains no rows) then
   insert into tt_preTally select ...
end if;

This'd have the advantage of reducing catalog churn in other catalogs
besides pg_statistic, too.
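
A fleshed-out version of that sketch might look like the following (a
hypothetical example: the table definition and the SELECT feeding it are
placeholders, not taken from the original report):

CREATE OR REPLACE FUNCTION pre_tally() RETURNS bigint
LANGUAGE plpgsql AS $$
BEGIN
    -- Created once per session; ON COMMIT DELETE ROWS empties the table
    -- at commit instead of dropping it, so there is no catalog churn
    -- (and no negative pg_statistic catcache entries piling up).
    CREATE TEMP TABLE IF NOT EXISTS tt_preTally (
        id  integer,
        val numeric
    ) ON COMMIT DELETE ROWS;

    IF NOT EXISTS (SELECT 1 FROM tt_preTally) THEN
        INSERT INTO tt_preTally
        SELECT g, g * 1.5 FROM generate_series(1, 100) AS g;
    END IF;

    RETURN (SELECT count(*) FROM tt_preTally);
END
$$;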

regards, tom lane



Re: getting pg_basebackup to use remote destination

2019-01-03 Thread Stephen Frost
Greetings Chuck,

* Chuck Martin (clmar...@theombudsman.com) wrote:
> Using iperf, the transfer speed between the two servers (from the main to
> the standby) was 938 Mbits/sec. If I understand the units correctly, it is
> close to what it can be.

That does look like the rate it should be going at, but it should only
take about two hours to copy 750GB at that rate (750GB is roughly 6,000
Gbit, and 6,000,000 Mbit at 938 Mbit/sec is about 6,400 seconds, or 1.8
hours).

How much WAL does this system generate though...?  If you're generating
a very large amount then it's possible the WAL streaming is actually
clogging up the network and causing the rate of copy on the data files
to be quite slow.  You'd have to be generating quite a bit of WAL
though.
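
One rough way to measure that on the primary (PostgreSQL 10+ function
names; the literal LSN below is a placeholder for your first sample):

SELECT pg_current_wal_insert_lsn();   -- first sample; note the value
-- ... wait a few minutes ...
SELECT pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_insert_lsn(), '0/16B3750'));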

> Your earlier suggestion was to do the pg_basebackup locally and rsync it
> over. Maybe that would be faster. At this point, it is saying it is 6%
> through, over 24 hours after being started.

For building out a replica, I'd tend to use my backups anyway instead of
using pg_basebackup.  Provided you have good backups and reasonable WAL
retention, restoring a backup and then letting it replay WAL from the
archive until it can catch up with the primary works very well.  If you
have a very high rate of WAL then you might consider taking a full
backup and then taking an incremental backup (which is much faster, and
reduces the WAL required to only what is generated between the start of
the incremental backup and the point where the replica catches up to
the primary).
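
For reference, the replica side of that archive-replay approach on the
versions current today (recovery.conf, which goes away in v12) would look
something like this; hosts and paths are illustrative:

# recovery.conf on the replica (PostgreSQL 11 and earlier)
standby_mode = 'on'
restore_command = 'cp /mnt/wal_archive/%f "%p"'   # replay from the archive
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'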

There are a few different backup tools out there that can do parallel
backup and in-transit compression; that loads up the primary's CPUs
with compression work, but it should reduce the overall time if the
bottleneck is the network.

Thanks!

Stephen




Re: Thoughts on row-level security for webapps?

2019-01-03 Thread Stephen Frost
Greetings,

* Siegfried Bilstein (sbilst...@gmail.com) wrote:
> I'm evaluating a tool called Postgraphile that generates a GraphQL
> server from a postgres setup. The recommended way of handling security is
> to implement RLS within postgres and simply have the webserver take a
> cookie or similar and define which user is querying data.
> 
> I've normally built webapps like this: pull out user id from a session
> cookie -> the API endpoint verifies the user and whether or not it has
> access to the given data -> app code mutates the data.
> 
> With Postgraphile the request specifies the mutation and the server
> processes the request and relies on Postgres to determine if the user has
> correct access rights.
> 
> It seems like I would need to create a ROLE for every single member that
> signs up for my website which I'm a little concerned about. Is this a
> common usage pattern for SQL security? Any gotchas relying on RLS?

You don't have to create a role for every member, though depending on
your expectation you might want to.  You could just set a custom GUC
which is used in the policy, but you then have to trust the web
application code to always do that correctly and to always properly
validate the client (without bugs, of course).
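
A minimal sketch of that custom-GUC approach (table, policy, role, and
setting names here are all illustrative, not from Postgraphile):

CREATE TABLE documents (
    id       serial PRIMARY KEY,
    owner_id integer NOT NULL,
    body     text
);

ALTER TABLE documents ENABLE ROW LEVEL SECURITY;

-- Every request runs as one shared application role...
CREATE ROLE app_user LOGIN;
GRANT SELECT, INSERT, UPDATE, DELETE ON documents TO app_user;
GRANT USAGE ON SEQUENCE documents_id_seq TO app_user;

-- ...and the policy compares each row against a per-request setting
-- that the web tier must set after validating the session.
CREATE POLICY documents_owner ON documents TO app_user
    USING (owner_id = current_setting('app.user_id')::integer);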

RLS has been around for a while now and it works really rather well in
most cases.  There are some corner cases where you're doing some kind of
filtering that might be able to use an index but the functions aren't
leakproof and therefore can't be used, causing a performance regression,
but that's not too hard to test for and only an issue if the policy
itself isn't very selective.

Thanks!

Stephen




Re: Thoughts on row-level security for webapps?

2019-01-03 Thread Siegfried Bilstein
Thank you for the responses.

I did some research and now understand that in my query I'll need to do
something like 'SET LOCAL user_id=5; SET ROLE app_user' and define a policy
that references a 'user_id' variable. I think I have enough info now to get
started.
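
For the archives, a hypothetical per-request transaction along those lines
(note the custom setting needs a dotted prefix, e.g. app.user_id):

BEGIN;
SET LOCAL app.user_id = '5';  -- value from the validated session cookie
SET LOCAL ROLE app_user;      -- drop to the restricted application role
SELECT * FROM documents;      -- the policy filters to owner_id = 5
COMMIT;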

On Thu, Jan 3, 2019 at 12:49 PM Stephen Frost  wrote:

> Greetings,
>
> [...]
> Thanks!
>
> Stephen
>


-- 
Siggy Bilstein
CTO of Ayuda Care
Book some time with me!


Re: getting pg_basebackup to use remote destination

2019-01-03 Thread Chuck Martin
On Thu, Jan 3, 2019 at 3:46 PM Stephen Frost  wrote:

> Greetings Chuck,
>
> * Chuck Martin (clmar...@theombudsman.com) wrote:
> > Using iperf, the transfer speed between the two servers (from the main to
> > the standby) was 938 Mbits/sec. If I understand the units correctly, it
> is
> > close to what it can be.
>
> That does look like the rate it should be going at, but it should only
> take about 2 hours to copy 750GB at that rate.


That’s what I was expecting.

>
> How much WAL does this system generate though...?  If you're generating
> a very large amount then it's possible the WAL streaming is actually
> clogging up the network and causing the rate of copy on the data files
> to be quite slow.  You'd have to be generating quite a bit of WAL
> though.


It shouldn’t be excessive, but I’ll look closely at that.

>
>
> > Your earlier suggestion was to do the pg_basebackup locally and rsync it
> > over. Maybe that would be faster. At this point, it is saying it is 6%
> > through, over 24 hours after being started.
>
> For building out a replica, I'd tend to use my backups anyway instead of
> using pg_basebackup.  Provided you have good backups and reasonable WAL
> retention, restoring a backup and then letting it replay WAL from the
> archive until it can catch up with the primary works very well.  If you
> have a very high rate of WAL then you might consider taking a full
> backup and then taking an incremental backup (which is much faster, and
> reduces the WAL required to only what is generated between the start of
> the incremental backup and the point where the replica catches up to
> the primary).
>
> There are a few different backup tools out there that can do parallel
> backup and in-transit compression; that loads up the primary's CPUs
> with compression work, but it should reduce the overall time if the
> bottleneck is the network.


I’ll check out some solutions this weekend.

I appreciate the tips.

Chuck

>
>
> Thanks!
>
> Stephen
>
-- 
Chuck Martin
Avondale Software


SPI Interface to Call Procedure with Transaction Control Statements

2019-01-03 Thread Jack LIU
Hi All,

In PG 11, procedures were introduced. In the pg_partman tool, a procedure
named run_maintenance_proc was developed to replace the run_maintenance
function. I was trying to call this procedure in pg_partman through the
SPI_execute() interface, and this is the command being executed:
CALL "partman".run_maintenance_proc(p_analyze := true, p_jobmon := true)

 I received the following error:

2019-01-02 20:13:04.951 PST [26446] ERROR:  invalid transaction termination
2019-01-02 20:13:04.951 PST [26446] CONTEXT:  PL/pgSQL function
partman.run_maintenance_proc(integer,boolean,boolean,boolean) line 45
at COMMIT

Apparently, the transaction control command 'COMMIT' is not allowed in a
procedure invoked this way, yet I can CALL this procedure in psql directly.

According to the documentation of CALL, "If CALL is executed in a
transaction block, then the called procedure cannot execute transaction
control statements. Transaction control statements are only allowed if CALL is
executed in its own transaction."

Therefore, it looks like SPI_execute() is calling the procedure within
a transaction block. So my question is: is there any SPI interface that
can call a procedure containing transaction control commands? (I tried
using SPI_connect_ext(SPI_OPT_NONATOMIC) to establish a nonatomic
connection, but it doesn't help.)
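
For reference, this is a minimal sketch of the calling pattern described
above (PostgreSQL 11 SPI, error handling mostly omitted; whether COMMIT
inside the procedure is permitted also depends on the surrounding call
context, not just this flag):

#include "postgres.h"
#include "executor/spi.h"

static void
call_partman_maintenance(void)
{
    /* Ask for a nonatomic SPI connection so that called procedures may,
     * in principle, execute transaction control statements. */
    if (SPI_connect_ext(SPI_OPT_NONATOMIC) != SPI_OK_CONNECT)
        elog(ERROR, "SPI_connect_ext failed");

    SPI_execute("CALL \"partman\".run_maintenance_proc("
                "p_analyze := true, p_jobmon := true)",
                false,   /* read_only */
                0);      /* tcount: no row-count limit */

    SPI_finish();
}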

Thanks,

Jiayi Liu