Re:Re: ERROR: XX000: cannot update SecondarySnapshot during a parallel operation

2019-03-15 Thread fuzk
When I set max_parallel_workers_per_gather=0, the select statement can execute 
successfully. The problem has been solved. Thank you all very much. Alan
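
For reference, a sketch of limiting the change to a single session or role
instead of editing postgresql.conf (the role name below is only an example):

SET max_parallel_workers_per_gather = 0;  -- current session only

-- or persist it for one role, leaving the server-wide setting untouched:
ALTER ROLE report_user SET max_parallel_workers_per_gather = 0;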








At 2019-03-14 22:29:24, "Adrian Klaver"  wrote:
>On 3/13/19 10:54 PM, fuzk wrote:
>> Dear Adrian,
>> 
>> My setting is as following.
>> 
>> max_parallel_workers_per_gather=32
>
>Not sure if it is possible without affecting other operations, but you 
>could set the above to 0 to temporarily disable parallel queries and see 
>if that eliminates the error.
>
>Also see:
>
>https://trac.osgeo.org/postgis/ticket/4129
>
>> 
>> I am looking forward to hearing from you.
>> 
>> Many thanks
>> Alan.
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> At 2019-03-13 22:31:11, "Adrian Klaver"  wrote:
>>>On 3/12/19 7:54 PM, fuzk wrote:
 Dear Sir/Madam
 
 I got an error when I execute the following select sentence.
 Would you please solve the problem for me?
>>>
>>>What version of Postgres?
>>>
 Thank you .
 
 Alan Fu.
 
 postgres=# \set VERBOSITY verbose
 postgres=# SELECT 
 round(cast(coalesce(sum(ST_length(geography(geometry)))/1000,0) as 
 NUMERIC),4)||'KM' field_value from had_link;
 
 ERROR:  XX000: cannot update SecondarySnapshot during a parallel operation
 CONTEXT:  SQL statement "SELECT proj4text FROM public.spatial_ref_sys 
 WHERE srid = 4326 LIMIT 1"
 parallel worker
 LOCATION:  GetLatestSnapshot, snapmgr.c:387
>>>
>>>I'm guessing ST_length is not parallel safe.
>>>
>>>What is your setting for?:
>>>
>>>max_parallel_workers_per_gather
>>>
 
 
>>>
>>>
>>>-- 
>>>Adrian Klaver
>>>adrian.kla...@aklaver.com
>> 
>> 
>> 
>
>
>-- 
>Adrian Klaver
>adrian.kla...@aklaver.com


Re: Camel case identifiers and folding

2019-03-15 Thread Wim Bertels
Steve Haresnape schreef op vr 15-03-2019 om 12:06 [+1300]:
> I'm porting a sql server database to postgresql 9.6. My camelCase
> identifiers are having their humps removed. This is disconcerting and
> sad.

the ISO SQL standard defines exactly this behaviour (as you note),
so this is to be expected

suggestion: instead of CamelCase, use separate_lower_case_by_underscore

If you don't like this notation, maybe this can help:
it is said to be more relaxing for the brain, since our brain tends to
focus on the capitals first.
Reasoning: when scanning an identifier like dogFeedFood, the eye jumps
first to the F of Feed and Food and only then goes back to dog.

Changing habits can be hard, unfortunately.

"" > literal
'' > string
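
A quick illustration of the folding (sketch):

create table CamelCase (myCol int);       -- folded: stored as camelcase.mycol
select MYCOL from camelcase;              -- works, both sides are folded
create table "CamelCase" ("myCol" int);   -- quoted: case is preserved
select "myCol" from "CamelCase";          -- and must be quoted from now on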

-- 
mvg,
Wim
--
Lector
UC Leuven-Limburg
--
O, it is excellent
To have a giant's strength; but it is tyrannous
To use it like a giant.
-- Shakespeare, "Measure for Measure", II, 2



Re: Do all superuser processes count toward superuser_reserved_connections?

2019-03-15 Thread Adrian Klaver

On 3/14/19 8:23 AM, Jeremy Finzel wrote:
I don't find a clear mention in the docs of superuser processes that are 
exempt from counting toward superuser_reserved_connections.  So I would 
think that it's possible that postgres autovac workers ought to count 
toward that.  Am I wrong about that?  I actually have the same question 


AFAICK autovacuum workers do not use the connections referred to above. 
The details can be found here:


https://doxygen.postgresql.org/autovacuum_8c.html

about pglogical replication background workers and manager, which also 
run as postgres.


But the actual connection can be by a different user:

https://www.postgresql.org/docs/11/logical-replication-security.html



Do I need to increase my limit to account for these workers, or are some 
or all of these processes ignored by superuser_reserved_connections?


I am still learning about logical replication, but as I understand it 
the process uses replication slots:


https://www.postgresql.org/docs/11/logical-replication-subscription.html#LOGICAL-REPLICATION-SUBSCRIPTION-SLOT

So configuring for that involves:

https://www.postgresql.org/docs/11/logical-replication-config.html

From the looks of it superuser_reserved_connections is not impacted.



Many thanks!
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Do all superuser processes count toward superuser_reserved_connections?

2019-03-15 Thread Jeremy Finzel
On Fri, Mar 15, 2019 at 9:48 AM Adrian Klaver 
wrote:

> On 3/14/19 8:23 AM, Jeremy Finzel wrote:
> > I don't find a clear mention in the docs of superuser processes that are
> > exempt from counting toward superuser_reserved_connections.  So I would
> > think that it's possible that postgres autovac workers ought to count
> > toward that.  Am I wrong about that?  I actually have the same question
>
> AFAICK autovacuum workers do not use the connections referred to above.
> The details can be found here:
>
> https://doxygen.postgresql.org/autovacuum_8c.html


Not sure I can really grok that and how it answers the question.  Are you
saying if you have max_connections set to 10, you could theoretically have
20 autovac processes still?


> > about pglogical replication background workers and manager, which also
> > run as postgres.
>
> But the actual connection can be by a different user:
>
> https://www.postgresql.org/docs/11/logical-replication-security.html
>
>
But I am speaking of pglogical, which does require superuser, last I
checked :). 

It does use replication slots, but there are processes corresponding to
each subscription.  I have some databases with dozens of them.

Thanks,
Jeremy


jsonb_set performance degradation / multiple jsonb_set on multiple documents

2019-03-15 Thread Alexandru Lazarev
Hi PostgreSQL Community.

I tried to rewrite some plv8 stored procedures, which process JSONB
documents in bulk, to PL/pgSQL.
An SP usually has to delete/update/add multiple keys within the same
document, and does this for many documents (~40K) in a loop.

When updating a single key, PL/pgSQL wins against plv8, but when I need to
update multiple keys with jsonb_set, timing increases linearly with the
number of jsonb_set calls and takes longer than the similar SP in plv8.
Below are the test cases I've used.

QUESTION: Is this expected behavior, am I doing something wrong, or is
there a better approach - e.g. can we treat the datum as an object?

test case:
PG 9.6, CentOS 7

CREATE TABLE public.configurationj2b
(
  id integer NOT NULL PRIMARY KEY,
  config jsonb NOT NULL
);
Each jsonb value has 3 top-level keys, and one of them ('data') holds
another 700-900 key-value pairs, e.g. {"OID1":"Value1"}

PL/pgSQL SP
CREATE OR REPLACE FUNCTION public.process_jsonb()
  RETURNS void AS
$BODY$
DECLARE
r integer;
cfg jsonb;
BEGIN
RAISE NOTICE 'start';
FOR r IN
SELECT id as device_id FROM devices
LOOP
select config into cfg from configurationj2b c where c.id = r;
--select jsonb one by one

-- MULTIPLE KEYs, conditional business logic (BL) updates
cfg := jsonb_set(cfg, '{data,OID1}', '"pl/pgsql1"');

IF cfg @> '{"data" : { "OID1":"pl/pgsql1"} }' THEN
    cfg := jsonb_set(cfg, '{data,OID2}', '"pl/pgsql2"');
END IF;
IF cfg @> '{"data" : { "OID2":"pl/pgsql2"} }' THEN
    cfg := jsonb_set(cfg, '{data,OID3}', '"pl/pgsql3"');
END IF;
IF cfg @> '{"data" : { "OID3":"pl/pgsql3"} }' THEN
    cfg := jsonb_set(cfg, '{data,OID4}', '"pl/pgsql4"');
END IF;
IF cfg @> '{"data" : { "OID4":"pl/pgsql4"} }' THEN
    cfg := jsonb_set(cfg, '{data,OID5}', '"pl/pgsql5"');
END IF;

update configurationj2b c set config = cfg where c.id = r;

END LOOP;
RAISE NOTICE 'end';
RETURN;
END
$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;

or in pseudo-code I would have

for-each child_jsonb do
begin
  foreach (key-value in parent_jsonb) do
  begin
    child_jsonb := jsonb_set(child_jsonb, '{key}', '"value"');
  end
  update child_jsonb in db;
end;

plv8 snippet:
$BODY$var ids = plv8.execute('select id from devices');

var CFG_TABLE_NAME = 'configurationj2b';
var selPlan = plv8.prepare( "select c.config from " + CFG_TABLE_NAME + " c
where c.id = $1", ['int'] );
var updPlan = plv8.prepare( 'update ' + CFG_TABLE_NAME + ' set config = $1
where id = $2', ['json','int'] )

try {

for (var i = 0; i < ids.length; i++) {
var db_cfg = selPlan.execute([ids[i].id]);
var cfg = db_cfg[0].config;
var cfg_data = cfg['data'];
cfg_data['OID1'] = 'plv8_01';

if (cfg_data['OID1'] == 'plv8_01') { cfg_data['OID2'] = 'plv8_02' };
if (cfg_data['OID2'] == 'plv8_02') { cfg_data['OID3'] = 'plv8_03' }
if (cfg_data['OID3'] == 'plv8_03') { cfg_data['OID4'] = 'plv8_04' }
if (cfg_data['OID4'] == 'plv8_04') { cfg_data['OID5'] = 'plv8_05' }

updPlan.execute([cfg, ids[i].id]);
plv8.elog(NOTICE, "UPDATED = " + ids[i].id);
}

} finally {
selPlan.free();
updPlan.free();
}

return;$BODY$

but for now plv8 has other issues related to resource consumption.

So could I get similar performance in PL/pgSQL?


Re: Facing issue in using special characters

2019-03-15 Thread Gunther
This is not an issue for "hackers" nor for "performance"; in fact, even
for "general" it isn't really an issue.


"Special characters" is actually nonsense.

When people complain about "special characters" they haven't thought 
things through.


If you are unwilling to think things through and go step by step to make 
sure you know what you are doing, then you will not get it and really 
nobody can help you.


In my professional experience, people who complain about "special 
characters" need to be cut loose or be given a chance (if they are 
established employees who carry some weight). If a contractor complains 
about "special characters" they need to be fired.


Understand charsets -- character set, code point, and encoding. Then 
understand how encoding and string literals and "escape sequences" in 
string literals might work.


Know that UNICODE today is the one standard, and there is no more need
for code table switching. There is nothing special about a Hebrew alef,
a Greek lower-case alpha, or a Latin A. Nor about a hyphen, an en-dash,
or an em-dash. All these characters are in UNICODE. Yes, some Japanese
users complain that their character variants are unified with the
simplified reform Chinese forms, but that's a font issue, not a
character code issue.


7-bit ASCII is the first page of UNICODE, even in the UTF-8 encoding.

ISO Latin 1 (or the Windows-1252 variant of it) has mostly the same code
points as the first UNICODE pages, but it is not compatible with the
UTF-8 encoding because of the way UTF-8 uses the 8th bit.


But none of this is likely your problem.

Your problem is about string literals in SQL, for example, or about the
configuration of your database (I always use initdb with --locale C and
--encoding UTF-8). Use UTF-8 in the database. Then all your issues are
about string literals in SQL and in Java and JSON and XML or whatever
you are using.


You have to do the right thing. If you produce any representation, 
whether that is XML or JSON or SQL or URL query parameters, or a CSV 
file, or anything at all, you need to escape your string values properly.


This question with no detail didn't deserve such a thorough answer, but 
it's my soap box. I do not accept people complaining about "special 
characters". My own people get that same sermon from me when they make 
that mistake.


-Gunther

On 3/15/2019 1:19, M Tarkeshwar Rao wrote:


Hi all,

We are facing an issue in using special characters. We are trying to
insert records into a remote Postgres server and our application is not
able to do this because of errors.


It seems that the issue is because of special characters that have been
used in one of the fields of a row.


Regards

Tarkeshwar



Re: jsonb_set performance degradation / multiple jsonb_set on multiple documents

2019-03-15 Thread Michel Pelletier
I don't know the details of jsonb_set. Perhaps the '||' operator will
perform better for you: it overwrites existing keys, so you can build
your new values in a new object and then || it onto the original.

postgres=# select '{"a": 1, "b": 2, "c": 3}'::jsonb || '{"b": 4, "c":
5}'::jsonb;
         ?column?
--------------------------
 {"a": 1, "b": 4, "c": 5}
(1 row)
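
Applied to the loop in the original post, a rough sketch (it assumes the
same configurationj2b table and leaves out the conditional logic, which
would still need handling):

UPDATE configurationj2b c
SET config = jsonb_set(
        c.config,
        '{data}',
        (c.config -> 'data') || '{"OID1": "pl/pgsql1", "OID2": "pl/pgsql2"}'::jsonb)
WHERE c.id IN (SELECT id FROM devices);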

-Michel



On Fri, Mar 15, 2019 at 9:02 AM Alexandru Lazarev <
alexandru.laza...@gmail.com> wrote:

> Hi PostgreSQL Community.
>
> I tried to rewrite some plv8 stored procedures, which process in bulk
> JSONB documents, to PL/pgSQL.
> A SP usually has to delete/update/add multiple key with the same document
> and do it for multiple documents (~40K) in loop.
>
> When updating a single key PL/pgSQL wins against plv8, but when I need to
> update multiple keys with *jsonb_set*, timing increase linearly with
> number of *jsonb_set*s and takes longer than similar SP in PLV8.
> Below are test-cases I've used.
>
> *QUESTION:* Is it expected behavior or I do something wrong or there are
> some better approaches or we can treat datum as object?
>
> [...]
>
> but for now plv8 has other issues related to resource consumption.
>
> So could I get similar performance in PL/pgSQL?
>


Conditional INSERT

2019-03-15 Thread basti
Hello,

I want to insert data into a table only if a condition is true.
For example:

INSERT into  mytable (domainid, hostname, txtdata)
  VALUES (100,'_acme.challenge.example', 'somedata');

The insert should only be done if the hostname is like %_acme.challenge%.

How can this be done? I don't want the user/script to be able to insert
arbitrary values.

Best regards.



Re: Conditional INSERT

2019-03-15 Thread Michel Pelletier
Well, the obvious question is, why are you inserting data into your
database you don't want?  It makes more sense to just not do the insert.

But, assuming perhaps you have no control over the client, you can create a
BEFORE INSERT trigger that rejects the inserts that don't match your
condition:

https://www.postgresql.org/docs/current/plpgsql-trigger.html#PLPGSQL-DML-TRIGGER
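
A minimal sketch of such a trigger (the table, function, and pattern below
are illustrative, not your actual schema):

CREATE OR REPLACE FUNCTION check_acme_hostname() RETURNS trigger AS $$
BEGIN
    -- LIKE treats _ as a single-character wildcard, so it is escaped here
    -- to require a literal underscore.
    IF NEW.hostname NOT LIKE '%\_acme.challenge%' THEN
        -- use RETURN NULL instead of RAISE to silently drop the row
        RAISE EXCEPTION 'hostname % is not allowed', NEW.hostname;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_check_hostname
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE check_acme_hostname();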



On Fri, Mar 15, 2019 at 10:55 AM basti  wrote:

> Hello,
>
> I want to insert data into table only if condition is true.
> For example:
>
> INSERT into  mytable (domainid, hostname, txtdata)
>   VALUES (100,'_acme.challenge.example', 'somedata');
>
> The insert should only be done if Hostname like %_acme.challenge%.
>
> How can it be done? I dont want that the user/script can insert any value.
>
> Best regards.
>
>


Re: Conditional INSERT

2019-03-15 Thread Michael Lewis
>
> On Fri, Mar 15, 2019 at 10:55 AM basti 
> wrote:
>
>> Hello,
>>
>> I want to insert data into table only if condition is true.
>> For example:
>>
>> INSERT into  mytable (domainid, hostname, txtdata)
>>   VALUES (100,'_acme.challenge.example', 'somedata');
>>
>
As an alternative to a trigger implementation, if you are generating that
INSERT statement, you can change it to use a sub-select or CTE that
contains no rows if the value isn't what you like. If you want it to fail
with an error, you could add a check constraint. We might need more
context on what you are doing and why to give good advice.


Re: Conditional INSERT

2019-03-15 Thread Paul Jungwirth

On 3/15/19 10:55 AM, basti wrote:

I want to insert data into table only if condition is true.
For example:

INSERT into  mytable (domainid, hostname, txtdata)
   VALUES (100,'_acme.challenge.example', 'somedata');

The insert should only be done if Hostname like %_acme.challenge%.


I would use `INSERT INTO ... SELECT` for this, instead of `INSERT INTO 
... VALUES`. For example:


INSERT INTO mytable (domainid, hostname, txtdata)
SELECT 100, '_acme.challenge.example', 'somedata'
WHERE '_acme.challenge.example' LIKE '%_acme.challenge%'
;

(Presumably in the real code the hostname is parameterized so this isn't 
quite as pointless as it looks. :-)


If you are inserting a lot of rows at once you could also SELECT from a 
VALUES list:


INSERT INTO mytable (domainid, hostname, txtdata)
SELECT d, h, t
FROM (VALUES
  (100, '_acme.challenge.example', 'somedata'),
  (200, 'bar.example.com', 'somedata'),
  (300, 'foo.example.com', 'somedata'),
  (400, '_acme.challenge.example', 'somedata')
) x(d, h, t)
WHERE h LIKE '%_acme.challenge%'
;

I hope that helps!

Yours,

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Conditional INSERT

2019-03-15 Thread basti
This is a DNS database, and the client updates the _acme-challenge records
for LE certificates. I don't want the client to be able to insert "any"
TXT record; the client should only insert data if the hostname starts with
_acme-challenge. I have no control over the client.

I have tried this rule, but the server rejects it with an endless loop:

CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
WHERE NEW.hostname like '_acme-challenge%'
DO INSERT INTO t_dnsadmin_records_txt VALUES (
NEW.domainid,
NEW.hostname,
NEW.txtdata
);


On 15.03.19 19:17, Michael Lewis wrote:
> On Fri, Mar 15, 2019 at 10:55 AM basti  > wrote:
> 
> Hello,
> 
> I want to insert data into table only if condition is true.
> For example:
> 
> INSERT into  mytable (domainid, hostname, txtdata)
>   VALUES (100,'_acme.challenge.example', 'somedata');
> 
>  
> Alternative to a trigger implementation, if you are generating that
> INSERT statement, you can change it to use a sub-select or CTE that
> contains no values if the domainid isn't what you like. If you want it
> to fail with error, you could add a check constraint. We might need more
> context on what you are doing and why to give good advice.



Re: Conditional INSERT

2019-03-15 Thread Adrian Klaver

On 3/15/19 11:54 AM, basti wrote:

this is a dns database, and the client is update the _acme-challenge for
LE certificates. I don't want that the client can insert "any" txt record.
the client should only insert data if the hostname start with
_acme-challenge. i have no control on client.

i have try this rule but the server reject this with a endless loop:


To borrow a quote:

"I had a problem so I decided to use a rule, now I have two problems."

Do not use a rule. As suggested upstream, use a BEFORE INSERT trigger;
you will be a lot happier.




CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
 WHERE NEW.hostname like '_acme-challenge%'
 DO INSERT INTO t_dnsadmin_records_txt VALUES (
 NEW.domainid,
 NEW.hostname,
 NEW.txtdata
 );


On 15.03.19 19:17, Michael Lewis wrote:

 On Fri, Mar 15, 2019 at 10:55 AM basti mailto:mailingl...@unix-solution.de>> wrote:

 Hello,

 I want to insert data into table only if condition is true.
 For example:

 INSERT into  mytable (domainid, hostname, txtdata)
   VALUES (100,'_acme.challenge.example', 'somedata');

  
Alternative to a trigger implementation, if you are generating that

INSERT statement, you can change it to use a sub-select or CTE that
contains no values if the domainid isn't what you like. If you want it
to fail with error, you could add a check constraint. We might need more
context on what you are doing and why to give good advice.






--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Conditional INSERT

2019-03-15 Thread Rob Sargent


> On Mar 15, 2019, at 12:59 PM, Adrian Klaver  wrote:
> 
> On 3/15/19 11:54 AM, basti wrote:
>> this is a dns database, and the client is update the _acme-challenge for
>> LE certificates. I don't want that the client can insert "any" txt record.
>> the client should only insert data if the hostname start with
>> _acme-challenge. i have no control on client.
>> i have try this rule but the server reject this with a endless loop:
> 
> To borrow a quote:
> 
> "I had a problem so I decided to use a rule, now I have two problems."
> 
> Do not use a rule. As suggested upstream use a BEFORE INSERT trigger, you 
> will be a lot happier.
> 
>> CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
>> WHERE NEW.hostname like '_acme-challenge%'
>> DO INSERT INTO t_dnsadmin_records_txt VALUES (
>> NEW.domainid,
>> NEW.hostname,
>> NEW.txtdata
>> );
>> On 15.03.19 19:17, Michael Lewis wrote:
>>> On Fri, Mar 15, 2019 at 10:55 AM basti >> > wrote:
>>> 
>>> Hello,
>>> 
>>> I want to insert data into table only if condition is true.
>>> For example:
>>> 
>>> INSERT into  mytable (domainid, hostname, txtdata)
>>>   VALUES (100,'_acme.challenge.example', 'somedata');
>>> 
>>>  Alternative to a trigger implementation, if you are generating that
>>> INSERT statement, you can change it to use a sub-select or CTE that
>>> contains no values if the domainid isn't what you like. If you want it
>>> to fail with error, you could add a check constraint. We might need more
>>> context on what you are doing and why to give good advice.
> 
> 

Does a check constraint not suffice in this situation?



Re: Do all superuser processes count toward superuser_reserved_connections?

2019-03-15 Thread Adrian Klaver

On 3/15/19 8:16 AM, Jeremy Finzel wrote:
On Fri, Mar 15, 2019 at 9:48 AM Adrian Klaver > wrote:


On 3/14/19 8:23 AM, Jeremy Finzel wrote:
 > I don't find a clear mention in the docs of superuser processes
that are
 > exempt from counting toward superuser_reserved_connections.  So I
would
 > think that it's possible that postgres autovac workers ought to
count
 > toward that.  Am I wrong about that?  I actually have the same
question

AFAICK autovacuum workers do not use the connections referred to above.
The details can be found here:

https://doxygen.postgresql.org/autovacuum_8c.html


Not sure I can really grok that and how it answers the question.  Are 
you saying if you have max_connections set to 10, you could 
theoretically have 20 autovac processes still?


Yes:

https://www.postgresql.org/docs/11/routine-vacuuming.html#AUTOVACUUM

"There is no limit on how many workers might be in a single database, 
but workers do try to avoid repeating work that has already been done by 
other workers. Note that the number of running workers does not count 
towards max_connections or superuser_reserved_connections limits."





 > about pglogical replication background workers and manager, which
also
 > run as postgres.

But the actual connection can be by a different user:

https://www.postgresql.org/docs/11/logical-replication-security.html


But I am speaking of pglogical, which does require superuser, last I 
checked :).


Aah, my mistake, I conflated pglogical with the built-in logical
replication, as the latter came from the former. Still:


https://www.2ndquadrant.com/en/resources/pglogical/pglogical-docs/

2.1 Quick Setup

First the PostgreSQL server has to be properly configured to support 
logical decoding:


wal_level = 'logical'
max_worker_processes = 10   # one per database needed on provider node
# one per node needed on subscriber node
max_replication_slots = 10  # one per node needed on provider node
max_wal_senders = 10# one per node needed on provider node
shared_preload_libraries = 'pglogical'





It does use replication slots, but there are processes corresponding to 
each subscription.  I have some databases with dozens of them.


A process does not necessarily equal a connection.
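
One way to see the distinction on a 10+ server (a sketch): only the
'client backend' rows occupy ordinary connection slots, while autovacuum
workers and background workers are reported under other backend types.

SELECT backend_type, count(*)
FROM pg_stat_activity
GROUP BY backend_type
ORDER BY count(*) DESC;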



Thanks,
Jeremy



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Conditional INSERT

2019-03-15 Thread Michel Pelletier
You're right, it probably does, unless the constraint needs to do a
sub-query to get the matching pattern, which would require a trigger.

On Fri, Mar 15, 2019 at 12:05 PM Rob Sargent  wrote:

>
>
> On Mar 15, 2019, at 12:59 PM, Adrian Klaver 
> wrote:
>
> On 3/15/19 11:54 AM, basti wrote:
>
> this is a dns database, and the client is update the _acme-challenge for
> LE certificates. I don't want that the client can insert "any" txt record.
> the client should only insert data if the hostname start with
> _acme-challenge. i have no control on client.
> i have try this rule but the server reject this with a endless loop:
>
>
> To borrow a quote:
>
> "I had a problem so I decided to use a rule, now I have two problems."
>
> Do not use a rule. As suggested upstream use a BEFORE INSERT trigger, you
> will be a lot happier.
>
> CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
> WHERE NEW.hostname like '_acme-challenge%'
> DO INSERT INTO t_dnsadmin_records_txt VALUES (
> NEW.domainid,
> NEW.hostname,
> NEW.txtdata
> );
> On 15.03.19 19:17, Michael Lewis wrote:
>
> On Fri, Mar 15, 2019 at 10:55 AM basti  >
> wrote:
>
> Hello,
>
> I want to insert data into table only if condition is true.
> For example:
>
> INSERT into  mytable (domainid, hostname, txtdata)
>   VALUES (100,'_acme.challenge.example', 'somedata');
>
>  Alternative to a trigger implementation, if you are generating that
> INSERT statement, you can change it to use a sub-select or CTE that
> contains no values if the domainid isn't what you like. If you want it
> to fail with error, you could add a check constraint. We might need more
> context on what you are doing and why to give good advice.
>
>
>
>
> Does a check constraint not suffice in this situation?
>
>


Re: Facing issue in using special characters

2019-03-15 Thread Chapman Flack
On 3/15/19 11:59 AM, Gunther wrote:
> This is not an issue for "hackers" nor "performance" in fact even for
> "general" it isn't really an issue.

As long as it's already been posted, may as well make it something
helpful to find in the archive.

> Understand charsets -- character set, code point, and encoding. Then
> understand how encoding and string literals and "escape sequences" in
> string literals might work.

Good advice for sure.

> Know that UNICODE today is the one standard, and there is no more need

I wasn't sure from the question whether the original poster was in
a position to choose the encoding of the database. Lots of things are
easier if it can be set to UTF-8 these days, but perhaps it's a legacy
situation.

Maybe a good start would be to go do

  SHOW server_encoding;
  SHOW client_encoding;

and then hit the internet and look up what that encoding (or those
encodings, if different) can and can't represent, and go from there.

It's worth knowing that, when the server encoding isn't UTF-8,
PostgreSQL will have the obvious limitations entailed by that,
but also some non-obvious ones that may be surprising, e.g. [1].

-Chap


[1]
https://www.postgresql.org/message-id/CA%2BTgmobUp8Q-wcjaKvV%3DsbDcziJoUUvBCB8m%2B_xhgOV4DjiA1A%40mail.gmail.com



Re: Conditional INSERT

2019-03-15 Thread Andreas Kretschmer




Am 15.03.19 um 18:55 schrieb basti:

Hello,

I want to insert data into table only if condition is true.
For example:

INSERT into  mytable (domainid, hostname, txtdata)
   VALUES (100,'_acme.challenge.example', 'somedata');

The insert should only be done if Hostname like %_acme.challenge%.




you can use a check-constraint:

create table mytable(hostname text, check(hostname like 
'%_acme.challenge%'));



I think you can see the idea...
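
For an existing table, a sketch along the same lines (note that LIKE
treats _ as a single-character wildcard, so escape it if you want to
require a literal underscore):

ALTER TABLE mytable
    ADD CONSTRAINT hostname_is_acme_challenge
    CHECK (hostname LIKE '%\_acme.challenge%');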


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Camel case identifiers and folding

2019-03-15 Thread Morris de Oryx
The original question has already been answered really well, but it reminds
me to mention that *Postgres text/varchar values are case-sensitive*.
Here's a list of the times when I would like a case-sensitive text field:

   Never

Now here's the list of times I would like a case-blind text field:

   Everywhere else.

If this is how you feel too, there are several alternatives. The one that
I've chosen is to use the citext extension instead of text fields. This
takes care of the problem without having to add extra function calls to
your queries, do anything special with indexes, etc.
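
A minimal sketch of that approach:

CREATE EXTENSION IF NOT EXISTS citext;

CREATE TABLE app_user (email citext PRIMARY KEY);  -- hypothetical table
INSERT INTO app_user VALUES ('Alice@Example.com');

SELECT * FROM app_user WHERE email = 'alice@example.com';  -- matches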

 If you have JSON, which has case-sensitive element names, use JSONB.

Your requirements may differ from mine! Other people have good reason to
want case-sensitive searches. I just never do. (30+ years in programming
and I can't remember a time I wanted user data to be treated
case-sensitively...but you never know...one day...maybe.) There's also an
extension for stripping accents, which I've not needed.

I've idly wondered if using a different collation on a text field might be
a better answer than using citext everywhere? If anyone wants to set me
straight on this, I'd be grateful.


Re: Camel case identifiers and folding

2019-03-15 Thread Rob Sargent


> On Mar 15, 2019, at 4:43 PM, Morris de Oryx  wrote:
> 
> The original question has already been answered really well, but it reminds 
> me to mention that Postgres text/varchar values are case-sensitive. Here's a 
> list of the times when I would like a case-sensitive text field:
> 
>Never
> 
> Now here's the list of times I would like a case-blind text field:
> 
>Everywhere else.
> 
> If this is how you feel too, there are several alternatives. The one that 
> I've chosen is to use the citext extension instead of text fields.This takes 
> care of the problem without having to add extra function calls to your 
> queries, do anything special with indexes, etc.
> 
>  If you have JSON, which has case-sensitive element names, use JSONB. 
> 
> Your requirements may differ than mine! Other people have good reason to want 
> case-sensitive searches. I just never do. (30+ years in programming and I 
> can't remember a time I wanted user data to be treated case-sensitively...but 
> you never know...one day...maybe.) There's also an extension for stripping 
> accents, which I've not needed.
> 
> I've idly wondered if using a different collation on a text field might be a 
> better answer than using citext everywhere? If anyone wants to set me 
> straight on this, I'd be grateful.
> 
> 
> 
What sort of content is in your field of type text?  Certainly, in English
prose, “rob” is different from “Rob”, and if the content is for a web page
(or, in my experience, the content of medical reference books) these
differences are critical.

Re: Conditional INSERT

2019-03-15 Thread Ken Tanzer
On Fri, Mar 15, 2019 at 11:59 AM Adrian Klaver 
wrote:

> On 3/15/19 11:54 AM, basti wrote:
> > this is a dns database, and the client is update the _acme-challenge for
> > LE certificates. I don't want that the client can insert "any" txt
> record.
> > the client should only insert data if the hostname start with
> > _acme-challenge. i have no control on client.
> >
> > i have try this rule but the server reject this with a endless loop:
>
> To borrow a quote:
>
> "I had a problem so I decided to use a rule, now I have two problems."
>
> Do not use a rule. As suggested upstream use a BEFORE INSERT trigger,
> you will be a lot happier.
>
> >
> > CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
> >  WHERE NEW.hostname like '_acme-challenge%'
> >  DO INSERT INTO t_dnsadmin_records_txt VALUES (
> >  NEW.domainid,
> >  NEW.hostname,
> >  NEW.txtdata
> >  );
> >
> >
>

Just curious, but wanted to follow up on whether rules are across-the-board
discouraged?  I've seen disparaging comments about them, but I don't see
any indication of that on the create rule page.

The other suggestion in this thread--a foreign key--will throw an error.
Your suggestion of a before trigger might well be better (and if so, why?),
but is there anything particularly wrong or bad about using a rule that
would actually work?  Something along these lines:

CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
WHERE NOT NEW.hostname like '_acme-challenge%'
DO INSTEAD NOTHING;

Thanks,
Ken



-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Camel case identifiers and folding

2019-03-15 Thread Morris de Oryx
We definitely *store* data case-sensitively; we just never want to *search*
on it case-sensitively. That's what citext gives us. Many databases behave
this way by default. Postgres does not, but it offers alternatives. The OP
is coming from MySQL which, if I remember correctly, treated
non-binary-like text columns as case-insensitive in searches. That's why I
mentioned the Postgres behavior; it's a gotcha if you're assuming something
else will happen.

More to the point, users never want case-sensitive searches; it's just
confusing for them.

There are places where we've got data where byte-level/code page
differences are significant. But a lot of that is binary-like data. These
are rare, and I'm willing to do a bit of extra work for them. I can't even
think of such a case off the top of my head.

UUIDs as a type are an interesting case in Postgres. They're stored as a
large number internally for efficiency (good!), but are presented by
default in the 36-character format with the dashes. However, you can also
search using the dashless 32-character format and it all works,
case-insensitively. Postgres converts 36/32-character strings of any case
combination back into the relevant number and then searches. Anything else
would be pointlessly hard to deal with.

There are also cases where case-sensitivity is not optional. For example,
we save and generate JSON (like everyone else) for various tasks. JSON
element names are case-sensitive. Not our call, just the way it is.
Personally, I think that case-sensitive language element names are one of
the stupidest design choices in history...but no one asked me. There are
solid arguments in favor of the idea (Dijkstra himself argued in their
favor), and it's an unchangeable fact of life. So in those cases, yes,
case-sensitivity matters: namely, when the data itself is case-sensitive.
The truth is, I rarely have a reason to use a 0NF packed field type like
JSON, so the issue doesn't come up in our Postgres searches. But if I did
plan to store JSON, say API response logs, I'd want those searches to be
case-sensitive and would use JSONB and the necessary operators.

Permission Read Only User

2019-03-15 Thread Sathish Kumar
Hi All,

I have created a read-only user to perform select statements on our
database, but whenever we create new tables on the database this user is
unable to view them unless I grant SELECT again for that table. Is there a
way I can make SELECT a default permission for this user, so that any new
tables I create in the future will still be accessible?


Re: Conditional INSERT

2019-03-15 Thread Adrian Klaver

On 3/15/19 4:23 PM, Ken Tanzer wrote:
On Fri, Mar 15, 2019 at 11:59 AM Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> wrote:


On 3/15/19 11:54 AM, basti wrote:
 > this is a dns database, and the client is update the
_acme-challenge for
 > LE certificates. I don't want that the client can insert "any"
txt record.
 > the client should only insert data if the hostname start with
 > _acme-challenge. i have no control on client.
 >
 > i have try this rule but the server reject this with a endless loop:

To borrow a quote:

"I had a problem so I decided to use a rule, now I have two problems."

Do not use a rule. As suggested upstream use a BEFORE INSERT trigger,
you will be a lot happier.

 >
 > CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
 >      WHERE NEW.hostname like '_acme-challenge%'
 >      DO INSERT INTO t_dnsadmin_records_txt VALUES (
 >                                      NEW.domainid,
 >                                      NEW.hostname,
 >                                      NEW.txtdata
 >                                  );
 >
 >


Just curious, but wanted to follow up on whether rules are 
across-the-board discouraged?  I've seen disparaging comments about 
them, but I don't see any indication of that on the create rule page.


See here:
https://www.postgresql-archive.org/Deprecating-RULES-td5727689.html

The rumor crops up periodically that they will be deprecated. I personally
do not see that happening any time soon.


My issue with rules is this:

https://www.postgresql.org/docs/11/rules.html

If you can understand what really goes on in the above, you are ahead of
me. In my experience they do the unexpected as often as the expected, so I
stay away from them. Triggers I understand, even when they error, which is
the important part.




The other suggestion in this thread--a foreign key--will throw an 
error.  Your suggestion of a before trigger might well be better (and if 
so, why?), but is there anything particularly wrong or bad about using a 
rule that would actually work?  Something along these lines:


The trouble is that simple rarely stays simple, and following the logic is
a lot easier in a trigger function than in a rule. This is mostly down to
the query rewriting that goes on in a rule, which is borderline, if not
outright, black magic. Try to follow what happens here:


https://www.postgresql.org/docs/11/rules-update.html

41.4.1.1. A First Rule Step by Step



CREATE RULE insert_acme AS ON INSERT TO t_dnsadmin_records_txt
WHERE NOT NEW.hostname like '_acme-challenge%'
DO INSTEAD NOTHING;

Thanks,
Ken



--
AGENCY Software
A Free Software data system
By and for non-profits
/http://agency-software.org//
/https://demo.agency-software.org/client/
ken.tan...@agency-software.org 
(253) 245-3801

Subscribe to the mailing list 
 to

learn more about AGENCY or
follow the discussion.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Permission Read Only User

2019-03-15 Thread Adrian Klaver

On 3/15/19 4:37 PM, Sathish Kumar wrote:

Hi All,

I have created a read only user to perform select statements on our 
database but whenever we create new tables on the database this user is 
unable to view it unless I grant select again for this table. Is there a 
way I can make select as default permission for this user so that in 
future if I create any new tables, it will be still accessible.


Maybe this?:

https://www.postgresql.org/docs/11/sql-alterdefaultprivileges.html


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Conditional INSERT

2019-03-15 Thread Ken Tanzer
On Fri, Mar 15, 2019 at 4:42 PM Adrian Klaver 
wrote:

> > Just curious, but wanted to follow up on whether rules are
> > across-the-board discouraged?  I've seen disparaging comments about
> > them, but I don't see any indication of that on the create rule page.
>
> See here:
> https://www.postgresql-archive.org/Deprecating-RULES-td5727689.html
>
> The rumor crops periodically that they will be deprecated. I personally
> do not see that happening any time soon.
>
> My issue with rules is this:
>
> https://www.postgresql.org/docs/11/rules.html
>
> If you can understand what really goes on in the above you are ahead of
> me. Given that my experience is that they do the unexpected as often as
> the expected so I stay away from them. Triggers I understand even when
> they error, which is the important part.
>
OK, and thanks for the info. I've gleaned that rules are not "deprecated"
in the sense that they are slated for removal, but they are rather
discouraged.  Since that's the case, wouldn't it make sense to warn users
about this?  That might keep them away from rules, and ease any eventual
deprecation/transition issues you might have by lowering the number of
"rules" in the wild.

In the section on "Rules vs. Triggers" (41.7), it doesn't even hint at
this, and even says:


*"For the things that can be implemented by both, which is best depends on
the usage of the database."*

There _is_ a nice CAUTION box in 41.4 ("Rules on Insert, Update and
Delete").  I would suggest that something like that should go into Rules
vs. Triggers, and then have a short caution box at the top of perhaps every
Rule page (or at the very least for "CREATE RULE") that says something like
"Rules are discouraged.  You may be better off with Triggers instead.  See
."

Just my two cents, but that might help more people be aware of the issue
and avoid rules altogether.

Cheers,
Ken


> >
> > --
> > AGENCY Software
> > A Free Software data system
> > By and for non-profits
> > /http://agency-software.org//
> > /https://demo.agency-software.org/client/
> > ken.tan...@agency-software.org 
> > (253) 245-3801
> >
> > Subscribe to the mailing list
> >  to
> > learn more about AGENCY or
> > follow the discussion.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


-- 
AGENCY Software
A Free Software data system
By and for non-profits
*http://agency-software.org/ *
*https://demo.agency-software.org/client
*
ken.tan...@agency-software.org
(253) 245-3801

Subscribe to the mailing list
 to
learn more about AGENCY or
follow the discussion.


Re: Conditional INSERT

2019-03-15 Thread Adrian Klaver

On 3/15/19 5:19 PM, Ken Tanzer wrote:



On Fri, Mar 15, 2019 at 4:42 PM Adrian Klaver > wrote:


OK, and thanks for the info. I've gleaned that rules are not 
"deprecated" in the sense that they are slated for removal, but they are 
rather discouraged.  Since that's the case, wouldn't it make sense to 
warn users about this?  That might keep them away from rules, and ease 
any eventual deprecation/transition issues you might have by lowering 
the number of "rules" in the wild.


Well then there is this section:

https://www.postgresql.org/docs/11/rules-views.html

So, as was mentioned in the thread I posted, until someone comes up with a
replacement they are not going away.




In the section on "Rules vs. Triggers" (41.7), it doesn't even hint at 
this, and even says:


/"For the things that can be implemented by both, which is best depends 
on the usage of the database."

/

There _is_ a nice CAUTION box in 41.4 ("Rules on Insert, Update and 
Delete").  I would suggest that something like that should go into Rules 
vs. Triggers, and then have a short caution box at the top of perhaps 
every Rule page (or at the very least for "CREATE RULE") that says 
something like "Rules are discouraged.  You may be better off with 
Triggers instead.  See ."


Just my two cents, but that might help more people be aware of the issue 
and avoid rules altogether.


I thought I remembered there being something like that in an older version
of the docs. I can't find it now, so it might just be my mind playing
tricks on me.




Cheers,
Ken



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Permission Read Only User

2019-03-15 Thread Ron

On 3/15/19 6:44 PM, Adrian Klaver wrote:

On 3/15/19 4:37 PM, Sathish Kumar wrote:

Hi All,

I have created a read only user to perform select statements on our 
database but whenever we create new tables on the database this user is 
unable to view it unless I grant select again for this table. Is there a 
way I can make select as default permission for this user so that in 
future if I create any new tables, it will be still accessible.


Maybe this?:

https://www.postgresql.org/docs/11/sql-alterdefaultprivileges.html


That's just what we used.

Create a role READONLY that is granted CONNECT, and alter the default
privileges on all your schemas to grant SELECT to that role.


Then create "user" roles which inherit from READONLY.

--
Angular momentum makes the world go 'round.



Re: Conditional INSERT

2019-03-15 Thread Tom Lane
Ken Tanzer  writes:
> OK, and thanks for the info. I've gleaned that rules are not "deprecated"
> in the sense that they are slated for removal, but they are rather
> discouraged.  Since that's the case, wouldn't it make sense to warn users
> about this?

There's no plan to remove them, but we do encourage people to think of
triggers first.  That's why the triggers chapter appears first, and why
the "rules vs. triggers" section doesn't really read as evenhanded
(to me anyway).

> In the section on "Rules vs. Triggers" (41.7), it doesn't even hint at
> this, and even says:
> *"For the things that can be implemented by both, which is best depends on
> the usage of the database."*

You're ignoring the sentence immediately before that, which is

Writing such triggers is often easier than writing rules, particularly
if complex logic is required to perform the update.

as well as the one at the end of its (short) paragraph:

However, the trigger approach is conceptually far simpler than the
rule approach, and is easier for novices to get right.

The only case where we're really encouraging people to use rules is
where the overhead of a trigger is unacceptable.  Even then, this
whole section is written thinking of per-row triggers.  The performance
tradeoffs would likely be quite different if using a per-statement trigger
with transition tables.  But that's a very new feature, and I don't think
anyone's done serious performance comparisons of that vs. rules.
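
For reference, a minimal sketch of a per-statement trigger with a
transition table (PG 10+; the audit table and names are hypothetical):

CREATE OR REPLACE FUNCTION audit_new_hostnames() RETURNS trigger AS $$
BEGIN
    INSERT INTO hostname_audit (hostname)  -- hypothetical audit table
    SELECT hostname FROM new_rows;
    RETURN NULL;  -- return value is ignored for AFTER ... STATEMENT triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER audit_inserts
    AFTER INSERT ON t_dnsadmin_records_txt
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT EXECUTE PROCEDURE audit_new_hostnames();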

regards, tom lane