Re: psql variables in the DO command

2018-03-05 Thread Pavel Stehule
Hi

2018-03-05 14:13 GMT+01:00 Pavel Luzanov :

> Hello,
>
> I can't use psql variable in the DO command. Is it intentional behavior?
>

Yes. psql variables live on the client side and are not accessible from the
server side. The DO command is executed on the server side.

You can copy psql variables into GUC variables with the set_config function,
and then read the content on the server side with the current_setting function.
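
A minimal sketch of this workaround (the GUC name "myvars.var" is only an
example):

\set var 'Hello, World!'
SELECT set_config('myvars.var', :'var', false);

DO $$
BEGIN
  RAISE NOTICE '%', current_setting('myvars.var');  -- prints Hello, World!
END;
$$;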

Regards

Pavel



>
> postgres=# \set var 'Hello, World!'
>
> postgres=# do $$begin raise notice '%', :'var'; end;$$;
>
> ERROR:  syntax error at or near ":"
>
> LINE 1: do $$begin raise notice '%', :'var'; end;$$;
>
>  ^
>
>
> --
>
> -
> Pavel Luzanov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>
>


Re: psql variables in the DO command

2018-03-05 Thread Pavel Stehule
2018-03-05 14:52 GMT+01:00 Pavel Luzanov :

> On 05.03.2018 16:42, Pavel Stehule wrote:
>
>
> I can't use psql variable in the DO command. Is it intentional behavior?
>>
>
> yes. psql variables living on client side, and are not accessible from
> server side . DO command is executed on server side.
>
>
> But SELECT command also executed on a server side ))
> I thought that the command is sent to the server after variable's
> replacement.
>

psql variables are injected into the SQL string before execution (before the
SQL string is sent to the server). But this injection is disabled inside
string literals, and the body of a DO command is passed as a string.

postgres=# \set xxx ahoj
postgres=# select ':xxx';
+----------+
| ?column? |
+----------+
| :xxx     |
+----------+
(1 row)


Regards

Pavel


>
>
> you can copy psql variables to GUC variables by set_config function, and
> then on server side use current_setting function for getting the content.
>
> Yes, I know about workarounds.
>
>
> -
> Pavel Luzanov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>


Re: psql variables in the DO command

2018-03-05 Thread Pavel Stehule
2018-03-05 15:02 GMT+01:00 Pavel Luzanov :

> On 05.03.2018 16:56, Pavel Stehule wrote:
>
>
>> I can't use psql variable in the DO command. Is it intentional behavior?
>>>
>>
>> yes. psql variables living on client side, and are not accessible from
>> server side . DO command is executed on server side.
>>
>>
>> But SELECT command also executed on a server side ))
>> I thought that the command is sent to the server after variable's
>> replacement.
>>
>
> The psql variables are injected into SQL string before execution (before
> SQL string is sent to server). But this injection is disabled inside
> strings - and body of DO command is passed as string.
>
> Yes, now I understand this. But at first glance this is not an obvious
> behavior.
>

It is the most correct behavior when you think about it.

1. :xx is outside of SQL syntax, so it can be used safely. There is no risk
of unwanted substitution.

2. But a string literal can contain :xxx symbols that are not necessarily
meant as psql variables - so additional syntax for disabling the evaluation
would be necessary.

3. I understand the request to use psql variables in the DO command. But you
should remember that the body of a DO command is a string, and the body of
any function is a string too. An unwanted psql variable evaluation in CREATE
FUNCTION could be tragic.

Unfortunately the DO command is half baked - it doesn't support parameters.
I am working on schema variables and I hope they will be a solution to this
issue:

CREATE VARIABLE var as integer;

LET var = :psqlintvar;

DO $$
BEGIN
  RAISE NOTICE '%', var;
END;
$$;



> -
> Pavel Luzanov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>


Re: psql variables in the DO command

2018-03-05 Thread Pavel Stehule
2018-03-05 16:19 GMT+01:00 Pavel Luzanov :

> On 05.03.2018 18:01, Pavel Stehule wrote:
>
>
> It is most correct when you thinking about it.
>
> 1. :xx is out of SQL syntax, so can by safely used. There is not risk of
> unwanted usage.
>
> But there is absence of wanted usage too.
>

How strong and how frequent is that need? The SQL parser in psql is not
nice, and I understand that nobody wants to complicate the syntax. The
current design is safe and good enough. The problem is not in the
evaluation, but in the DO implementation.

>
> 2. but string literal can contain :xxx symbols and not necessary it means
> so it should be usage of psql variable - so additional syntax for disabling
> evaluation should be necessary
>
> Yes, and Oracle sqlplus (I know that you know this tool) has a special
> command to control this: set define ...
>
>
> Unfortunately DO command is half baked - and doesn't support parameters. I
> am working on schema variables and I hope it will be a solution of this
> issue:
>
> CREATE VARIABLE var as integer;
>
> LET var = :psqlintvar;
>
> DO $$
> BEGIN
>   RAISE NOTICE '%', var;
> END;
> $$;
>
>
> It will be great. I already commented it in your blog.
>

I am slowly working on a prototype. The work is simple when variables are
just scalars, but it is much harder when we allow composite variables. When
the prototype is done, I invite any cooperation - there are lots of
questions, and one very hard one: where and how the variable values should
be stored. Now I have a workaround, but it is pretty ugly code.

CREATE VARIABLE x INT;
LET x = 10;
BEGIN;
DROP VARIABLE x;
ROLLBACK;
SELECT x; -- should still be 10 .. the PostgreSQL internal caches are not
prepared for this situation

Regards

Pavel


>
> -
> Pavel Luzanov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>


Re: psql variables in the DO command

2018-03-06 Thread Pavel Stehule
2018-03-06 10:17 GMT+01:00 Pavel Luzanov :

> On 05.03.2018 18:35, Pavel Stehule wrote:
>
> I am slowly working on prototype. The work is simple, when variables are
> just scalars. But it is much harder, when we allow composite variables.
> When prototype will be done, I invite any cooperation - there are lot of
> question - and one very hard - where and how the variable should be stored
> (values) - now I have workaround, but it is pretty ugly code.
>
> I'm not a big expert on postgres internals, but ready to participate.
>

Big thanks. I'll start a new thread to stop being off topic here.

Thank you

Pavel

>
> -
> Pavel Luzanov
> Postgres Professional: http://www.postgrespro.com
> The Russian Postgres Company
>
>


Re: plpgsql function-parsing

2018-03-06 Thread Pavel Stehule
Hi

2018-03-06 16:51 GMT+01:00 chris :

> Hi,
>
> I am trying to create a function that gets passed a statement as a string
> and then I need to change the table_name within the string by adding a
> "_cdc" to it, then execute the statement
>
> ex:
>
> string passed could be GRANT all ON authors TO bob.
>
> then
>
> EXECUTE GRANT all ON authors_cdc TO bob
>
>
> I'm looking into parsing the string but i can't find a way to parse in a
> way that'll work. Any ideas?
>

The Orafce extension https://github.com/orafce/orafce has a basic library
for parsing SQL, PLVlex.

Another solution can be regular expressions:
https://www.postgresql.org/docs/current/static/functions-matching.html
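
For illustration only, a minimal sketch with regexp_replace - the function
name is made up, and it assumes a simple single-object statement of the form
"GRANT ... ON name TO role":

CREATE OR REPLACE FUNCTION grant_on_cdc(stmt text)
RETURNS void AS $$
BEGIN
  -- append "_cdc" to the object name between ON and TO, then run it
  EXECUTE regexp_replace(stmt, '(\sON\s+)(\S+)(\s+TO\s)', '\1\2_cdc\3', 'i');
END;
$$ LANGUAGE plpgsql;

-- usage (the target table authors_cdc and role bob must already exist):
SELECT grant_on_cdc('GRANT ALL ON authors TO bob');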

Regards

Pavel



>
> Thanks,
>
> Chris
>
>
>


Re: Troubleshooting a segfault and instance crash

2018-03-08 Thread Pavel Stehule
Hi

2018-03-08 18:40 GMT+01:00 Blair Boadway :

> Hello,
>
>
>
> We’re seeing an occasional segfault on a particular database
>
>
>
> Mar  7 14:46:35 pgprod2 kernel:postgres[29351]: segfault at 0 ip
> 00302f32868a sp 7ffcf1547498 error 4 in libc-2.12.so[302f20+
> 18a000]
>
> Mar  7 14:46:35 pgprod2 POSTGRES[21262]: [5] user=,db=,app=client= LOG:
> server process (PID 29351) was terminated by signal 11: Segmentation fault
>
>
>
> It crashes the database, though it starts again on its own without any
> apparent issues.  This has happened 3 times in 2 months and each time the
> segfault error and memory address is the same. We’ve only seen it on one
> database, though we’ve seen it on both hosts of primary/standby setup—we
> switched over primary to other host and got a segfault there, which seems
> to eliminate a hardware issue.  Oddly the database has no issues for normal
> DML workloads (it is a moderately busy prod oltp system) but the segfault
> has happened very shortly after DML changes are made.  Most recently it
> happened while running a series of grants for new db users we were
> deploying (ie. running a sql script from psql on the primary host)
>
>
>
> grant usage on schema app to app_user1;
>
> grant usage on schema app to app_user2;
>
> ...
>
>
>
> Our set up is
>
> RHEL 6.9  - 2.6.32-696.16.1.el6.x86_64
>
> PostgreSQL 9.6.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7
> 20120313 (Red Hat 4.4.7-18), 64-bit
>
> Extensions - pg_cron,repmgr_funcs,pgaudit,pg_stat_statements,pg_hint_
> plan,pglogical
>
>
>
> So far can’t reproduce on a test system, have just added some OS config to
> collect core from the OS but haven’t collected a core yet.  There isn’t any
> particular config change or extension that we can link to the problem, this
> is a system that has run for months without problems since last config
> changes.  Appreciate any ideas.
>

Can you get a core dump? Maybe it is a pgaudit bug? It is a complex extension.

Regards

Pavel

>
>
> Regards,
>
> Blair
>


Re: Troubleshooting a segfault and instance crash

2018-03-08 Thread Pavel Stehule
2018-03-08 19:16 GMT+01:00 Blair Boadway :

> Hi Pavel,
>
>
>
> I don’t have a core yet, the only way I have now is to intentionally crash
> the prod system a couple of times.  Haven’t resorted to that yet.
>

It is hard to help without a backtrace - and for that you need a core dump.


>
>
> Interesting you mentioned pgaudit—it is installed on this system because
> that is a our standard installation but on this particular system we
> haven’t yet needed audits so the audit role is ‘empty’.  (And on a
> different system with same installation and heavy of audit we’ve seen no
> segfaults)
>
>
>

The other extensions are simple, unrelated to DDL, or well known. So pgaudit
is the best candidate - but the error can be anywhere.

Regards

Pavel


> On this system
>
>
>
> pgaudit.role = 'auditor'
>
> pgaudit.log_parameter = off
>
> pgaudit.log_catalog = off
>
> pgaudit.log_statement_once = on
>
> pgaudit.log_level = log
>
>
>
>
>
> select * from information_schema.role_table_grants where grantee =
> 'auditor';
>
> (0 rows)
>
>
>
>
>
> thanks, Blair
>
>
>


Re: Prompt for parameter value in psql

2018-03-16 Thread Pavel Stehule
Hi

2018-03-16 16:12 GMT+01:00 Tiffany Thang :

> Hi,
> Would it be possible to prompt for a user input in psql like in Oracle
> sqlplus?
>
> In oracle, we use the & sign, for example,
> select * from emp where empid=&empidvalue;
>
>
https://www.postgresql.org/docs/current/static/app-psql.html

see \prompt
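
For example (the table and column names are only illustrative):

postgres=# \prompt 'empid value: ' empidvalue
empid value: 10
postgres=# select * from emp where empid = :empidvalue;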

Regards

Pavel


> Thanks.
>


Re: ora2pg and invalid command \N

2018-03-16 Thread Pavel Stehule
2018-03-16 18:12 GMT+01:00 Charlin Barak :

> Hi,
> I'm using ora2pg to migrate our Oracle database to Postgres. I was able to
> generate the data file using TYPE=COPY but when I attempted to load the
> file via psql, I got lots of "invalid command \N" errors. The resolution on
> the internet was not clear and was  not ora2pg related. How do I resolve
> this issue? Can someone provide some guidance?
>

\N is the symbol for NULL, but \cmd sequences are psql commands too. This
behavior means that psql lost synchronization and tried to evaluate data as
commands. psql import is tolerant, which means the original source of this
issue is usually lost. Try to import the data with the option ON_ERROR_STOP:


https://stackoverflow.com/questions/4480381/postgres-sql-fail-on-script-error
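
For example, a run that stops on the first error so the real cause stays
visible (the database and file names are only illustrative):

psql -v ON_ERROR_STOP=1 -d targetdb -f ora2pg_output.sql

or, inside psql, run \set ON_ERROR_STOP on before \i of the data file.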

Regards

Pavel

>
> Thanks.
>
>
>
>
>


Re: Is there a way to get the name of the calling function in pgplsql?

2018-03-21 Thread Pavel Stehule
Hi

2018-03-21 8:24 GMT+01:00 Thiemo Kellner :

> Hi all
>
> In a function I would like to log the caller. Is there a way to get its
> name in pgplsql?
>

You can read it from the call stack:


https://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-CALL-STACK
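
A minimal sketch of reading the stack (the function name is only
illustrative; the caller has to be parsed out of the returned text):

CREATE OR REPLACE FUNCTION who_called_me() RETURNS text AS $$
DECLARE
  stack text;
BEGIN
  -- PG_CONTEXT holds one line per frame of the PL/pgSQL call stack
  GET DIAGNOSTICS stack = PG_CONTEXT;
  RAISE NOTICE 'call stack: %', stack;
  RETURN stack;
END;
$$ LANGUAGE plpgsql;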

Regards

Pavel


> Kind regards
>
> Thiemo
>
> --
> Öffentlicher PGP-Schlüssel: http://pgp.mit.edu/pks/lookup?
> op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


Re: Is there a way to get the name of the calling function in pgplsql?

2018-03-21 Thread Pavel Stehule
2018-03-21 13:30 GMT+01:00 Thiemo Kellner, NHC Barhufpflege <
thiemo.kell...@gelassene-pferde.biz>:

> Thanks for the hint, and please excuse my not reading the documentation
> thoroughly enough. I did not suspect such a feature amongst control
> structures.
>

no problem:)

Pavel


>
> Zitat von Pavel Stehule :
>
> https://www.postgresql.org/docs/current/static/plpgsql-contr
>> ol-structures.html#PLPGSQL-CALL-STACK
>>
>
> --
> Öffentlicher PGP-Schlüssel: http://pgp.mit.edu/pks/lookup?
> op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


Re: Troubleshooting a segfault and instance crash

2018-03-24 Thread Pavel Stehule
2018-03-25 0:41 GMT+01:00 Blair Boadway :

> Thanks for the tip.  We are using RHEL 6.9 and definitely up to date on
> glibc (2.12-1.209.el6_9.2).  We also have the same versions on a very
> similar system with no segfault.
>
>
>
> My colleague got a better backtrace that shows another extension
>
>
>
> Core was generated by `postgres: batch_user_account''.
>
> Program terminated with signal 11, Segmentation fault.
>
> #0 0x00386712868a in __strcmp_sse42 () from /lib64/libc.so.6
>
> Missing separate debuginfos, use: debuginfo-install
> postgresql96-server-9.6.5-1PGDG.rhel6.x86_64
>
> (gdb) bt
>
> #0 0x00386712868a in __strcmp_sse42 () from /lib64/libc.so.6
>
> #1 0x7fa3f0c7074c in get_query_string (pstate=,
> query=, jumblequery=) at
> pg_hint_plan.c:1882
>
> #2 0x7fa3f0c70a5d in pg_hint_plan_post_parse_analyze
> (pstate=0x25324b8, query=0x25325e8) at pg_hint_plan.c:2875
>
> #3 0x005203bc in parse_analyze ()
>
> #4 0x006df933 in pg_analyze_and_rewrite ()
>
> #5 0x007c6f6b in ?? ()
>
> #6 0x007c6ff0 in CachedPlanGetTargetList ()
>
> #7 0x006e173a in PostgresMain ()
>
> #8 0x006812f5 in PostmasterMain ()
>
> #9 0x00609278 in main ().
>
>
>
>
>
> We aren’t sure if this indicates that pg_hint_plan is causing the segfault
> or if it happened to be doing something when the segfault occurred.  We
> aren’t actually using pg_hint_plan hints in this system so we’re not sure
> how all this relates to segfault when another process does a ‘grant usage
> on schema abc to user xyz;’ unrelated to the account segfaulting.
>

Although you don't use pg_hint_plan explicitly, pg_hint_plan is still active
- it hooks into the planner via callbacks.


>
>
> Short of better ideas, we will pull the pg_hint_plan extension and see if
> that removes the problem.
>

Please try to report this backtrace to the pg_hint_plan authors.

Regards

Pavel


>
>
> -Blair
>
>
>
>
>
>
>
>
>
>
>
> *From: *Peter Geoghegan 
> *Date: *Saturday, March 24, 2018 at 4:18 PM
> *To: *Blair Boadway 
> *Cc: *"pgsql-gene...@postgresql.org" 
> *Subject: *Re: Troubleshooting a segfault and instance crash
>
>
>
> On Thu, Mar 8, 2018 at 9:40 AM, Blair Boadway 
> wrote:
>
> Mar  7 14:46:35 pgprod2 kernel:postgres[29351]: segfault at 0 ip
>
> 00302f32868a sp 7ffcf1547498 error 4 in
>
> libc-2.12.so[302f20+18a000]
>
>
>
> Mar  7 14:46:35 pgprod2 POSTGRES[21262]: [5] user=,db=,app=client= LOG:
>
> server process (PID 29351) was terminated by signal 11: Segmentation fault
>
>
>
> It crashes the database, though it starts again on its own without any
>
> apparent issues.  This has happened 3 times in 2 months and each time the
>
> segfault error and memory address is the same.
>
>
>
> We had a recent report of a segfault on a Redhat compatible system,
>
> that seemed like it might originate from within its glibc [1].
>
> Although all the versions there didn't match what you have, it's worth
>
> considering as a possibility.
>
>
>
> Maybe you can't install debuginfo packages because you don't yet have
>
> the necessary debuginfo repos set up. Just a guess. That is sometimes
>
> a required extra step.
>
>
>
> [1] https://postgr.es/m/7369.1520528...@sss.pgh.pa.us
>
> --
>
> Peter Geoghegan
>
>
>


Re: psql variable to plpgsql?

2018-04-11 Thread Pavel Stehule
Hi

2018-04-11 8:51 GMT+02:00 Thiemo Kellner :

> Hi all
>
> Is there a way to pass the value of a psql variable into function code? I
> create a schema with help of psql variable
>
>\set SCHEMA_NAME LOGGER
>create
>  schema :SCHEMA_NAME;
>
> I would like to create a function that has the Schema hard coded like
>declare
>   V_SCHEMA_NAME text := :SCHEMA_NAME;
>begin
> but as the plpgsql code is within quotes, it Fails.
>

No, it is not possible to read or write client-side variables from the
server side.

Regards

Pavel

>
> Kind regards
>
> Thiemo
>
> --
> Öffentlicher PGP-Schlüssel: http://pgp.mit.edu/pks/lookup?
> op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


Re: psql variable to plpgsql?

2018-04-11 Thread Pavel Stehule
2018-04-11 11:36 GMT+02:00 Thiemo Kellner, NHC Barhufpflege <
thiemo.kell...@gelassene-pferde.biz>:

> Zitat von Pavel Stehule :
>
> No, there is not possible to read/write client side variables from server
>> side.
>>
>
> I did not mean that the variable be replaced by its value at execution
> time, but at installation time.


No. The :xxx is not evaluated inside a string. The workaround is using GUC
variables and the related functions. They can be used from psql and from
plpgsql too.

https://stackoverflow.com/questions/13172524/passing-user-id-to-postgresql-triggers/13172964#13172964
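
A minimal sketch of that workaround (the GUC name "install.schema_name" is
only an example; note the value is read when the function runs, so it must
still be available in that session, e.g. set via ALTER DATABASE ... SET):

\set SCHEMA_NAME LOGGER
SELECT set_config('install.schema_name', :'SCHEMA_NAME', false);

CREATE OR REPLACE FUNCTION logger_fx() RETURNS void AS $$
DECLARE
  v_schema_name text := current_setting('install.schema_name');
BEGIN
  RAISE NOTICE 'schema: %', v_schema_name;
END;
$$ LANGUAGE plpgsql;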

Regards

Pavel



>
>
> --
> Öffentlicher PGP-Schlüssel: http://pgp.mit.edu/pks/lookup?
> op=get&search=0x8F70EFD2D972CBEF
>
> 
> This message was sent using IMP, the Internet Messaging Program.
>
>


Re: Query function arg data types ONLY (no arg names)

2018-04-28 Thread Pavel Stehule
Hi

2018-04-28 18:52 GMT+02:00 Jeremy Finzel :

> It appears that neither pg_get_function_arguments
> nor pg_get_function_identity_arguments could be used for this.  I want to
> get function argument data types from the catalog by ordinal position,
> without the argument name.
>
> For example, I want the same information for these 2 functions:
>
> foo(p_1 int, p_2 text)
>
> - {int, text}
>
> foo(int, text)
>
> - {int, text}
>
> Any suggestions as to how to use the catalogs or built-in postgres
> functions to query this?
>

 CREATE OR REPLACE FUNCTION public.foo(a integer, b integer, c text)
 RETURNS text
 LANGUAGE sql
AS $function$ select 'hi'; $function$

postgres=# select (proargtypes::regtype[])[0:] from pg_proc where proname =
'foo';
┌─[ RECORD 1 ]┬────────────────────────┐
│ proargtypes │ {integer,integer,text} │
└─────────────┴────────────────────────┘

Regards

Pavel


> Thank you!
> Jeremy
>


Re: How to find the hits on the databases and tables in Postgres

2018-05-05 Thread Pavel Stehule
2018-05-05 13:03 GMT+02:00 PT :

> On Fri, 4 May 2018 17:14:39 +0530
> nikhil raj  wrote:
>
> >  Hi,
> > Any one can please help me out
> >
> > How to monitor the Hits on database and how many hits on each user tables
> > Through query.
> > Is there any other tools for that so it can full fill my requirement for
> it
>
> pgBadger has always been my goto tool for that: https://github.com/dalibo/
> pgbadger
>
>
There are some statistics per table: select * from pg_stat_user_tables; per
index: select * from pg_stat_user_indexes; and per database: select * from
pg_stat_database;

Regards

Pavel

-- 
> Bill Moran 
>
>


Re: Enhancement to psql command, feedback.

2018-05-09 Thread Pavel Stehule
2018-05-09 9:59 GMT+02:00 John McKown :

> I just wanted to throw this out to the users before I made a complete fool
> of myself by formally requesting it. But I would like what I hope would be
> a minor change (enhancement) to the psql command. If you look on this page,
> https://wiki.postgresql.org/wiki/Shared_Database_Hosting ,
> you will see a number of example which look like:
>
> psql -U postgres template1 -f - << EOT
>
> REVOKE ALL ON DATABASE template1 FROM public;
> REVOKE ALL ON SCHEMA public FROM public;
> GRANT ALL ON SCHEMA public TO postgres;
> CREATE LANGUAGE plpgsql;
>
> EOT
>
>
> To me this looks similar to a UNIX shell script. Now, going sideways for a
> second, if someone wanted to create a "self contained" awk script. It would
> look something like:
>
> #!/bin/awk -f
> ... awk code ...
>
> When a user executes the above from the command line, the UNIX system runs
> the program in the first "magic" line as if the user had entered "/bin/awk
> -f ..." where the ... is replaced by the name of the file executed followed
> by the rest of the command line parameters.
>
> I think it would be nice if psql would do the same, mainly for
> "consistency" with other UNIX scripting languages, such as python, perl, &
> gawk.
>

These languages define # as a line comment. That is not true for SQL.

Regards

Pavel



>
> The example above would then become:
>
> #!/bin/psql -U postgres template1 -f
> REVOKE ALL ON DATABASE template1 FROM public;
> REVOKE ALL ON SCHEMA public FROM public;
> GRANT ALL ON SCHEMA public TO postgres;
> CREATE LANGUAGE plpgsql;
>
> Does this seem reasonable to others? When I actually try the following as
> a "script", I get an error.
>

> === transcript ===
>
> $ls -l ./x.psql; cat ./x.psql; ./x.psql
> -rwxr-xr-x. 1 joarmc joarmc 40 May  9 02:55 ./x.psql
> #!/usr/bin/psql -f
> select * from table;
> psql:./x.psql:2: ERROR:  syntax error at or near "#!/"
> LINE 1: #!/usr/bin/psql -f
>^
>
>
> ​I have not looked at the source yet, but it seems that it would be "easy"
> to implement if psql would simply ignore the first line of any file
> referenced via the "-f" parameter if it started with "#!" or maybe even
> just "#". I'm not suggesting ignoring _every_ line that start with that
> "magic", just the first.​
>
>
> --
> We all have skeletons in our closet.
> Mine are so old, they have osteoporosis.
>
> Maranatha! <><
> John McKown
>


Re: How do I get the SQL statement in a trigger?

2018-05-10 Thread Pavel Stehule
Hi

2018-05-10 12:23 GMT+02:00 a <372660...@qq.com>:

> Hi I would like to write a trigger that recorded every sql statement under
> the effected entry. So if it is possible to retrieve the sql statement
> within a trigger??
>

You can read the top-level command from the pg_stat_activity table:

 postgres=# select query from pg_stat_activity where pid = pg_backend_pid();
+-------------------------------------------------------------------+
|                               query                                |
+-------------------------------------------------------------------+
| select query from pg_stat_activity where pid = pg_backend_pid();   |
+-------------------------------------------------------------------+
(1 row)

But this information is available only for the top-level query.

Regards

Pavel


Re: How do I get the SQL statement in a trigger?

2018-05-10 Thread Pavel Stehule
2018-05-10 12:42 GMT+02:00 a <372660...@qq.com>:

> Thanks a lot, would you please be so kind to tell me more about what is top
> level query??
>

For example, if you run some queries from PL/pgSQL functions (triggers),
then those queries are not top-level queries.

Regards

Pavel


>
>
>


Re: How do I get the SQL statement in a trigger?

2018-05-10 Thread Pavel Stehule
2018-05-10 12:50 GMT+02:00 Fabio Ugo Venchiarutti 
:

> Querying over pg_stat_activity with clauses in a trigger can be quite
> expensive if your write transaction rate is high.
>
>
> You may want to look into the current_query() function documented at
> https://www.postgresql.org/docs/current/static/functions-info.html


Sure, that is better - I forgot about it.
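
For illustration, a minimal audit-trigger sketch using current_query() - the
table, function, and trigger names are made up, and some_table stands for
the audited table:

CREATE TABLE sql_log (logged_at timestamptz DEFAULT now(), query text);

CREATE OR REPLACE FUNCTION log_statement() RETURNS trigger AS $$
BEGIN
  -- store the top-level statement that fired this trigger
  INSERT INTO sql_log(query) VALUES (current_query());
  RETURN NULL;  -- the return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_log_statement
  AFTER INSERT OR UPDATE OR DELETE ON some_table
  FOR EACH ROW EXECUTE PROCEDURE log_statement();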

Regards

Pavel


>
>
>
>
>
>
>
>
>
> --
> Regards
>
> Fabio Ugo Venchiarutti
> Data Services Department
> Ocado Technology
>
> --
>
>
>


Re: Function to set up variable inside it

2018-05-15 Thread Pavel Stehule
2018-05-15 14:28 GMT+02:00 Łukasz Jarych :

> Hi Guys,
>
> I am using postgres 10.3 (or 4?).
> IT is possible to set up variable inside function?
>

I don't understand the question. What do you mean?

Regards

Pavel


>
> Best,
> Jacek
>


Re: array_agg to array

2018-05-16 Thread Pavel Stehule
Hi

2018-05-16 8:14 GMT+02:00 Philipp Kraus :

> Hello,
>
> I have got a function with a reg expr to split chemical formulas e.g. H2O
> -> H2 O.
>
> CREATE OR REPLACE FUNCTION daimon.text2sumformula(text) RETURNS text[] AS
> $$
> select array_agg(i::text) as e from ( select unnest( regexp_matches(
> $1, '[0-9]*[A-Z][a-z]?\d*|\((?:[^()]*(?:\(.*\))?[^()]*)+\)\d+', 'g') ) )
> i;
> $$ LANGUAGE SQL IMMUTABLE;
>
> For H2O I get an array with {(H2),(O)}
> How I can return the inner elements as text, I would like to get {H2,O}
> without round brackets?
>
> Thanks
>

Maybe you want the array_to_string function:

postgres=# select array['a','b'];
┌───────┐
│ array │
╞═══════╡
│ {a,b} │
└───────┘
(1 row)

postgres=# select array_to_string(array['a','b'],'');
┌─────────────────┐
│ array_to_string │
╞═════════════════╡
│ ab              │
└─────────────────┘
(1 row)

Regards

Pavel

>
> Phil
>
>
>
>


Re: Function to set up variable inside it

2018-05-16 Thread Pavel Stehule
Hi

2018-05-16 6:52 GMT+02:00 Łukasz Jarych :

> Hi Guys,
>
> thank you for your help !
>
> Hmm yes something like this.
>
> I was wondering if is possible to pass variable into function:
>
> CREATE FUNCTION change_trigger() RETURNS trigger AS $$
>
> BEGIN
>
> IF  TG_OP = 'INSERT'
>
> THEN
>
> INSERT INTO logging.t_history (tabname, schemaname, 
> operation, new_val)
>
> VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, 
> row_to_json(NEW));
>
> RETURN NEW;
>
> ELSIF   TG_OP = 'UPDATE'
>
> THEN
>
> INSERT INTO logging.t_history (tabname, schemaname, 
> operation, new_val, old_val)
>
> VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP,
>
> row_to_json(NEW), row_to_json(OLD));
>
> RETURN NEW;
>
> ELSIF   TG_OP = 'DELETE'
>
> THEN
>
> INSERT INTO logging.t_history (tabname, schemaname, 
> operation, old_val)
>
> VALUES (TG_RELNAME, TG_TABLE_SCHEMA, TG_OP, 
> row_to_json(OLD));
>
> RETURN OLD;
>
> END IF;
>
> END;
>
> $$ LANGUAGE 'plpgsql' SECURITY DEFINER;
>
> Best,
> Jacek
>

It is possible to pass values into functions:

postgres=# select fx(10);
NOTICE:  >>>10<<<
┌────┐
│ fx │
╞════╡
│    │
└────┘
(1 row)

postgres=# \sf fx
CREATE OR REPLACE FUNCTION public.fx(a integer)
 RETURNS void
 LANGUAGE plpgsql
AS $function$
begin
  raise notice '>>>%<<<', a;
end;
$function$

but it is not possible to pass parameters to trigger functions. You can
attach trigger arguments when you define the trigger, but these values must
be constants.
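
For illustration (the trigger and table names are made up), constant
arguments are attached in CREATE TRIGGER and are visible inside the trigger
function as TG_ARGV:

CREATE TRIGGER t_history_log
  AFTER INSERT OR UPDATE OR DELETE ON some_table
  FOR EACH ROW
  EXECUTE PROCEDURE change_trigger('some constant');

-- inside change_trigger() the value is available as TG_ARGV[0]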

a) the request for trigger parametrization is usually a signal of bad
trigger usage - probably you should use a function, not a trigger

b) there is a workaround - but you should not use it if it is not really
necessary

There are a few implementations of session variables in Postgres - you can
find them via Google. Then you can set a session variable before the SQL
command, and you can read this session variable inside the trigger function.
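
A minimal sketch of that workaround using a custom GUC as the session
variable (the names "myapp.changed_by" and log_who are only examples):

-- before the DML statement, in the same session:
SELECT set_config('myapp.changed_by', 'jacek', false);

-- inside the trigger function:
CREATE OR REPLACE FUNCTION log_who() RETURNS trigger AS $$
BEGIN
  -- the second argument makes current_setting return NULL instead of
  -- raising an error when the variable was never set
  RAISE NOTICE 'changed by: %', current_setting('myapp.changed_by', true);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;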

Regards

Pavel


>
> 2018-05-15 14:58 GMT+02:00 Adrian Klaver :
>
>> On 05/15/2018 05:28 AM, Łukasz Jarych wrote:
>>
>>> Hi Guys,
>>>
>>> I am using postgres 10.3 (or 4?).
>>> IT is possible to set up variable inside function?
>>>
>>
>> Like this?:
>>
>> https://www.postgresql.org/docs/10/static/plpgsql-declarations.html
>>
>>
>>> Best,
>>> Jacek
>>>
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>
>


Re: Importing data from CSV into a table with array and composite types

2018-05-18 Thread Pavel Stehule
Hi

2018-05-18 10:37 GMT+02:00 a <372660...@qq.com>:

> Hi:
>
> I would like to import data from a csv table. But the table that is going
> to be inserted is constructed with arrays and composite types, also with
> array of composite.
>
> I have tried many ways of inserting but fail. Can anyone help? Thank you
> so much.
>

create type p as (a int, b int);
create table f(pv p[], c int);
insert into f values(array[(10,20),(30,40)]::p[], 1000);
insert into f values(array[(1,20),(3,40)]::p[], -1000);

postgres=# copy f to stdout csv;
"{""(10,20)"",""(30,40)""}",1000
"{""(1,20)"",""(3,40)""}",-1000
Time: 0,391 ms

So you have to respect this format. CSV doesn't know about arrays or
composites - these values are passed as strings.
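
Importing the same data back could then look like this (the file name is
only illustrative):

postgres=# \copy f from 'data.csv' with (format csv)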

Regards

Pavel


Re: Importing data from CSV into a table with array and composite types

2018-05-19 Thread Pavel Stehule
2018-05-20 7:36 GMT+02:00 a <372660...@qq.com>:

> Thank you so much. BTW, may I ask one more question that, how should I
> select every first element of the array??
>
> I know that "select p[:] from f" will print all element of the p array,
> but I probably would want to present result as "select p[:].a from f", but
> the statement does not work.
>
> May I as the correct statement of displaying all first element of the
> composite type in an array??
>

It is not easy - for the example that I sent you, you can write a query:

postgres=# select array_agg(a) from f, unnest(pv) where f.c = 1000;
┌───────────┐
│ array_agg │
╞═══════════╡
│ {10,30}   │
└───────────┘
(1 row)


> Thank you!
>
> Shore
>
>
>


Re: Whither 1:1?

2018-06-01 Thread Pavel Stehule
2018-06-01 18:52 GMT+02:00 Guyren Howe :

> It’s come to my attention that what seems an obvious and useful database
> design pattern — 1:1 relations between tables by having a shared primary
> key — is hardly discussed or used.
>
> It would seem to be a very simple pattern, and useful to avoid storing
> nulls or for groups of fields that tend to be used together.
>
> Thoughts? Is there some downside I can’t see?
>

Postgres does not have multi-table statistics - every JOIN increases the
estimation error.

Regards

Pavel

>


Re: Microsoft buys GitHub, is this a threat to open-source

2018-06-04 Thread Pavel Stehule
2018-06-04 20:34 GMT+02:00 Michael Nolan :

>
>
> On Mon, Jun 4, 2018 at 12:15 PM, Tom Lane  wrote:
>
>> Michael Nolan  writes:
>> > Microsoft has bought GitHub for $7.5 billion, is this a threat to the
>> open
>> > source community?
>>
>> A fair question, but one that seems entirely off-topic for the Postgres
>> lists, since we don't depend on GitHub.  (And that's a thing for which
>> I'm very glad today.)
>>
>> regards, tom lane
>>
>
> Core postgres is OK, but I think there are a number of postgres-related
> projects that might be on places like GitHub.
>

I have a few projects there - Orafce, plpgsql_check, and pspg. I hope these
projects are well protected by the BSD licence and by the distributed,
redundant nature of git. I hope there is no reason for panic at this moment.
I don't have much data in non-git sources - issues and others.

But I understand that there are projects that are (or can be) in conflict of
interest with Microsoft, and that can be a problem.

Regards

Pavel


--
> Mike Nolan
>
>


Re: limit and query planner

2018-06-05 Thread Pavel Stehule
2018-06-05 20:24 GMT+02:00 armand pirvu :

> All
>
> Please see below
>
> explain analyze select * from sp_i2birst_reg_staging_test where evt_id =
> 'ACSF17'
> and status=0 limit 10;
> QUERY PLAN
>
> --------------------------------------------------------------------------------
>  Limit  (cost=0.00..1.29 rows=10 width=519) (actual time=0.110..0.439
> rows=10 loops=1)
>->  Seq Scan on sp_i2birst_reg_staging_test  (cost=0.00..548.40
> rows=4239 width=519) (actual time=0.109..0.429 rows=10 loops=1)
>  Filter: (((evt_id)::text = 'ACSF17'::text) AND (status = 0))
>  Rows Removed by Filter: 115
>  Planning time: 3.022 ms
>  Execution time: 0.639 ms
> (6 rows)
>
>
>
> birstdb=# \d sp_i2birst_reg_staging_test
>              Table "csischema.sp_i2birst_reg_staging_test"
>     Column     |            Type             |                        Modifiers
> ---------------+-----------------------------+--------------------------------------------------------------------------
>  action_id     | bigint                      | not null default nextval('i2birst_reg_staging_action_id_seq'::regclass)
>  reg_uid       | integer                     | not null
>  evt_uid       | integer                     | not null
>  evt_id        | character varying(10)       | not null
>  operation     | character varying(6)        | not null
>  status        | smallint                    | not null
>  category      | character varying(20)       | not null default ''::character varying
>  add_date      | timestamp with time zone    | not null default now()
>  mod_date      | timestamp with time zone    | not null default now()
>  ingres_data   | jsonb                       |
>  thread_number | bigint                      | not null default 0
>  start_time    | timestamp without time zone |
>  end_time      | timestamp without time zone |
> Indexes:
>     "sp_i2birst_reg_staging_test_pkey" PRIMARY KEY, btree (action_id)
>     "sp_i2birst_reg_staging_test_idx" btree (status, evt_id, category)
> Check constraints:
>     "sp_i2birst_reg_staging_test_status_check" CHECK (status = ANY (ARRAY[0, 1, 2, 3]))
>
> Even if add an index on evt_id and status same table scan
>
> But
>
> select count(*) from sp_i2birst_reg_staging_test;
>  count
> ---
>   6860
>
> select count(*) from sp_i2birst_reg_staging_test where evt_id = 'ACSF17'
> and status=0 ;
>  count
> ---
>   4239
>
> So I can see why the planner is choosing a table scan
>
> My question is: I suspect the limit simply limits the fethching to the
> first n-records retrieved and has no implications whatsoever on the
> planner, meaning the planner ignores it. Am I right or wrong ?
>

LIMIT is the last clause and it is processed after aggregation.

Probably you want: select count(*) from (select * from
sp_i2birst_reg_staging_test where evt_id = 'ACSF17' LIMIT 10) s;

Moreover, you don't have an index on the evt_id column - there is a
composite index, but the chance of it being used is low.

Regards

Pavel


> Thanks
> — Armand
>
>
>
>
>
>


Re: Slow planning time for simple query

2018-06-06 Thread Pavel Stehule
2018-06-06 18:59 GMT+02:00 Jeremy Finzel :

> Hello -
>
> We have an odd scenario on one of our OLTP systems, which behaves the same
> way on a streamer, of a 700-1000ms planning time for a query like this:
>
> SELECT *
> FROM table1
> WHERE  source_id IN (SELECT id FROM table2 WHERE customer_id = $1);
>
> The actual execution time is sub-ms.
>
> We initially thought maybe catalog bloat?  But we were able to reindex all
> of the highly churned catalog tables, and I even did VACUUM FULL on
> pg_attribute and pg_statistic, to no avail.
>
> There are no custom settings for pg_attribute for the given tables either.
>
> Interestingly, the problem goes away on a SAN snapshot of the target
> system.
>
> Any ideas of what else we could try?  A PL function that caches the query
> plan works, but that is just a workaround.
>

I have seen a similar issue several times caused by bloated indexes on the
table - at planning time PostgreSQL detects min/max values from the possible
indexes.

Regards

Pavel


> Thanks!
> Jeremy
>


Re: json_populate_recordset

2018-06-07 Thread Pavel Stehule
2018-06-07 18:51 GMT+02:00 Adrian Klaver :

> From here:
>
> https://www.postgresql.org/docs/10/static/functions-json.html
>
> select * from json_populate_record(null::myrowtype, '{"a": 1, "b": ["2",
> "a b"], "c": {"d": 4, "e": "a b c"}}')
>
> What is the null::myrowtype doing?
>

This function uses polymorphic parameters - it is something like

create or replace function fx(in anyelement, json) returns anyelement ...

A polymorphic output value requires at least one polymorphic input value for
type specification. Postgres has no way to pass only a data type - so the
pattern null::type is used instead.
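
For example, with a throwaway type just to show the pattern:

CREATE TYPE myrowtype AS (a int, b text);

SELECT * FROM json_populate_record(null::myrowtype, '{"a": 1, "b": "x"}');
-- returns one row: a = 1, b = x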

Regards

Pavel


>
>
> Thanks,
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Database name with semicolon

2018-06-27 Thread Pavel Stehule
Hi

2018-06-27 15:22 GMT+02:00 Joby John :

> Hi,
>
>
>
> We have a windows application which communicate to Postgres database via
> PostgreSQL ODBC drivers for windows.
>
> One of our customers has a database with semicolon in its name (e.g.: “db;
> name”) and our application is failing to establish connection to that
> database saying like it could not find the database.
>
> We tried all known possibilities by enclosing the DB name under quotes,
> curly brackets etc. but none solved the problem.
>
>
>
> I wonder whether there is any possible way to achieve connection to such
> database. If not, probably it’s better not to support semicolons within the
> database name.
>
> Not sure whether I need to submit this query as a bug or not. Any help is
> much appreciated.
>

Modern PostgreSQL has no problems with special characters:

postgres=# create database "bad name ";
CREATE DATABASE
postgres=# \q
[pavel@nemesis ~]$ psql 'bad name '
psql (11beta1)
Type "help" for help.

bad name =#

What version of Postgres is it?

Did you try double quotes?

Regards

Pavel



>
>
> Thanks and Regards,
>
> Joby John
> --
> Joby John
> Software Developer
> NCC Group
> Kings Court, Kingston Road, Leatherhead, KT22 7SL
>
> Telephone: +44 1372 383 800 <+44%201372%20383%20800>
> Mobile:
> Website: www.nccgroup.trust
> Twitter: @NCCGroupplc 
> 
> --
>
>


Re: EXTERNAL: Re: Database name with semicolon

2018-06-27 Thread Pavel Stehule
Hi

2018-06-27 16:39 GMT+02:00 Joby John :

> Hi Pavel,
>
>
>
> Thanks for coming back to me.
>

Please don't top post:
https://en.wikipedia.org/wiki/Posting_style#Top-posting


>
> The version of the Postgres server where I am trying is: 9.5.3
>

9.5 is a modern Postgres. There should not be problems with such names.



> Yes, I tried putting the database name in double quotes with no luck.
>
> Please note we are using PostgreSQL Unicode ODBC driver (version 10.2) to
> communicate with the server.
>

It looks like an ODBC issue - the semicolon has a special meaning there:

https://stackoverflow.com/questions/22398212/escape-semicolon-in-odbc-connection-string-in-app-config-file

but you have probably already read these pages.

Maybe some combination of quotes is necessary:

https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms722656(v=vs.85)

you can try

dbname = '"ssss; sss"'




>
> Regards,
>
> Joby John
>
>
>
>


Re: Database name with semicolon

2018-06-28 Thread Pavel Stehule
2018-06-28 12:10 GMT+02:00 joby.john@nccgroup.trust <
Joby.John@nccgroup.trust>:

> > See if something like below works:
> > {db; name}
>
>
>
> Hi,
>
> Thanks for the responses.
>
> I tried putting curly brackets around the value of database like you
> mentioned.
> {db; name}
> But the server was treating the brackets as part of the database name and
> so
> did not work.
> I tried double quotes like
> "db; name" which also didn't work.
>

Please, can you try

'"db; name"' - double quotes nested in apostrophes



> Not sure what else I can try or whether the ODBC driver supports database
> name with a semicolon at all.
> I know that in the case of password we can put the string enclosed within
> curly brackets to escape special characters but not appears to be working
> for the database.
>
> Regards,
> Joby
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>


Re: How to remove elements from array .

2018-07-06 Thread Pavel Stehule
Hi

2018-07-06 9:49 GMT+02:00 Brahmam Eswar :

> Hi ,
>
> I tried to use array_remove to remove elements from an array but it's
> saying function doesn't exist . I'm able to use other array functions.
>
> 1) Capture the results with multiple columns into array .
> 2)  if ay results exist then loop through an array to find out the record
> with col1='Y'
> 3)  If col1='Y' then get the respective value of Col2 (10) and delete the
> similar records of col2 if exist.
>
> Col1Col2
>  Y 10
>  N 20
> N  10
>
> Need to delete record1 and record3.To delete the array records i'm using
> array_remove but it says doesn't exist.
>
> Version pgadmin4 .
>
>
>
>
In this case, unnesting can be a solution:

postgres=# select * from foo;
+----+----+
| c1 | c2 |
+----+----+
| t  | 10 |
| f  | 20 |
| f  | 20 |
+----+----+
(3 rows)

postgres=# do $$
declare a foo[] default array(select foo from foo);
begin
  a := array(select (c1,c2)::foo from unnest(a) g(c1,c2) where g.c1 = true);
  raise notice 'a=%', a;
end;
$$;
NOTICE:  a={"(t,10)"}
DO

Regards

Pavel



>
>
>
> Snippet :-
>
> CREATE or REPLACE FUNCTION FUNC1
> (
> << List of elements >>
> ) AS $$
>
> DECLARE
>
> TEST_CODES record1 ARRAY;
>  TEMP_REF_VALUE VARCHAR(4000);
>
> BEGIN
> IS_VALID := 'S';
>
>   SELECT ARRAY
>(SELECT ROW(Col1,Col2,COl3,Col4) ::record1
> FROM table1  INTO TEST_CODES
> IF array_length(TEST_CODES, 1) > 0 THEN
> FOR indx IN array_lower(TEST_CODES, 1)..array_upper(TEST_CODES, 1) LOOP
>  IF TEST_CODES[indx].COL1 = 'Y' THEN
> TEMP_REF_VALUE:=TEST_CODES[indx].Col2;
> TEST_CODES := array_remove(TEST_CODES,TEMP_REF_VALUE);
> END IF;
>END Loop;
> END IF;
>
>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: How to remove elements from array .

2018-07-06 Thread Pavel Stehule
Hi

2018-07-06 10:19 GMT+02:00 Charles Clavadetscher :

> Hi
>
>
>
> *From:* Brahmam Eswar [mailto:brahmam1...@gmail.com]
> *Sent:* Freitag, 6. Juli 2018 09:50
> *To:* pgsql-general ;
> pgsql-hack...@postgresql.org
> *Subject:* How to remove elements from array .
>
>
>
> Hi ,
>
>
>
> I tried to use array_remove to remove elements from an array but it's
> saying function doesn't exist . I'm able to use other array functions.
>
>
>
> 1) Capture the results with multiple columns into array .
>
> 2)  if ay results exist then loop through an array to find out the record
> with col1='Y'
>
> 3)  If col1='Y' then get the respective value of Col2 (10) and delete the
> similar records of col2 if exist.
>
>
>
> Col1Col2
>
>  Y 10
>
>  N 20
>
> N  10
>
>
>
> Need to delete record1 and record3.To delete the array records i'm using
> array_remove but it says doesn't exist.
>
>
>
> Version pgadmin4 .
>
>
>
>
>
>
>
>
>
>
>
>
>
> Snippet :-
>
>
>
> CREATE or REPLACE FUNCTION FUNC1
>
> (
>
>   << List of elements >>
>
> ) AS $$
>
>
>
> DECLARE
>
>
>
>   TEST_CODES record1 ARRAY;
>
>   TEMP_REF_VALUE VARCHAR(4000);
>
>
>
> BEGIN
>
> IS_VALID := 'S';
>
>
>
>   SELECT ARRAY
>
>  (SELECT ROW(Col1,Col2,COl3,Col4) ::record1
>
>   FROM table1  INTO TEST_CODES
>
>
>
>   IF array_length(TEST_CODES, 1) > 0 THEN
>
>
>
>   FOR indx IN array_lower(TEST_CODES,
> 1)..array_upper(TEST_CODES, 1) LOOP
>
>IF TEST_CODES[indx].COL1 = 'Y' THEN
>
>   TEMP_REF_VALUE:=TEST_CODES[indx].Col2;
>
>   TEST_CODES :=
> array_remove(TEST_CODES,TEMP_REF_VALUE);
>
>   END IF;
>
>  END Loop;
>
> END IF;
>
>
>
>
> --
>
> Thanks & Regards,
> Brahmeswara Rao J.
>
>
>
> I am not so in clear why you are using arrays in a function for that.
>
> A solution with SQL would be:
>

I don't understand the request either.


>
>
> CREATE TABLE tst (
>
>   col1 text,
>
>   col2 integer
>
> );
>
>
>

Attention - temp tables are expensive in Postgres (mainly under higher
load), so what can be done simply with arrays should be done with arrays.

Regards

Pavel


> INSERT INTO tst VALUES ('Y', 10), ('N', 20), ('N', 10);
>
>
>
> SELECT * FROM tst;
>
>
>
> col1 | col2
>
> --+--
>
> Y|   10
>
> N|   20
>
> N|   10
>
> (3 rows)
>
>
>
> DELETE FROM tst t
>
> USING (SELECT * FROM tst
>
>WHERE col1 = 'Y') AS x
>
> WHERE t.col2 = x.col2;
>
>
>
> SELECT * FROM tst;
>
>
>
> col1 | col2
>
> --+--
>
> N|   20
>
> (1 row)
>
>
>
> Regards
>
> Charles
>


Re: How to remove elements from array .

2018-07-06 Thread Pavel Stehule
2018-07-06 11:45 GMT+02:00 Brahmam Eswar :

> Hi All,
>
> My request is simple,
>
> Just browse the results from a table into an array and loop through array
> results to find out to unnecessary records and delete them based on certain
> business conditions and print the rest of the records.
>
> Below are the array results from table.
>
>  {"(20310,https://google.com,AP,BR,,Y)","(20310,https://google.com
> ,AP,,,N)","(20311,https://google.com,AP,,,N)"}
>
> Tried to apply the Unnest on array results but giving an error  at
> "https://"; .
>
> Can we iterate over unnest records?
>

Sure - it is a relation like any other.

Can you send a test case?



>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: How to set array element to null value

2018-07-09 Thread Pavel Stehule
2018-07-09 11:58 GMT+02:00 Brahmam Eswar :

> I'm trying to reset array element to null. but 3rd line of below snippet
> is giving the compilation error.
>
>
> FOR indx_1 IN array_lower(X, 1)..array_upper(X, 1) LOOP
> IF X[indx_1].REFERENCE_VALUE = 'ABC' THEN
> X[indx_1].REFERENCE_VALUE:='';
> END IF;
> END LOOP;
>
>
a) plpgsql doesn't support complex expressions on the left side of an
assignment, and b) '' is not NULL in PostgreSQL.

You can write your code something like this:

DECLARE r RECORD;
BEGIN
  FOR i IN array_lower(x, 1) .. array_upper(x, 1)
  LOOP
r := x[i];
IF r.reference_value = 'ABC' THEN
  r.reference_value := NULL;
  x[i] := r;
END IF;
  END LOOP;
END;

Regards

Pavel




> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-16 Thread Pavel Stehule
2018-07-16 13:52 GMT+02:00 Dmitry Igrishin :

>
>
> пн, 16 июл. 2018 г. в 14:26, :
>
>> We – and the majority of our customers - are mainly focused on Windows.
>> We use pgadmin iii and our own assistants. pgadmin iv ist still too slow on
>> Windows compared to pgadmin iii. That is one reason why we still use
>> PostgreSQL 9.6.
>>
> For performance reasons I mostly use the C++ language. Thus, I think the
> performance
> should not be a problem here.
>
>
>>
>>
>> That said, one requirement on a commercial tool for us would be royalty
>> free distribution to our customers. It should however provide the functions
>> of pgadmin iii.
>>
> Do you need an administration tool or an assistant for database
> development? I conceived Pgspa as a
> development tool, which works with source files organized in the usual
> way. For example, the sources
> of the schema "foo" could be organized as:
>   foo/functions/*.sql
>/views/*.sql
>/triggers/*.sql
>...
> The developer works with files rather than objects retrieved from the
> database and loaded
> into the tree view of the GUI (like in pgAdmin and most of other similar
> tools). Though, the
> database browser GUI is a useful feature of course, and should be
> implemented.
>

For a few years I have been thinking about a new IDE for stored procedures. It
probably should not be written from scratch, but it should be multiplatform.

What would be nice:

1. sources should be in files, with GIT support
2. integration with a development database + good autocomplete support
3. formatting - SQL, PL, ..
4. online code validation
5. There should not be a strong relation between files and schemas. It is not
too hard to know what content is in which file. There can be a physical
organization (by files) and a logical one (by schemas, functions,
views, ...)
6. good performance is important - but Java is good enough today - DBeaver
has good speed

Regards

Good luck - it can be pretty hard to write.

p.s. An IDE for developers is something different from an admin tool for
administrators. You should decide which one is the target.

Pavel



>
>>
>> Regards Klaus
>>
>>
>>
>>
>>
>> *Von:* Dmitry Igrishin 
>> *Gesendet:* Sonntag, 15. Juli 2018 18:59
>> *An:* pgsql-gene...@postgresql.org
>> *Betreff:* Do we need yet another IDE (SQL development assistant) for
>> PostgreSQL?
>>
>>
>>
>> Hello all,
>>
>>
>>
>> Colleagues. There is an idea to develop a commercial IDE for PostgreSQL
>> under Windows.
>>
>> At the initial stage, not so much an IDE, as an assistant for the server
>> side development.
>>
>> What features would you like to see in such an instrument? Thanks.
>>
>


Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-16 Thread Pavel Stehule
2018-07-16 14:28 GMT+02:00 Dmitry Igrishin :

>
>
> пн, 16 июл. 2018 г. в 15:01, Pavel Stehule :
>
>>
>>
>> 2018-07-16 13:52 GMT+02:00 Dmitry Igrishin :
>>
>>>
>>>
>>> пн, 16 июл. 2018 г. в 14:26, :
>>>
>>>> We – and the majority of our customers - are mainly focused on Windows.
>>>> We use pgadmin iii and our own assistants. pgadmin iv ist still too slow on
>>>> Windows compared to pgadmin iii. That is one reason why we still use
>>>> PostgreSQL 9.6.
>>>>
>>> For performance reasons I mostly use the C++ language. Thus, I think the
>>> performance
>>> should not be a problem here.
>>>
>>>
>>>>
>>>>
>>>> That said, one requirement on a commercial tool for us would be royalty
>>>> free distribution to our customers. It should however provide the functions
>>>> of pgadmin iii.
>>>>
>>> Do you need an administration tool or an assistant for database
>>> development? I conceived Pgspa as a
>>> development tool, which works with source files organized in the usual
>>> way. For example, the sources
>>> of the schema "foo" could be organized as:
>>>   foo/functions/*.sql
>>>/views/*.sql
>>>/triggers/*.sql
>>>...
>>> The developer works with files rather than objects retrieved from the
>>> database and loaded
>>> into the tree view of the GUI (like in pgAdmin and most of other similar
>>> tools). Though, the
>>> database browser GUI is a useful feature of course, and should be
>>> implemented.
>>>
>>
>> Few years I am thinking about new IDE for stored procedures. Probably It
>> should not be written from scratch, but It should to be multiplatform.
>>
> Me too :-) I have a command line prototype of the tool with the basic
> functional. It's written
> in C++ by using the Pgfe client library and in PL/pgSQL as the PostgreSQL
> extension.
>
>
>> what can be nice
>>
>> 1. source should be in files with GIT support
>>
> +1. It's the main feature. Already done.
>
>> 2. integration with developer databese + well autocomplete support
>>
> It's the most hard part and could be implemented later.
>

The basic autocomplete is necessary - table names, column names, .. It
should not be too intelligent - but this is the main benefit over generic,
already available IDEs.


> 3. formatting - SQL, PL, ..
>>
>  Good feature for future releases.
>
4. online code validation
>>
> Not sure I understand. Can you please elaborate what do you mean?
>

For PL/pgSQL: on a single keypress, send the source code to the server and
highlight the errors (it can be integrated with plpgsql_check). For SQL: flag
usage of non-existing identifiers, ..



> 5. The should not be strong relation between files and schemas. Now is not
>> too hard to have information what content is in some file. There can be
>> physical organization (by files), and logical (by schemas, functions,
>> views, ...)
>>
> I agree and there is no problems with it. But logical organization would
> be a bit simpler
> to implement, and would be suitable for the most users. Also it can be
> even helpful when someone
> working with foreign project since the database objects are arranged in
> shelves.
>

I cannot estimate the cost of these variants - I use a mapping of one schema
to one or more files, but objects are divided into files by dependency - some
objects can be updated simply, others not.

Views are a very specific kind of DB object. The IDE can help with changes
to views. It is pretty hard now due to dependencies.


> 6. good performance is important - but Java is good enough today - DBeaver
>> is has good speed
>>
> My primary (and favorite) language still C++ :-)
>

I have no problem with it. But C++ is harder for junior developers, and
multiplatform Qt can be expensive for a commercial product. But I understand
personal preferences (I don't like Java either). On the other hand - the
performance argument is not valid against Java.


>> Regards
>>
>> Good luck - can be pretty hard to write it.
>>
> Thank you, Pavel! But I haven't decided about starting this project, since
> I'm not sure about
> the interest from the community.
>

Understood. A developer is always alone at the start. But a lot of the work is
already done. If I started a similar project (though I have no such plan), I
would not try to write my own IDE; I would take some existing one and write a
plugin for Eclipse, or something else.


Although the work is maybe harder, you can build a wider community more
quickly.

Regards

Pavel



>> p.s. IDE for developers is some different than admin tool for
>> administrators. Should be decided what is target.
>>
> Yeah, I'm talking about the tool for developers here.
>


Re: Do we need yet another IDE (SQL development assistant) for PostgreSQL?

2018-07-16 Thread Pavel Stehule
2018-07-16 15:22 GMT+02:00 Dmitry Igrishin :

>
>
> пн, 16 июл. 2018 г. в 16:00, Pavel Stehule :
>
>>
>>
>> 2018-07-16 14:28 GMT+02:00 Dmitry Igrishin :
>>
>>>
>>>
>>> пн, 16 июл. 2018 г. в 15:01, Pavel Stehule :
>>>
>>>>
>>>>
>>>> 2018-07-16 13:52 GMT+02:00 Dmitry Igrishin :
>>>>
>>>>>
>>>>>
>>>>> пн, 16 июл. 2018 г. в 14:26, :
>>>>>
>>>>>> We – and the majority of our customers - are mainly focused on
>>>>>> Windows. We use pgadmin iii and our own assistants. pgadmin iv ist still
>>>>>> too slow on Windows compared to pgadmin iii. That is one reason why we
>>>>>> still use PostgreSQL 9.6.
>>>>>>
>>>>> For performance reasons I mostly use the C++ language. Thus, I think
>>>>> the performance
>>>>> should not be a problem here.
>>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> That said, one requirement on a commercial tool for us would be
>>>>>> royalty free distribution to our customers. It should however provide the
>>>>>> functions of pgadmin iii.
>>>>>>
>>>>> Do you need an administration tool or an assistant for database
>>>>> development? I conceived Pgspa as a
>>>>> development tool, which works with source files organized in the usual
>>>>> way. For example, the sources
>>>>> of the schema "foo" could be organized as:
>>>>>   foo/functions/*.sql
>>>>>/views/*.sql
>>>>>/triggers/*.sql
>>>>>...
>>>>> The developer works with files rather than objects retrieved from the
>>>>> database and loaded
>>>>> into the tree view of the GUI (like in pgAdmin and most of other
>>>>> similar tools). Though, the
>>>>> database browser GUI is a useful feature of course, and should be
>>>>> implemented.
>>>>>
>>>>
>>>> Few years I am thinking about new IDE for stored procedures. Probably
>>>> It should not be written from scratch, but It should to be multiplatform.
>>>>
>>> Me too :-) I have a command line prototype of the tool with the basic
>>> functional. It's written
>>> in C++ by using the Pgfe client library and in PL/pgSQL as the
>>> PostgreSQL extension.
>>>
>>>
>>>> what can be nice
>>>>
>>>> 1. source should be in files with GIT support
>>>>
>>> +1. It's the main feature. Already done.
>>>
>>>> 2. integration with developer databese + well autocomplete support
>>>>
>>> It's the most hard part and could be implemented later.
>>>
>>
>> The basic autocomplete is necessary - table names, column names, .. It
>> should not be too intelligent - but this is main benefit again generic
>> already available IDE.
>>
> Suppose the one write
>   create table foo (id integer default n
> and the autocomplete shows all it knows that starts with "n". Would you be
> satisfied with such an autocomplete? :-)
> Me - not. (Although it is relatively easy to implement.)
>
>>
>>
>>> 3. formatting - SQL, PL, ..
>>>>
>>>  Good feature for future releases.
>>>
>> 4. online code validation
>>>>
>>> Not sure I understand. Can you please elaborate what do you mean?
>>>
>>
>> For PLpgSQL simple (press one key) send source code to server and
>> highlight errors (it can be integrated with plpgsql_check). For SQL using
>> not existing identifier, ..
>>
> Wow, cool! With plpgsql_check it's possible to achieve the user experience
> similar to the SLIME - the IDE for Common Lisp.
>
>>
>>
>>
>>> 5. The should not be strong relation between files and schemas. Now is
>>>> not too hard to have information what content is in some file. There can be
>>>> physical organization (by files), and logical (by schemas, functions,
>>>> views, ...)
>>>>
>>> I agree and there is no problems with it. But logical organization would
>>> be a bit simpler
>>> to implement, and would be suitable for the most users. Also it can be
>>> even helpful when someone
>>> working with foreign project since the database objects are arranged in

Re: Return Multiple Rows from Store Function

2018-07-27 Thread Pavel Stehule
Hi

2018-07-27 11:24 GMT+02:00 Brahmam Eswar :

> Hi ,
>
> Returning multiple rows from store functions using "RETURNS TABLE" and
> RETURN QUERY.  The results set may have more than 50k records. Does it give
> any performance issues related to memory? if yes  how to avoid it
>
>
This command uses a tuple store - the result is held in memory up to work_mem
size, and beyond that it is materialized (spilled to a temporary file).

Probably the worst effect is the invisibility of the embedded query to the
planner. The default row estimate for an SRF (set returning function) is 1000
rows. If you return 50K rows, the estimate will be really off, and the query
plan can be strongly suboptimal.

The best practice is not to use functions like this. Functions should not
substitute for views. It is an antipattern with more than one possible
performance issue.
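
For illustration only (funcq and table1 are the names from your example): you
can at least attach a better row estimate to the function, or - usually better
- drop the wrapper and expose the query as a view so the planner sees it
directly.

ALTER FUNCTION funcq(character varying) ROWS 50000;

-- or
CREATE VIEW v_table1 AS SELECT a, b, c FROM table1;
SELECT a, b, c FROM v_table1 WHERE c = 'some value';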

regards

Pavel


> CREATE OR REPLACE FUNCTION funcq(COL1 character varying)
> )
> RETURNS TABLE
> ( a VARCHAR,
>   b VARCHAR,
>   c varchar)
>  AS $$
>
> BEGIN
>
>   RETURN QUERY SELECT a,b,c from table1 where C= COL1;
>
> END;
> $$
> LANGUAGE plpgsql;
>
>
> --
> Thanks & Regards,
> Brahmeswara Rao J.
>


Re: check_function_bodies not doing much

2018-08-07 Thread Pavel Stehule
Hi

2018-08-07 21:17 GMT+02:00 Marcelo Lacerda :

> I was trying to get postgres to warn me that I'm referencing a table that
> it doesn't exists inside a function so I was told on the IRC to check the
> setting "check_function_bodies", however when I use it in a plpgsql
> function it doesn't actually check if the tables in the body exist. Is this
> the correct behavior?
>
> Example:
> http://paste.debian.net/1037080/
>

It is the expected behaviour. PL/pgSQL immediately checks only the syntax of
embedded SQL. With this design, plpgsql functions are not too sensitive to
object dependencies. You can reference temporary tables that usually don't
exist at plpgsql validation time.

For deeper check you can use plpgsql_check
https://github.com/okbob/plpgsql_check

It does almost all possible static checks.
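
A minimal usage sketch (assuming the extension is already built and installed;
the function name is just an example):

CREATE EXTENSION plpgsql_check;

SELECT * FROM plpgsql_check_function('myfunc()');
-- reports references to missing tables/columns, unused variables, etc.,
-- without executing myfunc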

Regards

Pavel


Re: check_function_bodies not doing much

2018-08-07 Thread Pavel Stehule
2018-08-08 0:02 GMT+02:00 Marcelo Lacerda :

> That's a whole different nightmare that I'm expecting.
>



>
> "Yep I double-checked all my functions to see if any would break if I
> change this field mytable.a into 2 fields mytable.a1 and mytable.a2 and
> everything is ok."
>
> *1 month later*
>
> "Why is this error log for this application that I wrote one year ago so
> big? I haven't changed anything!"
>
> Error table mytable has no column a
> Error table mytable has no column a
> Error table mytable has no column a
> ...
>
> It's frustrating that the references that a function make to the tables
> and fields it access aren't taken in account for the validation of whether
> a change to the structure of the database breaks the APIs that the database
> exposes.
>

This cannot be done due to possible dynamic SQL. And plpgsql_check solves this
issue really well.

Regards

Pavel


>
> On Tue, Aug 7, 2018 at 6:44 PM Merlin Moncure  wrote:
>
>> On Tue, Aug 7, 2018 at 2:31 PM Tom Lane  wrote:
>> >
>> > Marcelo Lacerda  writes:
>> > > I was trying to get postgres to warn me that I'm referencing a table
>> that
>> > > it doesn't exists inside a function so I was told on the IRC to check
>> the
>> > > setting "check_function_bodies", however when I use it in a plpgsql
>> > > function it doesn't actually check if the tables in the body exist.
>> Is this
>> > > the correct behavior?
>> >
>> > Yes.  It's supposed to be a syntax check, not a check that the function
>> > would work when executed.  (Depending on the particular PL you're using,
>> > which you didn't mention, it might be a pretty weak syntax check too.)
>> >
>> > An example of why a thorough check would be inadvisable is that a
>> trigger
>> > function might contain references to OLD and NEW that are in code paths
>> > protected by checks on the trigger event type.  That could be perfectly
>> > OK, but a static check couldn't tell.
>> >
>> > I believe there are some external tools floating around that check
>> things
>> > more aggressively, and hence with a higher rate of false positives.
>>
>> The only valid use of this GUC that I can think of is to work around
>> this problem;
>> postgres=# create or replace function f() returns void as
>> $$
>>   create temp table x(id int);
>>   delete from x;
>> $$ language sql;
>> ERROR:  relation "x" does not exist
>>
>> ...I've since given up on writing plain sql functions except for
>> inline cases though so I don't use it anymore.  Static resolution of
>> tables is not very useful since the state of the database as the time
>> of function creation is different than what it might be when the
>> function is run (as opposed to compiled languages obviously).
>>
>> merlin
>>
>


Re: User documentation vs Official Docs

2018-08-10 Thread Pavel Stehule
Hi

2018-08-10 21:00 GMT+02:00 Bruce Momjian :

> On Fri, Jul 20, 2018 at 05:31:40PM -0700, Adrian Klaver wrote:
> > JD sit down, I am going to agree with you:) The documentation as it
> stands
> > is very good, though it requires some fore knowledge to successfully
> > navigate. On pages with a lot of content it often is not evident, to
> many,
> > that there are actually examples at the bottom of the page. Also that the
> > exceptions to the rules are called out there also. The general concept of
> > presenting a task, writing a procedure to accomplish the task and
> pointing
> > to the documentation that covers the procedure would be a helpful
> addition.
> > It would be nice to point to something like that in a post rather then
> > continually rebuilding the explanation every time a new user hits the
> list.
> > Looking at the link posted upstream:
>
> I am jumping in late here, but I do have some thoughts on this topic.
> To me, there are three levels of information presentation:
>
> 1.  Task-oriented documents
> 2.  Exhaustive technical documentation/manuals
> 3.  Concept-level material
>
> I think we call agree that the Postgres documentation does very well
> with #2, and we regularly get complements for its quality.
>
> For #1, this is usually related to performing a task without requiring a
> full study of the topic.  For example, if I need iptables rules to block
> a sunrpc attack, or use NFS over ssh, I really want some commands that I
> can study and adjust to the task --- I don't want to study all the
> features of these utilities to get the job done.  This is an area the
> docs don't cover well, but our blogs and wikis do.
>
> For #3, this is mostly covered by books.  This topic requires a lot of
> explanation and high-level thinking.  We have some of that in our docs,
> but in general books probably do this better.
>

I wrote a lot of documentation related to plpgsql and some other topics, but
unfortunately it is in the Czech language. It is free, so it can be freely
translated.

Here are the links - on the page there is an option to switch on Google Translate:

https://postgres.cz/wiki/Jak_nepou%C5%BE%C3%ADvat_PL/pgSQL,_p%C5%99%C3%ADpadn%C4%9B_PL/SQL,_a_dal%C5%A1%C3%AD_fat%C3%A1ln%C3%AD_chyby
https://postgres.cz/wiki/PL/pgSQL

Regards

Pavel


> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>
>


Re: Returning Vector of Pairs with a PostgreSQL C Extension Function

2018-08-27 Thread Pavel Stehule
2018-08-27 14:40 GMT+02:00 TalGloz :

> It looks like changing the
>
> elements[0] = CStringGetDatum(localT1.c_str());
> elements[1] = CStringGetDatum(localT2.c_str());
>
> to:
>
> elements[0] = PointerGetDatum(cstring_to_text(localT1.c_str()));
> elements[1] = PointerGetDatum(cstring_to_text(localT2.c_str()));
>
> Solved the problem. If anyone thinks that there is a better way please tell
> me.
>

There is no better or worse way - you just have to use the correct way, and
that is the one.

There are already prepared macros

#define CStringGetTextDatum(s) PointerGetDatum(cstring_to_text(s))
#define TextDatumGetCString(d) text_to_cstring((text *) DatumGetPointer(d))

you can use them.

Regards

Pavel


> Best regards,
> Tal
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>


Re: using a plpgsql function argument as a table column.

2018-08-28 Thread Pavel Stehule
Hi

2018-08-29 7:09 GMT+02:00 Shaun Savage :

> I have a table with many years as columns. y1976, y2077, .. , y2019,y2020
> I want to dynamically return a column from a function.
>

no - it is not possible -

a function has to return exactly the same set of columns every time.
Theoretically you can use a SETOF RECORD function and specify the result
columns in the query, like

SELECT * FROM fx('xxx') y(c1, c2, c3, ..)

Personally, I find the design unfortunate - it goes against the ideas of
relational databases, so no native tooling will support it.

The best you can do is write a query generator and create the queries
dynamically on the client side.
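
A hedged sketch of the SETOF RECORD + dynamic SQL idea (the table, function and
column names below are invented for the example):

CREATE TABLE yearly (cola text, y2012 int, y2016 int);

CREATE OR REPLACE FUNCTION year_values(_col text)
RETURNS SETOF record AS $$
BEGIN
  -- %I quotes the column name safely, so the year column can be a parameter
  RETURN QUERY EXECUTE format(
    'SELECT cola, %1$I FROM yearly WHERE %1$I <> 0', _col);
END;
$$ LANGUAGE plpgsql;

-- the caller still has to spell out the output columns:
SELECT * FROM year_values('y2016') AS r(cola text, val int);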

Regards

Pavel


>
> select * from FUNCTION('y2016') .
>
> select t1.cola t1.colb, t1.colc, t2.y2016 from . Where t2.y2016 != 0;
>
> or if I select year y2012 I want FUNCTION('y2012')
>
> select t1.cola t1.colb, t1.colc, t2.y2012 from . Where t2.y2012 != 0;
>
>
> to generalize
>
> select * from FUNCTION( year_column )
>
> select t1.cola t1.colb, t1.colc, t2.year_column from . Where
> t2.year_column != 0;
>
> is it possible? if so how?
>
>
>
>
>
>
>
>


Re: Bitmap Heap Scan and Bitmap Index Scan

2018-09-15 Thread Pavel Stehule
Hi

so 15. 9. 2018 v 9:39 odesílatel Arup Rakshit  napsal:

> Here is a explain plan of a very simple query:
>
> aruprakshit=# explain analyze select first_name, last_name from users
> where lower(state) = 'colorado';
> QUERY PLAN
>
> --
>  Bitmap Heap Scan on users  (cost=5.86..161.40 rows=203 width=13) (actual
> time=0.134..0.444 rows=203 loops=1)
>Recheck Cond: (lower((state)::text) = 'colorado'::text)
>Heap Blocks: exact=106
>->  Bitmap Index Scan on lower_state_users_idx  (cost=0.00..5.81
> rows=203 width=0) (actual time=0.098..0.098 rows=203 loops=1)
>  Index Cond: (lower((state)::text) = 'colorado'::text)
>  Planning time: 0.263 ms
>  Execution time: 0.517 ms
> (7 rows)
>
> I read this
> https://www.postgresql.org/message-id/12553.1135634231%40sss.pgh.pa.us
> 
>  and
> https://www.postgresql.org/message-id/464F3C5D.2000700%40enterprisedb.com
>   to
> understand what this bitmap heap scan and index scan is. But there are some
> questions still in mind which I am not able to figure out yet.
>
> Does bitmap index apply when normal index scan is costly?
>

yes

> Does bitmap index always store page number of matching tuples instead of
> just the tuples?
>

As far as I know, it doesn't store tuples - if there is enough memory, then
TIDs are stored (page number, tuple number), otherwise only page numbers are
stored.


> What is Heap Blocks: exact=106 ?
>

see
https://paquier.xyz/postgresql-2/postgres-9-4-feature-highlight-lossyexact-pages-for-bitmap-heap-scan/

> Why the cost is higher in Heap scan than index scan?
>

It has to read all the matched pages, but depending on hardware and
configuration, this read can be fast.
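
To see the exact/lossy distinction yourself, shrink work_mem so the bitmap can
no longer keep one bit per tuple (the table and column names here are just
placeholders, and the effect only shows up on a large enough result set):

SET work_mem = '64kB';
EXPLAIN ANALYZE SELECT * FROM big_table WHERE indexed_col = 42;
-- Heap Blocks: exact=... lossy=...
-- lossy pages are the reason the Recheck Cond step exists: every tuple on a
-- lossy page has to be re-tested against the condition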

Regards

Pavel


> Thanks,
>
> Arup Rakshit
> a...@zeit.io
>
>
>
>


Re: array must have even number of elements

2018-09-20 Thread Pavel Stehule
Hi

čt 20. 9. 2018 v 19:55 odesílatel Susan Hurst <
susan.hu...@brookhurstdata.com> napsal:

>
> Why must an array have an even number of elements?  I need to use a
> trigger function on any table, some of which may have an odd number of
> columns that I want to cleanse before inserting/updating.
>

The hstore function takes its parameters as a sequence of (key, value) pairs -
so the number of elements must be even. An odd number signals a broken format.
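
For illustration (hstore extension installed):

SELECT hstore(ARRAY['a','1','b','2']);          -- "a"=>"1", "b"=>"2"
SELECT hstore(ARRAY['a','1','b']);              -- ERROR: array must have even number of elements
SELECT hstore(ARRAY['a','b'], ARRAY['1','2']);  -- two-array form: keys and values separately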

Your example is pretty hard to decode. Maybe you should use a different
function - I don't see the point of using the hstore type there.

Regards

Pavel




>
> Is there a workaround for this?
>
>
> ERROR: array must have even number of elements
>
> SQL state: 2202E
>
> Context: SQL statement "SELECT ($1 #=
> hstore(array[trim(replace(regexp_replace($1.c,'( ){2,}',' ','g'),' ','
> ')),trim(replace(regexp_replace($1.vc,'( ){2,}',' ','g'),' ','
> ')),trim(replace(regexp_replace($1.t,'( ){2,}',' ','g'),' ',' '))])).*"
> PL/pgSQL function store.trim_string_before_dml() line 44 at EXECUTE
>
>
>
> --  my test table
> create table dm.trg_test (c character(8), vc varchar(16), t text);
> insert into dm.trg_test (c,vc,t) values ('ctest','   vctest  ','
> ttest   ');
>
>
> -- code snippet that produced the error.
> -- new will be substituted for $1 during execution with using clause
> l_query_string := 'select ($1 #= hstore(array[' || l_column_list ||
> '])).*';
> execute format(l_query_string) using  new into   new;
> return new;
>
>
> Thanks for your help!
>
> --
> 
> Susan E Hurst
> Principal Consultant
> Brookhurst Data LLC
> Email: susan.hu...@brookhurstdata.com
> Mobile: 314-486-3261
>
>


Re: array must have even number of elements

2018-09-21 Thread Pavel Stehule
čt 20. 9. 2018 v 20:29 odesílatel Susan Hurst <
susan.hu...@brookhurstdata.com> napsal:

> Thanks, everyone!
>
> I get it now.  It's not just an array but an hstore array.  I changed my
> code to include the original values so now it works:
>
> -- new will be substituted for $1 during execution with using clause
>
> l_query_string := 'select ($1 #= hstore(array[' || l_orig_list || '],'
>
>|| 'array[' || l_clean_list || '])).*';
>
>
>
> Pavel...I am creating a trigger function to look for columns with char,
> varchar or text data types to purge any incoming or updated data of
> extraneous spaces and tabs both within the string and on either end.  We
> can use the same function from any table that calls it from a trigger.  Now
> that it works, we can refactor it to make it better.  I would welcome your
> suggestions for alternatives to hstore.
>
Tasks like this are not a good fit for plpgsql. You can look at PL/Perl or
PL/Pythonu, which are better for this kind of iteration over a record.

It can also be an easy task for a C extension.


> Thanks for your help!
>
with pleasure

Pavel


> Sue
>
>
>
> ---
> 
> Susan E Hurst
> Principal Consultant
> Brookhurst Data LLC
> Email: susan.hu...@brookhurstdata.com
> Mobile: 314-486-3261
>
> On 2018-09-20 13:04, Pavel Stehule wrote:
>
> Hi
>
> čt 20. 9. 2018 v 19:55 odesílatel Susan Hurst <
> susan.hu...@brookhurstdata.com> napsal:
>
>>
>> Why must an array have an even number of elements?  I need to use a
>> trigger function on any table, some of which may have an odd number of
>> columns that I want to cleanse before inserting/updating.
>
>
> The hstore function get parameters as sequence of pairs (key, value) - so
> the number should be even. Odd parameter signalize broken format.
>
> Your example is pretty crazy - I cannot to decode it. Maybe you should to
> use different function, I don't see a sense for using hstore type there.
> But I cannot to decode it.
>
> Regards
>
> Pavel
>
>
>
>
>>
>>
>> Is there a workaround for this?
>>
>>
>> ERROR: array must have even number of elements
>>
>> SQL state: 2202E
>>
>> Context: SQL statement "SELECT ($1 #=
>> hstore(array[trim(replace(regexp_replace($1.c,'( ){2,}',' ','g'),' ','
>> ')),trim(replace(regexp_replace($1.vc,'( ){2,}',' ','g'),' ','
>> ')),trim(replace(regexp_replace($1.t,'( ){2,}',' ','g'),' ',' '))])).*"
>> PL/pgSQL function store.trim_string_before_dml() line 44 at EXECUTE
>>
>>
>>
>> --  my test table
>> create table dm.trg_test (c character(8), vc varchar(16), t text);
>> insert into dm.trg_test (c,vc,t) values ('ctest','   vctest  ','
>> ttest   ');
>>
>>
>> -- code snippet that produced the error.
>> -- new will be substituted for $1 during execution with using clause
>> l_query_string := 'select ($1 #= hstore(array[' || l_column_list ||
>> '])).*';
>> execute format(l_query_string) using  new into   new;
>> return new;
>>
>>
>> Thanks for your help!
>>
>> --
>> 
>> Susan E Hurst
>> Principal Consultant
>> Brookhurst Data LLC
>> Email: susan.hu...@brookhurstdata.com
>> Mobile: 314-486-3261
>>
>>


Re: How to improve sql query to achieve the better plan

2018-09-30 Thread Pavel Stehule
Hi

ne 30. 9. 2018 v 18:23 odesílatel Arup Rakshit  napsal:

> I have the below query which is taking 1873 ms. How can I improve this?
>
> explain analyze select
> sum(coalesce(price_cents, 0)::bigint * coalesce(quantity, 0) * (1 -
> coalesce(workitems.discount, 0)/ 100)) as total_budget_cents,
> sum(coalesce(price_cents, 0)::bigint * coalesce(quantity, 0) * (1 -
> coalesce(workitems.discount, 0)/ 100) + coalesce(additional_cost_cents, 0)
> - coalesce(cost_reduction_cents, 0)) as final_budget_cents,
> projects.id as project_id
> from
> projects
> left join workitems on
> workitems.project_id = projects.id
> where
> workitems.deleted_at is null
> group by
> projects.id
> order by
> project_id asc
>
>
> And explain output is:
>
> Sort  (cost=62851.33..62856.07 rows=1897 width=35) (actual
> time=1872.867..1873.003 rows=1229 loops=1)
>   Sort Key: projects.id
>   Sort Method: quicksort  Memory: 145kB
>   ->  HashAggregate  (cost=62719.59..62748.04 rows=1897 width=35) (actual
> time=1871.281..1872.104 rows=1229 loops=1)
> Group Key: projects.id
> ->  Hash Right Join  (cost=159.68..45386.32 rows=364911 width=35)
> (actual time=2.226..637.936 rows=365784 loops=1)
>   Hash Cond: (workitems.project_id = projects.id)
>   Filter: (workitems.deleted_at IS NULL)
>   Rows Removed by Filter: 257457
>   ->  Seq Scan on workitems  (cost=0.00..36655.53 rows=623353
> width=43) (actual time=0.020..220.215 rows=623175 loops=1)
>   ->  Hash  (cost=135.97..135.97 rows=1897 width=16) (actual
> time=2.177..2.177 rows=1897 loops=1)
> Buckets: 2048  Batches: 1  Memory Usage: 105kB
> ->  Seq Scan on projects  (cost=0.00..135.97 rows=1897
> width=16) (actual time=0.013..1.451 rows=1897 loops=1)
> Planning time: 2.775 ms
> Execution time: 1873.308 ms
>
>
maybe a partial (conditional) index can help:

CREATE INDEX ON workitems(project_id) WHERE deleted_at is null

Regards

Pavel


> Projects table has the index:
>
> Indexes:
> "projects_pkey" PRIMARY KEY, btree (id)
> "index_projects_on_company_id" btree (company_id)
> "index_projects_on_deleted_at" btree (deleted_at)
> "index_projects_on_inspector_id" btree (inspector_id)
> "index_projects_on_managed_offline_by_user_id" btree
> (managed_offline_by_user_id)
> "index_projects_on_project_status_id" btree (project_status_id)
> "index_projects_on_shipyard_id" btree (shipyard_id)
> "index_projects_on_vessel_id" btree (vessel_id)
>
> Workitems table has the index:
>
> Indexes:
> "workitems_pkey" PRIMARY KEY, btree (id)
> "index_workitems_on_company_id" btree (company_id)
> "index_workitems_on_deleted_at" btree (deleted_at)
> "index_workitems_on_parent_workitem_id" btree (parent_workitem_id)
> "index_workitems_on_project_id" btree (project_id)
> "index_workitems_on_standard_workitem_id" btree (standard_workitem_id)
> "index_workitems_on_workitem_category_id" btree (workitem_category_id)
>
>
> Thanks,
>
> Arup Rakshit
> a...@zeit.io
>
>
>
>


Re: How to improve sql query to achieve the better plan

2018-09-30 Thread Pavel Stehule
ne 30. 9. 2018 v 18:49 odesílatel Arup Rakshit  napsal:

> I just added it as you said, but I am getting same plan.
>
>
> Sort  (cost=62842.16..62846.91 rows=1897 width=35) (actual
> time=1845.831..1845.950 rows=1229 loops=1)
>   Sort Key: projects.id
>   Sort Method: quicksort  Memory: 145kB
>   ->  HashAggregate  (cost=62710.42..62738.88 rows=1897 width=35) (actual
> time=1844.178..1845.060 rows=1229 loops=1)
> Group Key: projects.id
> ->  Hash Right Join  (cost=159.68..45382.09 rows=364807 width=35)
> (actual time=1.534..618.717 rows=365784 loops=1)
>   Hash Cond: (workitems.project_id = projects.id)
>   Filter: (workitems.deleted_at IS NULL)
>   Rows Removed by Filter: 257457
>   ->  Seq Scan on workitems  (cost=0.00..36653.75 rows=623175
> width=43) (actual time=0.047..213.842 rows=623175 loops=1)
>   ->  Hash  (cost=135.97..135.97 rows=1897 width=16) (actual
> time=1.478..1.478 rows=1897 loops=1)
> Buckets: 2048  Batches: 1  Memory Usage: 105kB
> ->  Seq Scan on projects  (cost=0.00..135.97 rows=1897
> width=16) (actual time=0.006..0.914 rows=1897 loops=1)
> Planning time: 0.498 ms
> Execution time: 1846.100 ms
>
>
Then there is not much that can be done better - maybe you can try
PostgreSQL 11 with a parallel hash join -- the query processes about 600K rows,
and a time of about 2 sec for that is reasonable.


> ——
>
> Indexes:
> "workitems_pkey" PRIMARY KEY, btree (id)
> "index_workitems_on_company_id" btree (company_id)
> "index_workitems_on_deleted_at" btree (deleted_at)
> "index_workitems_on_parent_workitem_id" btree (parent_workitem_id)
> "index_workitems_on_project_id" btree (project_id)
> "index_workitems_on_standard_workitem_id" btree (standard_workitem_id)
> "index_workitems_on_workitem_category_id" btree (workitem_category_id)
> "patrial_index_workitems_200_1" btree (project_id) WHERE deleted_at IS
> NULL
>
>
> Thanks,
>
> Arup Rakshit
> a...@zeit.io
>
>
>
> On 30-Sep-2018, at 10:15 PM, Pavel Stehule 
> wrote:
>
> CREATE INDEX ON workitems(project_id) WHERE deleted_at is null
>
>
>


Re: How can I get and handle the status of sql statements that run in plpgsql ?

2018-10-03 Thread Pavel Stehule
Hi

st 3. 10. 2018 v 18:26 odesílatel David Gauthier 
napsal:

> I found "no_data" here...
> https://www.postgresql.org/docs/10/static/errcodes-appendix.html
>
> update blah, blah...
> if(no_data) then
>   raise exception "update failed to update anything";
> end if
>

UPDATE 
IF NOT FOUND THEN
  RAISE EXCEPTION 'no rows updated';
END IF;

or

DECLARE rc int;
BEGIN
UPDATE 
GET DIAGNOSTICS rc = ROW_COUNT;
IF rc = 0 THEN
 ...

Regards

Pavel


>
> Is that how you detect if nothing was updated and how to make sure the
> thing returns and rolls back ?
>
> On Wed, Oct 3, 2018 at 11:46 AM David Gauthier 
> wrote:
>
>> Thanks Adrian and Christopher !
>>
>> So the transaction will be rolled back automatically if there's a
>> problem.  Got it !
>>
>> Question:  How do I detect when it is appropriate to raise notice so as
>> to be able to see the error message?  Or will that automatically be sent to
>> stdout if an error occurs?
>>
>> Question: Also, how can I detect how many records were operated on by,
>> say, an update statement?  In the example I gave, I would want to make sure
>> that the transfer amount was deducted from the savings AND that the amount
>> was added to the checking.  Both need to be 1, else I want to...
>> 1) send a message (raise notice)
>> 2) rollback somehow.
>>
>>
>>
>>
>> On Tue, Oct 2, 2018 at 5:27 PM Adrian Klaver 
>> wrote:
>>
>>> On 10/2/18 1:47 PM, David Gauthier wrote:
>>> > Hi:
>>> > psql (9.6.7, server 9.5.2) on linux
>>> >
>>> > How does one get the status of an sql statement executed in plpgsql?
>>> If
>>> > that status is cryptic, how can that be translated to something which
>>> > someone could understand?   Finally, how can I effectively do a start
>>> > transaction and either rollback or commit based on the results of the
>>> > sql statements run?
>>> >
>>>
>>> >
>>> > Of course I don't know what the  and
>>> > "something_went_wrong" pieces look like, or they even make sense with
>>> > how this sort of thing shold be properly handled in plpgsql.  Also, in
>>>
>>> The below(read to bottom of the page) might help:
>>>
>>>
>>> https://www.postgresql.org/docs/10/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
>>>
>>>
>>> > my trials, it appears that plpgsql doesn't like "start transaction".
>>> So
>>> > how is that piece done ?
>>> >
>>> > Thanks in Advance for any help !
>>> >
>>>
>>>
>>> --
>>> Adrian Klaver
>>> adrian.kla...@aklaver.com
>>>
>>


Re: Why the index is not used ?

2018-10-06 Thread Pavel Stehule
so 6. 10. 2018 v 11:57 odesílatel ROS Didier  napsal:

> Hi
>
> I would like to submit the following problem to the PostgreSQL community.
> In my company, we have data encryption needs.
> So I decided to use the following procedure :
>
>
>
> (1)Creating a table with a bytea type column to store the encrypted
> data
> CREATE TABLE cartedecredit(card_id SERIAL PRIMARY KEY, username
> VARCHAR(100), cc bytea);
>
>
>
> (2)inserting encrypted data
> INSERT INTO cartedecredit(username,cc)  SELECT 'individu ' || x.id,
> pgp_sym_encrypt('test value ' || x.id, 'motdepasse','compress-algo=2,
> cipher-algo=aes256') FROM generate_series(1,10) AS x(id);
>
>
>
> (3)Querying the table
> SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE
> pgp_sym_decrypt(cc, 'motdepasse')='test value 32';
>
> pgp_sym_decrypt
>
> -
>
> test value 32
>
> (1 row)
>
>
>
> Time: 115735.035 ms (01:55.735)
> -> the execution time is very long. So, I decide to create an index
>
>
>
> (4)Creating an index on encrypted data
> CREATE INDEX idx_cartedecredit_cc02 ON cartedecredit(cc);
>

this index cannot help.

But a functional index on pgp_sym_decrypt(cc, 'motdepasse') can. Unfortunately,
the index then stores the decrypted values, for example:

CREATE INDEX idx_cartedecredit_cc_decrypt
    ON cartedecredit (pgp_sym_decrypt(cc, 'motdepasse'));


>
>
> (5)Querying the table again
>
> SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE
> pgp_sym_decrypt(cc, 'motdepasse')='test value 32';
> pgp_sym_decrypt
>
> -
>
> test value 32
>
> (1 row)
>
>
>
> Time: 118558.485 ms (01:58.558) -> almost 2 minutes !!
> postgres=# explain analyze SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM
> cartedecredit WHERE pgp_sym_decrypt(cc, 'motdepasse')='test value 32';
>
>   QUERY PLAN
>
>
> --
>
> Seq Scan on cartedecredit  (cost=0.00..3647.25 rows=500 width=32) (actual
> time=60711.787..102920.509 rows=1 loops=1)
>
>Filter: (pgp_sym_decrypt(cc, 'motdepasse'::text) = 'test value
> 32'::text)
>
>Rows Removed by Filter: 9
>
> Planning time: 0.112 ms
>
> Execution time: 102920.585 ms
>
> (5 rows)
>
>
>
> è the index is not used in the execution plan. maybe because of the use
> of a function in the WHERE clause. I decide to modify the SQL query
>
>
>
> (6)Querying the table
> SELECT pgp_sym_decrypt(cc, 'motdepasse') FROM cartedecredit WHERE 
> *cc*=pgp_sym_encrypt('test
> value 32', 'motdepasse');
>

It is strange - this should use the index, given there is a plain index over
the cc column.

What is the result of explain analyze when you penalize the seq scan with

set enable_seqscan to off



> pgp_sym_decrypt
>
> -
>
> (0 rows)
>
>
>
> Time: 52659.571 ms (00:52.660)
>
> è The execution time is very long and I get no result (!?)
>
> QUERY PLAN
>
>
> ---
>
> Seq Scan on cartedecredit  (cost=0.00..3646.00 rows=1 width=32) (actual
> time=61219.989..61219.989 rows=0 loops=1)
>
>Filter: (cc = pgp_sym_encrypt('test value 32'::text,
> 'motdepasse'::text))
>
>Rows Removed by Filter: 10
>
> Planning time: 0.157 ms
>
> Execution time: 61220.035 ms
>
> (5 rows)
>
>
>
> è My index is not used.
>
>
> QUESTIONS :
> -  why I get no result ?
>
> -why the index is not used?
>
> Thanks in advance
>
>
>
> Best Regards
> Didier
>
>
>
>
>
>
>
>
> * Didier ROS*
> * Expertise SGBD*
>
>
> *DS IT/IT DMA/Solutions Groupe EDF/Expertise Applicative - SGBD *
>
>
>
>
>
>

Re: Optimizing Postgresql ILIKE while query

2018-10-21 Thread Pavel Stehule
Hi

po 22. 10. 2018 v 7:57 odesílatel aman gupta  napsal:

> Hi Team,
>
> Greetings for the day!!
>
> Platform:
>
> PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
> 20150623 (Red Hat 4.8.5-11), 64-bit
>
>
> Issue:
>
>
> We have the base table which contains 22M records and we created a view on
> top of it while querying the view with ILIKE clause it took 44 seconds and
> with LIKE Clause 20 Seconds
>
> Query:
>
> fm_db_custom_db=# EXPLAIN (ANALYZE, TIMING OFF)
> select
> destination,hostname,inputfilename,inputtime,logicalservername,outputfilename,outputtime,processinglink,source,totalinputbytes,totalinputcdrs,totaloutputbytes,totaloutputcdrs
> from mmsuper.test_20m_view  where inputfilename ilike
> '%SDPOUTPUTCDR_4001_BLSDP09_ADM_4997_18-10-15-02549.ASN%';
>
> Also attaching the comparison for both ILIKE and LIKE test performed.
>
> Expectation:
>
> How can we optimize our ILIKE query, since it is hardcoded in the
> application and we can't use any other keyword than ILIKE .
>

Look at a trigram index:

https://www.postgresql.org/docs/11/static/pgtrgm.html
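
A sketch of what that looks like (the index has to go on the base table behind
the view; the table name below is an assumption):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX idx_inputfilename_trgm
    ON mmsuper.test_20m_base USING gin (inputfilename gin_trgm_ops);

-- ILIKE '%...%' predicates on inputfilename can then use the index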

Regards

Pavel

>
> BR//
> Aman Gupta
> +918447611183
> amangp...@gmail.com
>
>


Re: CRecordset::Open postgresql procedure call don't work

2019-07-15 Thread Pavel Stehule
Hi

Dne po 15. 7. 2019 13:40 uživatel jeanclaude marzin <
jeanclaude.mar...@sfr.fr> napsal:

> ‌Hi
> I migrate from MSSQL to postgresql 11. I translate MSSQL stored procedure
> to Postgresql one :
>
> CREATE PROCEDURE procacp ()
> LANGUAGE SQL
> AS $$
> SELECT tabjdbexploit.jdbeid, tabjdbexploit.jdbeproc,
> tabjdbexploit.jdbedate, tabjdbexploit.jdbetypemsg,
> tabjdbexploit.jdbeurg, tabjdbexploit.jdbeconfid,
> tabjdbexploit.jdbeetat, tabmsgacp.acpid,
> tabmsgacp.acpnumserie,
> tabmsgacp.acpsignserv, tabmsgacp.acpnumligne,
> tabmsgacp.acpiaorigine, tabmsgacp.acpgdh,
> tabmsgacp.acperv,  tabmsgacp.acpcdu,
> tabmsgacp.acpdir, tabmsgacp.acppere,
> tabmsgacp.acpcomplement, tabmsgacp.acpsection
>
> FROM tabjdbexploit INNER JOIN
> tabmsgacp ON tabjdbexploit.jdbeid = tabmsgacp.acpid
>
> ORDER BY tabjdbexploit.jdbedate ASC
> $$;
>
>
>
It cannot work - PostgreSQL procedures are like Oracle's procedures - they
cannot return a result set. Only OUT variables can be changed.
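
If the client needs a result set, one option is a function instead of a
procedure - a sketch with a shortened column list (the column types here are
assumptions):

CREATE OR REPLACE FUNCTION procacp_fn()
RETURNS TABLE (jdbeid integer, jdbeproc text, acpid integer)
LANGUAGE SQL
AS $$
  SELECT e.jdbeid, e.jdbeproc, m.acpid
    FROM tabjdbexploit e
    JOIN tabmsgacp m ON e.jdbeid = m.acpid
   ORDER BY e.jdbedate ASC;
$$;

SELECT * FROM procacp_fn();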

Regards

Pavel


>
>
>
> All seems OK in PgAdmin 4, procedure is created
>
> I use ODBC and Crecorset in C++
>
> When i use the call strSQL = "{CALL procacp()}". It don't work :
>
> Message error : ERROR: procacp() is a procedure
>
> I don't know what to do
>
> Thanks in advance for any assistance
>
> Jean-Claude


Re: How to check if a field exists in NEW in trigger

2019-08-04 Thread Pavel Stehule
po 5. 8. 2019 v 7:55 odesílatel Thomas Kellerer  napsal:

> Igal @ Lucee.org schrieb am 05.08.2019 um 00:52:
> > I have the following statement in a trigger:
> >
> > new.email = lower(new.email);
> >
> > When I try to update a record without setting the email column however,
> I get an error:
> >
> > SQL Error [42703]: ERROR: record "new" has no field "email"
> >   Where: SQL statement "SELECT lower(new.email)"
> > PL/pgSQL function on_record_modified() line 26 at assignment
> >
> > I have seen some hacks suggesting TRY/CATCH or converting to a JSON
> > and checking if the field exists, but I would think that there's a
> > better way to check if the field is in the NEW record, no?
>
>
> I assume using to_jsonb(new) and then check for the key in the json value
> will be faster than checking e.g. information_schema.column
> or pg_catalog.pg_attribute
>

An alternative solution can be to use another language than PL/pgSQL -
PL/Pythonu or PL/Perl (there it is a simple task). PL/pgSQL is not designed for
very dynamic code. PL/pgSQL triggers are designed for a stable schema - you are
expected to know whether the table has an email column or not.

Catching errors in PL/pgSQL is a relatively expensive solution due to the
related savepoint overhead in the background.

Regards

Pavel


Re: How to check if a field exists in NEW in trigger

2019-08-05 Thread Pavel Stehule
po 5. 8. 2019 v 10:10 odesílatel Thomas Kellerer 
napsal:

> Pavel Stehule schrieb am 05.08.2019 um 08:19:
> >>> I have seen some hacks suggesting TRY/CATCH or converting to a JSON
> >>> and checking if the field exists, but I would think that there's a
> >>> better way to check if the field is in the NEW record, no?
> >
> >> I assume using to_jsonb(new) and then check for the key in the json
> value
> >> will be faster than checking e.g. information_schema.column
> >> or pg_catalog.pg_attribute
> >
> > Alternative solution can be using other language than PLpgSQL -
> > PLPythonu or PLPerl (there it is simple task). This language is not
> > designed for too dynamic code. PLpgSQL triggers are designed for
> > stable schema - you should to know if table has email column or not.
> >
> > Catching errors in PLpgSQL is relative expensive solution due related
> > savepoint overhead in background.
> Yes, exception handling (or a catalog lookup) is expensive.
> That's why I suggested that using to_jsonb() has the least overhead.
>
> The check is then as simple as:
>
>if (to_jsonb(new) ? 'email') then
>  ... do something
>end if;
>

Casting from record to jsonb is not free either :).

But surely, it is cheaper than savepoints.
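
Putting it together for the original trigger - a sketch, on the assumption that
the guarded assignment is only resolved for tables that actually have the
column:

CREATE OR REPLACE FUNCTION on_record_modified() RETURNS trigger AS $$
BEGIN
  IF to_jsonb(NEW) ? 'email' THEN
    NEW.email := lower(NEW.email);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;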

Pavel


> Thomas
>
>
>


Re: A question about trigger fucntion syntax

2019-08-11 Thread Pavel Stehule
Hi


> BTW, here is what I Ave tried.
>
> CREATE OR REPLACE FUNCTION check_permission()
>   RETURNS trigger AS
> $BODY$
> BEGIN
> SELECT
> permit
> FROM
> permitted_work
> WHERE
> NEW.employee_key = OLD.employee_key
> AND
> NEW.work_type_key = OLD.work_type_key
>
>RETURN permit;
> END;
> $BODY$
> LANGUAGE PLPGSQL;
>
> and when I try to insert it I get a syntax error at the RETURN
>

there is more than one issue

1) a trigger function has to return a record of the same row type as the table
the trigger is attached to. The column permit is a boolean, so something is
wrong there.

2) the structure of your function is a little bit strange. Probably you want
something like

CREATE OR REPLACE FUNCTION check_permission()
RETURNS trigger AS $$
DECLARE _permit boolean; -- variables should be declared;
BEGIN
  SELECT permit INTO _permit -- result should be assigned to variable
FROM permitted_work
   ...;
  IF NOT _permit THEN
RAISE EXCEPTION 'some error message';
  END IF;

  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

Regards

Pavel


>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: "storing" a calculated value in plsql function ?

2019-08-30 Thread Pavel Stehule
pá 30. 8. 2019 v 12:48 odesílatel stan  napsal:

> I have created a function (PLSQL) that does a complex select with joins on
> various
> tables and views, and returns a table.
>
> In the resultant table, I have raw data, and adjusted data. The adjusted
> data i
> all adjusted by a common factor, which is calculated in the select.
> Presently, I
> calculate this same adjustment factor several times in the select.
>
> Is there a way to reference this value, multiple times, once it is
> calculated? Or
> would I have to create a 2nd select that calculates this adjustment
> factor, and
> stores it in a PLSQL variable< and if I do that, can I reference this
> stored value
> in the select?
>

you can use custom configuration values (custom GUCs) as global variables:

https://wiki.postgresql.org/wiki/Variable_Design#PostgreSQL_User-Defined_GUCS
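
A minimal sketch of that approach (the setting name is arbitrary - it only
needs a dot in it - and the table/column names are invented):

-- compute the factor once, keep it for the rest of the transaction
SELECT set_config('myapp.adjust_factor',
                  (SELECT avg(rate) FROM rates)::text,
                  true);

-- reuse it as often as needed
SELECT raw_value,
       raw_value * current_setting('myapp.adjust_factor')::numeric AS adjusted
  FROM measurements;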

Regards

Pavel Stehule

>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: SQL equivalint of #incude directive ?

2019-08-30 Thread Pavel Stehule
pá 30. 8. 2019 v 15:49 odesílatel stan  napsal:

>
> I thought this would be common. But a quick Google only revealed what look
> to be
> workarounds.
>
> I am defining a bunch  of functions, and I would prefer to store them in a
> separate file, which then gets "source" by the main DB init file.
>
> Is there a standard way to do this?
>

No, there isn't.

Pavel


> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>
>


Re: echo work alike in SQL

2019-08-31 Thread Pavel Stehule
Hi

so 31. 8. 2019 v 17:46 odesílatel stan  napsal:

> Just started using the RAISE functionality to help in debugging.
>
> Tried to add it to an SQL script, only to realize it is a PLPGSQL
> extension.
>
> Is there a way to get a string to appear in the output f an SQL script?
>
> Can I do some sort of select?
>

if you run the SQL script from psql, then you can use \echo

https://www.postgresql.org/docs/current/app-psql.html

elsewhere you have to use SELECT,

or you can use your own function:

CREATE OR REPLACE FUNCTION public.debug_text(text)
 RETURNS text
 LANGUAGE plpgsql
AS $function$
BEGIN
  RAISE NOTICE '%', $1;
  RETURN $1;
END;
$function$
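
and then call it from plain SQL wherever a trace is wanted:

SELECT debug_text('step 1 done');
-- NOTICE:  step 1 done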

Regards

Pavel


>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: pgmodeler ?

2019-09-01 Thread Pavel Stehule
Hi

ne 1. 9. 2019 v 12:35 odesílatel stan  napsal:

> On Sun, Sep 01, 2019 at 06:30:23AM -0400, stan wrote:
> > I apologize, as this is a bit off topic for this list.
> >
> > pgmodeler got mention in a thread that I started a day or two ago, and
> it looks
> > like it might be very  useful
> >
> > I am running on Ubuntu 18.04 (latest STABLE), and I found that there is
> a package,
> > which I downloaded. The idea is to import the project I am working on.
> >
> > But, when I try to import it I get an error message about:
> >
> > t says that the column pr.proisaag.
> >
> > I don't want to spend a lot of time trying to get this package to work,
> if it is a
> > dead end. I am using Postgres 11 if that matters.
> >
> > Has anyone gotten this to work If so, can anyone point me to a quick
> HOWTO to
> > import my existing project?
> >
> This is the specific error message:
>
> Could not execute the SQL command.
> Message returned: ERROR: column pr.proisagg does not exist
> LINE 1: ...namespace AS ns ON pr.pronamespace = ns.oid WHERE pr.proisag...
> ^
> HINT: Perhaps you meant to reference the column
>

It looks like pgmodeler doesn't support PostgreSQL 11: pg_proc.proisagg was
replaced by the prokind column.

https://dba.stackexchange.com/questions/238903/postgresql-11-error-column-p-proisagg-does-not-exist

The application (pgmodeler) should be fixed.
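
For reference, the failing catalog query only needs its WHERE clause adapted,
for example:

SELECT pr.proname, ns.nspname
  FROM pg_proc pr
  JOIN pg_namespace ns ON pr.pronamespace = ns.oid
 WHERE pr.prokind = 'a';   -- PostgreSQL 11+; older versions used WHERE pr.proisagg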

Regards

Pavel

> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: ERROR: too many dynamic shared memory segments

2019-09-11 Thread Pavel Stehule
Hi

st 11. 9. 2019 v 9:48 odesílatel Nicola Contu 
napsal:

> Hello,
> We are running postgres 11.5 and in the last two weeks we did :
>
> - upgrade of postgres to 11.5 from 11.4
> - increased shared_buffer to 1/3 of the memory
> - increased effective_cache_size = 160GB from 120
> - increased checkpoint_completion_target = 0.9 from 0.7
> - increased checkpoint_timeout = 1h
> - increased work_mem = 2GB (this can be set up to 4GB) from 600MB
>
> Since that, in the last two weeks we saw an increment of this error :
>
> ERROR: too many dynamic shared memory segments
>
> Is there any relation between these parameters or the pgsql 11.5 version?
>

I expect it can be related to the increased work_mem - maybe a parallel hash
join was used.
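
If that is the cause, one way to test it is to revert the relevant settings for
a session and re-run the failing workload (illustrative values only, not a
recommendation):

SET work_mem = '600MB';                   -- the previous value
SET max_parallel_workers_per_gather = 0;  -- temporarily disable parallel plans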

Regards

Pavel

>
> Any help can be appreciated.
>
> Thank you,
> Nicola
>


Re: PG SQL and LIKE clause

2019-09-12 Thread Pavel Stehule
Hi

pá 13. 9. 2019 v 7:29 odesílatel Matthias Apitz  napsal:

>
> Hello,
>
> We're porting a huge Library Management System, written using all kind
> of languages one can think of (C, C++, ESQL/C, Perl, Java, ...) on Linux
> from the DBS Sybase to PG, millions of lines of code, which works also
> with DBS Oracle and in the past with INFORMIX-SE and -ONLINE.
>
> We got to know that in CHAR columns with trailing blanks a
>
> SELECT ... FROM ... WHERE name LIKE 'Ali'
>
> does not match in 'name' having 'Ali '.
>
> I glanced through our code with grep pipelines and found some hundred
> places which would be affected by this problem. I'm not interested in a
> religious discussion if or if not this behaviour of PG is correcter or
> better than in Sybase. It's just different to Sybase.
>
> Any hints to address this problem? Or is there any compile time option
> for the PG server to address this?
>

There is no simple solution - you would have to write your own patch and run a
patched postgres.
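
The behaviour in question, reduced to a minimal reproduction:

CREATE TABLE t (name char(10));
INSERT INTO t VALUES ('Ali');

SELECT * FROM t WHERE name LIKE 'Ali';   -- 0 rows: LIKE sees the padded 'Ali       '
SELECT * FROM t WHERE name = 'Ali';      -- 1 row: bpchar equality ignores trailing blanks
SELECT * FROM t WHERE name LIKE 'Ali%';  -- 1 row, but note it also matches longer names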

Pavel


> Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> Mientras haya voluntad de lucha habrá esperanza de vencer.
>
>
>


Re: PG SQL and LIKE clause

2019-09-12 Thread Pavel Stehule
pá 13. 9. 2019 v 8:49 odesílatel Matthias Apitz  napsal:

> El día Friday, September 13, 2019 a las 07:33:10AM +0200, Pavel Stehule
> escribió:
>
> > > We got to know that in CHAR columns with trailing blanks a
> > >
> > > SELECT ... FROM ... WHERE name LIKE 'Ali'
> > >
> > > does not match in 'name' having 'Ali '.
> > >
> > > I glanced through our code with grep pipelines and found some hundred
> > > places which would be affected by this problem. I'm not interested in a
> > > religious discussion if or if not this behaviour of PG is correcter or
> > > better than in Sybase. It's just different to Sybase.
> > >
> > > Any hints to address this problem? Or is there any compile time option
> > > for the PG server to address this?
> > >
> >
> > There is not simple solution - you should to write own patch and used
> > patched postgres.
>
> Hello,
>
> As we compiled our own PG 11.4 and as we're alone on the server with
> our software plus PG, this could be an option to consider.
>
> Do you have an idea where to look for this in the PG source. Maybe it's
> just deleting the trailing blanks from the column content before applying
> the string compare with regular expressions.
>

I am not an expert in this area, and I don't know all the dependencies. Some of
the code is in src/backend/utils/adt/like.c

Pavel


> Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
> May, 9: Спаси́бо освободители! Thank you very much, Russian liberators!
>


Re: JSONB maximal length ?

2019-09-28 Thread Pavel Stehule
so 28. 9. 2019 v 18:12 odesílatel ROS Didier  napsal:

> Hi
>
>By inserting data in a JSONB type column I got the following error
> message:
>
> *>> *
>
> *ERROR:  string too long to represent as jsonb string*
>
> *DETAIL:  Due to an implementation restriction, jsonb strings cannot
> exceed 268435455 bytes.*
>
> *<< *
>
> could anyone confirm that there is a size limit for JSONB type fields ?
>

You hit this check:

static size_t
checkStringLen(size_t len)
{
    if (len > JENTRY_OFFLENMASK)
        ereport(ERROR,
                (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                 errmsg("string too long to represent as jsonb string"),
                 errdetail("Due to an implementation restriction, jsonb strings cannot exceed %d bytes.",
                           JENTRY_OFFLENMASK)));

    return len;
}

which is 255 MB.

Regards

Pavel


>
> Thanks in advance
>
>
>
> Best Regards
>
> Didier ROS
>
> EDF
>
>


Re: JSONB maximal length ?

2019-09-28 Thread Pavel Stehule
so 28. 9. 2019 v 18:35 odesílatel ROS Didier  napsal:

> Hi Pavel
>
>
>
>I don’t know what is the value 255MB ?
>
>   all I know is that I want to insert large pdf documents. I am prevented
> because of this limit.
>

It's a pretty long pdf :) Maybe you have hit some bug.

Postgres JSON, JSONB and XML are not designed for extra long documents. There
are hard coded limits - 255 MB per jsonb string and 1 GB total length - and
operations on documents of this size require a lot of memory.
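
If the goal is to store whole PDFs, a common alternative (not from this thread,
just an illustration) is to keep the binary data outside the jsonb document:

CREATE TABLE documents (
    id      serial PRIMARY KEY,
    meta    jsonb,     -- searchable metadata only
    content bytea      -- the PDF itself (bytea allows up to 1 GB)
);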

Pavel


>
> Best Regards
>
> Didier ROS
>
> EDF
>
> *De :* pavel.steh...@gmail.com [mailto:pavel.steh...@gmail.com]
> *Envoyé :* samedi 28 septembre 2019 18:26
> *À :* ROS Didier 
> *Cc :* pgsql-gene...@postgresql.org
> *Objet :* Re: JSONB maximal length ?
>
>
>
>
>
>
>
> so 28. 9. 2019 v 18:12 odesílatel ROS Didier  napsal:
>
> Hi
>
>By inserting data in a JSONB type column I got the following error
> message:
>
> *>> *
>
> *ERROR:  string too long to represent as jsonb string*
>
> *DETAIL:  Due to an implementation restriction, jsonb strings cannot
> exceed 268435455 bytes.*
>
> *<< *
>
> could anyone confirm that there is a size limit for JSONB type fields ?
>
>
>
> You hit this check:
>
>
>
> static size_t
> checkStringLen(size_t len)
> {
>     if (len > JENTRY_OFFLENMASK)
>         ereport(ERROR,
>                 (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
>                  errmsg("string too long to represent as jsonb string"),
>                  errdetail("Due to an implementation restriction, jsonb strings cannot exceed %d bytes.",
>                            JENTRY_OFFLENMASK)));
>
>     return len;
> }
>
>
>
> which is 255 MB.
>
>
>
> Regards
>
>
>
> Pavel
>
>
>
>
>
> Thanks in advance
>
>
>
> Best Regards
>
> Didier ROS
>
> EDF
>
>

Re: Performance on JSONB select

2019-10-05 Thread Pavel Stehule
Hi

so 5. 10. 2019 v 13:34 odesílatel PegoraroF10  napsal:

> I think I solved my problem. I didn´t know it was possible but I´ve created
> an almost dynamic index, because it´ll index for status and corresponding
> datevalue of an object with that status value.
>

You created a multicolumn functional index (there is no such thing as a dynamic index :))

Pavel


>
> Postgres, I love you.
>
> Just one B-Tree index with 2 fields, Status and DateTime of that respective
> Status object. My Json has always a status and a respective object of that
> status with other values. So I know if it has that status it has a Date
> Value or not on its correspondent object.
>
> {
> "status": "visitadescartada",
> "contrato": {},
> "atribuido": {"datevalue": "2019-09-05 14:47:11.149095-03"},
> "trabalhando": {},
> "visitaagendada": {"datevalue": "2019-09-05
> 15:06:24.255548-03","caption": "Agendado"},
> "visitadescartada": {"datevalue": "2019-09-12
> 11:47:17.45782-03","caption": "Desagendado"},
> "digitacaodescartada": {}
> }
>
> create index IndexByStatusAndDate on MyTable (
> (JsonBField->>$$status$$),
> castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'datevalue'))
>
> where (JsonBField ? $$status$$);
>
> select * from MyTable where (JsonBField ? $$status$$) and
> case when JsonBField->>$$status$$=$$cadastrodescartado$$ then
> castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'datevalue')
> > castimmutabletimestamp($$2019-10-01$$)
>  when JsonBField->>$$status$$=$$visitadescartada$$ then
> castimmutabletimestamp(JsonBField->(JsonBField->>$$status$$)->>'datevalue')
> > castimmutabletimestamp($$2019-10-01$$)
>  when JsonBField->>$$status$$=$$contrato$$ then
> castimmutabletimestamp(funilvendas->(JsonBField->>$$status$$)->>'data') >
> castimmutabletimestamp($$2019-10-01$$)
>  when JsonBField->>$$status$$=$$naoatribuido$$ then True end;
>
> And performance now is great because I´m using both fields on index.
>
>
>
> --
> Sent from:
> https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
>
>
>


Re: psql \copy hanging

2019-10-08 Thread Pavel Stehule
út 8. 10. 2019 v 9:06 odesílatel Arnaud L.  napsal:

> Le 07/10/2019 à 16:36, Adrian Klaver a écrit :
> > So you are saying that you have not run the problematic line by itself?
>
> It hung during last night's run.
>
> I had modified my batch script to run the \copy commands separately,
> i.e. it now reads as :
> psql -h myserver -a mydb < originalscriptwithoutproblematicline.sql
> psql -h myserver -a mydb < problematicline.sql
>
> It hung at the problematic line, so during the second psql command.
>
> I'm really at loss... I *believe* that the problem lies either in psql
> or in PostgreSQL, but I really don't know what to try now.
>

You can write a simple C application using the COPY API:
https://www.postgresql.org/docs/12/libpq-copy.html

Then you can rule out, or confirm, where the problem lies.

Moreover, you can use server-side COPY. A superuser can read data from the server's
file system.
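
A sketch of the server-side variant - the table name and path here are made up, the file
must live on the database server and be readable by the server process, and the command
must be run as a superuser:

COPY my_table FROM '/var/lib/postgresql/import/data.csv' WITH (FORMAT csv);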

Regards

Pavel


>
> Regards
> --
> Arnaud
>
>
>


Re: jsonb_set() strictness considered harmful to data

2019-10-18 Thread Pavel Stehule
Hi


> What I am talking about is that jsonb_set(..., ..., NULL) returns SQL NULL.
>
> postgres=# \pset null '(null)'
> Null display is "(null)".
> postgres=# select jsonb_set('{"a":1,"b":2,"c":3}'::jsonb, '{a}', NULL);
> jsonb_set
> ---
> (null)
> (1 row)
>
> This behaviour is basically giving an application developer a loaded
> shotgun and pointing it at their feet.  It is not a good design.  It
> is a design which has likely lead to many users experiencing
> unintentional data loss.
>

On the other hand, the PostgreSQL design is the only possible one that returns additional
information about whether the value was changed or not.

Unfortunately, it is very unlikely that the design of this function will
be changed - it is simply not a bug (although I fully agree it behaves differently
than other databases and for some usages it is not practical).
There will probably be some applications that need a NULL result in
situations when the value was not changed or when the input value does not have the
expected format. The design used in Postgres allows later customization - you can
very simply implement the behaviour that you want with COALESCE (sure, you have to
know what you are doing). If Postgres implemented the design used by MySQL, then there
would not be any possibility to react to the situation when the update is not processed.

It is not hard to implement a second function with a different name that
behaves the way you need and expect - although it is just:

CREATE OR REPLACE FUNCTION jsonb_modify(jsonb, text[], jsonb)
RETURNS jsonb AS $$
SELECT jsonb_set($1, $2, COALESCE($3, 'null'::jsonb), true);
$$ LANGUAGE sql;
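
A quick illustration of the difference, assuming the wrapper above has been created:

SELECT jsonb_set('{"a":1}'::jsonb, '{a}', NULL);     -- returns SQL NULL
SELECT jsonb_modify('{"a":1}'::jsonb, '{a}', NULL);  -- returns {"a": null}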

It is important to understand that JSON null is not PostgreSQL NULL. In this
case the problem is not in the PostgreSQL design, because it is consistent with
everything else in PG, but in wrong expectations. Unfortunately, there are a lot of
wrong expectations, and these cannot be covered by the Postgres design, because
then Postgres would be a very inconsistent piece of software. You can see - my
function jsonb_modify is what you expect, and it can work for you
perfectly, but from the system perspective it is not consistent, and very strongly so.
Users should not have to learn where NULL has different behaviour
or where NULL means JSON null. Built-in functions should be consistent in
Postgres. It is Postgres, not other databases.

Pavel





> Ariadne
>
>
>


Re: jsonb_set() strictness considered harmful to data

2019-10-18 Thread Pavel Stehule
so 19. 10. 2019 v 7:41 odesílatel David G. Johnston <
david.g.johns...@gmail.com> napsal:

> On Friday, October 18, 2019, Pavel Stehule 
> wrote:
>
>
>> Probably there will be some applications that needs NULL result in
>> situations when value was not changed or when input value has not expected
>> format. Design using in Postgres allows later customization - you can
>> implement with COALESCE very simply behave that you want (sure, you have to
>> know what you do). If Postgres implement design used by MySQL, then there
>> is not any possibility to react on situation when update is not processed.
>>
>
> A CASE expression seems like it would work well for such detection in the
> rare case it is needed.  Current behavior is unsafe with minimal or no
> redeeming qualities.  Change it so passing in null raises an exception and
> make the user decide their own behavior if we don’t want to choose one for
> them.
>

How can you do it? Built-in functions cannot return more than one value.
NULL is one possible signal for emitting this information.

The NULL value can be a problem everywhere - and it is not consistent to raise
an exception in some places and not in others.

I agree that the safe way is raising an exception on NULL. Unfortunately,
exception handling is pretty expensive in Postgres (more so in write
transactions), so it should be used only when it is really necessary.





> David J.
>
>


Re: CPU SPIKE

2019-10-20 Thread Pavel Stehule
Hi

po 21. 10. 2019 v 7:14 odesílatel Pawan Sharma 
napsal:

> Hi All,
>
> Having real high CPU issue (95-98%), with SELECT statements and select
> queries contains multiple AND operator, is it will cause any CPU Spike..???
>
> apps team is using sub-partition, PG11, CPU:24, Mem: 16GB
>
> ```test=# show shared_buffers ;
>  shared_buffers
> 
>  7680MB
> (1 row)
>
> test=# show work_mem ;
>  work_mem
> --
>  104MB
> (1 row)
>
> test=# show maintenance_work_mem ;
>  maintenance_work_mem
> --
>  1GB
> (1 row)
>
> test=# show effective_cache_size ;
>  effective_cache_size
> --
>  22GB
> (1 row)
>
> test=# show max_worker_processes;
>  max_worker_processes
> --
>  8
> (1 row)
>
> test=#```
>

It's hard to say what the reason is. But a good tool for debugging is `perf top`
if you are on Linux. Then you can see which routines use the CPU intensively.

Regards

Pavel


>
> Regards,
> Pawan
>


Re: A question about building pg-libphonenumber

2019-10-22 Thread Pavel Stehule
Hi

út 22. 10. 2019 v 14:15 odesílatel stan  napsal:

> OK, this seems a stupid question, but I do not see the answer.
>
>
> I downloaded pg-libphonenumber, and am going to try to build it for PG
> version 11. Looks like the last build was against version 9.6.
>
> It builds fine, the issue comes in when I try to do a make install. It
> wants to install in the appropriate directory tree for 9.6. Should be a
> simple thing to change, right? But greping around, I cannot seem to find
> where this is defend. it is NOT in the Makefile.
>
> Can someone tell me where to change this to try it against the version of
> the DB engine I am running?
>

The build and installation are controlled by a small application - pg_config:

[pavel@nemesis libvterm]$ pg_config --libdir
/usr/local/pgsql/lib
[pavel@nemesis libvterm]$ pg_config --sharedir
/usr/local/pgsql/share

If you have more versions on your machine, it matters which pg_config gets
executed - it depends on PATH:

[pavel@nemesis libvterm]$ pg_config --version
PostgreSQL 13devel

Regards

Pavel


>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: A question about building pg-libphonenumber

2019-10-22 Thread Pavel Stehule
út 22. 10. 2019 v 14:37 odesílatel stan  napsal:

> On Tue, Oct 22, 2019 at 02:19:15PM +0200, Pavel Stehule wrote:
> > Hi
> >
> > čt 22. 10. 2019 v 14:15 odesílatel stan  napsal:
> >
> > > OK, this seems a stupid question, but I do not see the answer.
> > >
> > >
> > > I downloaded pg-libphonenumber, and am going to try to build it for PG
> > > version 11. Looks like the last build was against version 9.6.
> > >
> > > It builds fine, the issue comes in when I try to do a make install. It
> > > wants to install in the appropriate directory tree for 9.6. Should be a
> > > simple thing to change, right? But greping around, I cannot seem to
> find
> > > where this is defend. it is NOT in the Makefile.
> > >
> > > Can someone tell me where to change this to try it against the version
> of
> > > the DB engine I am running?
> > >
> >
> > The build and installation is controlled by small application - pg_config
> >
> > [pavel@nemesis libvterm]$ pg_config --libdir
> > /usr/local/pgsql/lib
> > [pavel@nemesis libvterm]$ pg_config --sharedir
> > /usr/local/pgsql/share
> >
> > if you have more versions on your comp, it is important what pg_config is
> > executed - depends on PATH
> >
>
> Thanks, yes turns out I had the wrong development package installed (I am
> on Ubuntu). I removed the wrong one, and installed the correct one.
>
> BUT, now I am getting this error at the "make install" step:
>
> cd '/usr/lib/postgresql/11/lib/bitcode' && /usr/lib/llvm-6.0/bin/llvm-lto
> -thinlto -thinlto-action=thinlink -o pg_libphonenumber.index.bc
> pg_libphonenumber/src/error_handling.bc
> pg_libphonenumber/src/pg_libphonenumber.bc
> pg_libphonenumber/src/packed_phone_number.bc
> error: can't create module summary index for buffer: Expected a single
> module
> LLVM ERROR: ThinLink didn't create an index
> /usr/lib/postgresql/11/lib/pgxs/src/makefiles/pgxs.mk:229: recipe for
> target 'install' failed
> m
>
> Any thoughts on this?
>

I have no idea what it is, but I found this message

https://github.com/PierreSenellart/provsql/issues/9

and there is a patch
https://github.com/rdkit/rdkit/issues/2192#issuecomment-445579020

Probably it is a problem with some JIT feature flags on Postgres 11 and higher.

Pavel

-- 
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>


Re: PostgreSQL && data types in ESQL/C

2019-11-05 Thread Pavel Stehule
Hi


> The above is a bit by try and error. Is there any good manual which
> describes the ESQL/C details for PostgreSQL.
>

I never used ESQL/C so I cannot evaluate the quality of this part of the
docs. I expect there will not be any other.

https://www.postgresql.org/docs/12/ecpg.html

Regards

Pavel


> Thanks
>
> matthias
>
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> "Glaube wenig, hinterfrage alles, denke selbst: Wie man Manipulationen
> durchschaut"
> "Believe little, scrutinise all, think by your own: How see through
> manipulations"
> ISBN-10: 386489218X
>


Re: PostGreSQL Replication and question on maintenance

2019-11-14 Thread Pavel Stehule
pá 15. 11. 2019 v 6:26 odesílatel github kran  napsal:

>
> Hello postGreSQL Community ,
>>
>>
>>
>> Hope everyone is doing great !!.
>>
>>
>> *Background*
>>
>> We use PostgreSQL Version 10.6 version and heavily use PostgreSQL for our
>> day to day activities to write and read data. We have 2 clusters running
>> PostgreSQL engine , one cluster
>>
>> keeps data up to 60 days and another cluster retains data beyond 1 year.
>> The data is partitioned close to a week( ~evry 5 days a partition) and we
>> have around 5 partitions per month per each table and we have 2 tables
>> primarily so that will be 10 tables a week. So in the cluster-1 we have
>> around  20 partitions and in cluster-2 we have around 160 partitions ( data
>> from 2018). We also want to keep the data for up to 2 years in the
>> cluster-2 to serve the data needs of the customer and so far we reached
>> upto 1 year of maintaining this data.
>>
>>
>>
>> *Current activity*
>>
>> We have a custom weekly migration DB script job that moves data from 1
>> cluster to another cluster what it does is the below things.
>>
>> 1) COPY command to copy the data from cluster-1 and split that data into
>> binary files
>>
>> 2) Writing the binary data into the cluster-2 table
>>
>> 3) Creating indexes after the data is copied.
>>
>>
>>
>> *Problem what we have right now. *
>>
>> When the migration activity runs(weekly) from past 2 times , we saw the
>> cluster read replica instance has restarted as it fallen behind the
>> master(writer instance). Everything
>>
>> after that worked seamlessly but we want to avoid the replica getting
>> restarted. To avoid from restart we started doing smaller binary files and
>> copy those files to the cluster-2
>>
>> instead of writing 1 big file of 450 million records. We were successful
>> in the recent migration as the reader instance didn’t restart after we
>> split 1 big file into multiple files to copy the data over but did restart
>> after the indexes are created on the new table as it could be write
>> intensive.
>>
>>
>>
>> *DB parameters set on migration job*
>>
>> work_mem set to 8 GB  and maintenace_work_mem=32 GB.
>>
>
These numbers look crazy high - how much memory does your server have - more
than 1 TB?


Indexes per table = 3
>>
>> total indexes for 2 tables = 5
>>
>>
>>
>> *DB size*
>>
>> Cluster-2 = 8.6 TB
>>
>> Cluster-1 = 3.6 TB
>>
>> Peak Table relational rows = 400 - 480 million rows
>>
>> Average table relational rows = 300 - 350 million rows.
>>
>> Per table size = 90 -95 GB , per table index size is about 45 GB
>>
>>
>>
>> *Questions*
>>
>> 1) Can we decrease the maintenace_work_mem to 16 GB and will it slow down
>> the writes to the cluster , with that the reader instance can sync the data
>> slowly ?.
>>
>> 2) Based on the above use case what are your recommendations to keep the
>> data longer up to 2 years ?
>>
>> 3) What other recommendations you recommend ?.
>>
>>
>>
>>
>>
>> Appreciate your replies.
>>
>> THanks
>> githubkran
>>
>>>


Re: jsonb_set() strictness considered harmful to data

2019-11-15 Thread Pavel Stehule
Hi



> For release 13+, I have given some more thought to what should be done.
> I think the bar for altering the behaviour of a function should be
> rather higher than we have in the present case, and the longer the
> function has been sanctioned by time the higher the bar should be.
> However, I think there is a case to be made for providing a non-strict
> jsonb_set type function. To advance th4e discussion, attached is a POC
> patch that does that. This can also be done as an extension, meaning
> that users of back branches could deploy it immediately. I've tested
> this against release 12, but I think it could go probably all the way
> back to 9.5. The new function is named jsonb_ set_lax, but I'm open to
> bikeshedding.
>
>
I am sending a review of this patch:

1. this patch does what was proposed and it is based on the discussion.

2. there are no problems with patching or compilation, all regression
tests passed.

4. the code looks good and it is well commented.

5. the patch has enough regression tests

My notes:

a) missing documentation

b) the error message is not finalized:

+   ereport(ERROR,
+   (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+errmsg("null jsonb value")));

Everything else looks good, and this function can be very handy.

Regards

Pavel


Re: jsonb_set() strictness considered harmful to data

2019-11-15 Thread Pavel Stehule
pá 15. 11. 2019 v 21:01 odesílatel Andrew Dunstan <
andrew.duns...@2ndquadrant.com> napsal:

>
> On 11/15/19 2:14 PM, Pavel Stehule wrote:
> > Hi
> >
> >
> >
> > For release 13+, I have given some more thought to what should be
> > done.
> > I think the bar for altering the behaviour of a function should be
> > rather higher than we have in the present case, and the longer the
> > function has been sanctioned by time the higher the bar should be.
> > However, I think there is a case to be made for providing a
> non-strict
> > jsonb_set type function. To advance th4e discussion, attached is a
> POC
> > patch that does that. This can also be done as an extension, meaning
> > that users of back branches could deploy it immediately. I've tested
> > this against release 12, but I think it could go probably all the way
> > back to 9.5. The new function is named jsonb_ set_lax, but I'm open
> to
> > bikeshedding.
> >
> >
> > I am sending a review of this patch
> >
> > 1. this patch does what was proposed and it is based on discussion.
> >
> > 2. there are not any problem with patching or compilation, all regress
> > tests passed.
> >
> > 4. code looks well and it is well commented.
> >
> > 5. the patch has enough regress tests
> >
> > My notes:
> >
> > a) missing documentation
> >
> > b) error message is not finalized
> >
> > +   ereport(ERROR,
> > +   (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
> > +errmsg("null jsonb value")));
> >
> > Any other looks well, and this function can be very handy.
> >
> >
>
> Thanks for the review. I will add some docco.
>
>
> What would be a better error message? "null jsonb replacement not
> permitted"?
>

Maybe ERRCODE_NULL_VALUE_NOT_ALLOWED and "NULL is not allowed",
errdetail - an exception due to setting "null_value_treatment" =>
raise_exception,
and maybe some errhint - "Maybe you want to use a JSON null - 'null'::jsonb".

I don't know, but in this case the exception should be verbose. This is
a "rich" function with a lot of functionality.





>
> cheers
>
>
> andrew
>
> --
> Andrew Dunstanhttps://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
>


Re: Function performance degrades after repeated execution

2019-11-16 Thread Pavel Stehule
so 16. 11. 2019 v 16:06 odesílatel Dave Roberge 
napsal:

> Hi,
>
> We've been troubleshooting a slow running function in our postgres
> database. I've been able to boil it down to the simplest function possible.
> It looks like this:
>
> FOR rec IN select 1 as matchval FROM table1 t1, table2 t2
>   join table3 t3 on t3.col = t2.col
>   WHERE t1.col = id
> LOOP
>   IF rec.matchval > 0 THEN
> co := co + 1;
>   END IF;
>   if co % 100 = 0 then
> raise notice 'match value %', co;
>   end if;
> END LOOP;
>
> Here's the interesting parts:
> - The result of that query returns about 13,000 rows.
> - If I open a PSQL session and execute the function it returns almost
> immediately.
> - If I execute the same function 4 more times in the same session (a total
> of 5 times) it returns immediately.
> - On the 6th execution it slows down. It processes 100 records every 1.5
> minutes.
> - On every subsequent execution from the same session (after 5 times) it
> is slow.
> - It reliably slows down after 5 consecutive executions.
> - If I exit the PSQL session and open a new one the function returns
> immediately (up to the 6th execution.)
> - If I replace the function from a separate session after executing it 5
> times, it returns immediately up to 5 executions.
> - The CPU spikes to 100% after the 5 execution.
>
> I'm attempting to understand what is causing the slow down after 5
> consecutive executions. But I'm having a hard time getting insight. We are
> on PostgreSQL 9.6.15.
>

The slow execution starting with the 6th run is a known issue - it is an unwanted
effect of the query plan cache.

See this part of the docs: https://www.postgresql.org/docs/9.6/sql-prepare.html

Prepared statements can use generic plans rather than re-planning with each
set of supplied EXECUTE values. This occurs immediately for prepared
statements with no parameters; otherwise it occurs only after five or more
executions produce plans whose estimated cost average (including planning
overhead) is more expensive than the generic plan cost estimate. Once a
generic plan is chosen, it is used for the remaining lifetime of the
prepared statement. Using EXECUTE values which are rare in columns with
many duplicates can generate custom plans that are so much cheaper than the
generic plan, even after adding planning overhead, that the generic plan
might never be used.

In PostgreSQL 9.6 you can use a dynamic query as a workaround - it generates
only one-shot plans, so it should be OK every time:

FOR rec IN EXECUTE 'SELECT ...

https://www.postgresql.org/docs/9.6/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING
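
A sketch of the loop rewritten with EXECUTE - this goes inside the same plpgsql function
body, and the table and column names are the ones from the original example:

FOR rec IN EXECUTE
    'SELECT 1 AS matchval
       FROM table1 t1, table2 t2
       JOIN table3 t3 ON t3.col = t2.col
      WHERE t1.col = $1'
    USING id
LOOP
    IF rec.matchval > 0 THEN
        co := co + 1;
    END IF;
END LOOP;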

Regards

Pavel



> We've tried:
> - Increase logging to debug5 but don't get any helpful feedback there.
> - Reviewing the execution plan of the query. Seems fine when running it
> outside of the function.
> - Turn on temp file logging -- but no temp files are logged.
>
> Any ideas for where we might get insight? Or clues as to what is happening?
>
> Thank you.
>
>


Re: Function performance degrades after repeated execution

2019-11-16 Thread Pavel Stehule
so 16. 11. 2019 v 16:46 odesílatel Ron  napsal:

> On 11/16/19 8:22 AM, Dave Roberge wrote:
> > Hi,
> >
> > We've been troubleshooting a slow running function in our postgres
> database. I've been able to boil it down to the simplest function possible.
> It looks like this:
> >
> > FOR rec IN select 1 as matchval FROM table1 t1, table2 t2
> >join table3 t3 on t3.col = t2.col
> >WHERE t1.col = id
> > LOOP
> >IF rec.matchval > 0 THEN
> >  co := co + 1;
> >END IF;
> >if co % 100 = 0 then
> >  raise notice 'match value %', co;
> >end if;
> > END LOOP;
>
> Where are you joining table1 to either table2 or table3?
>

Good shot - there is maybe an unwanted cartesian product there.

Pavel


> --
> Angular momentum makes the world go 'round.
>
>
>


Re: REINDEX VERBOSE unknown option

2019-11-16 Thread Pavel Stehule
so 16. 11. 2019 v 18:43 odesílatel Josef Šimánek 
napsal:

> Ahh, I just tried to do the same with reindexdb cli tool and the
> actual syntax is REINDEX (VERBOSE) TABLE sales; Sorry for unnecessary
> question. Anyway maybe we can add this to documentation as a example. I can
> prepare patch for this if welcomed.
>

A documentation patch is a good idea.

Pavel


> so 16. 11. 2019 v 18:40 odesílatel Josef Šimánek 
> napsal:
>
>> Hello,
>>
>> according to https://www.postgresql.org/docs/11/sql-reindex.html VERBOSE
>> option is valid for REINDEX command for 11.3 PostgreSQL server. Anyway I'm
>> getting error using VERBOSE option.
>>
>> project_production=# REINDEX VERBOSE TABLE sales;
>> ERROR:  syntax error at or near "VERBOSE"
>> LINE 1: REINDEX VERBOSE TABLE sales;
>>
>> Time: 0.235 ms
>>
>> I'm wondering if I'm doing anything wrong or actual documentation is
>> wrong. Any ideas?
>>
>


Re: PostGreSQL Replication and question on maintenance

2019-11-20 Thread Pavel Stehule
Hi

st 20. 11. 2019 v 13:12 odesílatel Soto Cuevas Manuel Alejandro <
mcs...@entel.cl> napsal:

> Taking advantage of the instance, what software do you recommend to do
> reverse engineering of postgresql?
>

Please open a new thread. Your question is not related to the subject.

https://wiki.postgresql.org/wiki/Design_Tools

Regards

Pavel


> Thank you
>
> Manuel
>
> El 16-11-2019, a la(s) 10:36, github kran  escribió:
>
> 
>
> Any reply on this please ?.
>
> On Fri, Nov 15, 2019 at 9:10 AM github kran  wrote:
>
>>
>>
>> On Thu, Nov 14, 2019 at 11:42 PM Pavel Stehule 
>> wrote:
>>
>>>   these numbers looks crazy high - how much memory has your server -
>>> more than 1TB?
>>>
>>
>> The cluster got 244 GB of RAM and storage capacity it has is 64 TB.
>>
>>>
>>>
>>> pá 15. 11. 2019 v 6:26 odesílatel github kran 
>>> napsal:
>>>
>>>>
>>>> Hello postGreSQL Community ,
>>>>>
>>>>>
>>>>>
>>>>> Hope everyone is doing great !!.
>>>>>
>>>>>
>>>>> *Background*
>>>>>
>>>>> We use PostgreSQL Version 10.6 version and heavily use PostgreSQL for
>>>>> our day to day activities to write and read data. We have 2 clusters
>>>>> running PostgreSQL engine , one cluster
>>>>>
>>>>> keeps data up to 60 days and another cluster retains data beyond 1
>>>>> year. The data is partitioned close to a week( ~evry 5 days a partition)
>>>>> and we have around 5 partitions per month per each table and we have 2
>>>>> tables primarily so that will be 10 tables a week. So in the cluster-1 we
>>>>> have around  20 partitions and in cluster-2 we have around 160 partitions 
>>>>> (
>>>>> data from 2018). We also want to keep the data for up to 2 years in the
>>>>> cluster-2 to serve the data needs of the customer and so far we reached
>>>>> upto 1 year of maintaining this data.
>>>>>
>>>>>
>>>>>
>>>>> *Current activity*
>>>>>
>>>>> We have a custom weekly migration DB script job that moves data from 1
>>>>> cluster to another cluster what it does is the below things.
>>>>>
>>>>> 1) COPY command to copy the data from cluster-1 and split that data
>>>>> into binary files
>>>>>
>>>>> 2) Writing the binary data into the cluster-2 table
>>>>>
>>>>> 3) Creating indexes after the data is copied.
>>>>>
>>>>>
>>>>>
>>>>> *Problem what we have right now. *
>>>>>
>>>>> When the migration activity runs(weekly) from past 2 times , we saw
>>>>> the cluster read replica instance has restarted as it fallen behind the
>>>>> master(writer instance). Everything
>>>>>
>>>>> after that worked seamlessly but we want to avoid the replica getting
>>>>> restarted. To avoid from restart we started doing smaller binary files and
>>>>> copy those files to the cluster-2
>>>>>
>>>>> instead of writing 1 big file of 450 million records. We were
>>>>> successful in the recent migration as the reader instance didn’t restart
>>>>> after we split 1 big file into multiple files to copy the data over but 
>>>>> did
>>>>> restart after the indexes are created on the new table as it could be 
>>>>> write
>>>>> intensive.
>>>>>
>>>>>
>>>>>
>>>>> *DB parameters set on migration job*
>>>>>
>>>>> work_mem set to 8 GB  and maintenace_work_mem=32 GB.
>>>>>
>>>>
>>>
>>>
>>> Indexes per table = 3
>>>>>
>>>>> total indexes for 2 tables = 5
>>>>>
>>>>>
>>>>>
>>>>> *DB size*
>>>>>
>>>>> Cluster-2 = 8.6 TB
>>>>>
>>>>> Cluster-1 = 3.6 TB
>>>>>
>>>>> Peak Table relational rows = 400 - 480 million rows
>>>>>
>>>>> Average table relational rows = 300 - 350 million rows.
>>>>>
>>>>> Per table size = 90 -95 GB , per table index size is about 45 GB
>>>>>
>>>>>
>>>>>
>>>>> *Questions*
>>>>>
>>>>> 1) Can we decrease the maintenace_work_mem to 16 GB and will it slow
>>>>> down the writes to the cluster , with that the reader instance can sync 
>>>>> the
>>>>> data slowly ?.
>>>>>
>>>>> 2) Based on the above use case what are your recommendations to keep
>>>>> the data longer up to 2 years ?
>>>>>
>>>>> 3) What other recommendations you recommend ?.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Appreciate your replies.
>>>>>
>>>>> THanks
>>>>> githubkran
>>>>>
>>>>>>


Re: How to get column and identifier names in UPPERCASE in postgres?

2019-11-20 Thread Pavel Stehule
Hi

st 20. 11. 2019 v 14:11 odesílatel Amine Tengilimoglu <
aminetengilimo...@gmail.com> napsal:

> Hi all;
>
>   I want to get the column and other identifier names in UPPERCASE
>  form rather than a lowercase one without changing application code like
> qouting the identifiers. Do you know any settings  function or extention
> for this purpose?
>

There is no such possibility.

Regards

Pavel


> thank you.
>


Re: Return Table in StoredProceure/Function

2019-11-20 Thread Pavel Stehule
Hi

st 20. 11. 2019 v 16:01 odesílatel İlyas Derse 
napsal:

>
> How can I return table in Stored Procedure ? I can do it in function but I
> have inout parameters.So I can not create in function. What can I do this
> case ?
>
> I guess,It should be like for function :
>
> CREATE or REPLACE FUNCTION public."test" (INOUT "x" integer, INOUT "y" text)
>
> RETURNS TABLE  ("id" integer,"filesize" character varying(36))AS $$
>  BEGINRETURN QUERYSELECT * FROMpublic."tbl_employees" ;
> END;$$ LANGUAGE plpgsql;
>
> Thanks.
>

You can return a cursor via an OUT variable. That's all. There is no other
possibility. Procedures in Postgres can return results only via OUT
variables.
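
A minimal sketch of that pattern using an INOUT refcursor argument - the procedure name
and cursor name are made up, and the tbl_employees table is taken from your example:

CREATE OR REPLACE PROCEDURE public.test2(INOUT x integer, INOUT y text,
                                         INOUT result refcursor DEFAULT 'rc')
LANGUAGE plpgsql
AS $$
BEGIN
    OPEN result FOR SELECT * FROM public.tbl_employees;
END;
$$;

-- the cursor lives only until the end of the transaction
BEGIN;
CALL public.test2(1, 'hello');
FETCH ALL FROM rc;
COMMIT;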

Regards

Pavel


Re: Return Table in StoredProceure/Function

2019-11-20 Thread Pavel Stehule
čt 21. 11. 2019 v 7:34 odesílatel Tony Shelver  napsal:

> Well then SQL Server breaks that rule big time :)
> Most people coming from a SQL Server background expect procedures to
> return a result set that can be queried, and in-out or out parameters to
> return variables for further information.
>

Yes, SQL Server is absolutely unique in this case. Oracle, DB2, and
Postgres have a different design.

If you have a SQL Server background and you want to write stored
procedures, please start with the docs:
https://www.postgresql.org/docs/current/plpgsql.html

A lot of things are really different. Postgres SQL and stored procedures are
close to Oracle, and very far from T-SQL.

Regards

Pavel



>
>
> On Wed, 20 Nov 2019 at 17:20, Thomas Kellerer  wrote:
>
>> İlyas Derse schrieb am 20.11.2019 um 09:18:
>>
>> > How can I return table in Stored Procedure ? I can do it in function
>> but I
>> > have inout parameters.So I can not create in function. What can I do
>> this
>> > case ?
>> >
>> > I guess,It should be like for function :
>> >
>> > CREATE or REPLACE FUNCTION public."test" (INOUT "x" integer, INOUT "y"
>> text)
>> >
>> > RETURNS TABLE  ("id" integer,"filesize" character varying(36))AS $$
>> >  BEGINRETURN QUERYSELECT * FROMpublic."tbl_employees" ;
>> > END;$$ LANGUAGE plpgsql;
>> >
>>
>> Procedures aren't meant to return anything (in Postgres specifically and
>> in Computer Science in general).
>>
>> If you want to return something use a function.
>>
>>
>>
>>
>>


Re: Adding LIMIT changes PostgreSQL plan from good to a bad one

2019-11-21 Thread Pavel Stehule
čt 21. 11. 2019 v 17:19 odesílatel Michael Korbakov 
napsal:

> Hi everybody.
>
> I stumbled upon a weird problem with the query planner. I have a query
> on a typical EAV schema:
>
> SELECT contacts.id
> FROM contacts
> LEFT OUTER JOIN (SELECT DISTINCT contacts_values.company_id AS
> company_id, contacts_values.id AS id
>  FROM contacts_values
>  WHERE contacts_values.field_id =
> '\x'
>AND contacts_values.field_name = 'facebook'
>AND
>
> nimble_contact_value_normalize(nimble_skip_long_values(contacts_values.value))
> =
>
> nimble_contact_value_normalize('http://www.facebook.com/jon.ferrara'))
> AS anon_1
> ON anon_1.company_id = contacts.company_id AND
> anon_1.id = contacts.id
> LEFT OUTER JOIN (SELECT DISTINCT contacts_values.company_id AS
> company_id, contacts_values.id AS id
>  FROM contacts_values
>  WHERE contacts_values.field_id =
> '\x'
>AND contacts_values.field_name = 'last_name'
>AND
>
> nimble_contact_value_normalize(nimble_skip_long_values(contacts_values.value))
> =
>nimble_contact_value_normalize('Ferrara')) AS
> anon_2
> ON anon_2.company_id = contacts.company_id AND
> anon_2.id = contacts.id
> JOIN contacts__aggregated AS contacts__aggregated_1
>  ON contacts__aggregated_1.company_id = contacts.company_id AND
> contacts__aggregated_1.contact_id = contacts.id AND
> contacts__aggregated_1.field_name = 'names'
> WHERE contacts.company_id = '\x4c2118ad54397f271b00'
>   AND (anon_1.id IS NOT NULL OR anon_2.id IS NOT NULL)
> ORDER BY contacts__aggregated_1.value ASC
> LIMIT 30 OFFSET 0;
>
> My problem is that the LIMIT clause in this query makes the planner
> choose a bad plan
> with nested loops: https://explain.depesz.com/s/Mute. Running the same
> query after
> SET ENABLE_NESTLOOP TO OFF I am getting a much more efficient plan:
> https://explain.depesz.com/s/b5kn. Removing the LIMIT from the query
> results in a
> similar plan: https://explain.depesz.com/s/wDqE.
>
> One thing that concerns me is that the cost of the LIMIT node in the
> bad nested loop
> plan is just a fraction of the cost of its subnode. But for the better
> merge join
> plan LIMIT node has the same cost as its subnode. How could it be this
> way? And what
> can I do to make the planner pick up a better plan?
>
> We are running PostgreSQL 10.10.
>

Sometimes there are problems with the LIMIT clause, because it decreases
costs too much. The system expects that the necessary values will be found quickly - but
if this premise is not valid, then the plan can be bad.

A typical solution is wrapping the query in a subquery and using OFFSET 0 (that is
an optimizer fence):

SELECT * FROM foo WHERE x = 10 LIMIT 10 -- should be transformed

SELECT * FROM (SELECT * FROM foo WHERE x = 10 OFFSET 0) s LIMIT 10;

But you can see very bad estimates in the explain - the left join is estimated at
918K rows and the result is just 83 rows.

There is a relatively very high number of rows removed in the top part of the query:
"Rows Removed by Filter: 1043891".

It's an EAV table - it is an unoptimized pattern :/

Pavel


> Sincerely,
>
> -- Michael Korbakov
>
>
>


Re: Adding LIMIT changes PostgreSQL plan from good to a bad one

2019-11-21 Thread Pavel Stehule
čt 21. 11. 2019 v 22:04 odesílatel Michael Korbakov 
napsal:

> On November 21, 2019 at 19:14:33, Pavel Stehule (pavel.steh...@gmail.com)
> wrote:
>
>
>
> čt 21. 11. 2019 v 17:19 odesílatel Michael Korbakov 
> napsal:
>
>> Hi everybody.
>>
>> I stumbled upon a weird problem with the query planner. I have a query
>> on a typical EAV schema:
>>
>> SELECT contacts.id
>> FROM contacts
>> LEFT OUTER JOIN (SELECT DISTINCT contacts_values.company_id AS
>> company_id, contacts_values.id AS id
>>  FROM contacts_values
>>  WHERE contacts_values.field_id =
>> '\x'
>>AND contacts_values.field_name = 'facebook'
>>AND
>>
>> nimble_contact_value_normalize(nimble_skip_long_values(contacts_values.value))
>> =
>>
>> nimble_contact_value_normalize('http://www.facebook.com/jon.ferrara'))
>> AS anon_1
>> ON anon_1.company_id = contacts.company_id AND
>> anon_1.id = contacts.id
>> LEFT OUTER JOIN (SELECT DISTINCT contacts_values.company_id AS
>> company_id, contacts_values.id AS id
>>  FROM contacts_values
>>  WHERE contacts_values.field_id =
>> '\x'
>>AND contacts_values.field_name = 'last_name'
>>AND
>>
>> nimble_contact_value_normalize(nimble_skip_long_values(contacts_values.value))
>> =
>>nimble_contact_value_normalize('Ferrara')) AS
>> anon_2
>> ON anon_2.company_id = contacts.company_id AND
>> anon_2.id = contacts.id
>> JOIN contacts__aggregated AS contacts__aggregated_1
>>  ON contacts__aggregated_1.company_id = contacts.company_id AND
>> contacts__aggregated_1.contact_id = contacts.id AND
>> contacts__aggregated_1.field_name = 'names'
>> WHERE contacts.company_id = '\x4c2118ad54397f271b00'
>>   AND (anon_1.id IS NOT NULL OR anon_2.id IS NOT NULL)
>> ORDER BY contacts__aggregated_1.value ASC
>> LIMIT 30 OFFSET 0;
>>
>> My problem is that the LIMIT clause in this query makes the planner
>> choose a bad plan
>> with nested loops: https://explain.depesz.com/s/Mute. Running the same
>> query after
>> SET ENABLE_NESTLOOP TO OFF I am getting a much more efficient plan:
>> https://explain.depesz.com/s/b5kn. Removing the LIMIT from the query
>> results in a
>> similar plan: https://explain.depesz.com/s/wDqE.
>>
>> One thing that concerns me is that the cost of the LIMIT node in the
>> bad nested loop
>> plan is just a fraction of the cost of its subnode. But for the better
>> merge join
>> plan LIMIT node has the same cost as its subnode. How could it be this
>> way? And what
>> can I do to make the planner pick up a better plan?
>>
>> We are running PostgreSQL 10.10.
>>
>
> Sometimes are problems with LIMIT clause, because it too much decrease
> costs. The system expects so necessary values are found quickly - but if
> this premise is not valid, then this plan can be bad.
>
> LIMIT node with nested loop decreases the cost proportionally to requested
> number of rows versus estimated number. That may be too aggressive, but the
> same LIMIT node with merge join does not decrease total cost at all. I do
> not understand why is it happening.
>
>
> typical solution is wrapping to subquery and using OFFSET 0 (that is
> optimizer fence)
>
> SELECT * FROM foo WHERE x = 10 LIMIT 10 -- should be transformed
>
> SELECT * FROM (SELECT * FROM foo WHERE x = 10 OFFSET 0) s LIMIT 10;
>
> Unfortunately, that has not worked for me (
> https://explain.depesz.com/s/slsM). Looks like OFFSET 0 is not working as
> a fence. However, I  managed to get some success with wrapping everything
> except LIMIT into a CTE: https://explain.depesz.com/s/n7c4.
>
>
> But you can see in explain very bad estimations - left join is estimated
> to 918K rows and result is just 83 rows
>
> There is relative very high rows removed in top part of query "Rows
> Removed by Filter: 1043891"
>
> It's EAV table - it is unoptimized pattern :/
>
> Is there any way to tune planner to choose better plan for such queries? I
> tried increasing default_statistics_target to 1000 and creating extended
> statistics: CREATE STATISTICS contacts_values_company_id_field
> (dependencies) ON company_id, field_id, field_name FROM contacts_values.
> After running ANALYZE on all relevant tables I noticed no changes in
> planner's behavior.
>

You can try increasing the values of from_collapse_limit and join_collapse_limit,
and maybe geqo_threshold,

or rewrite the query to push some conditions deeper manually.
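
For example, as a session-level experiment (the exact values are just a starting point):

SET from_collapse_limit = 24;
SET join_collapse_limit = 24;
SET geqo_threshold = 24;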



>
> Pavel
>
>
>> Sincerely,
>>
>> -- Michael Korbakov
>>
>>
>>


Re: Slow planing...

2019-12-06 Thread Pavel Stehule
pá 6. 12. 2019 v 15:12 odesílatel Mladen Marinović <
mladen.marino...@kset.org> napsal:

> After a couple of hours of trying different stuff, set enable_mergejoin  =
> off made the planning time look better: Planning time: 0.322 ms
> Any ideas why this helps?
>

Please, can you try reindexing all related indexes? Sometimes planning time is
high when indexes are bloated.

Regards

Pavel


> Regards,
> Mladen Marinović
>
> On Fri, Dec 6, 2019 at 11:14 AM Mladen Marinović <
> mladen.marino...@kset.org> wrote:
>
>> Hi,
>>
>> Since this morning our system is running slower than usual. It turns out
>> that some queries take a very long time to plan ( > 1 second). The problem
>> occurs when joining bigger tables. There are no partition for the used
>> tables. The problem has a time correlation with the last
>> autovacuum/autoanalyse this morning, but manual vacuuming and analysing did
>> not fix the problem.
>>
>> An example explain is:
>>
>> EXPLAIN ANALYSE
>> SELECT 1
>> FROM table_a a
>>   LEFT JOIN table_b bON b.a_id= a.id
>> WHERE a.object_id=13
>>   AND a.timestamp<'2019-12-06'
>>   AND a.timestamp>'2019-12-03'
>>
>> Nested Loop Left Join  (cost=1.28..18137.57 rows=6913 width=4) (actual
>> time=0.043..90.016 rows=14850 loops=1)
>>   ->  Index Scan using uq_object_id_timestamp on table_a a
>>  (cost=0.70..7038.49 rows=6913 width=8) (actual time=0.028..21.832
>> rows=14850 loops=1)
>> Index Cond: ((object_id = 13) AND (timestamp <
>> '2019-12-06'::timestamp with time zone) AND (timestamp >
>> '2019-12-03'::timestamp with time zone))
>>   ->  Index Only Scan using table_b_a_id on table_b b  (cost=0.57..1.60
>> rows=1 width=8) (actual time=0.004..0.004 rows=0 loops=14850)
>> Index Cond: (a_id = a.id)
>> Heap Fetches: 0
>> Planning time: 1908.550 ms
>> Execution time: 91.004 ms
>>
>> The same query on a similar parallel system takes 5ms for planing (PG
>> 9.4.).
>>
>> Is there a way to detect why the planing is taking this long?
>>
>> The database is a 9.6.1 with 32GB of shared_buffers, and 1GB of
>> maintanance_work_mem, and machine CPU is below 80% all the time.
>>
>> Regards,
>> Mladen Marinović
>>
>


Re: Query with correlated join having slow performance

2019-12-09 Thread Pavel Stehule
po 9. 12. 2019 v 21:05 odesílatel saket bansal  napsal:

> Thank you Michael. I re-wrote it and it does perform well. Modified query
> at:
>
>
> https://github.com/bansalsaket/PG_correlated_subquery_slowness/blob/master/Modified%20query%20-%20performs%20faster.txt
>
> Our app team is checking with their vendor whether this can be modified at
> source code level or not.
> But question remains somewhat valid. Data volume is not huge and original
> query wasn't very badly written either. Operating system level resources
> are similar.
> Do you know of any bugs associated with using co-related sub queries in
> postgres. In Oracle, it runs in a sec, while in postgres it does not give
> result at all , even after 8-9 hours.
> I understand both database engines work differently, but such drastic
> change in performance is a surprise!
> We have lot of migrations planned from oracle to postgres, this could be a
> show stopper. :(
> Any suggestions...
>

There have been discussions several times about rewriting OR conditions to UNION -
but as far as I know nobody has done this work. Usually Postgres processes OR
conditions well thanks to bitmap scans, but it doesn't work well in some special
cases. Until now this issue has been fixed by manual query rewriting.

Regards

Pavel


> On Mon, Dec 9, 2019 at 1:49 PM Michael Lewis  wrote:
>
>> I'd suggest re-writing your query to avoid ORs whenever possible. Is this
>> generated by an ORM or subject to change with filters selected in
>> application or can you totally control it on DB side?
>>
>> It may be hugely more performant to simply rewrite this as (almost) the
>> same query twice UNION ALL'd together to separate the
>> productalt1_.alt_pdt_dbky OR productalt1_.orgnl_pdt_dbky conditions.
>>
>


Re: How to Change collate & ctype for an existing database?

2019-12-13 Thread Pavel Stehule
Hi

pá 13. 12. 2019 v 9:57 odesílatel Daulat Ram 
napsal:

> Hi,
>
> How we can change the   Collate &  Ctype from “c”  to  “C.UTF-8” on
> existing production database.
>
> SELECT datcollate FROM pg_database WHERE datname='wwkidbt';
>
> datcollate
>
> 
>
> C
>
> postgres=# select version();
>
>  version
>
>
> --
>
>
> PostgreSQL 9.5.6 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
> 5.3.1-14ubuntu2) 5.3.1 20160413, 64-bit
>
>
There is no official procedure for doing it other than a pg_dump and
reload.
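
A sketch of the target side of that dump/reload - the new database name is made up, and
it assumes the C.UTF-8 locale exists on the server; the dump is then restored into it:

CREATE DATABASE wwkidbt_new
    TEMPLATE template0
    LC_COLLATE 'C.UTF-8'
    LC_CTYPE 'C.UTF-8';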


>
>
> Please advice about the below method:
>
> update pg_database set datcollate='C.UTF-8', datctype='C.UTF-8'  where
> datname='wwkidbt';
>
> Is there any impact on data/corruption if we do via update command ?
>
Sure - almost all your indexes will be broken. A direct update of system
tables is usually a really bad idea.

Regards

Pavel


> Thanks,
>
> Daulat
>
>
>
>
>
>
>
>
>


Re: Syntax question about returning value from an insert

2019-12-25 Thread Pavel Stehule
Hi

st 25. 12. 2019 v 16:26 odesílatel stan  napsal:

> I am writing a trigger/function to make certain a default item, and its key
> exist when an insert is called. EG
>
> The trigger gets called on insert to T1 If column c1 is NULL in the NEW
> structure, I need to check table t2 to get the key associated with the
> default for this column. However, if the default is not yet inserted into
> t2, I an to go ahead and insert it.
>
> I found this page:
>
> https://stackoverflow.com/questions/6560447/can-i-use-return-value-of-insert-returning-in-another-insert
> which sugest this syntax:
>
> with rows as (
> INSERT INTO Table1 (name) VALUES ('a_title') RETURNING id
> )
> INSERT INTO Table2 (val)
> SELECT id
> FROM rows
>
> I modified it slightly to look like this:
>
> IF _bom_name_key is NULL
> THEN
> with rows as (
> INSERT INTO project_bom (bom_name) VALUES ('Main') RETURNING
> project_bom_key
> )
> NEW.project_bom_key = SELECT project_bom_key
> FROM rows ;
>
> But this gives me  syntax error.
>

Your example is a little bit confusing.

Probably it should be:

CREATE OR REPLACE FUNCTION ...
RETURNS ...
AS $$
BEGIN
 ...
  INSERT INTO ... VALUES('...') RETURNING project_bom_key INTO
NEW.project_bom_key;

You cannot use plpgsql statements inside SQL statements - you cannot
use an assignment statement (plpgsql) inside an SQL statement (WITH).
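
A self-contained sketch of that approach, reusing the names from your example (the
function name is made up):

CREATE OR REPLACE FUNCTION default_bom()
RETURNS trigger AS $$
BEGIN
    IF NEW.project_bom_key IS NULL THEN
        INSERT INTO project_bom (bom_name)
        VALUES ('Main')
        RETURNING project_bom_key INTO NEW.project_bom_key;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;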


> I realize this functionality is slightly different, but can I get the new
> key into the NEW structure to return from the function call?
>

The fields of a record are fixed at the time the composite value is first created,
and cannot be extended later.

But maybe I don't understand your use case well. Your examples look a little bit
chaotic.

Regards

Pavel


>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: Not my day :-( Another syntax error

2019-12-26 Thread Pavel Stehule
čt 26. 12. 2019 v 18:50 odesílatel stan  napsal:

>
> On Thu, Dec 26, 2019 at 10:39:54AM -0700, David G. Johnston wrote:
> > You should probably send that reply again using reply-to-all.
> >
> > Dave
> >
> >
> > On Thu, Dec 26, 2019 at 10:38 AM stan  wrote:
> >
> > > On Thu, Dec 26, 2019 at 10:26:49AM -0700, David G. Johnston wrote:
> > > > On Thu, Dec 26, 2019 at 9:33 AM stan  wrote:
> > > >
> > > > >
> > > > > WITH inserted AS (
> > > > > INSERT into project_cost_category
> > > > > (category)
> > > > > VALUES
> > > > > ('MISC')
> > > > > RETURNING
> > > > > *
> > > > > )
> > > > > SELECT  project_cost_category_key
> > > > > INTO
> NEW.project_cost_category_key  =
> > > > > ( SELECT
> > > > > project_cost_category_key
> > > > >  FROM
> > > > > inserted )
> > > > >
> > > > >
> > > > You have two SELECTs.  The "inner" one has a FROM clause attached to
> it
> > > > providing columns from the "inserted" CTE.  The "outer" one doesn't
> have
> > > a
> > > > FROM clause and so doesn't have access to columns.  The "outer"
> SELECT
> > > > project_cost_category_key is thus invalid.
> > > >
> > >
> > > INSERT into project_bom
> > > (project_key, bom_name)
> > > VALUES
> > > (NEW.project_key , 'Main')
> > > RETURNING
> > > project_bom_key
> > > )
> > > SELECT  project_bom_key INTO
> NEW.project_bom_key
> > > = ( SELECT
> > > project_bom_key
> > >  FROM inserted )
> > > ;
> > >
> > > Which is working, to the best of my knowledge. BTW the oen I am having
> > > trouble with originaly had:
> > >
> > > RETURBING project_cost_category_key
> > >
> > > Bit I changed that to * during my debuging efforts.
> > >
> > > Please tell me if I am looking at this worng.
> > >
> > > And thatnls for looking through my really long post
>
> Turns out, you were correct, changed it to:
>
>
> DROP FUNCTION default_cost_category() CASCADE;
>
> CREATE FUNCTION default_cost_category()
> RETURNS trigger AS $$
> DECLARE _cost_category_key numeric;
> BEGIN
> /* Z */
> if NEW.project_cost_category_key IS NULL
> THEN
> /* DEBUG
> RAISE NOTICE 'Called default_cost_category() and
> NEW.project_cost_category_key is NULL' ;
> */
>  _cost_category_key =
> (
> SELECT
> project_cost_category_key
> FROM
> project_cost_category
> WHERE
> category = 'MISC'
> )
> ;
> /* DEBUG
> RAISE NOTICE '_cost_category_key = %', _cost_category_key ;
> */
> IF _cost_category_key is NULL
> THEN
>

Why do you use a CTE there? It is useless there. INSERT ... RETURNING should
be enough:
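
For example, inside the function it can be just (names taken from your code):

INSERT INTO project_cost_category (category)
VALUES ('MISC')
RETURNING project_cost_category_key INTO NEW.project_cost_category_key;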



WITH inserted AS (
> INSERT into project_cost_category
> (category)
> VALUES
> ('MISC')
> RETURNING
> *
> )
> SELECT  project_cost_category_key
> INTO NEW.project_cost_category_key FROM
> ( SELECT
> project_cost_category_key
>  FROM
> inserted ) AS project_cost_category_key
> ;
> ELSE
> NEW.project_cost_category_key = _cost_category_key;
> END IF;
> END IF;
>
> return NEW;
> END;
> $$
> LANGUAGE PLPGSQL
> SECURITY DEFINER
> -- Set a secure search_path: trusted schema(s), then 'pg_temp'.
> SET search_path = ica, "user" , public
> VOLATILE ;
>
> And all is well.
>
> Thank you!
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: jsonb_set() strictness considered harmful to data

2020-01-07 Thread Pavel Stehule
Hi

po 6. 1. 2020 v 22:34 odesílatel Andrew Dunstan <
andrew.duns...@2ndquadrant.com> napsal:

> On Thu, Nov 28, 2019 at 2:15 PM Andrew Dunstan
>  wrote:
> >
> >
> > On 11/27/19 9:35 PM, Michael Paquier wrote:
> > > On Fri, Nov 15, 2019 at 09:45:59PM +0100, Pavel Stehule wrote:
> > >> Maybe ERRCODE_NULL_VALUE_NOT_ALLOWED, and "NULL is not allowed",
> > >> errdetail - a exception due setting "null_value_treatment" =>
> > >> raise_exception
> > >> and maybe some errhint - "Maybe you would to use Jsonb NULL -
> "null"::jsonb"
> > >>
> > >> I don't know, but in this case, the exception should be verbose. This
> is
> > >> "rich" function with lot of functionality
> > > @Andrew: This patch is waiting on input from you for a couple of days
> > > now.
> > >
> >
> >
>
>
> Updated version including docco and better error message.
>
> cheers
>
> andrew
>

I think my objections are solved. I have one small objection:

+ errdetail("exception raised due to \"null_value_treatment :=
'raise_exception'\""),
+ errhint("to avoid, either change the null_value_treatment argument or
ensure that an SQL NULL is not used")));

"null_value_treatment := 'raise_exception'\""

It uses the proprietary PostgreSQL syntax for named parameters. It is better to use
the ANSI/SQL syntax:

"null_value_treatment => 'raise_exception'\""

It is fixed in the attached patch.

Source compiles without warnings,
docs compile without warnings,
check-world passed without any problems.

I'll mark this patch as ready for committer.

Thank you for your work

Pavel


>
> --
> Andrew Dunstanhttps://www.2ndQuadrant.com
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index 4b42f12862..72072e7545 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -12231,6 +12231,9 @@ table2-mapping
   
jsonb_set
   
+  
+   jsonb_set_lax
+  
   
jsonb_insert
   
@@ -12545,6 +12548,26 @@ table2-mapping
  [{"f1": 1, "f2": null, "f3": [2, 3, 4]}, 2]
 

+  
+   jsonb_set_lax(target jsonb, path text[], new_value jsonb , create_missing boolean , null_value_treatment text)
+ 
+   jsonb
+   
+If new_value is not null,
+behaves identically to jsonb_set. Otherwise behaves
+according to the value of null_value_treatment
+which must be one of 'raise_exception',
+'use_json_null', 'delete_key', or
+'return_target'. The default is
+'use_json_null'.
+   
+   jsonb_set_lax('[{"f1":1,"f2":null},2,null,3]', '{0,f1}',null)
+ jsonb_set_lax('[{"f1":99,"f2":null},2]', '{0,f3}',null, true, 'return_target')
+ 
+   [{"f1":null,"f2":null},2,null,3]
+ [{"f1": 99, "f2": null}, 2]
+
+   
   


diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 2fc3e3ff90..1cb2af1bcd 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -1237,6 +1237,15 @@ LANGUAGE INTERNAL
 STRICT IMMUTABLE PARALLEL SAFE
 AS 'jsonb_set';
 
+CREATE OR REPLACE FUNCTION
+  jsonb_set_lax(jsonb_in jsonb, path text[] , replacement jsonb,
+create_if_missing boolean DEFAULT true,
+null_value_treatment text DEFAULT 'use_json_null')
+RETURNS jsonb
+LANGUAGE INTERNAL
+CALLED ON NULL INPUT IMMUTABLE PARALLEL SAFE
+AS 'jsonb_set_lax';
+
 CREATE OR REPLACE FUNCTION
   parse_ident(str text, strict boolean DEFAULT true)
 RETURNS text[]
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index ab5a24a858..4b5a0214dc 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -4395,6 +4395,70 @@ jsonb_set(PG_FUNCTION_ARGS)
 }
 
 
+/*
+ * SQL function jsonb_set_lax(jsonb, text[], jsonb, boolean, text)
+ */
+Datum
+jsonb_set_lax(PG_FUNCTION_ARGS)
+{
+	/* Jsonb	   *in = PG_GETARG_JSONB_P(0); */
+	/* ArrayType  *path = PG_GETARG_ARRAYTYPE_P(1); */
+	/* Jsonb	  *newval = PG_GETARG_JSONB_P(2); */
+	/* bool		create = PG_GETARG_BOOL(3); */
+	text   *handle_null;
+	char   *handle_val;
+
+	if (PG_ARGISNULL(0) || PG_ARGISNULL(1) || PG_ARGISNULL(3))
+		PG_RETURN_NULL();
+
+	/* could happen if they pass in an explicit NULL */
+	if (PG_ARGISNULL(4))
+		ereport(ERROR,
+(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+		

Re: Is there a GoTo ?

2020-01-16 Thread Pavel Stehule
Hi

On Thu, Jan 16, 2020 at 11:53, İlyas Derse wrote:

> In this function I have to GOTO a label1, but the GOTO keyword is not
> working. Can you please help me find a way to
> jump from a particular piece of code to a label?
>  Thanks...
>

No, plpgsql has no GOTO statement. It cannot be implemented in the type of
interpreter used by plpgsql.
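
For illustration, a forward jump can usually be replaced by ordinary control
flow, or by a labelled block plus EXIT. A minimal sketch of the function
quoted below, rewritten without GOTO (assuming the label was only used to
skip ahead to the final RETURN):

CREATE OR REPLACE FUNCTION test(i integer) RETURNS integer AS $$
BEGIN
  <<main>>
  BEGIN
    IF i < 0 THEN
      RETURN i + 1;
    END IF;
    EXIT main;                     -- acts like a jump to the end of <<main>>
    RAISE NOTICE 'never reached';  -- statements here would be skipped
  END;
  RETURN null;                     -- this is where the original label1 pointed
END;
$$ LANGUAGE plpgsql;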

Regards

Pavel


>
> CREATE OR REPLACE FUNCTION test(i integer) RETURNS integer AS $$
> BEGIN
>   IF i < 0 THEN RETURN i + 1;
>   ELSE GOTO label1;
>   END IF;
>   <<label1>>
>   RETURN null;
> END;
> $$ LANGUAGE plpgsql;
>
>


Re: performance of loading CSV data with COPY is 50 times faster than Perl::DBI

2020-01-31 Thread Pavel Stehule
On Fri, Jan 31, 2020 at 19:25, Matthias Apitz wrote:

>
> Hello,
>
> For ages, we have transferred data between different DBMSs (Informix, Sybase,
> Oracle, and now PostgreSQL) with a tool of our own, based on
> Perl::DBI, which produces a CSV-like export in a common format, i.e. an
> export of Oracle can be loaded into Sybase and vice versa. Export and
> import are done row by row, for some tables millions of rows.
>
> We produced a special version of the tool to export the rows into a
> format which PostgreSQL's COPY command understands, and found that the
> import of the same data into PostgreSQL with COPY is 50 times
> faster than with Perl::DBI: 2.5 minutes vs. 140 minutes for around 6
> million rows into an empty table without indexes.
>
> How can COPY do this so fast?
>

There are probably several reasons:

1. The Perl::DBI implementation is probably not very efficient (for
Postgres); COPY also gains from not being fault tolerant.
2. Postgres has no implicit plan cache, so every INSERT is parsed and
planned again and again.
3. COPY bypasses the planner and executor and has very efficient network
communication.
4. With COPY you can be sure autocommit is effectively disabled - the whole
load runs in a single transaction.
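
A rough sketch of the difference (hypothetical table "t" and file path, only
to illustrate points 2-4):

-- row by row from the client: each statement is sent, parsed and planned
-- separately, and with autocommit each one is its own transaction
INSERT INTO t (id, name) VALUES (1, 'a');
INSERT INTO t (id, name) VALUES (2, 'b');

-- bulk load: one command, one transaction, rows streamed without
-- per-row planning
COPY t (id, name) FROM '/tmp/data.csv' WITH (FORMAT csv);

From a client like Perl::DBI the gap can be narrowed by using one explicit
transaction and a prepared statement, but COPY will still win for large
volumes.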

Regards

Pavel



>
> matthias
>
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
>
>


Re: Force Commit

2020-02-05 Thread Pavel Stehule
On Wed, Feb 5, 2020 at 9:10, İlyas Derse wrote:

> I'm writing to you about COMMIT. I want to force a commit even
> if I have an exception.
> It's like:
>
> CREATE OR REPLACE PROCEDURE public."test"()
> LANGUAGE 'plpgsql'
> AS $BODY$
> DECLARE "a" integer  ;
> DECLARE "b" integer  ;
> BEGIN
> "a" = 1;
> "b" = 0;
>BEGIN
>raise notice 'hata';
>update public."crud" set lastname = 'Tekindor' where autoid = 20;
>"a"="a"/"b";
>ROLLBACK;
>
>   EXCEPTION
>   WHEN OTHERS THEN
>
>COMMIT;
>   END ;
> END ;
> $BODY$;
>
> How can I do force commit  ?
> Thanks..
>

You simply cannot do it.

There is no way to do it at present.
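
To illustrate why (a sketch only, reusing the table name from the question):
a block with an EXCEPTION clause runs in an implicit subtransaction, so by
the time the handler runs, the block's changes have already been rolled back,
and there is nothing left to "force commit". The documentation also says that
transaction commands are not allowed inside a block with exception handlers.

CREATE OR REPLACE PROCEDURE public."test"()
LANGUAGE plpgsql
AS $BODY$
BEGIN
   BEGIN
      UPDATE public."crud" SET lastname = 'Tekindor' WHERE autoid = 20;
      PERFORM 1 / 0;              -- raises division_by_zero
   EXCEPTION
      WHEN OTHERS THEN
         -- the UPDATE above has already been rolled back at this point
         RAISE NOTICE 'caught: %', SQLERRM;
   END;
   COMMIT;  -- commits only what the exception handling did not undo
END;
$BODY$;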

Regards

Pavel


Re: a proposal for a new functionality: "SELECT * [EXCEPT col1 [,col2]]

2020-02-25 Thread Pavel Stehule
On Tue, Feb 25, 2020 at 22:14, Tom Lane wrote:

> Paul Jungwirth  writes:
> > Not that this is necessarily fatal, but you'd need to avoid parsing
> > trouble with the other EXCEPT, e.g.
> > SELECT 1 EXCEPT SELECT 1;
>
> Yeah, it doesn't sound like much consideration has been given to
> that ambiguity, but it's a big problem if you want to use a syntax
> like this.
>
> > Google Big Query was mentioned upthread. I see they require parens, e.g.
> > SELECT ... EXCEPT (...). I don't think that actually fixes the ambiguity
> > though.
>
> Indeed it doesn't, because you can parenthesize an EXCEPT's sub-queries:
>
> regression=# select 1 except (select 2);
>  ?column?
> --
> 1
> (1 row)
>
> In principle, once you got to the SELECT keyword you could tell things
> apart, but I'm afraid that might be too late for a Bison-based parser.
>
> > So it seems they require at least one `*` in the SELECT target list. In
> > fact the `*` must be the very last thing. Personally I think it should
> > be as general as possible and work even without a `*` (let alone caring
> > about its position).
>
> I wonder if they aren't thinking of the EXCEPT as annotating the '*'
> rather than the whole SELECT list.  That seems potentially more flexible,
> not less so.  Consider
>
> SELECT t1.* EXCEPT (foo, bar), t2.* EXCEPT (baz) ... FROM t1, t2, ...
>
> This doesn't have any problem with ambiguity if t2 has a "foo" column,
> or if t1 has a "baz" column; which indeed would be cases where this
> sort of ability would be pretty useful, since otherwise you end up
> with painful-to-rename duplicate output column names.  And certainly
> there is no particular need for this construct if you didn't write
> a "*".
>

This proposal looks good.
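
For completeness, the usual workaround today is to generate the column list
from the catalog and paste it into the query. A small sketch (table and
column names are hypothetical):

SELECT string_agg(quote_ident(attname), ', ' ORDER BY attnum)
  FROM pg_attribute
 WHERE attrelid = 'public.t1'::regclass
   AND attnum > 0
   AND NOT attisdropped
   AND attname NOT IN ('foo', 'bar');

The result can then be interpolated into "SELECT <list> FROM t1 ..." by hand,
or from psql with \gset.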

Pavel


> regards, tom lane
>
>
>


Re: Detecting which columns a query will modify in a function called by a trigger

2020-03-02 Thread Pavel Stehule
On Mon, Mar 2, 2020 at 19:59, stan wrote:

> I need to implement a fairly fine-grained security model. Probably a bit
> finer than I can do with the standard ownership functionality.
>
> My thinking on this is to create a table that contains the users, and a
> "permission bit" for each operation that they may want to do, vis-à-vis
> altering an existing row or rows, or inserting new rows.
>
> Looks relatively straightforward, if fairly time consuming to do. But I
> would need to know which column(s) a given query would add or alter from the
> function to implement this via a trigger. It looks like I see most of what I
> need to do this in the docs, but I can't quite figure out if I can get this
> down to which column(s) a given trigger will modify. Is this possible?
>

You can compare the NEW and OLD records. It is pretty hard in plain PL/pgSQL,
but easy with hstore or jsonb, or with PL/Perl or PL/Pythonu.
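
For illustration, a minimal sketch with jsonb (the table and trigger names
are made up, not from the original thread):

CREATE OR REPLACE FUNCTION log_changed_columns() RETURNS trigger
LANGUAGE plpgsql AS $$
DECLARE
  changed text[];
BEGIN
  -- collect the keys whose values differ between OLD and NEW
  SELECT array_agg(o.key ORDER BY o.key)
    INTO changed
    FROM jsonb_each(to_jsonb(OLD)) AS o
    JOIN jsonb_each(to_jsonb(NEW)) AS n ON n.key = o.key
   WHERE o.value IS DISTINCT FROM n.value;

  RAISE NOTICE 'changed columns: %', changed;
  -- here you could look up "changed" in a permissions table and
  -- RAISE EXCEPTION when the user may not modify one of these columns
  RETURN NEW;
END;
$$;

CREATE TRIGGER check_column_permissions
  BEFORE UPDATE ON some_table
  FOR EACH ROW EXECUTE FUNCTION log_changed_columns();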

Regards

Pavel

>
>
> --
> "They that would give up essential liberty for temporary safety deserve
> neither liberty nor safety."
> -- Benjamin Franklin
>
>
>


Re: How to plpgsql scripting

2020-03-25 Thread Pavel Stehule
On Wed, Mar 25, 2020 at 13:20, Ekaterina Amez wrote:

> Hi List,
>
> I'm used to making my own scripts in Oracle PL/SQL, SQL Server T-SQL... but
> I'm unable to make one simple script in Postgres.
>
> The target version is 8.4 (I know, I know... it's a legacy server; I'm
> planning to upgrade it as soon as I can).
>
> I have a test server with version 9.2 where I've successfully run this code
> from psql:
>
> DO $$
> DECLARE
>    a integer := 10;
>    b integer := 20;
>    c integer;
> BEGIN
>    c := a + b;
>    RAISE NOTICE 'Value of c: %', c;
> END $$;
>
> But this syntax (an anonymous code block?) is only available since 9.0, so I'm
> trying to adapt this to v8.4:
>

You cannot do this.

If you want to use plpgsql in older releases, you have to write functions
and then run those functions.

> As per the documentation [
> https://www.postgresql.org/docs/8.4/plpgsql-structure.html], the
> structure of a code block is defined as:
>

Yes, but the block must be wrapped in some SQL statement - in 8.4, that means
CREATE OR REPLACE FUNCTION.
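
A minimal sketch of that approach on 8.4 (the function name is arbitrary):

CREATE OR REPLACE FUNCTION run_once() RETURNS void AS $$
DECLARE
   a integer := 10;
   b integer := 20;
   c integer;
BEGIN
   c := a + b;
   RAISE NOTICE 'Value of c: %', c;
END;
$$ LANGUAGE plpgsql;

SELECT run_once();
DROP FUNCTION run_once();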

> [ <<label>> ]
> [ DECLARE
> declarations ]
> BEGIN
> statements
> END [ label ];
>
> so I've adapted my code to:
>
> DECLARE
>    a integer;
>    b integer;
>    c integer;
> BEGIN
>    a := 10;
>    b := 20;
>    c := a + b;
>    RAISE NOTICE 'Value of c: %', c;
> END;
>
> But when I run this from psql, both versions 8.4 and 9.2, all I get is:
>
> testdb=# DECLARE
> testdb-#a integer;
> ERROR:  syntax error at or near «integer»
> LINE 2:a integer;
>   ^
> testdb=#b integer;
> ERROR:  syntax error at or near «b»
> LINE 1: b integer;
>  ^
> testdb=#c integer;
> ERROR:  syntax error at or near «c»
> LINE 1: c integer;
>  ^
> testdb=# BEGIN
> testdb-# a := 10;
> ERROR:  syntax error at or near «a»
> LINE 2: a := 10;
>  ^
> testdb=# b := 20;
> ERROR:  syntax error at or near «b»
> LINE 1: b := 20;
>  ^
> testdb=#c := a + b;
> ERROR:  syntax error at or near «c»
> LINE 1: c := a + b;
>  ^
> testdb=# RAISE NOTICE'Value of c: %', c;
> ERROR:  syntax error at or near «RAISE»
> LINE 1: RAISE NOTICE'Value of c: %', c;
>  ^
> testdb=# END;
> WARNING:  no hay una transacción en curso
> COMMIT
> testdb=#
>
> NOTE: I've translated error messages myself.
>
> What's wrong with the syntax? Or is not possible to make a script and I
> have to create a function to encapsulate my code?
>

This is simply not a supported feature.

Is there some special reason why you are on 8.4? It's a pretty old,
unsupported version.

Regards

Pavel

>
> Kind regards,
>
> Ekaterina
>
>
>

