Re: Default ordering option

2019-07-25 Thread Cyril Champier
*Adrian*:
>
> If order is not an issue in the production code, why test for it in the
> test code?


In many cases, it would not be a problem in tests if we had an unordered
array comparison helper.
But in other cases, it is a production issue.
In Ruby ActiveRecord, for example, you can do `Patient.find_by(last_name:
'champier')`,
which translates to `SELECT "patients".* FROM "patients" WHERE
"patients"."last_name" = 'champier' LIMIT 1`.
If last_name is not unique, the returned record is effectively random.
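
A minimal illustration in SQL, assuming a hypothetical patients(id, last_name)
table seeded with two matching rows:

INSERT INTO patients (id, last_name) VALUES (1, 'champier'), (2, 'champier');
-- Either row may come back, and the choice can silently change with the
-- plan or the order on disk:
SELECT * FROM patients WHERE last_name = 'champier' LIMIT 1;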

So yes, everything has to be randomized, because the sources are multiple
and the consequences range from a dramatic production bug, to a CI that fails
1% of the time, to a useless test assertion.


*Peter*:

> It might be an interesting exercise to implement this as a post-parsing
> hook.


I know nothing about that, but it sounds interesting. Do you have any
documentation pointers to help me implement it?



On Wed, Jul 24, 2019 at 10:36 PM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 2019-07-23 17:43, Cyril Champier wrote:
> > In this documentation
> > , it is said:
> >
> > If sorting is not chosen, the rows will be returned in an
> > unspecified order. The actual order in that case will depend on the
> > scan and join plan types and the order on disk, but it must not be
> > relied on.
> >
> >
> > I would like to know if there is any way to change that to have a "real"
> > random behaviour.
>
> It might be an interesting exercise to implement this as a post-parsing
> hook.
>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: Request for resolution || Support

2019-07-25 Thread Alban Hertroys


> On 24 Jul 2019, at 10:08, jay chauhan  wrote:
> 
> Hi Thomas, David/Team,
> 
> Thank you for your response. However, we need your confirmation whether the
> error/issue mentioned below will be resolved if we upgrade our PostgreSQL
> version.

It won’t; you are talking about Oracle-specific features. You need to change
the code. PostgreSQL is not Oracle, and some features are quite RDBMS-specific.

> <<"... compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit">>
> David response: Use a newer version
> Tomas response: Yeah, you should use release 11 for a new project.  
> 
> My Issue while migrating procedure/function from Oracle to PostgreSQL:
> Error-1) 
> ERROR:  cannot begin/end transactions in PL/pgSQL
> HINT:  Use a BEGIN block with an EXCEPTION clause instead.
> CONTEXT:  PL/pgSQL function 
> icmsuatnew.eload_commission_payout_active(text,text,text,text,text,text) line 
> 486 at SQL statement
> SQL state: 0A000
> David Response on it : Rewrite your code as instructed

How to handle these depends on your use of sub-transactions, but the HINT gives 
a pretty good general approach.
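
For instance, a minimal sketch of that rewrite (the function and table here
are hypothetical): an inner BEGIN ... EXCEPTION block runs in its own
subtransaction, so a failure rolls back only the block, not the whole
transaction.

CREATE OR REPLACE FUNCTION payout_step() RETURNS void AS $$
BEGIN
    BEGIN  -- sub-block: behaves like a savepoint
        -- the work Oracle wrapped in its own COMMIT/ROLLBACK goes here
        INSERT INTO payout_log(msg) VALUES ('attempt');  -- hypothetical table
    EXCEPTION WHEN others THEN
        -- only the sub-block's work is undone; the outer transaction goes on
        RAISE NOTICE 'payout step failed: %', SQLERRM;
    END;
END;
$$ LANGUAGE plpgsql;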

> Error-2)
> ERROR:  schema "utl_http" does not exist
> LINE 38: L_HTTP_REQUEST UTL_HTTP.REQ;
> ^
> SQL state: 3F000
> Character: 1785
> Thomas response: That's an Oracle thing for doing HTTP requests from PL/SQL.  
> To do that from plpgsql you could try an extension like this one:
> https://github.com/pramsey/pgsql-http
> Or you could write your own function in Python or your favourite PL.  That's what I'd probably do.
> https://www.postgresql.org/docs/11/plpython-funcs.html

Initiating TCP/IP from the database means that a database process needs to wait 
for a response. In the meantime, it cannot do anything else. You’re effectively 
blocking it for other transactions and keeping that particular transaction 
‘waiting in transaction’ until, in the worst case, a time-out. That means that 
no maintenance can be done on records touched by this transaction, which can 
lead to bloat.

This is generally considered a bad idea, at least in this community. You’re 
usually better off handing the connection over to an external process that 
reports back to the database when appropriate.
The exception to that is if your transaction cannot be allowed to commit 
without a response from the other peer. In such cases it is appropriate to use 
plpython, plperl, etc.
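
For illustration, a rough sketch of that hand-off pattern (table and channel
names are made up): the transaction only enqueues the request; an external
worker performs the HTTP call outside the database and reports back.

CREATE TABLE http_job (
    id     bigserial PRIMARY KEY,
    url    text NOT NULL,
    status text NOT NULL DEFAULT 'pending',
    result text
);

-- Inside the transaction: cheap and non-blocking.
INSERT INTO http_job (url) VALUES ('https://example.com/api');
NOTIFY http_jobs;  -- delivered to listeners only when the transaction commits

-- A worker that has run LISTEN http_jobs wakes up, does the request, then:
-- UPDATE http_job SET status = 'done', result = '...' WHERE id = ...;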

All that said, I am talking about PostgreSQL here. If you’re instead using 
EnterpriseDB, which does have an Oracle compatibility layer that could perhaps 
support these features (I don’t know), you should talk to the EnterpriseDB guys.

Alban Hertroys
--
There is always an exception to always.








Re: Default ordering option

2019-07-25 Thread Peter Eisentraut
On 2019-07-25 09:43, Cyril Champier wrote:
>> It might be an interesting exercise to implement this as a post-parsing
>> hook.
>
> I know nothing about that, but it sounds interesting. Do you have any
> documentation pointers to help me implement it?

Look for post_parse_analyze_hook.  Walk the parsed query tree, look for
queries without an ordering clause and manufacture one.
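
(The hook itself has to be written in C and loaded as an extension module, so
this is only the intended effect sketched in SQL: a query such as

SELECT * FROM patients WHERE last_name = 'champier' LIMIT 1;

would behave as if the user had written

SELECT * FROM patients WHERE last_name = 'champier' ORDER BY random() LIMIT 1;

with the manufactured ordering clause supplying the "real" randomness.)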

-- 
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Default ordering option

2019-07-25 Thread Cyril Champier
Peter:
So I would need to create a pg extension encapsulating this hook callback?
If that is the case, it seems it will be much more complicated than
expected, and I won't be able to do it :(
But thanks for the suggestion anyway.

On Thu, Jul 25, 2019 at 12:21 PM Peter Eisentraut <
peter.eisentr...@2ndquadrant.com> wrote:

> On 2019-07-25 09:43, Cyril Champier wrote:
> > It might be an interesting exercise to implement this as a post-parsing
> > hook.
> >
> > I know nothing about that, but it sounds interesting. Do you have any
> > documentation pointers to help me implement it?
>
> Look for post_parse_analyze_hook.  Walk the parsed query tree, look for
> queries without ordering clause and manufacture one.
>
> --
> Peter Eisentraut  http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: Default ordering option

2019-07-25 Thread Adrian Klaver

On 7/25/19 12:43 AM, Cyril Champier wrote:

*Adrian*:

If order is not an issue in the production code, why test for it in the
test code?


In many cases, it would not be a problem in tests if we had an unordered 
array comparison helper.

But in other cases, it is a production issue.
In Ruby ActiveRecord, for example, you can do `Patient.find_by(last_name:
'champier')`,
which translates to `SELECT "patients".* FROM "patients" WHERE
"patients"."last_name" = 'champier' LIMIT 1`.

If last_name is not unique, the returned record is effectively random.


Are you really looking for a pseudo-random name?

If so, would not (warning: not a Ruby developer, so below is tentative):

Patient.where(["last_name = :last_name", {last_name: 
"champier"}]).order('RANDOM()').first


work better?
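
(In SQL terms, that should translate to roughly:

SELECT "patients".* FROM "patients" WHERE "patients"."last_name" = 'champier'
ORDER BY RANDOM() LIMIT 1;

though again, tentative on the Ruby side.)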

If not, why not use something that returns all possible matches?



So yes, everything has to be randomized, because the sources are
multiple and the consequences range from a dramatic production bug, to a
CI that fails 1% of the time, to a useless test assertion.


One way I can think of doing this is to write a script that walks through 
your tables in the test db and does an UPDATE across the rows. It is 
going to add time to your tests, but then I believe that is going to be 
the case for anything you do. Or you could look at something I have 
never tried, fuzz testing. As a starting point:


https://www.guru99.com/fuzz-testing.html

Maybe other folks have suggestions on tools you could use for fuzz 
testing.






*Peter*:

It might be an interesting exercise to implement this as a post-parsing
hook.

I know nothing about that, but it sounds interesting. Do you have any 
documentation pointers to help me implement it?




On Wed, Jul 24, 2019 at 10:36 PM Peter Eisentraut 
> wrote:


On 2019-07-23 17:43, Cyril Champier wrote:
 > In this documentation
 > , it is said:
 >
 >     If sorting is not chosen, the rows will be returned in an
 >     unspecified order. The actual order in that case will depend on
 >     the scan and join plan types and the order on disk, but it must
 >     not be relied on.
 >
 >
 > I would like to know if there is any way to change that to have a
 > "real" random behaviour.

It might be an interesting exercise to implement this as a post-parsing
hook.

--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.5 DB corruption

2019-07-25 Thread Thomas Tignor
Hi Adrian,

Thanks for responding. Below is the schema data for the tables where we
always see corruption. You'll notice they have triggers for a postgres
extension called Slony-I, which provides replication service. It's not clear
if/how that's a factor, though.

ams=# \d ams.alert_instance

                    Table "ams.alert_instance"
       Column        |              Type              | Modifiers
---------------------+--------------------------------+-----------
 alert_instance_id   | integer                        | not null
 alert_definition_id | integer                        | not null
 alert_instance_key  | character varying(500)         | not null
 start_active_date   | timestamp(0) without time zone | not null
 stop_active_date    | timestamp(0) without time zone |
 active              | smallint                       | not null
 acknowledged        | smallint                       | not null
 ack_clear_time      | timestamp(0) without time zone |
 user_set_clear_time | smallint                       |
 category_id         | integer                        | not null
 condition_start     | timestamp(0) without time zone | not null
 unack_reason        | character varying(1)           |
 viewer_visible      | smallint                       | not null
Indexes:
    "pk_alert_instance" PRIMARY KEY, btree (alert_instance_id), tablespace
"tbls5"
    "idx_alert_inst_1" btree (alert_instance_key, alert_definition_id,
alert_instance_id, active, acknowledged, ack_clear_time), tablespace "tbls5"
    "idx_alert_inst_cat_id" btree (category_id), tablespace "tbls5"
    "idx_alert_inst_def_id" btree (alert_definition_id), tablespace "tbls5"
Check constraints:
    "ck_alert_inst_acked" CHECK (acknowledged = 0 OR acknowledged = 1)
    "ck_alert_inst_active" CHECK (active = 0 OR active = 1)
    "ck_alert_inst_set_cl_tm" CHECK (user_set_clear_time = 0 OR
user_set_clear_time = 1)
    "ck_alert_inst_viewer_vis" CHECK (viewer_visible = 0 OR viewer_visible = 1)
Foreign-key constraints:
    "fk_alert_inst_cat_id" FOREIGN KEY (category_id) REFERENCES
ams.category(category_id)
    "fk_alert_inst_def_id" FOREIGN KEY (alert_definition_id) REFERENCES
ams.alert_definition(alert_definition_id)
    "fk_alert_inst_unack_reason" FOREIGN KEY (unack_reason) REFERENCES
ams.unack_reason(unack_reason)
Referenced by:
    TABLE "ams.alert_attribute" CONSTRAINT "fk_alert_attr_instance_id" FOREIGN
KEY (alert_instance_id) REFERENCES ams.alert_instance(alert_instance_id) ON
DELETE CASCADE
Triggers:
    _ams_cluster_logtrigger AFTER INSERT OR DELETE OR UPDATE ON
ams.alert_instance FOR EACH ROW EXECUTE PROCEDURE
_ams_cluster.logtrigger('_ams_cluster', '1', 'k')
    _ams_cluster_truncatetrigger BEFORE TRUNCATE ON ams.alert_instance FOR EACH
STATEMENT EXECUTE PROCEDURE _ams_cluster.log_truncate('1')
Disabled user triggers:
    _ams_cluster_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON
ams.alert_instance FOR EACH ROW EXECUTE PROCEDURE
_ams_cluster.denyaccess('_ams_cluster')
    _ams_cluster_truncatedeny BEFORE TRUNCATE ON ams.alert_instance FOR EACH
STATEMENT EXECUTE PROCEDURE _ams_cluster.deny_truncate()

ams=#
ams=# \d ams.alert_attribute

               Table "ams.alert_attribute"
      Column       |          Type           | Modifiers
-------------------+-------------------------+-----------
 alert_instance_id | integer                 | not null
 name              | character varying(200)  | not null
 data_type         | smallint                | not null
 value             | character varying(2000) |
Indexes:
    "pk_alert_attributes" PRIMARY KEY, btree (alert_instance_id, name),
tablespace "tbls5"
    "idx_alert_attr_name" btree (name)
Foreign-key constraints:
    "fk_alert_attr_instance_id" FOREIGN KEY (alert_instance_id) REFERENCES
ams.alert_instance(alert_instance_id) ON DELETE CASCADE
Triggers:
    _ams_cluster_logtrigger AFTER INSERT OR DELETE OR UPDATE ON
ams.alert_attribute FOR EACH ROW EXECUTE PROCEDURE
_ams_cluster.logtrigger('_ams_cluster', '2', 'kk')
    _ams_cluster_truncatetrigger BEFORE TRUNCATE ON ams.alert_attribute FOR
EACH STATEMENT EXECUTE PROCEDURE _ams_cluster.log_truncate('2')
Disabled user triggers:
    _ams_cluster_denyaccess BEFORE INSERT OR DELETE OR UPDATE ON
ams.alert_attribute FOR EACH ROW EXECUTE PROCEDURE
_ams_cluster.denyaccess('_ams_cluster')
    _ams_cluster_truncatedeny BEFORE TRUNCATE ON ams.alert_attribute FOR EACH
STATEMENT EXECUTE PROCEDURE _ams_cluster.deny_truncate()

ams=#


Tom    :-) 

On Wednesday, July 24, 2019, 11:15:04 AM EDT, Adrian Klaver 
 wrote:

On 7/24/19 7:38 AM, Thomas Tignor wrote:
> Hello postgres community,
> 
> Writing again to see if there are insights on this issue. We have had 
> infrequent but recurring corruption since upgrading from postgres 9.1 to 
> postgres 9.5. We are presently on 9.5.16. Our DB-facing app continually 
> performs a mixture of DML, primarily inserts 

SELECT INTO question

2019-07-25 Thread Kevin Brannen
Hi,

We're trying to understand what happened with a SELECT INTO. The problem can be
seen with this example:

# create table t1 (id int, v int);
CREATE TABLE

# insert into t1 (select x, x from generate_series(1, 5) as g(x));
INSERT 0 5

# select * from t1;
 id | v
----+---
  1 |  1
  2 |  2
  3 |  3
  4 |  4
  5 |  5
(5 rows)

nms=# select into t2 from t1;
SELECT 5

# select * from t2;
--
(5 rows)

# select * into t3 from t1;
SELECT 5

# select * from t3;
 id | v
----+---
  1 |  1
  2 |  2
  3 |  3
  4 |  4
  5 |  5
(5 rows)

As you can see on the first select into, the result in t2 is ... missing, no
"data" at all, unlike t3, which was the expected answer. Upon closer inspection,
it was realized that the "expression" in the statement was left out (oops!),
but instead of getting a syntax error, it worked.

So why did it work and why was nothing stored?

The only answer I've been able to come up with is that the expression was 
evaluated as a "null expression" for each row, so it gave us 5 null rows. A 
small part of my brain understands that, but most of my brain goes "what?!"

I've noticed that I can also do:

# select from t1;
--
(5 rows)

That also doesn't make sense and yet it does in a weird way. I suspect the 
answer revolves around some corner case in the SQL Standard.

So, what's going on here?

Thanks,
Kevin


Re: SELECT INTO question

2019-07-25 Thread Adrian Klaver

On 7/25/19 12:23 PM, Kevin Brannen wrote:

Hi,

We're trying to understand what happened with a SELECT INTO. The problem
can be seen with this example:

# create table t1 (id int, v int);
CREATE TABLE

# insert into t1 (select x, x from generate_series(1, 5) as g(x));
INSERT 0 5

# select * from t1;
 id | v
----+---
  1 |  1
  2 |  2
  3 |  3
  4 |  4
  5 |  5
(5 rows)

nms=# select into t2 from t1;
SELECT 5

# select * from t2;
--
(5 rows)

# select * into t3 from t1;
SELECT 5

# select * from t3;
 id | v
----+---
  1 |  1
  2 |  2
  3 |  3
  4 |  4
  5 |  5
(5 rows)

As you can see on the first select into, the result in t2 is ... missing,
no "data" at all, unlike t3, which was the expected answer. Upon closer
inspection, it was realized that the "expression" in the statement was
left out (oops!), but instead of getting a syntax error, it worked.

So why did it work and why was nothing stored?

The only answer I've been able to come up with is that the expression
was evaluated as a "null expression" for each row, so it gave us 5 null
rows. A small part of my brain understands that, but most of my brain
goes "what?!"

I've noticed that I can also do:

# select from t1;
--
(5 rows)

That also doesn't make sense and yet it does in a weird way. I suspect
the answer revolves around some corner case in the SQL Standard.

So, what's going on here?


https://www.postgresql.org/docs/11/sql-select.html

Compatibility

"Omitted FROM Clauses

PostgreSQL allows one to omit the FROM clause. It has a straightforward 
use to compute the results of simple expressions:


SELECT 2+2;

 ?column?
----------
        4

Some other SQL databases cannot do this except by introducing a dummy 
one-row table from which to do the SELECT.


...

Empty SELECT Lists

The list of output expressions after SELECT can be empty, producing a 
zero-column result table. This is not valid syntax according to the SQL 
standard. PostgreSQL allows it to be consistent with allowing 
zero-column tables. However, an empty list is not allowed when DISTINCT 
is used.

"

So:
test=# \d t2
                 Table "public.t2"
 Column | Type | Collation | Nullable | Default
--------+------+-----------+----------+---------


Thanks,

Kevin



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres 9.5 DB corruption

2019-07-25 Thread Adrian Klaver

On 7/25/19 10:24 AM, Thomas Tignor wrote:

Hi Adrian,
Thanks for responding. Below is the schema data for the tables where we
always see corruption. You'll notice they have triggers for a postgres
extension called Slony-I, which provides replication service. It's not
clear if/how that's a factor, though.


What specific version of Slony?

Did you upgrade Slony when you moved from 9.1 to 9.5?

Trace you showed in your first post was for:

ams.alert_attribute_bak

I do not see that below.

Are the errors on any specific field?

The errors are occurring on the primary, correct?

Where is the data coming from?



[schema definitions for ams.alert_instance and ams.alert_attribute snipped;
quoted in full upthread]



Tom    :-)


On Wednesday, July 24, 2019, 11:15:04 AM EDT, Adrian Klaver 
 wrote:



On 7/24/19 7:38 AM, Thomas Tignor wrote:
 > Hello postgres community,
 >
 > Writing again to see if there are insights on this issue. We have had
 > infrequent but recurring corruption since upgrading from postgres 9.1 to
 > postgres 9.5. We are presently on 9.5.16. Our DB-facing app continually
 > performs a mixture of DML, primarily inserts and updates on two specific
 > tables, with no single op being suspect. In the past, corruption events
 > have produced encoding errors on COPY operations (invalid byte sequence
 > f

Re: SELECT INTO question

2019-07-25 Thread Tom Lane
Adrian Klaver  writes:
> On 7/25/19 12:23 PM, Kevin Brannen wrote:
>> We're trying to understand what happened with a SELECT INTO. The problem
>> can be seen with this example:
>> 
>> nms=# select into t2 from t1;
>> SELECT 5
>> # select * from t2;
>> --
>> (5 rows)

> The list of output expressions after SELECT can be empty, producing a 
> zero-column result table. This is not valid syntax according to the SQL 
> standard. PostgreSQL allows it to be consistent with allowing 
> zero-column tables. However, an empty list is not allowed when DISTINCT 
> is used.

Right, you selected no columns from t1, so t2 has no columns (and yet
five rows).  Worth noting here is that psql is a bit squirrely about
displaying zero-column results --- it drops the column-names header
line, and it doesn't emit a blank-line-per-row as one might expect.
Perhaps somebody ought to fix that, but it's such a corner case that
no one has bothered yet.

regards, tom lane




RE: SELECT INTO question

2019-07-25 Thread Kevin Brannen
-Original Message-
From: Tom Lane 
Sent: Thursday, July 25, 2019 2:47 PM
To: Adrian Klaver 
Cc: Kevin Brannen ; pgsql-generallists.postgresql.org 

Subject: Re: SELECT INTO question



Adrian Klaver <adrian.kla...@aklaver.com> writes:

> On 7/25/19 12:23 PM, Kevin Brannen wrote:
>> We're trying to understand what happened with a SELECT INTO. The
>> problem can be seen with this example:
>>
>> nms=# select into t2 from t1;
>> SELECT 5
>> # select * from t2;
>> --
>> (5 rows)

> The list of output expressions after SELECT can be empty, producing a
> zero-column result table. This is not valid syntax according to the
> SQL standard. PostgreSQL allows it to be consistent with allowing
> zero-column tables. However, an empty list is not allowed when
> DISTINCT is used.

Right, you selected no columns from t1, so t2 has no columns (and yet five
rows).  Worth noting here is that psql is a bit squirrely about displaying
zero-column results --- it drops the column-names header line, and it
doesn't emit a blank-line-per-row as one might expect.

Perhaps somebody ought to fix that, but it's such a corner case that no one
has bothered yet.






Hmm, I don't particularly like that answer as I'd have preferred a "syntax 
error", but I do understand it.

Thanks for the answer, Adrian; and thanks for the expansion, Tom.

Kevin





Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread PegoraroF10
So, what should I tune on autovacuum?
My script was running strangely. The Postgres log shows me the time spent to
create functions. That happens when creating triggers and tables too.
Sometimes it's too fast and sometimes ...

statement: create or replace function valoresdfe... 0 mins 1.135 secs
statement: create or replace function dadosorigem...0 mins 0.055 secs
statement: CREATE OR REPLACE FUNCTION SONU...   0 mins 0.013 secs
statement: create or replace function contatoscampa...  2 mins 13.492 secs
statement: create or replace function FORMATARTELEF...  0 mins 0.013 secs
statement: create or replace function ChecaVar  0 mins 0.012 secs
statement: CREATE or replace FUNCTION criatrigge... 1 mins 16.42 secs

So, is this a vacuum problem? What do I need to configure?
And again, if I do a REINDEX DATABASE before creating that schema, it works
perfectly.







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread Adrian Klaver

On 7/25/19 3:16 PM, PegoraroF10 wrote:

So, what should I tune on autovacuum?
My script was running strangely. The Postgres log shows me the time spent to
create functions. That happens when creating triggers and tables too.
Sometimes it's too fast and sometimes ...


I didn't realize there is a 'too fast' :)  More below.



statement: create or replace function valoresdfe... 0 mins 1.135 secs
statement: create or replace function dadosorigem...0 mins 0.055 secs
statement: CREATE OR REPLACE FUNCTION SONU...   0 mins 0.013 secs
statement: create or replace function contatoscampa...  2 mins 13.492 secs
statement: create or replace function FORMATARTELEF...  0 mins 0.013 secs
statement: create or replace function ChecaVar  0 mins 0.012 secs
statement: CREATE or replace FUNCTION criatrigge... 1 mins 16.42 secs


Are there other messages immediately (or in close vicinity) before/after the 
slow statements?





So, is this a vacuum problem? What do I need to configure?


Configuration of autovacuum is done in postgresql.conf using these settings:

https://www.postgresql.org/docs/11/runtime-config-autovacuum.html

You might want to send the settings you have to the list. Also the 
setting for track_counts.
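
For instance, from psql:

SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%' OR name = 'track_counts';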




And again, if I do a REINDEX DATABASE before creating that schema, it works
perfectly.









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread PegoraroF10
Nope, not one message near those statements.
I haven't changed anything in postgresql.conf related to autovacuum.







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread Adrian Klaver

On 7/25/19 4:01 PM, PegoraroF10 wrote:

Nope, not one message near those statements.
I haven't changed anything in postgresql.conf related to autovacuum.


So what are the settings?


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-25 Thread Adrian Klaver

On 7/25/19 4:01 PM, PegoraroF10 wrote:

Nope, not one message near those statements.
I haven't changed anything in postgresql.conf related to autovacuum.



You probably should also look at this system view:

https://www.postgresql.org/docs/11/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW

to see what autovacuum activity has occurred on the tables.
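
For instance:

SELECT relname, last_autovacuum, last_autoanalyze, n_dead_tup
FROM pg_stat_all_tables
WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
ORDER BY last_autovacuum DESC NULLS LAST;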










--
Adrian Klaver
adrian.kla...@aklaver.com