Re: pg_upgrade and wraparound

2018-06-26 Thread Arjen Nienhuis
On Tue, Jun 26, 2018 at 8:38 AM Alexander Shutyaev  wrote:
>
> Hello again,
>
> I've performed another test - I've migrated to the new cluster using
> dump/restore: pg_dumpall | psql. It went well, although it took 6 days,
> while pg_upgrade usually took a night.
>
> Is there any hope the issue with pg_upgrade can be resolved? If not, could
> you give me some hints as to how I can decrease the time needed for
> pg_dumpall | psql?

If you use pg_dump with the custom format you can use pg_restore
--jobs=n to do a parallel restore.

I think you can do pg_dumpall --globals-only to restore users, and
then do a pg_dump per database.
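A dry-run sketch of the two-step approach described above: dump globals once, then a custom-format dump and a parallel restore per database. Database names, paths, and the job count are illustrative, not from the thread (and the target database must already exist, or add --create):

```python
def migration_commands(databases, jobs=4, dump_dir="/backup"):
    """Build the shell commands: globals once, then one custom-format
    dump plus a parallel restore per database."""
    cmds = [f"pg_dumpall --globals-only -f {dump_dir}/globals.sql"]
    for db in databases:
        # -Fc (custom format) is what makes pg_restore --jobs possible
        cmds.append(f"pg_dump -Fc {db} -f {dump_dir}/{db}.dump")
        cmds.append(f"pg_restore --jobs={jobs} -d {db} {dump_dir}/{db}.dump")
    return cmds

for c in migration_commands(["appdb", "reports"]):
    print(c)
```

The directory format (-Fd with pg_dump --jobs) would additionally parallelize the dump side.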

Groeten, Arjen



Problem Postgres

2018-06-26 Thread Emanuele Musella
Good morning,

we have the following error:

2018-06-26 09:48:44 CEST ERRORE:  non è stato possibile ottenere
informazioni sul file "base/16395/19739338": Permission denied
[in English: could not get information about file "base/16395/19739338": Permission denied]
2018-06-26 09:48:44 CEST ISTRUZIONE:  select
p.datname,pg_database_size(p.datname) from pg_database p

This error repeats every 20 minutes.

Can you help me?

Regards

Emanuele Musella


Re: Schema/Data conversion opensource tools from MySQL to PostgreSQL

2018-06-26 Thread Thomas Poty
Hi Chiranjeevi,

I know my answer doesn't answer your request, but we have just converted a
database from MySQL to PostgreSQL with about 1000 tables.
We had to convert some functions present in MySQL but not in PostgreSQL,
and we also had to convert queries.
I converted all of that manually, table by table, column by column, etc.

We developed a program that compares structures, a program that migrates
data, and a program that compares data.

All of that to say: I don't think you can convert a database only by running
a tool like that.

I also think it is a bit difficult to suggest a tool without knowing
exactly what is needed.

Regards
Thomas

On Tue, Jun 26, 2018 at 08:25, Arnaud L. wrote:

> Le 25-06-18 à 18:47, chiru r a écrit :
> > Please suggest Schema/Data conversion opensource tools from MySQL to
> > PostgreSQL.
>
> Hi.
> I used this PHP script, which did a pretty good job:
> https://github.com/AnatolyUss/FromMySqlToPostgreSql
>
> --
> Regards
>
>


Re: Problem Postgres

2018-06-26 Thread Andreas Kretschmer




On 26.06.2018 at 10:05, Emanuele Musella wrote:

Good morning,

we have the following error:

2018-06-26 09:48:44 CEST ERRORE:  non è stato possibile ottenere 
informazioni sul file "base/16395/19739338": Permission denied
2018-06-26 09:48:44 CEST ISTRUZIONE:  select 
p.datname,pg_database_size(p.datname) from pg_database p





Have you manually manipulated files below the postgres data directory?
Which OS are you using?


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Problem Postgres

2018-06-26 Thread Thomas Poty
Hello,
Can you try with a superuser?

On Tue, Jun 26, 2018 at 10:06, Emanuele Musella wrote:

> Good morning,
>
> we have the following error:
>
> 2018-06-26 09:48:44 CEST ERRORE:  non è stato possibile ottenere
> informazioni sul file "base/16395/19739338": Permission denied
> 2018-06-26 09:48:44 CEST ISTRUZIONE:  select
> p.datname,pg_database_size(p.datname) from pg_database p
>
> This error repeats every 20 minutes.
>
> Can you help me?
>
> Regards
>
> Emanuele Musella
>


Re: Problem Postgres

2018-06-26 Thread Fabio Pardi
Hi Emanuele,


For next time, it is best to post to an Italian mailing list if your log
messages are in Italian; otherwise, change your settings so that errors are
logged in English.

Back to your problem:

It looks like the permissions on your installation are not OK and have been
changed from the defaults. The fact that it happens every 20 minutes suggests
something 'scheduled' is occurring, e.g. a query (maybe from your monitoring
system) or a checkpoint.


Could you please:

* post the full log around those lines, including the 'ISTRUZIONE' line, and
anything else you think is relevant in the logs

* tell us more about your installation (from package or compiled, which
version of Postgres and OS, and whether you did anything special after
installation)

* post the permissions of the folder 'base' and those of base/16395/19739338

* tell us which user postgres is running as
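A small sketch of collecting the permission details asked for above; the paths are illustrative (substitute your real data directory):

```python
import os
import stat

def describe(path):
    """Return symbolic mode and owner ids, or the errno text if stat fails."""
    try:
        st = os.stat(path)
        return f"{path}: mode={stat.filemode(st.st_mode)} uid={st.st_uid} gid={st.st_gid}"
    except OSError as e:
        return f"{path}: {e.strerror}"

# illustrative paths -- replace with your PGDATA
for p in ("/var/lib/postgresql/10/main/base",
          "/var/lib/postgresql/10/main/base/16395/19739338"):
    print(describe(p))
```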


regards,

fabio pardi



On 26/06/18 10:05, Emanuele Musella wrote:
> Good morning,
>
> we have the following error:
>
> 2018-06-26 09:48:44 CEST ERRORE:  non è stato possibile ottenere informazioni 
> sul file "base/16395/19739338": Permission denied
> 2018-06-26 09:48:44 CEST ISTRUZIONE:  select 
> p.datname,pg_database_size(p.datname) from pg_database p
>
> This error repeats every 20 minutes.
>
> Can you help me?
>
> Regards 
>
> Emanuele Musella



Re: Single query uses all system memory

2018-06-26 Thread Magnus Hagander
On Tue, Jun 26, 2018 at 11:40 AM, Ivar Fredriksen  wrote:

> A single large query is able to spend all the system memory (as reported
> by top), and the oom_reaper kills postgres. See bottom of email for an
> example query and logs.
>
>
>
> Expected behavior would be that postgres is not killed and the query is
> limited to the shared_buffer memory, potentially failing, but not killing
> the postgres process.
>

Then your expectations are completely wrong. shared_buffers has nothing to
do with limiting the memory of a query. shared_buffers sets the size of the
PostgreSQL cache, not the working set. That's controlled by work_mem (see
below).
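The point above can be put into rough numbers: work_mem is a per-sort/hash-node allowance per backend, not a per-query or per-server cap, so the worst case multiplies. A back-of-envelope sketch (the node count is an illustrative assumption):

```python
def worst_case_mb(work_mem_mb, max_connections, nodes_per_query):
    """Each backend may allocate work_mem for every sort/hash node it runs,
    so the theoretical worst case multiplies across all of them."""
    return work_mem_mb * max_connections * nodes_per_query

# e.g. default-ish work_mem of 4MB, 100 connections, 3 sort/hash nodes each
print(worst_case_mb(4, 100, 3), "MB on top of shared_buffers")
```

On a 1GB box with shared_buffers at 256MB, that headroom disappears quickly, which is why a single large query can trigger the OOM killer.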



> Do I have some fundamental flaw in my understanding of this, or is there
> some sort of configuration that should be in place that is missing? The
> offending sample query has been
>
> rewritten to not use joins, and will no longer crash postgres. I am not
> asking anyone to spend a lot of time analyzing the query itself, it is just
> provided as an example for when the problem will occur.
>
> My question is more in a broader sense why one query is eating through all
> system memory, and is there anything I can do to prevent this from
> happening?
>
> We have set shared_buffers to 256MB on a test-system that has 1GB memory.
> Production machines have more resources, but the errors are happening in
> exactly the same way, so I assume (perhaps wrongly) that using the test
>
> system specs and logs might give me the answers I need to figure out what
> is happening.
>
>
>
> Technical details are provided below, a big thanks to anyone who can
> provide any help or insight to this.
>
>
>
> Regards,
>
> Ivar Fredriksen
>
>
>
> PostgreSQL version number:
>
> PostgreSQL 10.4 (Debian 10.4-2.pgdg90+1) on x86_64-pc-linux-gnu, compiled
> by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
>
> Installed with the debian packages for postgres found at: deb
> http://apt.postgresql.org/pub/repos/apt/
>
>
>
> Changes made to the settings in the postgresql.conf file:
>
>  name                          | current_setting                         | source
> -------------------------------+-----------------------------------------+----------------------
>  application_name              | psql                                    | client
>  checkpoint_completion_target  | 0.9                                     | configuration file
>  client_encoding               | UTF8                                    | client
>  cluster_name                  | 10/main                                 | configuration file
>  DateStyle                     | ISO, MDY                                | configuration file
>  default_statistics_target     | 100                                     | configuration file
>  default_text_search_config    | pg_catalog.english                      | configuration file
>  dynamic_shared_memory_type    | posix                                   | configuration file
>  effective_cache_size          | 1536MB                                  | configuration file
>  external_pid_file             | /var/run/postgresql/10-main.pid         | configuration file
>  lc_messages                   | C                                       | configuration file
>  lc_monetary                   | C                                       | configuration file
>  lc_numeric                    | en_US.UTF-8                             | configuration file
>  lc_time                       | C                                       | configuration file
>  listen_addresses              | *                                       | configuration file
>  log_line_prefix               | %m [%p] %q%u@%d                         | configuration file
>  log_timezone                  | UTC                                     | configuration file
>  maintenance_work_mem          | 128MB                                   | configuration file
>  max_connections               | 100                                     | configuration file
>  max_stack_depth               | 2MB                                     | environment variable
>  max_wal_size                  | 2GB                                     | configuration file
>  min_wal_size                  | 1GB                                     | configuration file
>  port                          | 5432                                    | configuration file
>  search_path                   | "$user", public, pg_catalog             | configuration file
>  shared_buffers                | 256MB                                   | configuration file
>  ssl                           | on                                      | configuration file
>  ssl_cert_file                 | /etc/ssl/certs/ssl-cert-snakeoil.pem    | configuration file
>  ssl_key_file                  | /etc/ssl/private/ssl-cert-snakeoil.key  | configuration file
>  standard_conforming_strings

We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread amandeep singh
Hi Team


We have been observing our postgres database for the past few days, and we
found a few queries running three times simultaneously with the same
parameters. I would like to track down how a query ends up running multiple
times.

2018-06-26 15:22:13.620045+05:30 | 00:00:00.198777 | active | 22234 | select * 
from xyz where x_id=$1

2018-06-26 15:22:13.618832+05:30 | 00:00:00.1 | active | 22233 | select * 
from xyz where x_id=$1

2018-06-26 15:22:13.612721+05:30 | 00:00:00.206101 | active | 23927 | select * 
from xyz where x_id=$1
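As a side note, duplicates like the three rows above can be spotted mechanically. A sketch (not from the thread) that buckets pg_stat_activity-style rows by query text and a coarse time window:

```python
from collections import Counter
from datetime import datetime

rows = [  # (query_start, query), taken from the listing above
    ("2018-06-26 15:22:13.620045", "select * from xyz where x_id=$1"),
    ("2018-06-26 15:22:13.618832", "select * from xyz where x_id=$1"),
    ("2018-06-26 15:22:13.612721", "select * from xyz where x_id=$1"),
]

def duplicate_buckets(rows, window_s=1):
    """Count rows sharing the same query text inside a coarse time window."""
    buckets = Counter()
    for ts, q in rows:
        t = datetime.fromisoformat(ts)
        buckets[(q, int(t.timestamp()) // window_s)] += 1
    return {k: n for k, n in buckets.items() if n > 1}

for (q, _), n in duplicate_buckets(rows).items():
    print(f"{n} concurrent copies of: {q}")
```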

Our application is developed in java struts 1.3 and hibernate as ORM and tomcat 
8.5 for hosting.

PostgreSQL version: 10.3

Kindly suggest.

Thanks and Regards:
Amandeep Singh


Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Andreas Kretschmer




On 26.06.2018 at 12:19, amandeep singh wrote:
We have been observing our postgres database from past few days,We 
found few queries running three times simultaneously with same 
parameters.I would like to back track how a query is running multiple 
times.


They are independent of each other. You can run those queries with EXPLAIN
ANALYSE to see the execution plan.


Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Single query uses all system memory

2018-06-26 Thread Fabio Pardi
Hi Ivar,

I agree with Magnus. In addition, I would also change:

 effective_cache_size

and

 maintenance_work_mem

Then disable the OOM killer, change your logging so that temp files are logged
(to see if your work_mem is too low and queries spill to disk), rethink your
data structure and, overall, have a look at this page, which I find a good
starting point for proper tuning:

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
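For the "disable the OOM killer" step, on Linux this usually means switching to strict overcommit. A sketch of the sysctl values commonly recommended for dedicated Postgres hosts (the ratio is illustrative; this only prints the settings, it does not apply them):

```python
# Strict overcommit: the kernel refuses allocations it cannot back,
# so Postgres gets a malloc failure instead of a SIGKILL from the OOM killer.
sysctl = {
    "vm.overcommit_memory": 2,   # strict accounting mode
    "vm.overcommit_ratio": 80,   # illustrative; tune for your RAM/swap
}
for key, value in sysctl.items():
    print(f"{key} = {value}")   # lines for /etc/sysctl.conf
```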


regards,

fabio pardi


On 26/06/18 12:13, Magnus Hagander wrote:
> [snip - full quote of the exchange above]

Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Saurabh Agrawal
>
>
> On 26.06.2018 at 12:19, amandeep singh wrote:
>
>> We have been observing our postgres database from past few days,We found
>> few queries running three times simultaneously with same parameters.I would
>> like to back track how a query is running multiple times.
>
>
Can you check the requests made by your application/ORM? It looks like the
application is making multiple requests, rather than something happening on
the database.


Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Edson Carlos Ericksson Richter

On 26/06/2018 08:49, Saurabh Agrawal wrote:



On 26.06.2018 at 12:19, amandeep singh wrote:

We have been observing our postgres database from past few
days,We found few queries running three times simultaneously
with same parameters.I would like to back track how a query is
running multiple times.


Can you check the requests made by your application/ ORM? This looks 
like application is making multiple requests, rather than something 
happening on the database?




I agree with Saurabh Agrawal.
Sometimes users just double- (or triple-) click a form button, and then the
resulting activity is executed more than once...
What I do is deactivate buttons after the first click. This is even more
noticeable when working on intranet apps or over high-speed internet
connections.

Users sometimes need to be educated that one click is enough :-)

Regards,

Edson.



Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Andreas Kretschmer
On 26 June 2018 12:32:44 CEST, Andreas Kretschmer  
wrote:
>
>
>On 26.06.2018 at 12:19, amandeep singh wrote:
>> We have been observing our postgres database from past few days,We 
>> found few queries running three times simultaneously with same 
>> parameters.I would like to back track how a query is running multiple
>
>> times.
>
>they are independend each other. You can run those queries with EXPLAIN
>
>ANALYSE to see the execution plan.
>
>Regards, Andreas

Seems like I misunderstood your query. Are you sure that the parameter $1 is
equal across all the different connections?

Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Re: Problem Postgres

2018-06-26 Thread Enrico Pirozzi

Hi Emanuele,

you can post here in Italian :)

http://lists.psql.it/mailman/listinfo

Regards
Enrico

On 26/06/2018 10:45, Fabio Pardi wrote:
> [snip - full quote of Fabio's message above]




--
Enrico Pirozzi
NBS Group S.r.l.
via Val Tiberina 23/A
63074, San Benedetto del Tronto (AP)
Tel. 0735-7626201





Re: Using COPY to import large xml file

2018-06-26 Thread Anto Aravinth
Thanks a lot, everyone. After playing around with a small dataset, I was able
to produce datasets that are easy to load with COPY. Creating datasets of
around 50GB took about 2 hrs (I can definitely improve on this).

54M records, COPY took around 35 minutes! Awesome.. :) :)

Meanwhile, I understood a few things like vacuum etc.

Really loving postgres!

Thanks,
Anto.

On Tue, Jun 26, 2018 at 3:40 AM, Tim Cross  wrote:

>
> Anto Aravinth  writes:
>
> > Thanks a lot. But I ran into a lot of challenges! It looks like the SO data
> > contains a lot of tabs within itself, so a tab delimiter didn't work for
> > me. I thought I could give a special delimiter, but it looks like
> > PostgreSQL COPY allows only one character as the delimiter :(
> >
> > Sad, I guess only way is to insert or do a through serialization of my
> data
> > into something that COPY can understand.
> >
>
> The COPY command has a number of options, including setting what is used
> as the delimiter - it doesn't have to be tab. You need to also look at
> the logs/output to see exactly why the copy fails.
>
> I'd recommend first pre-processing your input data to make sure it is
> 'clean' and all the fields actually match with whatever DDL you have
> used to define your db tables etc. I'd then select a small subset and
> try different parameters to the copy command until you get the right
> combination of data format and copy definition.
>
> It may take some effort to get the right combination, but the result is
> probably worth it given your data set size i.e. difference between hours
> and days.
>
> --
> Tim Cross
>
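A small sketch of the suggestion above: probe delimiter candidates on a sample first, or side-step the single-character limit entirely with CSV format. The sample data is made up:

```python
import csv
import io

# two (id, body) records; one has embedded tabs, one an embedded pipe
sample = [("42", "body\twith\ttabs"), ("43", "a|b plain")]

# Probe single-character delimiter candidates (COPY accepts exactly one char);
# 0x1F is the ASCII "unit separator", rarely present in real text.
for d in ("\t", "|", "\x1f"):
    used = any(d in field for row in sample for field in row)
    print(f"delimiter {d!r}: {'unusable' if used else 'ok'}")

# Alternative: emit CSV and load with COPY ... WITH (FORMAT csv);
# with a comma delimiter, the embedded tabs need no escaping at all.
buf = io.StringIO()
csv.writer(buf).writerows(sample)
print(buf.getvalue(), end="")
```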


Re: Schema/Data conversion opensource tools from MySQL to PostgreSQL

2018-06-26 Thread chiru r
Thank you for your inputs...

I am looking for a tool to reduce the manual effort in the migration. I
understand manual effort is needed to migrate 100% to PostgreSQL.

On Tue, Jun 26, 2018 at 4:08 AM, Thomas Poty wrote:
> [snip - full quote of Thomas's message above]


Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread amandeep singh
Hi Andreas

The value for $1 is the same in all queries.

@Edson: the number of such running queries is always 3.




From: Andreas Kretschmer
Sent: Tuesday, 26 June, 6:04 PM
Subject: Re: We find few queries running three times simultaneously with same 
parameters on postgres db
To: pgsql-general@lists.postgresql.org


On 26 June 2018 12:32:44 CEST, Andreas Kretschmer wrote:
> [snip - quoted message trimmed]



Re: We find few queries running three times simultaneously with same parameters on postgres db

2018-06-26 Thread Melvin Davidson
On Tue, Jun 26, 2018 at 1:26 PM, amandeep singh  wrote:

> Hi Andreas
>
> The value for $1 is same in all queries.
>
> @Edson: number of such running queries are always  3.
>
>
>
>
> From: Andreas Kretschmer
> Sent: Tuesday, 26 June, 6:04 PM
> Subject: Re: We find few queries running three times simultaneously with
> same parameters on postgres db
> To: pgsql-general@lists.postgresql.org
>
>
> On 26 June 2018 12:32:44 CEST, Andreas Kretschmer wrote:
> [snip - quoted message trimmed]
>
>
This query will show you the pid, client_addr and user that submitted the
queries, in addition to other useful information.

SELECT backend_start as be_start,
   datname,
   pid as pid,
   client_addr,
   usename as "user",
   state,
   query,
   wait_event_type,
   query_start,
   current_timestamp - query_start as duration
  FROM pg_stat_activity
 WHERE pg_backend_pid() <> pid
ORDER BY 1,
 datname,
 query_start;


-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


CTE optimization fence

2018-06-26 Thread Guy Burgess

Hello,
I am running into performance issues with large CTE "WITH" queries (just
for selecting, not updating). I was surprised to find that the queries
run much faster if I convert the CTEs to subqueries. From googling, I
see that this is because a CTE acts as an optimization fence in PG.
Unfortunately, due to the application I'm dealing with, converting all
CTE queries to the subquery form is not feasible. Plus, the readability
of CTEs is a big bonus.


I see there was some discussion last year about removing the CTE 
optimization fence (e.g. 
http://www.postgresql-archive.org/CTE-inlining-td5958992.html) but can't 
find anything more recent. Does anyone know if this is still under 
consideration? For what it's worth, I would love some way to make CTEs 
inlined/optimized.


Thank you very much to the developers for a truly amazing database system.

Thanks
Guy




Re: CTE optimization fence

2018-06-26 Thread Tom Lane
Guy Burgess  writes:
> I see there was some discussion last year about removing the CTE 
> optimization fence (e.g. 
> http://www.postgresql-archive.org/CTE-inlining-td5958992.html) but can't 
> find anything more recent. Does anyone know if this is still under 
> consideration?

Nothing's actually happened since then ... it's still under consideration,
but we have to settle on a way of controlling it.

regards, tom lane



Re: CTE optimization fence

2018-06-26 Thread Thomas Kellerer
Tom Lane wrote on 27.06.2018 at 05:48:
>> I see there was some discussion last year about removing the CTE 
>> optimization fence (e.g. 
>> http://www.postgresql-archive.org/CTE-inlining-td5958992.html) but can't 
>> find anything more recent. Does anyone know if this is still under 
>> consideration?
> 
> but we have to settle on a way of controlling it.

+1 from me. 

I am running into more and more situations where people consider this a bug
rather than a feature.

FWIW, I think a GUC that switches between the current (mostly unwanted, or at
least surprising) behaviour and one where the CTE is optimized together with
the main query would suit "most" people.

For the sake of compatibility, this could default to the current behaviour.



Re: CTE optimization fence

2018-06-26 Thread Rob Sargent
I don’t think the fence analogy works. Try wall (a la Berlin).
I count myself amongst those who thought “Ooh, this little CTE will garner
just what the rest of the query needs”, only to find the planner didn’t grok
that optimization.

Is it a matter of deciding to trust the SQLer and runtime-analyzing the CTE
product before continuing? As an SQLer I have doubts about that precept, but
without _some_ change in attack, CTEs approach fluff.

> On Jun 26, 2018, at 11:45 PM, Thomas Kellerer  wrote:
> 
> Tom Lane wrote on 27.06.2018 at 05:48:
>>> I see there was some discussion last year about removing the CTE 
>>> optimization fence (e.g. 
>>> http://www.postgresql-archive.org/CTE-inlining-td5958992.html) but can't 
>>> find anything more recent. Does anyone know if this is still under 
>>> consideration?
>> 
>> but we have to settle on a way of controlling it.
> 
> +1 from me. 
> 
> I am running more and more into situations where people consider this a bug 
> rather than a feature.
> 
> FWIW, I think a GUC that switches between the current (mostly unwanted, at 
> least surprising) 
> way and one where the CTE is optimized together with the main query would 
> suit "most" people.
> 
> For sake of compatibility this could default to the current behaviour
>