Re: unsubscribe

2017-11-20 Thread Ryan
Hi,

All right, so not for lack of trying, I cannot figure out how to
unsubscribe. I've tried three different things, but they've either been
ineffective or result in me getting an automatic email that the attempt
failed. Kindly help? I enjoyed my daily digest of PostgreSQL messages, but
this has been a bit difficult to manage.

Best regards.

On Mon, Nov 20, 2017 at 9:37 PM, Danyelle Davis  wrote:

>
> On Mon, Nov 20, 2017 at 9:29 PM, Amila Jayasooriya <
> amilajayasoor...@gmail.com> wrote:
>
>> Please unsubscribe me from the list.
>>
>> Thanks and Regards
>>  Amila Jayasooriya
>>
>> On Tue, Nov 21, 2017 at 3:48 AM, Ibram Remzi 
>> wrote:
>>
>>> Please unsubscribe me from the list.
>>>
>>> Please look at the listserv for options on how to unsub.  Yall are
> clogging the list with these.
>
>
>
>


Re: unsubscribe

2017-11-20 Thread Ryan
So my searches on the web all seemed to be somehow connecting me with the
old majordomo2 system. I think that may have been the attempt that got me a
"failed" automatic email back. One other attempt, I couldn't find a list
with exactly this name and tried the pgus-general unsubscribe, which
failed. The third was something yet more convoluted. My email headers do
not contain the string "List-Unsubscribe", at least on this message.
Regarding a link, I don't know which previous email has any such link. A
couple of days ago, I was getting a daily digest. Tonight I came home to
check email and see many dozens of messages. I have no idea what to look
for where. I understand and sympathize with the frustration you all must
feel with me. I'm sure you put crystal clear instructions somewhere about
how to handle this, and if I were a more responsible person with fewer
things to do on a daily basis, I would and should have paid more careful
attention.

On Mon, Nov 20, 2017 at 10:42 PM, Danyelle Davis 
wrote:

> have you tried the link provided in the previous email?
>
> On Mon, Nov 20, 2017 at 10:16 PM, Ryan  wrote:
>
>> Hi,
>>
>> All right, so not for lack of trying, I cannot figure out how to
>> unsubscribe. I've tried three different things, but they've either been
>> ineffective or result in me getting an automatic email that the attempt
>> failed. Kindly help? I enjoyed my daily digest of PostgreSQL messages, but
>> this has been a bit difficult to manage.
>>
>> Best regards.
>>
>>
>> On Mon, Nov 20, 2017 at 9:37 PM, Danyelle Davis 
>> wrote:
>>
>>>
>>> On Mon, Nov 20, 2017 at 9:29 PM, Amila Jayasooriya <
>>> amilajayasoor...@gmail.com> wrote:
>>>
>>>> Please unsubscribe me from the list.
>>>>
>>>> Thanks and Regards
>>>>  Amila Jayasooriya
>>>>
>>>> On Tue, Nov 21, 2017 at 3:48 AM, Ibram Remzi 
>>>> wrote:
>>>>
>>>>> Please unsubscribe me from the list.
>>>>>
>>>>> Please look at the listserv for options on how to unsub.  Yall are
>>> clogging the list with these.
>>>
>>>
>>>
>>>
>>
>>
>


Re: unsubscribe

2017-11-20 Thread Ryan
Great, thanks. At risk of provoking the ire of all on the listserv with one
final message before I sign out, PostgreSQL rocks. I love the
documentation, the support, the community, the software, the C back-end,
the features (really wish for 8-bit native integers, though, because it
makes a big storage difference even after TOAST in use cases with large
arrays; TINYINT extension broken in PG10).

In short: to all of you who make it what it is, thank you!

On Mon, Nov 20, 2017 at 10:57 PM, Danyelle Davis 
wrote:

> https://lists.postgresql.org/manage/
>
>
>
> On Mon, Nov 20, 2017 at 10:47 PM, Ryan  wrote:
>
>> So my searches on the web all seemed to be somehow connecting me with the
>> old majordomo2 system. I think that may have been the attempt that got me a
>> "failed" automatic email back. One other attempt, I couldn't find a list
>> with exactly this name and tried the pgus-general unsubscribe, which
>> failed. The third was something yet more convoluted. My email headers do
>> not contain the string "List-Unsubscribe", at least on this message.
>> Regarding a link, I don't know which previous email has any such link. A
>> couple of days ago, I was getting a daily digest. Tonight I came home to
>> check email and see many dozens of messages. I have no idea what to look
>> for where. I understand and sympathize with the frustration you all must
>> feel with me. I'm sure you put crystal clear instructions somewhere about
>> how to handle this, and if I were a more responsible person with fewer
>> things to do on a daily basis, I would and should have paid more careful
>> attention.
>>
>> On Mon, Nov 20, 2017 at 10:42 PM, Danyelle Davis 
>> wrote:
>>
>>> have you tried the link provided in the previous email?
>>>
>>> On Mon, Nov 20, 2017 at 10:16 PM, Ryan  wrote:
>>>
>>>> Hi,
>>>>
>>>> All right, so not for lack of trying, I cannot figure out how to
>>>> unsubscribe. I've tried three different things, but they've either been
>>>> ineffective or result in me getting an automatic email that the attempt
>>>> failed. Kindly help? I enjoyed my daily digest of PostgreSQL messages, but
>>>> this has been a bit difficult to manage.
>>>>
>>>> Best regards.
>>>>
>>>>
>>>> On Mon, Nov 20, 2017 at 9:37 PM, Danyelle Davis 
>>>> wrote:
>>>>
>>>>>
>>>>> On Mon, Nov 20, 2017 at 9:29 PM, Amila Jayasooriya <
>>>>> amilajayasoor...@gmail.com> wrote:
>>>>>
>>>>>> Please unsubscribe me from the list.
>>>>>>
>>>>>> Thanks and Regards
>>>>>>  Amila Jayasooriya
>>>>>>
>>>>>> On Tue, Nov 21, 2017 at 3:48 AM, Ibram Remzi 
>>>>>> wrote:
>>>>>>
>>>>>>> Please unsubscribe me from the list.
>>>>>>>
>>>>>>> Please look at the listserv for options on how to unsub.  Yall are
>>>>> clogging the list with these.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>


Re: unsubscribe

2017-11-20 Thread Ryan
Hmm, yes, indeed. I tried to log in to that site with my listserv account
credentials, and that got me nowhere either.

Also, I just checked this message too: I downloaded the raw message including
headers and searched in a text editor for the string "List-Unsubscribe";
the only thing at all like it is the text you wrote in your email, Tom.
I'm definitely not getting it... I thought maybe I had set these messages
to forward through my work email account, but I just verified that there is
no forwarding. I subscribed under the address at which I'm reading them.

Ryan

On Mon, Nov 20, 2017 at 11:07 PM, Tom Lane  wrote:

> Danyelle Davis  writes:
> > https://lists.postgresql.org/manage/
>
> But note that that's only going to help if you've set up a "PG community
> account", which up to now nobody has needed unless they wanted to edit
> our wiki or work in our commitfest app, so it's a good bet that most
> subscribers to our lists haven't got one.  The List-Unsubscribe link
> is supposed to work without needing a community account.  It disturbs
> me that Ryan says he's not seeing one in traffic from the list.
> I certainly am getting one in all the mail I get from the new server.
>
> I wonder whether there are complicating factors Ryan hasn't mentioned,
> like maybe his traffic is getting forwarded from some other account
> on some other email provider.  If so, maybe the List-Unsubscribe
> header is getting dropped in forwarding?  That's a reach, but it's
> hard to explain otherwise.
>
> regards, tom lane
>


Is there a way to create a functional index that takes the tableoid column as an arg?

2018-03-10 Thread Ryan Murphy
Hello Postgressers,

I am using table inheritance and have e.g. the following tables:

create table animal ( ... );
create table dog ( ... ) inherits (animal);
create table cat ( ... ) inherits (animal);
create table person ( ... ) inherits (animal);
create table musician ( ... ) inherits (person);
create table politician ( ... ) inherits (person);

Now I have a query that gets all the "animal"'s except for those that are
"person"'s.

select * from only animal

won't cut it, because it leaves out the dogs and cats.

select *, tableoid::regclass as relname from animal
where tableoid::regclass != 'person'::regclass

also won't cut it because it leaves out the musicians and politicians.

So I have created an immutable function is_a_kind_of(tbl regclass,
parent_tbl regclass) that returns true iff tbl is identical with, or
directly or indirectly inherits from, parent_tbl.  For example:

is_a_kind_of('person','person') => true
is_a_kind_of('person','animal') => true
is_a_kind_of('musician','person') => true
is_a_kind_of('animal','person') => false
is_a_kind_of('dog','person') => false
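
(For readers following along, a rough sketch of how such a helper could be
written - this is an assumption, not necessarily the original implementation.
It walks pg_inherits upward from the given table; it is declared stable here
because it reads the catalogs, though it must be marked immutable to be
usable in an index expression:)

create or replace function is_a_kind_of(tbl regclass, parent_tbl regclass)
returns boolean
language sql stable as $$
    with recursive ancestors(relid) as (
        select tbl                       -- start with the table itself
        union all
        select i.inhparent::regclass     -- then climb to each parent in turn
        from pg_inherits i
        join ancestors a on i.inhrelid = a.relid
    )
    select exists (select 1 from ancestors where relid = parent_tbl);
$$;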

No problems so far.  Now my query works:

select *, tableoid from "animal"
where not is_a_kind_of(tableoid::regclass, 'person')

This query is somewhat slow though - I'd like to index the is_a_kind_of()
call.  And Postgres supports functional indexes!  So I try:

create index animal_is_person on animal (
is_a_kind_of(tableoid::regclass, 'person') );

    ERROR:  index creation on system columns is not supported

I see that this is because "tableoid" is a system column. Does anyone know
any workaround for this?  So close yet so far away!

Thanks!
Ryan


Re: Is there a way to create a functional index that takes the tableoid column as an arg?

2018-03-13 Thread Ryan Murphy
Hi David!  Thanks for the reply.

> tableoid might be an exception to
> this, but it does not really seem like a useful column to index,
> giving it would be indexing the same value for each record in the
> table.

Unless you're using inheritance - then tableoid may vary.  That's the case
I'm interested in.


> -- get all animals that are persons
> select ... from animal where tableoid in (select
> get_inherited_tables('person'::regclass));
>
> -- get all animals that are not persons
> select ... from animal where tableoid not in (select
> get_inherited_tables('person'::regclass));
>
>
That's a great idea.  I'll try it!
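
(The get_inherited_tables function suggested above is not shown in the quoted
message; a minimal sketch of one way it might look - an assumption - walking
pg_inherits downward from the given parent and including the parent itself:)

create or replace function get_inherited_tables(parent regclass)
returns setof regclass
language sql stable as $$
    with recursive kids(relid) as (
        select parent                    -- include the parent itself
        union all
        select i.inhrelid::regclass      -- plus every direct and indirect child
        from pg_inherits i
        join kids k on i.inhparent = k.relid
    )
    select relid from kids;
$$;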


> Just be careful around search_paths and your use of regclass. In this
> case, if "animal" was not in the first schema in search_path, but
> someone created another table called "person" that was in the first
> schema listed in search_path, then the query would not do what you
> want. You might want to consider prefixing the input parameter into
> get_inherited_tables with the schema name too.
>

Good point.

Thanks again!
Ryan


Can you make a simple view non-updatable?

2018-06-08 Thread Ryan Murphy
Hello.

I enjoy using VIEWs.  Often my views are updatable, either automatically
(due to being a simple 1-table view, or due to a TRIGGER).  Sometimes they
are meant to be just read-only.

Is there any way to set a VIEW to be read-only -- specifically, can I do
this for a view that is automatically updatable due to being simple?

The reason I want this:  It will help me encode into my schema the
distinction between views that are supposed to behave like full-fledged
"subtypes" of a larger relation and need to be updatable, vs those that are
merely a report / literally just a "view".

Thanks!
Ryan


Re: Can you make a simple view non-updatable?

2018-06-08 Thread Ryan Murphy
> maybe it is time to overhaul the security concept.
>

I could see how I could revoke permissions from, say, all users that aren't
superusers to INSERT or UPDATE certain views.  However, if possible it
would be nice to get an error message about the VIEW not being updatable,
rather than a user access error, which could be misleading.

When I try to insert into a non-updatable VIEW, I get this message:

ERROR:  cannot insert into view "test_view"
DETAIL:  Views containing GROUP BY are not automatically updatable.
HINT:  To enable inserting into the view, provide an INSTEAD OF INSERT
trigger or an unconditional ON INSERT DO INSTEAD rule.

It would be great to see something like this when trying to insert into a
simple VIEW that I had made non-updatable:

ERROR:  cannot insert into view "test_view2"
DETAIL:  This view has manually been made non-updatable.
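
(One way to approximate that today - a sketch, with test_view2 and the message
text as illustrative names - is an INSTEAD OF trigger whose function simply
raises an error:)

CREATE FUNCTION view_is_read_only() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    RAISE EXCEPTION 'cannot modify view "%"', TG_TABLE_NAME
        USING DETAIL = 'This view has manually been made non-updatable.';
END;
$$;

CREATE TRIGGER test_view2_read_only
    INSTEAD OF INSERT OR UPDATE OR DELETE ON test_view2
    FOR EACH ROW EXECUTE PROCEDURE view_is_read_only();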


Re: Can you make a simple view non-updatable?

2018-06-13 Thread Ryan Murphy
On Fri, Jun 8, 2018 at 8:27 AM, Adrian Klaver 
wrote:

>
> Using INSTEAD OF trigger?:
>

Yep, that's one way.


Re: pg_dump and search_path

2019-07-10 Thread Ryan Lambert
I had a similar problem and was able to begin the command with the
search_path to work around it.  I did this on Linux and it looks like you
are on Windows, but maybe you can do something similar that will work?

PGOPTIONS='-c search_path=staging, transient, pg_catalog'


*Ryan Lambert*
RustProof Labs


>


Re: pg_dump and search_path

2019-07-10 Thread Ryan Lambert
My exact situation was a deployment via sqitch.  It appears that sqitch uses
psql under the hood, based on the error message I get.

Running just "sqitch deploy" I  get an error due to a non-fully qualified
name and a missing search path (my mistakes).  The error I get:

 + 004 .. psql:deploy/004.sql:72: ERROR:  relation "vobservations" does not
exist
LINE 11:FROM vobservations
 ^
not ok
"psql" unexpectedly returned exit value 3

Reverting all changes


Running the following works for me in this case and allows it to find the
view in the proper schema.

PGOPTIONS='-c search_path=piws,public' sqitch deploy
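
(If the search_path should apply to every connection rather than a single
command, it can also be set at the database or role level; a sketch, with
mydb and deploy_user as placeholder names:)

ALTER DATABASE mydb SET search_path = piws, public;
-- or only for the role that runs the deployments:
ALTER ROLE deploy_user IN DATABASE mydb SET search_path = piws, public;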

Ryan


Re: migrating from Oracle to PostgreSQL 11

2019-07-11 Thread Ryan Lambert
On Thu, Jul 11, 2019 at 1:28 AM Hitesh Chadda 
wrote:

> Hello,
> The target is PostgreSQL 10.1. I would like to know possible approach that
> can be followed for doing the migration.
>
> Regards
> H.kumar
>
>
10.9 is the current release in Pg10 and contains multiple security fixes
over 10.1. There is no good reason to start a new project on an outdated
minor release with known security issues.
https://why-upgrade.depesz.com/show?from=10.1&to=10.9&keywords=

As Gavin asked, why not start on Pg11 with 11.4 being the current release?

Ryan Lambert


Hash partitioning, what function is used to compute the hash?

2020-05-11 Thread Dennis Ryan
Regarding hash partitioning, what is the function/algorithm that is used to 
compute the hash for the partition key?  I need to write a query like

"SELECT unknown_partition_hash_function(id) AS hash_value, COUNT(id) AS number_of_records
FROM existing_table
GROUP BY 1"
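
(Internally, each hash partition's constraint is enforced with the built-in
satisfies_hash_partition(parent_oid, modulus, remainder, key_value ...), which
combines the per-type extended hash support functions with a fixed seed. If
existing_table is already declared PARTITION BY HASH with its partitions
attached, one way to see the distribution without knowing the hash function
at all is to group by the partition each row actually landed in - a sketch:)

SELECT tableoid::regclass AS partition_name, COUNT(id) AS number_of_records
FROM existing_table
GROUP BY 1;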





PGSQL Phriday #001 - Two truths and a lie about PostgreSQL

2022-10-04 Thread Ryan Booz
Hello everyone!

Many months ago at PGConf NYC 2021 and on the pgsql-advocacy email list, I
talked about starting a monthly blogging event for the PostgreSQL
community. Two weeks ago I wrote a blog post explaining how the monthly
event would work and introducing some of the first community members who will
host the first six-plus months. (https://www.softwareandbooz.com/introducing-psql-phriday/)

I'm pleased to share here that the first invite has been posted on my blog
and I'd be thrilled to have anyone contribute a blog post to the initiative
this Friday, October 7. The "rules" for contributing a post are outlined in
the invite, but please feel free to reach out to me if you have any
questions.

https://www.softwareandbooz.com/pgsql-phriday-001-invite/

Regards,
Ryan Booz


Autovacuum on Partitioned Tables

2022-10-31 Thread Ryan Ruenroeng
Hello there!

There is a statement in the Postgres Docs: "Partitioned tables are not
processed by autovacuum."

What does the above statement mean?
Does autovacuum not process both the parent and the child tables in a
partition relationship?

What is the definition of a partitioned table?

I have a table with 50+ million rows that gets data added to/wiped from it
every 90 days. We are planning to break this table into a few thousand
partitions. More partitions will likely be added in the future, but we
don't have plans to delete any of the partitions. Will we need to manually
track the statistics of these partitions and manually vacuum the tables or
will autovacuum help to manage them?

Best,
Ryan

Ryan N Ruenroeng (He/His)
(717) 578-3929
| rruenro...@gmail.com
| Madison, WI <https://maps.google.com/?q=Madison,%20WI>
<https://github.com/rruenroeng> <https://www.facebook.com/ryan.ruenroeng>
<https://www.linkedin.com/in/ryan-ruenroeng>


Re: Autovacuum on Partitioned Tables

2022-11-01 Thread Ryan Ruenroeng
Thanks, all, for your responses. We have a couple of tables, each with
50-70 million rows currently, but they are expected to grow. Partitioning seems
to be a better long-term strategy; queries to these tables, even using their
existing indexes, leave them basically unusable (long run times).

FYI, we are currently on PG v11.16

Let's assume we do go with my current proposal because it is in line with
how we are planning to drop partitions that stop seeing activity in the
future:

[snip] from Ron's post above:
Autovacuum will handle it.

I still have some doubts based on this:

"Tuples changed in partitions and inheritance children do not trigger
analyze on the parent table. If the parent table is empty or rarely
changed, it may never be processed by autovacuum, and the statistics
for the inheritance tree as a whole won't be collected. It is
necessary to run ANALYZE* on the parent table manually in order to
keep the statistics up to date." *[Link
<https://www.postgresql.org/docs/15/routine-vacuuming.html>]

Q1: Will we at least need to call Analyze via a cron job on the parent
table to ensure that the statistics are up to date for autovacuum to
catch the tables?
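
(One way to check whether the parent is getting analyzed at all, and when, is
its statistics entry - a sketch, with parent_table as a placeholder name:)

SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'parent_table';

On v11, a periodic manual ANALYZE of the parent (e.g. from cron) matches the
documentation passage quoted above.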

From reading the documentation that a few of you have pointed me to,
I'm led to believe that the parent table is the "Partition" table. The
child tables are treated by autovacuum as regular tables.

Q2: Autovacuum will act on the partitions/children of the parent
table. Is that a correct statement?

--
It's good to know that the query optimizer will improve with partitions on
versions 12+. Thank you.

Best,
Ryan

Ryan N Ruenroeng (He/His)
(717) 578-3929
| rruenro...@gmail.com
| Madison, WI <https://maps.google.com/?q=Madison,%20WI>
<https://github.com/rruenroeng> <https://www.facebook.com/ryan.ruenroeng>
<https://www.linkedin.com/in/ryan-ruenroeng>


On Tue, Nov 1, 2022 at 2:54 AM Ron  wrote:

> On 10/31/22 23:05, Tom Lane wrote:
> [snip]
> > TBH, if you've got 50m rows, I'm not sure you need partitions at all.
>
> Big rows (i.e. document storage tables with bytea or xml fields) can make
> databases explode in size even with only 50M rows.
>
> (Yes, I know the arguments against it, but it works quite well when the
> database is in a cloud instance.  Worries about backup times, at least,
> are
> eliminated.)
>
> --
> Angular momentum makes the world go 'round.
>
>
>


Hi All,

2023-02-21 Thread Ryan MYJ
Currently I'm working on upgrading a PostgreSQL database from 9.3 to 12. The
database size is around 700G.
I finished the pg_upgrade process, but when I ran 'analyze_new_cluster.sh'
it got stuck at the first log line 'vacuumdb: processing database "otdb":
Generating minimal optimizer statistics (1 target)'.

Then I checked the active queries; the following query has been running for
several hours:
   pid   |  db  |             start             |       lap       |            query
---------+------+-------------------------------+-----------------+------------------------------
 1822933 | otdb | 2023-02-18 03:02:36.035755+08 | 04:56:41.408329 | ANALYZE pg_catalog.pg_class;

Do you know what I can do to speed up this process?  On another
testing environment (around 300G of data), this 'analyze_new_cluster.sh' only
took several minutes to complete.

Thanks a lot.


Re: Import csv to temp table

2024-01-04 Thread Ryan Kelly
I use csv2table almost every day: https://github.com/f0rk/csv2table

to just emit a create table statement: csv2table --file your_csv.csv

pipe output to psql to create.

easily used to import data as: csv2table --file your_csv.csv --copy
--backslash -1 | psql your_database

use arguments like --timestamp to automagically detect types.

On Fri, Jan 5, 2024 at 12:45 AM Paolo Saudin  wrote:

> Il giorno mar 2 gen 2024 alle ore 21:17 Adrian Klaver <
> adrian.kla...@aklaver.com> ha scritto:
>
>> On 1/2/24 11:47, arun chirappurath wrote:
>>
>> Reply to list
>> Ccing list
>> > Hi Adrian,
>> >
>> > Love this tool..however it doesn't like supporting RDS.
>>
>> 1) This was Daniel Vérité's suggestion not mine.
>>
>> 2) Define "... doesn't like supporting RDS".
>>
>> a) You can generate an SQL statement without connecting to the database.
>> Then use that statement directly in the database.
>>
>> b) If SQLAlchemy can reach the database then csvkit should be able to.
>>
>> >
>> > https://csvkit.readthedocs.io/en/latest/
>> > 
>> >
>> > Regards
>> > Arun
>> >
>>
>>
>> --
>> Adrian Klaver
>> adrian.kla...@aklaver.com
>>
>>
>>
> You can use pgloader (https://pgloader.io/) as well to load data from
> CSV to Postgres
> Paolo
>


Re: Faster distinct query?

2021-09-22 Thread Ryan Booz
[Timescale Dev Advocate here]
I realize this might not be the most accepted answer (could be interpreted
as trying to "sell" something), but feels like an opportunity to talk about
DISTINCT queries and opportunities. Because you have that index,
Timescale 2.3 added a "Skip Scan" query planner node that works on regular
BTree indexes (it doesn't have to be time-series/TimescaleDB Hypertable
data at all). In this case, your distinct query would likely run in a few
milliseconds based on the counts you mention (170 stations, 3 channels per
station), and then the outer aggregation would do the GROUP BY. So, you
**could** add the TimescaleDB extension to your database (or a copy of) and
give it a try. You don't actually need to use any TimescaleDB features
otherwise.


   - A writeup of why this kind of DISTINCT query is slow in PostgreSQL
   (for now) and what we did to overcome it:
   
https://blog.timescale.com/blog/how-we-made-distinct-queries-up-to-8000x-faster-on-postgresql/
   - Plans for a similar feature in PostgreSQL proper that we'd totally
   support but hasn't made forward progress yet:
   https://commitfest.postgresql.org/19/1741/

Anyway, it might be worth a shot. HTH

Ryan B

On Wed, Sep 22, 2021 at 4:27 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Sep 22, 2021 at 1:21 PM Michael Lewis  wrote:
>
>> In the future, please share the plan returned by explain analyze, and
>> some data about how many rows in the involved tables,
>>
>
> I believe we consider it acceptable to link to an explain viewer, which is
> what the OP did.  Reading explain output in email has its own challenges,
> and I'd rather have the website than a text attachment.
>
>
>> How does the below work? It should do a very simple index scan only, then
>> aggregate the relative few rows after the fact.
>>
>> select station, array_agg(distinct(channel)) as channels
>> FROM(
>> SELECT station,channel FROM data GROUP BY station,channel
>> ) AS sub
>> group by station;
>>
>
> Yeah, am pondering this too, though seems like the queries should be
> identical so the plan/execution should be the same either way.
>
>
>> If there is correlation between station & channel, then you might look at
>> creating a multivariate statistics object and analyzing the table so the
>> planner can make better choices
>>
>
> There is no where clause so I'm doubtful there is much to be gained going
> down this path.  The Index-Only scan seems like an optimal way to obtain
> this data and the existing query already does that.  The aggregation path
> might vary though it seems like that shouldn't be the case here.
>
> David J.
>


Re: Faster distinct query?

2021-09-22 Thread Ryan Booz
Cool. I'd be interested to see the explain on it if you ever try it again.
On that cardinality, I'd expect it to be really fast, so I'm interested to
see if the (SkipScan) nodes were actually used.

On Wed, Sep 22, 2021 at 5:35 PM Israel Brewster 
wrote:

>
> On Sep 22, 2021, at 12:49 PM, Ryan Booz  wrote:
>
> [Timescale Dev Advocate here]
> I realize this might not be the most accepted answer (could be interpreted
> as trying to "sell" something), but feels like an opportunity to talk about
> DISTINCT queries and opportunities. Because you have that index,
> Timescale 2.3 added a "Skip Scan" query planner node that works on regular
> BTree indexes (it doesn't have to be time-series/TimescaleDB Hypertable
> data at all). In this case, your distinct query would likely run in a few
> milliseconds based on the counts you mention (170 stations, 3 channels per
> station), and then the outer aggregation would do the GROUP BY. So, you
> **could** add the TimescaleDB extension to your database (or a copy of) and
> give it a try. You don't actually need to use any TimescaleDB features
> otherwise.
>
>
> I had actually already done that, as I was considering, in spite of past
> negative experiences with timescaledb, experimenting with it on this DB to
> see if it worked any better with this data. Out of curiosity, I tried
> removing the timescaledb extension, whereupon the query in question took
> roughly twice as long. So you are right that installing timescaledb speeds
> things up, even when not using any timescaledb specific functions. So that
> was a good call. Thanks!
>
> ---
> Israel Brewster
> Software Engineer
> Alaska Volcano Observatory
> Geophysical Institute - UAF
> 2156 Koyukuk Drive
> Fairbanks AK 99775-7320
> Work: 907-474-5172
> cell:  907-328-9145
>
>
>
>- A writeup of why this kind of DISTINCT query is slow in PostgreSQL
>(for now) and what we did to overcome it:
>
> https://blog.timescale.com/blog/how-we-made-distinct-queries-up-to-8000x-faster-on-postgresql/
>- Plans for a similar feature in PostgreSQL proper that we'd totally
>support but hasn't made forward progress yet:
>https://commitfest.postgresql.org/19/1741/
>
> Anyway, it might be worth a shot. HTH
>
> Ryan B
>
> On Wed, Sep 22, 2021 at 4:27 PM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wed, Sep 22, 2021 at 1:21 PM Michael Lewis  wrote:
>>
>>> In the future, please share the plan returned by explain analyze, and
>>> some data about how many rows in the involved tables,
>>>
>>
>> I believe we consider it acceptable to link to an explain viewer, which
>> is what the OP did.  Reading explain output in email has its own
>> challenges, and I'd rather have the website than a text attachment.
>>
>>
>>> How does the below work? It should do a very simple index scan only,
>>> then aggregate the relative few rows after the fact.
>>>
>>> select station, array_agg(distinct(channel)) as channels
>>> FROM(
>>> SELECT station,channel FROM data GROUP BY station,channel
>>> ) AS sub
>>> group by station;
>>>
>>
>> Yeah, am pondering this too, though seems like the queries should be
>> identical so the plan/execution should be the same either way.
>>
>>
>>> If there is correlation between station & channel, then you might look
>>> at creating a multivariate statistics object and analyzing the table so the
>>> planner can make better choices
>>>
>>
>> There is no where clause so I'm doubtful there is much to be gained going
>> down this path.  The Index-Only scan seems like an optimal way to obtain
>> this data and the existing query already does that.  The aggregation path
>> might vary though it seems like that shouldn't be the case here.
>>
>> David J.
>>
>
>


Re: Faster distinct query?

2021-09-22 Thread Ryan Booz
Ah. I didn't realize that. If SkipScan was chosen, you'd actually see it as
one of the execution nodes. I also realize I was making a few assumptions
about your data, are channels shared among stations, or are all channels
unique (like an ID) per station? That would impact the index and approach.

Something like:

station | channel
--------|--------
   1    |    1
   1    |    2
   2    |    3
   2    |    4

or:

station | channel
--------|--------
   1    |    1
   1    |    2
   2    |    1
   2    |    2




On Wed, Sep 22, 2021 at 5:53 PM Israel Brewster 
wrote:

> On Sep 22, 2021, at 1:50 PM, Ryan Booz  wrote:
>
> Cool. I'd be interested to see the explain on it if you ever try it again.
> On that cardinality, I'd expect it to be really fast, so I'm interested to
> see if the (SkipScan) nodes were actually used.
>
>
> With timescaledb extension installed, the explain is what I posted in the
> original message (https://explain.depesz.com/s/mtxB#html). Without
> timescaledb installed, the explain looks the same, except it takes twice as
> long to run.
>
> Unless I missed something in your message, i.e. some sort of tweak to the
> query to get it to use the timescaledb features?
>
> ---
> Israel Brewster
> Software Engineer
> Alaska Volcano Observatory
> Geophysical Institute - UAF
> 2156 Koyukuk Drive
> Fairbanks AK 99775-7320
> Work: 907-474-5172
> cell:  907-328-9145
>
>
> On Wed, Sep 22, 2021 at 5:35 PM Israel Brewster 
> wrote:
>
>>
>> On Sep 22, 2021, at 12:49 PM, Ryan Booz  wrote:
>>
>> [Timescale Dev Advocate here]
>> I realize this might not be the most accepted answer (could be
>> interpreted as trying to "sell" something), but feels like an opportunity
>> to talk about DISTINCT queries and opportunities. Because you have that
>> index, Timescale 2.3 added a "Skip Scan" query planner node that works on
>> regular BTree indexes (it doesn't have to be time-series/TimescaleDB
>> Hypertable data at all). In this case, your distinct query would likely run
>> in a few milliseconds based on the counts you mention (170 stations, 3
>> channels per station), and then the outer aggregation would do the GROUP
>> BY. So, you **could** add the TimescaleDB extension to your database (or a
>> copy of) and give it a try. You don't actually need to use any TimescaleDB
>> features otherwise.
>>
>>
>> I had actually already done that, as I was considering, in spite of past
>> negative experiences with timescaledb, experimenting with it on this DB to
>> see if it worked any better with this data. Out of curiosity, I tried
>> removing the timescaledb extension, whereupon the query in question took
>> roughly twice as long. So you are right that installing timescaledb speeds
>> things up, even when not using any timescaledb specific functions. So that
>> was a good call. Thanks!
>>
>> ---
>> Israel Brewster
>> Software Engineer
>> Alaska Volcano Observatory
>> Geophysical Institute - UAF
>> 2156 Koyukuk Drive
>> Fairbanks AK 99775-7320
>> Work: 907-474-5172
>> cell:  907-328-9145
>>
>>
>>
>>- A writeup of why this kind of DISTINCT query is slow in PostgreSQL
>>    (for now) and what we did to overcome it:
>>
>> https://blog.timescale.com/blog/how-we-made-distinct-queries-up-to-8000x-faster-on-postgresql/
>>- Plans for a similar feature in PostgreSQL proper that we'd totally
>>support but hasn't made forward progress yet:
>>https://commitfest.postgresql.org/19/1741/
>>
>> Anyway, it might be worth a shot. HTH
>>
>> Ryan B
>>
>> On Wed, Sep 22, 2021 at 4:27 PM David G. Johnston <
>> david.g.johns...@gmail.com> wrote:
>>
>>> On Wed, Sep 22, 2021 at 1:21 PM Michael Lewis 
>>> wrote:
>>>
>>>> In the future, please share the plan returned by explain analyze, and
>>>> some data about how many rows in the involved tables,
>>>>
>>>
>>> I believe we consider it acceptable to link to an explain viewer, which
>>> is what the OP did.  Reading explain output in email has its own
>>> challenges, and I'd rather have the website than a text attachment.
>>>
>>>
>>>> How does the below work? It should do a very simple index scan only,
>>>> then aggregate the relative few rows after the fact.
>>>>
>>>> select station, array_agg(distinct(channel)) as channels
>>>> FROM(
>>>> SELECT station,channel FROM data GROUP BY station,channel
>>>> ) AS s

Re: Faster distinct query?

2021-09-22 Thread Ryan Booz
Sorry - break for dinner! So much happens on a global scale in a few
hours.  :-)!

I took a few minutes and created a simple example here of what I imagine
you have on that table. I only inserted ~80 million rows of test data, but
hopefully, it's somewhat representative.

TimescaleDB's current implementation of SkipScan only allows distinct on
one column, and because of where we can place the hook to read the query
cost, a LATERAL JOIN (or similar) can't be used to get both columns like
you want. So, to outsmart the planner, you can get your results with one of
the DISTINCT queries in a function. I realize this is getting a bit
specific, so it might not be an exact fit for you, but this query comes
back on my 78 million rows in 67ms. YMMV

Step 1: Create a function that returns the array of channels per station

CREATE FUNCTION channel_array(TEXT) RETURNS text[]
AS $$
SELECT array_agg(channel) FROM (
select distinct on (channel) channel from stations where station=$1
) a
$$
LANGUAGE SQL;

Step 2: Use a CTE for the distinct stations, querying the function for each
station

WITH s1 AS (
SELECT DISTINCT ON (station) station FROM stations
)
SELECT station, channel_array(station) channel
FROM s1;

If it's using the index, you should see something like:

Subquery Scan on s1  (cost=0.57..16.22 rows=19 width=34) (actual
time=0.580..4.809 rows=19 loops=1)

  ->  Unique  (cost=0.57..11.28 rows=19 width=2) (actual time=0.043..0.654
rows=19 loops=1)

->  Custom Scan (SkipScan) on stations  (cost=0.57..11.23 rows=19
width=2) (actual time=0.042..0.647 rows=19 loops=1)

  ->  Index Only Scan using idx_station_channel on stations
(cost=0.57..1807691.34 rows=7632 width=2) (actual time=0.040..0.641
rows=19 loops=1)
Index Cond: (station > NULL::text)


    Heap Fetches: 19



HTH,
Ryan


On Wed, Sep 22, 2021 at 6:22 PM Israel Brewster 
wrote:

> On Sep 22, 2021, at 2:05 PM, Ryan Booz  wrote:
>
> Ah. I didn't realize that. If SkipScan was chosen, you'd actually see it
> as one of the execution nodes. I also realize I was making a few
> assumptions about your data, are channels shared among stations, or are all
> channels unique (like an ID) per station? That would impact the index and
> approach.
>
>
> Ok, that may be a good point: “channel” is currently a varchar column,
> containing something like ‘BHZ’, ‘EHZ’, ‘BHE’ etc. There are only a handful
> of possible channels that I am currently aware of, which are shared among
> stations - most stations have a ‘BHZ’ channel, for example. That would be
> fairly simple to normalize out if that would help.
>
>
> Something like:
>
> station | channel
> --|---
> 11
> 12
> 23
> 24
>
> or:
> station | channel
> --|---
> 11
> 12
> 2    1
> 22
>
>
>
>
> On Wed, Sep 22, 2021 at 5:53 PM Israel Brewster 
> wrote:
>
>> On Sep 22, 2021, at 1:50 PM, Ryan Booz  wrote:
>>
>> Cool. I'd be interested to see the explain on it if you ever try it
>> again. On that cardinality, I'd expect it to be really fast, so I'm
>> interested to see if the (SkipScan) nodes were actually used.
>>
>>
>> With timescaledb extension installed, the explain is what I posted in the
>> original message (https://explain.depesz.com/s/mtxB#html). Without
>> timescaledb installed, the explain looks the same, except it takes twice as
>> long to run.
>>
>> Unless I missed something in your message, i.e. some sort of tweak to the
>> query to get it to use the timescaledb features?
>>
>> ---
>> Israel Brewster
>> Software Engineer
>> Alaska Volcano Observatory
>> Geophysical Institute - UAF
>> 2156 Koyukuk Drive
>> Fairbanks AK 99775-7320
>> Work: 907-474-5172
>> cell:  907-328-9145
>>
>>
>> On Wed, Sep 22, 2021 at 5:35 PM Israel Brewster 
>> wrote:
>>
>>>
>>> On Sep 22, 2021, at 12:49 PM, Ryan Booz  wrote:
>>>
>>> [Timescale Dev Advocate here]
>>> I realize this might not be the most accepted answer (could be
>>> interpreted as trying to "sell" something), but feels like an opportunity
>>> to talk about DISTINCT queries and opportunities. Because you have that
>>> index, Timescale 2.3 added a "Skip Scan" query planner node that works on
>>> regular BTree indexes (it doesn't have to be time-series/TimescaleDB
>>> Hypertable data at all). In this case, your distinct query would likely run
>>> in a few milliseconds based on the counts you mention (170 stations, 3
>>> channels pe

Re: Faster distinct query?

2021-09-23 Thread Ryan Booz
Heh, I honestly forgot about the recursive CTE. Certainly worth a try and
wouldn't require installing other extensions.

This is what depesz is referring to:
https://wiki.postgresql.org/wiki/Loose_indexscan
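
(Adapted to this thread's table, the pattern on that wiki page would look
roughly like the sketch below; it assumes the data(station, channel) table
and the two-column index discussed earlier, and is only an illustration of
the recursive-CTE skip scan, not something tested against the original data:)

with recursive s as (
    (select station from data order by station limit 1)
    union all
    select (select d.station
            from data d
            where d.station > s.station
            order by d.station
            limit 1)
    from s
    where s.station is not null
)
select station,
       -- the inner DISTINCT can reuse the same trick if each station
       -- has a very large number of rows
       array(select distinct channel
             from data d
             where d.station = s.station) as channels
from s
where station is not null;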

On Thu, Sep 23, 2021 at 3:04 AM hubert depesz lubaczewski 
wrote:

> On Wed, Sep 22, 2021 at 12:05:22PM -0800, Israel Brewster wrote:
> > I was wondering if there was any way to improve the performance of this
> query:
> >
> > SELECT station,array_agg(distinct(channel)) as channels FROM data GROUP
> BY station;
> >
> > The explain execution plan can be found here:
> > https://explain.depesz.com/s/mtxB#html <
> https://explain.depesz.com/s/mtxB#html>
> >
> > and it looks pretty straight forward. It does an index_only scan,
> followed by an aggregate, to produce a result that is a list of stations
> along with a list of channels associated with each (there can be anywhere
> from 1 to 3 channels associated with each station). This query takes around
> 5 minutes to run.
> >
> > To work around the issue, I created a materialized view that I can
> update periodically, and of course I can query said view in no time flat.
> However, I’m concerned that as the dataset grows, the time it takes to
> refresh the view will also grow (correct me if I am wrong there).
> >
> > This is running PostgreSQL 13, and the index referenced is a two-column
> index on data(station, channel)
>
> It looks like there are ~ 170 stations, and ~ 800 million rows in the
> table.
>
> can you tell us how many rows has this:
>
> select distinct station, channel from data;
>
> If this is not huge, then you can make the query run much faster using
> skip scan - recursive cte.
>
> Best regards,
>
> depesz
>
>
>
>


Re: Growth planning

2021-10-04 Thread Ryan Booz
As for clustering, unfortunately, it's a one-time operation in Postgres (as
far as I'm aware), so you'd have to "cluster" the index every time after an
insert or update of data. If it is partitioned, I presume it can be run on
the index of each partition table individually - but I'm not sure.
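
(For reference, the operation in question looks like the following; the table
and index names are illustrative:)

CLUSTER data USING data_datetime_idx;
-- rewrites the table in index order once; new and updated rows are not kept
-- in that order, so it has to be re-run (per partition, for partitioned tables)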

On Mon, Oct 4, 2021 at 6:05 PM Rob Sargent  wrote:

> On 10/4/21 3:37 PM, Israel Brewster wrote:
>
> On Oct 4, 2021, at 1:21 PM, Rob Sargent  wrote:
>
> My "strict" table per station suggestion was meant as an option to avoid
> the partitioning pain point entirely if it wasn't going to buy you
> anything. Namely querying more than one station's data.
>
>
> Ah, so in theory making “strict” tables for each would be easier than
> creating partitions for each? Something to consider for sure if so.
>
>
> In a write-once scenario such as this,  would a "clustered index" on
> datetime be stable, performant?  Seems a read-for-export could put the head
> down at time point A and just go?
>
> That’s beyond my level of DB admin knowledge, unfortunately :) I can
> certainly read up on it and give it a try though!
>
>
> I was hoping one of the smart people would chime in;)
>


WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-27 Thread Ryan, Les
Hello,

I'm hoping to get some suggestions on what to do here.  I am running PostgreSQL 
version 13.2 and am shipping the WAL files to a standby server.  Once a day I 
restart the standby server and it recovers the new WAL files that have been 
shipped to it.  Everything was working great until yesterday.  My problem is 
that the WAL recovery is now stopping before it recovers all of the available 
WAL files.  This happened once before and the only way I could get the WAL 
recovery to go past that file was to create a fresh backup and restore that.  I'm 
hoping to avoid that as it takes about a week to create the backup.

Here are the specifics:

  *   PostgreSQL version 13.2
  *   The primary server creates the WAL files and a scheduled process copies 
them to a folder on the standby server.
  *   Once a day, the standby server is restarted using the following command:
 *   "C:\Program Files\PostgreSQL\13\bin\pg_ctl" restart -D .\
  *   The log contains the following:
2021-10-27 10:26:30.508 MDT [6204] LOG:  starting PostgreSQL 13.2, compiled by 
Visual C++ build 1914, 64-bit
2021-10-27 10:26:30.509 MDT [6204] LOG:  listening on IPv6 address "::", port 
5432
2021-10-27 10:26:30.510 MDT [6204] LOG:  listening on IPv4 address "0.0.0.0", 
port 5432
2021-10-27 10:26:30.585 MDT [2012] LOG:  database system was shut down in 
recovery at 2021-10-27 10:26:29 MDT
2021-10-27 10:26:30.701 MDT [2012] LOG:  entering standby mode
2021-10-27 10:26:30.821 MDT [2012] LOG:  restored log file 
"000104190056" from archive
2021-10-27 10:26:31.158 MDT [2012] LOG:  restored log file 
"000104190052" from archive
2021-10-27 10:26:31.467 MDT [2012] LOG:  redo starts at 419/5229A858
2021-10-27 10:26:31.561 MDT [2012] LOG:  restored log file 
"000104190053" from archive
2021-10-27 10:26:32.108 MDT [2012] LOG:  restored log file 
"000104190054" from archive
2021-10-27 10:26:32.849 MDT [2012] LOG:  restored log file 
"000104190055" from archive
2021-10-27 10:26:33.612 MDT [2012] LOG:  restored log file 
"000104190056" from archive
2021-10-27 10:26:34.342 MDT [2012] LOG:  restored log file 
"000104190057" from archive
2021-10-27 10:26:35.146 MDT [2012] LOG:  restored log file 
"000104190058" from archive
2021-10-27 10:26:35.718 MDT [2012] LOG:  restored log file 
"000104190059" from archive
2021-10-27 10:26:36.188 MDT [2012] LOG:  restored log file 
"00010419005A" from archive
2021-10-27 10:26:36.750 MDT [2012] LOG:  consistent recovery state reached at 
419/5AB8
2021-10-27 10:26:36.752 MDT [6204] LOG:  database system is ready to accept 
read only connections
2021-10-27 10:26:36.823 MDT [6040] LOG:  started streaming WAL from primary at 
419/5A00 on timeline 1

  *   There are many more WAL files available starting with 
00010419005B but the restore process always stops at 
00010419005A.

I have two questions:

  *   Why does the WAL file recovery process now stop after it reads 
00010419005A?
  *   What do I need to do to get PostgreSQL to recover the rest of the 
available WAL files.

Thanks in advance for any suggestions.

Sincerely,
-Les


Les Ryan, P.Eng | WSP

SCADA Engineer
Energy, Resources & Industry

T +1 403-813-6327
E les.r...@wsp.com<mailto:les.r...@wsp.com>
O 405 18 St SE. Calgary, Alberta T2E 
6J5<https://www.google.ca/maps/place/405+18+St+SE,+Calgary,+AB+T2E+6J5/>






RE: WAL File Recovery on Standby Server Stops Before End of WAL Files

2021-10-28 Thread Ryan, Les
Hi Kyotaro and Dilip,

Thank you for getting back to me.

Kyotaro: I ran pg_waldump and the output was "pg_waldump: fatal: could not read 
file "00010419005A": read 50 of 8192".  I'm guessing that it means 
that wal file 00010419005A is corrupted and that is why the 
recovery process stops there.  Is there any way to fix the file?

Dilip:   setting the log level to debug2 did not provide any additional 
information.  Here are the log entries:

2021-10-28 06:51:06.166 MDT [7556] LOG:  restored log file 
"000104190059" from archive
2021-10-28 06:51:06.464 MDT [7556] DEBUG:  got WAL segment from archive
2021-10-28 06:51:06.579 MDT [7556] LOG:  restored log file 
"00010419005A" from archive
2021-10-28 06:51:06.854 MDT [7556] DEBUG:  got WAL segment from archive
2021-10-28 06:51:07.107 MDT [7556] LOG:  consistent recovery state reached at 
419/5AB8
2021-10-28 06:51:07.107 MDT [7556] DEBUG:  switched WAL source from archive to 
stream after failure
2021-10-28 06:51:07.109 MDT [7844] LOG:  database system is ready to accept 
read only connections
2021-10-28 06:51:07.152 MDT [7844] DEBUG:  forked new backend, pid=6900 
socket=6068

I set the log level to debug5 and here is what I got:

2021-10-28 06:25:41.262 MDT [6288] CONTEXT:  WAL redo at 419/5ABFFF60 for 
Btree/INSERT_LEAF: off 130
2021-10-28 06:25:41.262 MDT [6288] DEBUG:  record known xact 33776257 
latestObservedXid 33776257
2021-10-28 06:25:41.262 MDT [6288] CONTEXT:  WAL redo at 419/5ABFFFA0 for 
Heap/INSERT: off 95 flags 0x00
2021-10-28 06:25:41.262 MDT [6288] LOG:  consistent recovery state reached at 
419/5AB8
2021-10-28 06:25:41.263 MDT [6288] DEBUG:  switched WAL source from archive to 
stream after failure
2021-10-28 06:25:41.264 MDT [5512] LOG:  database system is ready to accept 
read only connections

Does the "switched WAL source from archive to stream after failure" indicate a 
problem with the WAL file?

Anyway, it looks like I need to restore the standby server from a new backup.  
Thank you both for your help.

Sincerely,
-Les

-Original Message-
From: Dilip Kumar 
Sent: October 27, 2021 10:29 PM
To: Kyotaro Horiguchi 
Cc: Ryan, Les ; pgsql-generallists.postgresql.org 

Subject: Re: WAL File Recovery on Standby Server Stops Before End of WAL Files

On Thu, Oct 28, 2021 at 7:28 AM Kyotaro Horiguchi  
wrote:
>
> At Wed, 27 Oct 2021 16:42:52 +, "Ryan, Les" 
> wrote in
> > 2021-10-27 10:26:31.467 MDT [2012] LOG:  redo starts at 419/5229A858
> ...
> > 2021-10-27 10:26:36.188 MDT [2012] LOG:  restored log file
> > "00010419005A" from archive
> > 2021-10-27 10:26:36.750 MDT [2012] LOG:  consistent recovery state
> > reached at 419/5AB8
> > 2021-10-27 10:26:36.752 MDT [6204] LOG:  database system is ready to
> > accept read only connections
> > 2021-10-27 10:26:36.823 MDT [6040] LOG:  started streaming WAL from
> > primary at 419/5A00 on timeline 1
> >
> >   *   There are many more WAL files available starting with 
> > 00010419005B but the restore process always stops at 
> > 00010419005A.
> >
> > I have two questions:
> >
> >   *   Why does the WAL file recovery process now stop after it reads 
> > 00010419005A?
> >   *   What do I need to do to get PostgreSQL to recover the rest of the 
> > available WAL files.
>
> The info above alone donesn't clearly suggest anything about the
> reason. Could you show the result of "pg_waldump
> 00010419005A 2>&1 | tail -5"?  What I'm expecting to see
> is an error message from pg_waldump before the end of the file. It
> would be the immediate cause of the problem.

+1, that will be the best place to start with, additionally, you can
enable DEBUG2 message so that from logs we can identify why it could not 
continue recovery from the archive.

--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com





Re: DBeaver does not show all tables in a Schema

2021-10-29 Thread Ryan Booz
In a recent update (not sure when), the default for DBeaver seems to have
changed so that the navigator view is set to "simple", rather than
"advanced" which shows all objects.

Right-click the server -> edit connection -> Select "General" -> verify
"Navigator View"



On Fri, Oct 29, 2021 at 9:48 AM Shaozhong SHI 
wrote:

> It is a new installation with all permissions.  Nothing has been done to it.
>
> Regards,
>
> David
>
> On Fri, 29 Oct 2021 at 14:14, negora  wrote:
>
>> Are you sure that you're not applying a filter in the "Database
>> Navigator" panel of DBeaver?
>>
>> Sometimes, it occurs to me that I apply a filter for certain database, I
>> forget to remove it, then open another database, and some or all tables
>> do not appear in the navigator.
>>
>>
>> On 29/10/2021 12:46, Shaozhong SHI wrote:
>> > I used a DBeaver to connect to postgres but it does not show all
>> > tables in a schema.
>> >
>> > Can anyone shed light on this?
>> >
>> > Regards,
>> >
>> > David
>>
>>
>>


postgresql 9.6x installers for windows are no longer available

2021-11-17 Thread Ryan Wexler
Does anyone know what happened to the EDB windows postgresql 9.6 installers?

The host skips from 9.5->10
https://www.enterprisedb.com/downloads/postgres-postgresql-downloads


Best way to select a random row from a derived table

2018-01-27 Thread Ryan Murphy
Hello hackers and postgressers,

I am aware of 2 ways to select a random row from a table:

1) select * from table_name order by random() limit 1;
-- terribly inefficient

2) select * from table_name tablesample system_rows(1) limit 1;
-- only works on tables, not views or subqueries
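
(For completeness, the second form relies on the tsm_system_rows contrib
module; a minimal sketch of its use, assuming the extension is available:)

create extension if not exists tsm_system_rows;
select * from table_name tablesample system_rows(1);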

Is there an option that is reasonably efficient and can be used on views
and subqueries?

Thanks!
Ryan


Downsides of liberally using CREATE TEMP TABLE ... ON COMMIT DROP

2018-01-28 Thread Ryan Murphy
Hello,

I heard at a PostgreSQL talk that you should not liberally create temp
tables in the course of frequently-used functions etc., because (roughly)
you're using up some of the same resources that you use for your regular tables.

Is this true?  Is there an important reason not to have e.g. a plpgsql
function that uses a temp table?  What are the specific problems if I do
this?  Is the problem ameliorated if I add ON COMMIT DROP?

Best,
Ryan


Re: Downsides of liberally using CREATE TEMP TABLE ... ON COMMIT DROP

2018-01-28 Thread Ryan Murphy
>
> I believe the main, and maybe only, concern is the bloating of the system
> catalog tables since you are constantly adding and removing records.  Yes,
> they will be vacuumed but vacuuming and bloat on catalog tables slows every
> single query down to some degree, since every query has to look up its
> objects in those catalogs.  Though caching probably alleviates some of that
>

Yes, that's exactly the concern I heard, thanks for reminding me.

If I want to e.g. temporarily store a "setof records" or a "table" result
in a variable as part of a calculation in a plpgsql function, do I have any
other option than CREATE TEMPORARY TABLE?  It didn't seem to work when I
DECLAREd a variable of type "setof table_name" or "setof
table_name%rowtype", and then SELECT INTO it.
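
(A sketch of one alternative that does work: holding the rows in a plpgsql
variable declared as an array of the table's row type. The names below are
illustrative, and a very large row set is still better kept in a temp table,
since the whole array lives in memory:)

CREATE FUNCTION count_flagged() RETURNS integer
LANGUAGE plpgsql AS $$
DECLARE
    recs some_table[];
    rec  some_table;
    n    integer := 0;
BEGIN
    SELECT array_agg(t) INTO recs FROM some_table t WHERE t.flag;
    IF recs IS NOT NULL THEN
        FOREACH rec IN ARRAY recs LOOP
            n := n + 1;        -- work with each row here, e.g. rec.id
        END LOOP;
    END IF;
    RETURN n;
END;
$$;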