Re: Any insights on Qlik Sense using CURSOR ?

2021-05-21 Thread Franck Routier (perso)
For the record,

Qlik uses the ODBC driver with UseDeclareFetch=1, hence the use of cursors.

By default, the PostgreSQL planner optimizes the execution plan for 
retrieving 10% of the rows when using a cursor. This can be controlled with 
the cursor_tuple_fraction parameter.

In my case, setting it to 1.0 (instead of the default 0.1) cut the query 
from more than an hour (sometimes ballooning to several hours) down to 15 minutes.

In general, I think 1.0 is the correct value when using Qlik, as loaders will 
read all rows.
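For reference, a minimal sketch of the setting and of the access pattern the 
ODBC driver generates (the role and table names below are assumptions, not 
taken from the actual setup):

-- Session level, e.g. issued right after the loader connects:
SET cursor_tuple_fraction = 1.0;

-- Or persistently for the role Qlik connects as (hypothetical role name):
ALTER ROLE qlik_loader SET cursor_tuple_fraction = 1.0;

-- The pattern psqlODBC produces with UseDeclareFetch=1, shown against a
-- hypothetical table "sales":
BEGIN;
DECLARE "SQL_CUR4" CURSOR WITH HOLD FOR SELECT * FROM sales;
FETCH 10 FROM "SQL_CUR4";   -- repeated until all rows are consumed
CLOSE "SQL_CUR4";
COMMIT;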

Franck

On 20 May 2021 at 21:33:25 GMT+02:00, "Franck Routier (perso)"  wrote:
>Thanks Ganesh,
>
>this gave me the SELECT that is slow. It indeed looks like this:
>
>begin; declare "SQL_CUR4" cursor with hold for select ...
>
>then a bunch of:
>
>fetch 10 in "SQL_CUR4"
>
>then a commit
>
>I also found this article
>https://www.cybertec-postgresql.com/en/declare-cursor-in-postgresql-or-how-to-reduce-memory-consumption/
>to be an interesting introduction to CURSORs in PostgreSQL.
>
>I'll now work on this query to try to understand the problem.
>
>Franck
>
>On Thursday 20 May 2021 at 17:59 +0530, Ganesh Korde wrote:
>> 
>> Hi,
>> On Tue, May 18, 2021 at 6:22 PM Franck Routier (perso)
>>  wrote:
>> > Hi,
>> > 
> > I am using PostgreSQL 12.7 on Ubuntu as a data warehouse, which is then
> > queried by QlikSense to produce business analytics.
>> > 
> > One of my data loaders, which runs multiple queries, sometimes takes about
> > 3 hours to feed Qlik with the relevant records (about 10M records), but
> > sometimes goes wild and times out (Qlik stops it when it takes more than
> > 480 minutes).
>> > 
> > The point is that Qlik is using a CURSOR to retrieve the data. I'm not
> > familiar with CURSORs, and the PostgreSQL documentation mainly cites
> > functions as the use case. I don't really know how Qlik creates these
> > cursors when executing my queries...
>> > 
> > I tried log_min_duration_statement to pinpoint the problem, but it only
> > shows things like this:
> > 
> > ...
> > LOG: duration : 294774.600 ms, instruction : fetch 10 in "SQL_CUR4"
> > LOG: duration : 282867.279 ms, instruction : fetch 10 in "SQL_CUR4"
>> > ...
>> > 
> > So I don't know exactly which of my queries is hiding behind
> > "SQL_CUR4"...
> > 
> > Is there a way to log the actual query?
> > Is using a CURSOR a best practice for retrieving big datasets? (It seems
> > Qlik uses one for every connection to PostgreSQL.)
> > Does each FETCH re-run the query, or is the result somehow cached (on
> > disk?)?
> > 
> > Thanks for any insight on CURSORs and/or Qlik queries on PostgreSQL!
> > 
> > Best regards,
> > Franck
>> > 
>> > 
>> 
>> Have you tried setting the parameter below?
>> log_statement = 'all'
>>
>> You will get all queries logged to the log files.
>> 
>> Regards,
>> Ganesh Korde.
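For completeness, a sketch of the two logging knobs discussed in this thread
(ALTER SYSTEM needs superuser; both settings take effect on a reload, no
restart required):

-- Log every statement (very verbose; useful only while debugging):
ALTER SYSTEM SET log_statement = 'all';

-- Or log only the slow ones, e.g. anything running longer than 10 seconds:
ALTER SYSTEM SET log_min_duration_statement = '10s';

SELECT pg_reload_conf();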

-- Sent from /e/ Mail.

I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Simon Connah
Hi,

I'm running the following command to dump my database:

/usr/bin/pg_dump 
--file=/home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql 
--dbname=nanoscopic_db --clean --create --if-exists 
--username=nanoscopic_db_user --host=127.0.0.1 --port=5432

and yet when I run that all I get in the SQL file is the following:

https://gist.github.com/simonconnah/e1a15b1536b6e519b84481ae74f082bf

I'm at a total loss. I've tried all the relevant-looking command-line switches, 
and nothing seems to dump the actual contents of the database; it just dumps 
the CREATE EXTENSION command. Can anyone help me figure this out, please? It is 
probably something stupid that I am doing wrong.
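A quick sanity check on a plain-format dump, for example, is to count the
object definitions and data sections it actually contains:

# how many table definitions and COPY data sections made it into the file?
grep -E -c '^CREATE TABLE' /home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
grep -E -c '^COPY ' /home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql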

Simon.




Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Vijaykumar Jain
Can you try dumping with the verbose flag (-v)?

I just want to confirm whether the user has the relevant permissions.
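For example, something along these lines (a sketch based on the original
command; the verbose messages go to stderr, so redirecting stderr keeps a copy
alongside the dump):

pg_dump \
  --file=/home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql \
  --dbname=nanoscopic_db --clean --create --if-exists \
  --username=nanoscopic_db_user --host=127.0.0.1 --port=5432 \
  --verbose 2> pg_dump_verbose.log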


On Fri, May 21, 2021, 3:04 PM Simon Connah 
wrote:

> Hi,
>
> I'm running the following command to dump my database:
>
> /usr/bin/pg_dump
> --file=/home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
> --dbname=nanoscopic_db --clean --create --if-exists
> --username=nanoscopic_db_user --host=127.0.0.1 --port=5432
>
> and yet when I run that all I get in the SQL file is the following:
>
> https://gist.github.com/simonconnah/e1a15b1536b6e519b84481ae74f082bf
>
> I'm at a total loss. I've tried all the relevant looking command line
> switches and nothing seems to dump the actual contents of the database. It
> just dumps the extension command. Can anyone help me to figure this out
> please? It is probably something stupid that I am doing wrong.
>
> Simon.
>


Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Simon Connah
‐‐‐ Original Message ‐‐‐
On Friday, May 21st, 2021 at 10:55, Vijaykumar Jain 
 wrote:

> Can you try dumping using verbose flag.
> -v
> 

> Just want to confirm if the user has relevant permissions.
> 

> On Fri, May 21, 2021, 3:04 PM Simon Connah  
> wrote:
> 

> > Hi,
> > 

> > I'm running the following command to dump my database:
> > 

> > /usr/bin/pg_dump 
> > --file=/home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql 
> > --dbname=nanoscopic_db --clean --create --if-exists 
> > --username=nanoscopic_db_user --host=127.0.0.1 --port=5432
> > 

> > and yet when I run that all I get in the SQL file is the following:
> > 

> > https://gist.github.com/simonconnah/e1a15b1536b6e519b84481ae74f082bf
> > 

> > I'm at a total loss. I've tried all the relevant looking command line 
> > switches and nothing seems to dump the actual contents of the database. It 
> > just dumps the extension command. Can anyone help me to figure this out 
> > please? It is probably something stupid that I am doing wrong.
> > 

> > Simon.

pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined access methods
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading transforms
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension tables
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "public.blog_user"
pg_dump: finding default expressions of table "public.blog_user"
pg_dump: finding check constraints for table "public.blog_user"
pg_dump: finding the columns and types of table "public.blog"
pg_dump: finding default expressions of table "public.blog"
pg_dump: finding the columns and types of table "public.blog_post"
pg_dump: finding default expressions of table "public.blog_post"
pg_dump: finding check constraints for table "public.blog_post"
pg_dump: finding the columns and types of table "public.blog_post_comment"
pg_dump: finding default expressions of table "public.blog_post_comment"
pg_dump: finding the columns and types of table "public.blog_page"
pg_dump: finding default expressions of table "public.blog_page"
pg_dump: finding the columns and types of table "public.blog_user_permissions"
pg_dump: finding default expressions of table "public.blog_user_permissions"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: reading indexes for table "public.blog_user"
pg_dump: reading indexes for table "public.blog"
pg_dump: reading indexes for table "public.blog_post"
pg_dump: reading indexes for table "public.blog_post_comment"
pg_dump: reading indexes for table "public.blog_page"
pg_dump: reading indexes for table "public.blog_user_permissions"
pg_dump: flagging indexes in partitioned tables
pg_dump: reading extended statistics
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading row security enabled for table "public.blog_user_blog_user_id_seq"
pg_dump: reading policies for table "public.blog_user_blog_user_id_seq"
pg_dump: reading row security enabled for table "public.blog_user"
pg_dump: reading policies for table "public.blog_user"
pg_dump: reading row security enabled for table "public.blog_blog_id_seq"
pg_dump: reading policies for table "public.blog_blog_id_seq"
pg_dump: reading row security enabled for table "public.blog"
pg_dump: reading policies for table "public.blog"
pg_dump: reading row security enabled for table "public.blog_post_blog_post_id_seq"
pg_dump: reading policies for table "public.blog_post_blog_post_id_seq"
pg_dump: reading row security enabled for table "public.blog_post"
pg_dump: reading policies for table "public.blog_post"
pg_dump: reading row security enabled for table "public.blog_post_comment_blog_post_comment_id_seq"
pg_dump: reading policies for table "public.blog_post_comment_blog_post_comment_id_seq"
pg_dump: reading row security enabled for table "public.blog_post_comment"
pg_dump: reading policies for table "public

Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Vijaykumar Jain
I just did a dump of a db which was owned by postgres, but with some tables
owned by other users, and it ran fine.
I am not sure about that nanoscopic extension though.



***
createdb -e foobar;

postgres=# \c foobar
You are now connected to database "foobar" as user "postgres".
foobar=# set role demo;
SET
foobar=> create table xx(id int);
CREATE TABLE
foobar=> \dt
        List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | xx   | table | demo
(1 row)

foobar=> insert into xx values (1);
INSERT 0 1
foobar=> \dt
        List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | xx   | table | demo
(1 row)

foobar=> \l
                                  List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+------------------------
 demo      | demo_rw  | UTF8     | en_US.utf8 | en_US.utf8 |
 foobar    | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(5 rows)
***


***
pg_dump --file=nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
--dbname=foobar --clean --create --if-exists --username=demo -v
--host=127.0.0.1 --port=5432

... last lines from the verbose dump

pg_dump: dropping DATABASE foobar
pg_dump: creating DATABASE "foobar"
pg_dump: connecting to new database "foobar"
pg_dump: creating TABLE "public.xx"
pg_dump: processing data for table "public.xx"
pg_dump: dumping contents of table "public.xx"


CREATE DATABASE foobar WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE =
'en_US.utf8';


ALTER DATABASE foobar OWNER TO postgres;

\connect foobar

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

SET default_tablespace = '';

SET default_table_access_method = heap;

--
-- TOC entry 200 (class 1259 OID 26105)
-- Name: xx; Type: TABLE; Schema: public; Owner: demo
--

CREATE TABLE public.xx (
id integer
);


ALTER TABLE public.xx OWNER TO demo;

--
-- TOC entry 2232 (class 0 OID 26105)
-- Dependencies: 200
-- Data for Name: xx; Type: TABLE DATA; Schema: public; Owner: demo
--

COPY public.xx (id) FROM stdin;
1
\.


-- Completed on 2021-05-21 15:54:08 IST

--
-- PostgreSQL database dump complete
--
***
works fine.
I do not know that extension (nanoscopic) though.

It is reading some tables in the public schema, but not even dumping the
schema.

Yep, that's odd if it does not throw any errors, because any permission
errors are reported right away on the console.

Maybe someone with more experience would be able to help.


On Fri, 21 May 2021 at 15:32, Simon Connah 
wrote:

> ‐‐‐ Original Message ‐‐‐
> On Friday, May 21st, 2021 at 10:55, Vijaykumar Jain <
> vijaykumarjain.git...@gmail.com> wrote:
>
> Can you try dumping using verbose flag.
> -v
>
> Just want to confirm if the user has relevant permissions.
>
>
> On Fri, May 21, 2021, 3:04 PM Simon Connah 
> wrote:
>
>> Hi,
>>
>> I'm running the following command to dump my database:
>>
>> /usr/bin/pg_dump
>> --file=/home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
>> --dbname=nanoscopic_db --clean --create --if-exists
>> --username=nanoscopic_db_user --host=127.0.0.1 --port=5432
>>
>> and yet when I run that all I get in the SQL file is the following:
>>
>> https://gist.github.com/simonconnah/e1a15b1536b6e519b84481ae74f082bf
>>
>> I'm at a total loss. I've tried all the relevant looking command line
>> switches and nothing seems to dump the actual contents of the database. It
>> just dumps the extension command. Can anyone help me to figure this out
>> please? It is probably something stupid that I am doing wrong.
>>
>> Simon.
>>
>
> pg_dump: last built-in OID is 16383
> pg_dump: reading extensions
> pg_dump: identifying extension members
> pg_dump: reading schemas
> pg_dump: reading user-defined tables
> pg_dump: reading user-defined functions
> pg_dump: reading user-defined types
> pg_dump: reading procedural languages
> pg_dump: reading user-defined aggregate functions
> pg_dump: reading user-defined operators
> pg_dump: reading user-defined access methods
> pg_dump: reading user-defined operator classes
> pg_dump: reading user-defined operator families
> pg_dump: reading user-defin

Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Simon Connah
This is the source code of the extension in question:

https://github.com/xmrsoftware/nanoscopic/tree/master/sql/nanoscopic
‐‐‐ Original Message ‐‐‐
On Friday, May 21st, 2021 at 11:29, Vijaykumar Jain 
 wrote:

> I just did a dump of a db which was owned by postgres, but with some tables
> owned by other users, and it ran fine.
> I am not sure about that nanoscopic extension though.
>
> ***
> createdb -e foobar;
>
> postgres=# \c foobar
> You are now connected to database "foobar" as user "postgres".
> foobar=# set role demo;
> SET
> foobar=> create table xx(id int);
> CREATE TABLE
> foobar=> \dt
>         List of relations
>  Schema | Name | Type  | Owner
> --------+------+-------+-------
>  public | xx   | table | demo
> (1 row)
>
> foobar=> insert into xx values (1);
> INSERT 0 1
> foobar=> \dt
>         List of relations
>  Schema | Name | Type  | Owner
> --------+------+-------+-------
>  public | xx   | table | demo
> (1 row)
>
> foobar=> \l
>                                   List of databases
>    Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
> -----------+----------+----------+------------+------------+------------------------
>  demo      | demo_rw  | UTF8     | en_US.utf8 | en_US.utf8 |
>  foobar    | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
>  postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
>  template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
>            |          |          |            |            | postgres=CTc/postgres
>  template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
>            |          |          |            |            | postgres=CTc/postgres
> (5 rows)
> ***
>
> ***
> pg_dump --file=nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
> --dbname=foobar --clean --create --if-exists --username=demo -v
> --host=127.0.0.1 --port=5432
>
> ... last lines from the verbose dump
>
> pg_dump: dropping DATABASE foobar
> pg_dump: creating DATABASE "foobar"
> pg_dump: connecting to new database "foobar"
> pg_dump: creating TABLE "public.xx"
> pg_dump: processing data for table "public.xx"
> pg_dump: dumping contents of table "public.xx"
>
> CREATE DATABASE foobar WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE =
> 'en_US.utf8';
>
> ALTER DATABASE foobar OWNER TO postgres;
>
> \connect foobar
>
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
>
> SET default_tablespace = '';
>
> SET default_table_access_method = heap;
>
> --
> -- TOC entry 200 (class 1259 OID 26105)
> -- Name: xx; Type: TABLE; Schema: public; Owner: demo
> --
>
> CREATE TABLE public.xx (
>     id integer
> );
>
> ALTER TABLE public.xx OWNER TO demo;
>
> --
> -- TOC entry 2232 (class 0 OID 26105)
> -- Dependencies: 200
> -- Data for Name: xx; Type: TABLE DATA; Schema: public; Owner: demo
> --
>
> COPY public.xx (id) FROM stdin;
> 1
> \.
>
> -- Completed on 2021-05-21 15:54:08 IST
>
> --
> -- PostgreSQL database dump complete
> --
> ***
> works fine. I do not know that extension (nanoscopic) though.
> 

> it is reading some tables in a public schema, but not even dumping the schema.
> 

> yep, thats odd if it does not throw any errors, coz any errors wrt 
> permissions are thrown right away to console.
> 

> maybe someone with more exp would be able to help.
> 

> On Fri, 21 May 2021 at 15:32, Simon Connah  
> wrote:
> 

> > ‐‐‐ Original Message ‐‐‐
> > On Friday, May 21st, 2021 at 10:55, Vijaykumar Jain 
> >  wrote:
> > 

> > > Can you try dumping using verbose flag.
> > > -v
> > > 

> > > Just want to confirm if the user has relevant permissions.
> > > 

> > > On Fri, May 21, 2021, 3:04 PM Simon Connah 
> > >  wrote:
> > > 

> > > > Hi,
> > > > 

> > > > I'm running the following command to dump my database:
> > > > 

> > > > /usr/bin/pg_dump 
> > > > --file=/home/simon/nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql 
> > > > --dbname=nanoscopic_db --clean --create --if-exists 
> > > > --username=nanoscopic_db_user --host=127.0.0.1 --port=5432
> > > > 

> > > > and yet when I run that all I get in the SQL file is the following:
> > > > 

> > > > https://gist.github.com/simonconnah/e1a15b1536b6e519b84481ae74f082bf
> > > > 

> > > > I'm at a total loss. I've tried all the relevant looking command line 
> > > > switches and nothing seems to dump the actual contents of the database. 
> > > > It just dumps the extension command. Can anyone help me to figure this 
> > > > out please? It is probably something stupid that I am doing wrong.
> > > > 

> > > > Simon.
> > 

> > pg_dump: last built-in OID is 16383
> > 

> > pg_dump: reading extensions
> > 

Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Vijaykumar Jain
ok,

I think this is what it is.

I copied the files to the extensions folder.

ls /opt/postgresql-13/local/share/extension/nanoscopic*
/opt/postgresql-13/local/share/extension/nanoscopic--1.0.sql
/opt/postgresql-13/local/share/extension/nanoscopic.control

and loaded the extensions.
the relations are created as a result of the extension.

foobar=# create extension nanoscopic;
CREATE EXTENSION
foobar=# \dt
                 List of relations
 Schema |         Name          | Type  |  Owner
--------+-----------------------+-------+----------
 public | blog                  | table | postgres
 public | blog_page             | table | postgres
 public | blog_post             | table | postgres
 public | blog_post_comment     | table | postgres
 public | blog_user             | table | postgres
 public | blog_user_permissions | table | postgres
(6 rows)

foobar=# drop extension nanoscopic;
DROP EXTENSION
foobar=# \dt
Did not find any relations.


when you dump the db, only the CREATE EXTENSION statement is dumped, not
its relations.

when you reload the db from the dump file, the extension is created and its
relations are created again via that extension.

But I do not know the details of how pg_dump treats relations, and the data
in them, when they are created via an extension and later modified.

I'll do some lookup on this.
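One way to see which objects pg_dump will treat as extension members rather
than dump individually is to look at the dependency catalog (a sketch;
\dx+ nanoscopic in psql shows the same list):

SELECT pg_describe_object(classid, objid, objsubid) AS member_object
FROM pg_depend
WHERE refclassid = 'pg_extension'::regclass
  AND refobjid = (SELECT oid FROM pg_extension WHERE extname = 'nanoscopic')
  AND deptype = 'e';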

On Fri, 21 May 2021 at 16:29, Simon Connah 
wrote:

> This is the source code of the extension in question:
> https://github.com/xmrsoftware/nanoscopic/tree/master/sql/nanoscopic
> ‐‐‐ Original Message ‐‐‐
> On Friday, May 21st, 2021 at 11:29, Vijaykumar Jain <
> vijaykumarjain.git...@gmail.com> wrote:
>
> i just did a dump of a db which was owned by postgres but some tables
> owned by other users and it ran fine.
> I am not sure of that nanoscopic extension though.
>
>
>
> ***
> createdb -e foobar;
>
> postgres=# \c foobar
> You are now connected to database "foobar" as user "postgres".
> foobar=# set role demo;
> SET
> foobar=> create table xx(id int);
> CREATE TABLE
> foobar=> \dt
>         List of relations
>  Schema | Name | Type  | Owner
> --------+------+-------+-------
>  public | xx   | table | demo
> (1 row)
>
> foobar=> insert into xx values (1);
> INSERT 0 1
> foobar=> \dt
>         List of relations
>  Schema | Name | Type  | Owner
> --------+------+-------+-------
>  public | xx   | table | demo
> (1 row)
>
> foobar=> \l
>                                   List of databases
>    Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
> -----------+----------+----------+------------+------------+------------------------
>  demo      | demo_rw  | UTF8     | en_US.utf8 | en_US.utf8 |
>  foobar    | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
>  postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
>  template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
>            |          |          |            |            | postgres=CTc/postgres
>  template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
>            |          |          |            |            | postgres=CTc/postgres
> (5 rows)
> ***
>
>
> ***
> pg_dump --file=nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
> --dbname=foobar --clean --create --if-exists --username=demo -v
> --host=127.0.0.1 --port=5432
>
> ... last lines from the verbose dump
>
> pg_dump: dropping DATABASE foobar
> pg_dump: creating DATABASE "foobar"
> pg_dump: connecting to new database "foobar"
> pg_dump: creating TABLE "public.xx"
> pg_dump: processing data for table "public.xx"
> pg_dump: dumping contents of table "public.xx"
>
>
> CREATE DATABASE foobar WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE
> = 'en_US.utf8';
>
>
> ALTER DATABASE foobar OWNER TO postgres;
>
> \connect foobar
>
> SET statement_timeout = 0;
> SET lock_timeout = 0;
> SET idle_in_transaction_session_timeout = 0;
> SET client_encoding = 'UTF8';
> SET standard_conforming_strings = on;
> SELECT pg_catalog.set_config('search_path', '', false);
> SET check_function_bodies = false;
> SET xmloption = content;
> SET client_min_messages = warning;
> SET row_security = off;
>
> SET default_tablespace = '';
>
> SET default_table_access_method = heap;
>
> --
> -- TOC entry 200 (class 1259 OID 26105)
> -- Name: xx; Type: TABLE; Schema: public; Owner: demo
> --
>
> CREATE TABLE public.xx (
> id integer
> );
>
>
> ALTER TABLE public.xx OWNER TO demo;
>
> --
> -- TOC entry 2232 (class 0 OID 26105)
> -- Dependencies: 200
> -- Data for Name: xx; Type: TABLE DATA; Schema: public; Owner: demo
> --
>
> COPY public.xx (id) FROM stdin;
> 1
> \.
>
>
> -- Completed on 2021-05-21 15:54:08 IST
>
> --
> -- PostgreSQL database dump complete
> --
> ***
> works fine.
> I do not know that extension(nanoscopic) though.
>
> it is reading some tables in a public schem

Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Vijaykumar Jain
So, to summarize:

I modified the extension to a simple SQL script that creates a table and
inserts a value.

***

postgres@go:~$ cat
/opt/postgresql-13/local/share/extension/nanoscopic--1.0.sql


create table foo(id int);
insert into foo values (1);


postgres@go:~$ cat
/opt/postgresql-13/local/share/extension/nanoscopic.control
default_version = '1.0'
comment = 'Database requirements for the Nanoscopic blogging platform'
encoding = UTF8
superuser = false
trusted = false

postgres@go:~$ psql foobar
psql (13.2)
Type "help" for help.

foobar=# drop extension nanoscopic;
DROP EXTENSION
foobar=# \dt
Did not find any relations.
foobar=# \q
postgres@go:~$ stoppg
waiting for server to shut down done
server stopped
postgres@go:~$ startpg
waiting for server to start done
server started
postgres@go:~$ psql foobar
psql (13.2)
Type "help" for help.

foobar=# set role demo;
SET
foobar=> create extension nanoscopic;
CREATE EXTENSION
foobar=> \dt
        List of relations
 Schema | Name | Type  | Owner
--------+------+-------+-------
 public | foo  | table | demo
(1 row)

foobar=> table foo;
 id
----
  1
(1 row)

foobar=> insert into foo values (2);   -- add more data to the table created via the extension
INSERT 0 1
foobar=> table foo;
 id
----
  1
  2
(2 rows)

foobar=> \q
postgres@go:~$ pg_dump
--file=nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql --dbname=foobar
--clean --create --if-exists --username=demo -v --host=127.0.0.1 --port=5432
Password:
pg_dump: last built-in OID is 16383
pg_dump: reading extensions
pg_dump: identifying extension members
pg_dump: reading schemas
pg_dump: reading user-defined tables
pg_dump: reading user-defined functions
pg_dump: reading user-defined types
pg_dump: reading procedural languages
pg_dump: reading user-defined aggregate functions
pg_dump: reading user-defined operators
pg_dump: reading user-defined access methods
pg_dump: reading user-defined operator classes
pg_dump: reading user-defined operator families
pg_dump: reading user-defined text search parsers
pg_dump: reading user-defined text search templates
pg_dump: reading user-defined text search dictionaries
pg_dump: reading user-defined text search configurations
pg_dump: reading user-defined foreign-data wrappers
pg_dump: reading user-defined foreign servers
pg_dump: reading default privileges
pg_dump: reading user-defined collations
pg_dump: reading user-defined conversions
pg_dump: reading type casts
pg_dump: reading transforms
pg_dump: reading table inheritance information
pg_dump: reading event triggers
pg_dump: finding extension tables
pg_dump: finding inheritance relationships
pg_dump: reading column info for interesting tables
pg_dump: finding the columns and types of table "public.foo"
pg_dump: flagging inherited columns in subtables
pg_dump: reading indexes
pg_dump: flagging indexes in partitioned tables
pg_dump: reading extended statistics
pg_dump: reading constraints
pg_dump: reading triggers
pg_dump: reading rewrite rules
pg_dump: reading policies
pg_dump: reading row security enabled for table "public.foo"
pg_dump: reading policies for table "public.foo"
pg_dump: reading publications
pg_dump: reading publication membership
pg_dump: reading subscriptions
pg_dump: reading large objects
pg_dump: reading dependency data
pg_dump: saving encoding = UTF8
pg_dump: saving standard_conforming_strings = on
pg_dump: saving search_path =
pg_dump: saving database definition
pg_dump: dropping DATABASE foobar
pg_dump: creating DATABASE "foobar"
pg_dump: connecting to new database "foobar"
pg_dump: creating EXTENSION "nanoscopic"
pg_dump: creating COMMENT "EXTENSION nanoscopic"
postgres@go:~$ more nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql
--
-- PostgreSQL database dump
--

-- Dumped from database version 13.2
-- Dumped by pg_dump version 13.2

-- Started on 2021-05-21 17:03:32 IST

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

DROP DATABASE IF EXISTS foobar;
--
-- TOC entry 2238 (class 1262 OID 26804)
-- Name: foobar; Type: DATABASE; Schema: -; Owner: postgres
--

CREATE DATABASE foobar WITH TEMPLATE = template0 ENCODING = 'UTF8' LOCALE =
'en_US.utf8';


ALTER DATABASE foobar OWNER TO postgres;

\connect foobar

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- TOC entry 2 (class 3079 OID 26997)
-- Name: nanoscopic; Type: EXTENSION; Schema: -; Owner: -
--

CREATE EXTENSION IF NOT EXI

Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Vijaykumar Jain
PostgreSQL: Documentation: 13: 37.17. Packaging Related Objects into an
Extension 

So it works as expected.
Someone would have to point to the reference regarding modification of data
in objects created via an extension.

The main advantage of using an extension, rather than just running the SQL
script to load a bunch of “loose” objects into your database, is that
PostgreSQL will then understand that the objects of the extension go
together. You can drop all the objects with a single DROP EXTENSION command
(no need to maintain a separate “uninstall” script). Even more useful,
pg_dump knows that it should not dump the individual member objects of the
extension — it will just include a CREATE EXTENSION command in dumps,
instead. This vastly simplifies migration to a new version of the extension
that might contain more or different objects than the old version. Note
however that you must have the extension's control, script, and other files
available when loading such a dump into a new database.



On Fri, 21 May 2021 at 17:07, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:

> so this is the summary.
>
> I modified the extension to have a simple sql that created table and
> inserted a value.
>
> ***
>
> postgres@go:~$ cat
> /opt/postgresql-13/local/share/extension/nanoscopic--1.0.sql
>
>
> create table foo(id int);
> insert into foo values (1);
>
>
> postgres@go:~$ cat
> /opt/postgresql-13/local/share/extension/nanoscopic.control
> default_version = '1.0'
> comment = 'Database requirements for the Nanoscopic blogging platform'
> encoding = UTF8
> superuser = false
> trusted = false
>
> postgres@go:~$ psql foobar
> psql (13.2)
> Type "help" for help.
>
> foobar=# drop extension nanoscopic;
> DROP EXTENSION
> foobar=# \dt
> Did not find any relations.
> foobar=# \q
> postgres@go:~$ stoppg
> waiting for server to shut down done
> server stopped
> postgres@go:~$ startpg
> waiting for server to start done
> server started
> postgres@go:~$ psql foobar
> psql (13.2)
> Type "help" for help.
>
> foobar=# set role demo;
> SET
> foobar=> create extension nanoscopic;
> CREATE EXTENSION
> foobar=> \dt
>         List of relations
>  Schema | Name | Type  | Owner
> --------+------+-------+-------
>  public | foo  | table | demo
> (1 row)
>
> foobar=> table foo;
>  id
> ----
>   1
> (1 row)
>
> foobar=> insert into foo values (2);   -- add more data to the table created via the extension
> INSERT 0 1
> foobar=> table foo;
>  id
> ----
>   1
>   2
> (2 rows)
>
> foobar=> \q
> postgres@go:~$ pg_dump
> --file=nanoscopic_db_127_0_0_1-2021_05_21_10_25_28-dump.sql --dbname=foobar
> --clean --create --if-exists --username=demo -v --host=127.0.0.1 --port=5432
> Password:
> pg_dump: last built-in OID is 16383
> pg_dump: reading extensions
> pg_dump: identifying extension members
> pg_dump: reading schemas
> pg_dump: reading user-defined tables
> pg_dump: reading user-defined functions
> pg_dump: reading user-defined types
> pg_dump: reading procedural languages
> pg_dump: reading user-defined aggregate functions
> pg_dump: reading user-defined operators
> pg_dump: reading user-defined access methods
> pg_dump: reading user-defined operator classes
> pg_dump: reading user-defined operator families
> pg_dump: reading user-defined text search parsers
> pg_dump: reading user-defined text search templates
> pg_dump: reading user-defined text search dictionaries
> pg_dump: reading user-defined text search configurations
> pg_dump: reading user-defined foreign-data wrappers
> pg_dump: reading user-defined foreign servers
> pg_dump: reading default privileges
> pg_dump: reading user-defined collations
> pg_dump: reading user-defined conversions
> pg_dump: reading type casts
> pg_dump: reading transforms
> pg_dump: reading table inheritance information
> pg_dump: reading event triggers
> pg_dump: finding extension tables
> pg_dump: finding inheritance relationships
> pg_dump: reading column info for interesting tables
> pg_dump: finding the columns and types of table "public.foo"
> pg_dump: flagging inherited columns in subtables
> pg_dump: reading indexes
> pg_dump: flagging indexes in partitioned tables
> pg_dump: reading extended statistics
> pg_dump: reading constraints
> pg_dump: reading triggers
> pg_dump: reading rewrite rules
> pg_dump: reading policies
> pg_dump: reading row security enabled for table "public.foo"
> pg_dump: reading policies for table "public.foo"
> pg_dump: reading publications
> pg_dump: reading publication membership
> pg_dump: reading subscriptions
> pg_dump: reading large objects
> pg_dump: reading dependency data
> pg_dump: saving encoding = UTF8
> pg_dump: saving standard_conforming_strings = on
> pg_dump: saving search_path =
> pg_dump: saving database definition
> pg_dump: dropping DATABASE foobar
> pg_dump: creating DATABASE "foobar"
> p

Re: I have no idea why pg_dump isn't dumping all of my data

2021-05-21 Thread Ian Lawrence Barwick
On Fri, 21 May 2021 at 20:42, Vijaykumar Jain wrote:
>
> PostgreSQL: Documentation: 13: 37.17. Packaging Related Objects into an 
> Extension
>
> so it works as expected.
> someone would have to point to the reference wrt modification of data in 
> objects created via extension.

If you want to be able to dump data inserted into extension tables, the
tables will need to be marked using pg_extension_config_dump() in the
extension script; see:

  
https://www.postgresql.org/docs/current/extend-extensions.html#EXTEND-EXTENSIONS-CONFIG-TABLES

and also:

   https://pgpedia.info/p/pg_extension_config_dump.html
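For the nanoscopic tables seen earlier in this thread, that would look roughly
like this at the end of the extension script (a sketch; the second argument is
a WHERE filter condition, and an empty string means "dump all rows"; any
sequences whose state matters can be marked the same way):

SELECT pg_catalog.pg_extension_config_dump('blog_user', '');
SELECT pg_catalog.pg_extension_config_dump('blog', '');
SELECT pg_catalog.pg_extension_config_dump('blog_post', '');
SELECT pg_catalog.pg_extension_config_dump('blog_post_comment', '');
SELECT pg_catalog.pg_extension_config_dump('blog_page', '');
SELECT pg_catalog.pg_extension_config_dump('blog_user_permissions', '');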


Regards

Ian Barwick

-- 
EnterpriseDB: https://www.enterprisedb.com




Re: Plan for exclusive backup method

2021-05-21 Thread MEERA
Hi Magnus,

Thank you.

On Wed, May 19, 2021 at 3:30 PM Magnus Hagander  wrote:

> On Wed, May 19, 2021 at 11:58 AM MEERA  wrote:
> >
> > Hi team,
> >
> > https://www.postgresql.org/docs/12/continuous-archiving.html
> >
> > It is mentioned here that the exclusive backup method is deprecated and
> > should be avoided. It is now recommended that all users upgrade their
> > scripts to use non-exclusive backups.
> >
> > Does this mean select pg_start_backup('label', false, true) is
> deprecated?
>
> Yes.
>
> > In a future version, will it not allow the third argument to be true?
>
> The exact future has not been decided, but most likely yes. And as
> most installations using it today are unsafe, it is recommended that
> you don't use it even before then. It basically exists for backwards
> compatibility with PostgreSQL prior to 9.6.
>
> --
>  Magnus Hagander
>  Me: https://www.hagander.net/
>  Work: https://www.redpill-linpro.com/
>
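For anyone updating their backup scripts, a minimal sketch of the
non-exclusive form on PostgreSQL 12 (the label is arbitrary; the session
that calls pg_start_backup must stay connected until pg_stop_backup):

-- third argument false = non-exclusive backup
SELECT pg_start_backup('base_backup', false, false);

-- ... copy the data directory with an external tool ...

-- returns the stop LSN plus the backup_label and tablespace_map contents,
-- which the calling script must write into the backup itself
SELECT lsn, labelfile, spcmapfile FROM pg_stop_backup(false);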


-- 
thanks and regards,
Meera R Nair


Re: Question about integer out of range in function

2021-05-21 Thread Peter J. Holzer
On 2021-05-16 11:09:38 -0700, David G. Johnston wrote:
> On Sunday, May 16, 2021, Condor  wrote:
> 
> 
>      new_time = fromtime * 1000; -- here is line 19
>      
> 
> 
> An integer times an integer results in an integer. Period. Neither fromtime
> nor new_time has been assigned to yet; the in-memory result of the computation
> is only allocated integer bits, and if you overflow that you get an error.

This is also true in C, btw, except that in C an overflow of a signed
int is undefined behaviour while an unsigned int is required to wrap
around.
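The same thing is easy to see directly in SQL, along with the usual fix of
widening one operand before the multiplication (a sketch; in the original
function this would be something like fromtime::bigint * 1000):

-- both operands are integer, so the result type is integer and it overflows
SELECT 2147483647::integer * 1000::integer;   -- ERROR: integer out of range

-- widening one side makes the whole expression bigint
SELECT 2147483647::integer * 1000::bigint;    -- 2147483647000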

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: The contents of the pg_timezone_names view bring some surprises

2021-05-21 Thread Adrian Klaver

On 5/19/21 5:50 PM, Bryn Llewellyn wrote:

Thanks, as ever, David and Tom, for your quick responses. Thanks also to 
Adrian Klaver, who replied in a branched thread with this—in response to 
my comment about my reading of the information content of the 
pg_timezone_abbrevs view: « This claims (as I read it) that a time zone 
abbreviation uniquely determines an offset from UTC. »






*Secondly, Adrians's response.*

Yes, the point that a timezone abbreviation does not uniquely determine 
the timezone offset is taken now. But notice this:


« In short, this is the difference between abbreviations and full names: 
abbreviations represent a specific offset from UTC…»

from

"8.5.3. Time Zones"
https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-TIMEZONES 



This seems to me to be flat-out wrong. An abbreviation, in general, does 
not represent a specific offset from UTC. Rather, it can represent two 
or more different offsets.


It is not flat-out wrong. An abbreviation, say the one I'm in now, PDT, 
will only ever represent a specific offset (-07), whereas the time zone I'm 
in, America/Los_Angeles, represents two offsets (-08/-07), the value of 
which depends on the date. There may be another abbreviation that uses that 
same offset, but again each abbreviation represents only a single offset.
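A small illustration of that difference (a sketch to run in psql):

-- an abbreviation maps to one fixed offset
SELECT abbrev, utc_offset, is_dst
FROM pg_timezone_abbrevs
WHERE abbrev IN ('PST', 'PDT');

-- a full zone name resolves to whichever offset is in force on a given date
SET timezone = 'America/Los_Angeles';
SELECT '2021-01-15 12:00'::timestamptz AS winter,   -- rendered with -08 (PST)
       '2021-07-15 12:00'::timestamptz AS summer;   -- rendered with -07 (PDT)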




Nonsense, eh? As David said, it's an instance of the more general:

set timezone = 'Foo42Bar';
show timezone;

I wish there were a way to turn this off and accept only 
pg_timezone_names.name values.


The second reason is that the abbreviations confuse ordinary readers who 
are slow to remember the "up is down" story.




The issue is you are looking for logic in a system that is based on 
political decisions. For instance there is a brewing West Coast 
movement, whereby the states on the US West Coast are looking to drop 
the DST transition with or without the approval of Congress. COVID 
stalled it, but I expect it will appear again in the near future.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Fwd: Proposed Chinese Translation of Community Code of Conduct

2021-05-21 Thread Ron

I see the attachments in your first email.

On 5/21/21 12:22 PM, Stacey Haysler wrote:



And, once again, the attachments did not make it through. Re-sending from 
my Gmail in the hope that the attachments make it through this time.


Regards,
Stacey

Begin forwarded message:

*From: *Stacey Haysler <sta...@haysler.sh>
*Subject: **Proposed Chinese Translation of Community Code of Conduct*
*Date: *May 21, 2021 at 9:58:12 AM PDT
*To: *pgsql-general@lists.postgresql.org 



The PostgreSQL Community Code of Conduct Committee has received a draft of 
the Chinese translation of the Code of Conduct Policy updated August 18, 
2020 for review.


The English version of the Policy is at:
https://www.postgresql.org/about/policies/coc/

The translation was created by:
Wensheng Zhang (张文升)


The translation was reviewed by:
Yandong Yao (姚延栋)

The proposed translation is attached as both a text file and a PDF to 
this message.


If you have any comments or suggestions for the translation, please bring 
them to our attention no later than 5:00 PM Pacific Time on May 28, 2021.


Thank you.

Regards,
Stacey

Stacey Haysler
Chair
PostgreSQL Community Code of Conduct Committee





--
Angular momentum makes the world go 'round.