Re: Odd Row Estimates in Query Plan (rows=75)

2018-08-16 Thread Laurenz Albe
Don Seiler wrote:
> We have a report query that has gone from maybe a few seconds to run to a few 
> minutes to run since mid-July.
> Looking at the output of EXPLAIN ANALYZE, the row count estimates are way 
> off, even though this table was
> just analyzed a day or so ago. What's more bizarre to me is that the row 
> count estimate is *always* 75 for
> every node of the plan, where the actual rows is in the hundreds or 
> thousands. This table is one of the
> busiest tables in our production database (many inserts and updates). It is 
> autovacuumed and autoanalyzed
> a few times per week, although I'm looking to change it to a nightly manual 
> schedule to avoid daytime autovacuums.
> 
>  Hash Join  (cost=1869142.34..1869146.15 rows=75 width=88) (actual time=179877.869..179878.011 rows=759 loops=1)
>    Hash Cond: (stores.pkey = lt.store_pkey)
>    Buffers: shared hit=1654593 read=331897 dirtied=249
>    ->  Seq Scan on stores  (cost=0.00..2.77 rows=77 width=22) (actual time=0.007..0.023 rows=78 loops=1)
>          Buffers: shared hit=2
>    ->  Hash  (cost=1869141.40..1869141.40 rows=75 width=50) (actual time=179877.847..179877.847 rows=759 loops=1)
>          Buckets: 1024  Batches: 1  Memory Usage: 73kB
>          Buffers: shared hit=1654591 read=331897 dirtied=249
>          ->  Subquery Scan on lt  (cost=1869138.59..1869141.40 rows=75 width=50) (actual time=179875.976..179877.697 rows=759 loops=1)
>                Buffers: shared hit=1654591 read=331897 dirtied=249
>                ->  GroupAggregate  (cost=1869138.59..1869140.65 rows=75 width=50) (actual time=179875.976..179877.606 rows=759 loops=1)
>                      Group Key: lts.store_pkey, lts.owner, (date_trunc('minute'::text, lts.date_gifted))
>                      Filter: (count(*) IS NOT NULL)
>                      Buffers: shared hit=1654591 read=331897 dirtied=249
>                      ->  Sort  (cost=1869138.59..1869138.78 rows=75 width=42) (actual time=179875.961..179876.470 rows=6731 loops=1)
>                            Sort Key: lts.store_pkey, lts.entry_source_owner, (date_trunc('minute'::text, lts.date_gifted))
>                            Sort Method: quicksort  Memory: 757kB
>                            Buffers: shared hit=1654591 read=331897 dirtied=249
>                            ->  Index Scan using gifts_date_added on gifts lts  (cost=0.56..1869136.25 rows=75 width=42) (actual time=190.657..179870.165 rows=6731 loops=1)
>                                  Index Cond: ((date_added > '2018-07-14 11:13:05'::timestamp without time zone) AND (date_added < '2018-08-13 14:14:21'::timestamp without time zone))
>                                  Filter: ((date_gifted >= '2018-08-13 11:13:05'::timestamp without time zone) AND (date_gifted < '2018-08-13 14:14:21'::timestamp without time zone))
>                                  Rows Removed by Filter: 938197
>                                  Buffers: shared hit=1654591 read=331897 dirtied=249
>  Planning time: 0.426 ms
>  Execution time: 179893.894 ms
> 
> I don't have a version of this query from prior to this summer, but getting 
> an explain plan for older data from older sandboxes shows a similar plan.
> 
> Sidenote: I am suggesting that an index be added on the date_gifted field as 
> that is far more selective and avoids
> throwing rows away. However I'm very interested in why every node dealing 
> with the gifts table thinks rows=75
> when the actual is much, much higher. And 75 seems like too round of a number 
> to be random?

Yes, I would say that adding an index on "date_gifted" would help.  You may end
up with two bitmap index scans that get combined.
Make sure "work_mem" is big enough to avoid lossy bitmaps (indicated in the 
plan).

About the misestimate:

You could try running ANALYZE with an increased "default_statistics_target" and
see if that changes the estimate.
If yes, then maybe you should increase the statistics target for that table or
(seeing that you are querying current values) collect statistics more often.
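
A rough sketch, using the table and column names visible in the plan above (the
index name and the statistics target of 1000 are just placeholders):

  CREATE INDEX CONCURRENTLY gifts_date_gifted_idx ON gifts (date_gifted);
  ALTER TABLE gifts ALTER COLUMN date_gifted SET STATISTICS 1000;
  ANALYZE gifts;

Then re-run EXPLAIN ANALYZE and see whether the rows=75 estimate moves.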

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com



Re: During promotion, new master tries to archive same segment twice

2018-08-16 Thread Phil Endecott

David Steele wrote:

On 8/15/18 4:25 PM, Phil Endecott wrote:
- Should my archive_command detect the case where it is asked to 
write the same file again with the same contents, and report 
success in that case?


Yes.



There are a number of cases where the same WAL
segment can be pushed more than once, especially after failures where
Postgres is not sure that the command completed.  The archive command
should handle this gracefully.



Hmm, OK.  Here's what the current docs say:

Section 25.3.1:

"The archive command should generally be designed to refuse to
overwrite any pre-existing archive file. This is an important
safety feature to preserve the integrity of your archive in case
of administrator error (such as sending the output of two
different servers to the same archive directory).

It is advisable to test your proposed archive command to ensure
that it indeed does not overwrite an existing file, and that it
returns nonzero status in this case."

And section 26.2.9:

"When continuous WAL archiving is used in a standby, there
are two different scenarios: the WAL archive can be shared
between the primary and the standby, or the standby can
have its own WAL archive.  When the standby has its own WAL
archive, set archive_mode to always, and the standby will call
the archive command for every WAL segment it receives, whether
it's by restoring from the archive or by streaming replication.
The shared archive can be handled similarly, but the
archive_command must test if the file being archived exists
already, and if the existing file has identical contents.
This requires more care in the archive_command, as it must be
careful to not overwrite an existing file with different contents,
but return success if the exactly same file is archived twice.
And all that must be done free of race conditions, if two
servers attempt to archive the same file at the same time."


So you're saying that that's wrong, and that I must always
handle the case when the same WAL segment is written twice.

I'll file a bug against the documentation.



pgBackRest has done this for years and it saves a *lot* of headaches.


The system to which I am sending the WAL files is a rsync.net
account.  I use it because of its reliability, but methods for
transferring files are limited largely to things like scp and
rsync.


Thanks, Phil.







Re: During promotion, new master tries to archive same segment twice

2018-08-16 Thread Phil Endecott

Adrian Klaver wrote:

On 08/15/2018 01:25 PM, Phil Endecott wrote:

Dear Experts,

Here is my latest issue with replication:

I have 3 systems, X, Y and Z.  Initially X is replicated to Y
and Y is replicated to Z; in each case the replication involves
log-shipping using archive_command and restore_command (via a
4th system called "backup") and then streaming.

The plan is to shut down X and to promote Y to be the new master.

I shut down X, and as expected Y complains that it can no longer
connect to it:

2018-08-15 15:10:58.785 UTC [617] LOG:  replication terminated by primary server
2018-08-15 15:10:58.785 UTC [617] DETAIL:  End of WAL reached on timeline 1 at 
7/E998.
2018-08-15 15:10:58.785 UTC [617] FATAL:  could not send end-of-streaming 
message to primary: no COPY in progress
scp: backup/postgresql/archivedir/0001000700E9: No such file or 
directory
2018-08-15 15:10:59.928 UTC [354] LOG:  invalid resource manager ID 22 at 
7/E998
2018-08-15 15:11:00.092 UTC [5856] FATAL:  could not connect to the primary 
server: FATAL:  the database system is shutting down

Note that in the middle of that, after streaming has failed it
tries to fetch the next WAL segment, E9, from the backup; this
fails because this segment doesn't exist.

Then I pg_ctl promote on Y:

2018-08-15 15:12:11.888 UTC [354] LOG:  received promote request
2018-08-15 15:12:11.888 UTC [354] LOG:  redo done at 7/E928
2018-08-15 15:12:11.888 UTC [354] LOG:  last completed transaction was at log 
time 2018-08-15 14:45:05.961153+00
scp: backup/postgresql/archivedir/0001000700E9: No such file or 
directory
scp: backup/postgresql/archivedir/0002.history: No such file or directory
2018-08-15 15:12:13.316 UTC [354] LOG:  selected new timeline ID: 2
2018-08-15 15:12:13.368 UTC [354] FATAL:  could not open file "recovery.conf": 
Permission denied

So in the process of exiting recovery mode it tries to rename
recovery.conf to recovery.done, but it fails to do so because
my recovery.conf was owned by root, not user postgres.  Oooops.
Perhaps it would be appropriate to check for that when it
initially reads recovery.conf?  Anyway, I fix the permissions
and restart it.  It starts in recovery mode and cannot connect
to the master, as expected.

2018-08-15 15:16:58.061 UTC [6036] LOG:  restored log file 
"0001000700E8" from archive
2018-08-15 15:16:58.108 UTC [6036] LOG:  redo starts at 7/E8678300
2018-08-15 15:16:58.731 UTC [6087] LOG:  archive command failed with exit code 1
2018-08-15 15:16:58.731 UTC [6087] DETAIL:  The failed archive command was: ssh backup 
test ! -f backup/postgresql/Y/0001000700E8 && scp 
pg_xlog/0001000700E8 backup:backup/postgresql/Y/0001000700E8
scp: backup/postgresql/archivedir/0001000700E9: No such file or 
directory


The above is not clear to me. My best guess:

1) The ssh backup ... is archiving WALs from Y to a Y-specific archive on 
'backup'?


Yes, there are two archive directories on "backup", one called
"archivedir" which is for the X-to-Y replication and a second
called "Y" which is for the Y-to-Z replication.

2) The scp: backup ... is the restore_command fetching archived WALs 
that originated on X and were sent to backup/postgresql/archivedir/ ?


Yes.


3) If 2) is true why is it showing up in the error for the archive command?


It's not part of the error for the archive command; it's just the
next thing in the log file.  Y is (1) trying to write WAL segment
E8 to the Y-to-Z archive and failing because it is already there and
(2) trying to read WAL segment E9 from the X-to-Y archive and failing
because it doesn't exist (and never will).  (1) is the problem, (2) is
expected.


Can you show us your actual archive and restore commands?


On Y:

restore_command = 'scp backup:backup/postgresql/archivedir/%f %p'
archive_command = 'ssh backup test ! -f backup/postgresql/Y/%f && scp %p backup:backup/postgresql/Y/%f'



2018-08-15 15:16:58.823 UTC [6036] LOG:  consistent recovery state reached at 
7/E998
2018-08-15 15:16:58.823 UTC [6036] LOG:  invalid resource manager ID 22 at 
7/E998
2018-08-15 15:16:58.824 UTC [6035] LOG:  database system is ready to accept 
read only connections
2018-08-15 15:16:58.855 UTC [6094] FATAL:  could not connect to the primary 
server: could not connect to server: Connection refused





I can now connect to the db and do read/write operations, but it
continues to try and fail to re-write segment E8 to its archive.

Eventually I manually delete E8 from the archive; it then
succeeds at writing it with the same content as before and seems to
continue to function normally.


Which archive?


The Y-to-Z archive, backup:backup/postgresql/Y/...E8.


Questions:

- Can all of this be blamed on the failure of the first attempt
to promote due to the wrong permissions on recovery.conf?

- Should my archive_command detect the case where it is asked to
write the same file again with the same contents, and report success
in that case?

regex match and special characters

2018-08-16 Thread Alex Kliukin
Hi,

Here is a simple SQL statement that gives different results on PostgreSQL 9.6 
and PostgreSQL 10+. The space character at the end of the string is actually 
U+2006 SIX-PER-EM SPACE 
(http://www.fileformat.info/info/unicode/char/2006/index.htm)

test=# select 'abcd ' ~ 'abcd\s';
 ?column?
--
 t
(1 row)

test=# select version();
 version
-
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 
p1.3) 6.4.0, 64-bit
(1 row)


On another server (running on the same system on a different port)

postgres=# select version();
version
---
 PostgreSQL 9.6.9 on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 
p1.3) 6.4.0, 64-bit
(1 row)

postgres=# select 'abcd ' ~ 'abcd\s';
 ?column?
--
 f
(1 row)

For both clusters, the client encoding is UTF8, the database encoding and 
collation is UTF8 and en_US.utf8 respectively, and the lc_ctype is en_US.utf8. 
I am accessing the databases running locally by ssh-ing first to the host. 

I observed similar issues with other Linux-based servers running Ubuntu; in all 
cases the regex evaluated to true on PostgreSQL 10+ and false on earlier 
versions (down to 9.3). The query comes from a table check that suddenly 
stopped accepting rows that were valid in the older version during the migration. 
Making it select 'abcd ' ~ E'abcd\\s' doesn't modify the outcome, unsurprisingly.

Is it reproducible for others here as well? Given that it is, is there a way to 
make both versions behave the same?

Cheers,
Alex



Logical replication from standby

2018-08-16 Thread Andreas Joseph Krogh
Any plans to support $subject?
 
Offloading the primary by replicating from standby would be very useful.
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 




Re: During promotion, new master tries to archive same segment twice

2018-08-16 Thread David Steele
On 8/16/18 4:37 AM, Phil Endecott wrote:
> David Steele wrote:
>> On 8/15/18 4:25 PM, Phil Endecott wrote:
>>> - Should my archive_command detect the case where it is asked to
>>> write the same file again with the same contents, and report success
>>> in that case?
>>
>> Yes.
> 
>> There are a number of cases where the same WAL
>> segment can be pushed more than once, especially after failures where
>> Postgres is not sure that the command completed.  The archive command
>> should handle this gracefully.
> 
> Hmm, OK.  Here's what the current docs say:
> 
> Section 25.3.1:
> 
> "The archive command should generally be designed to refuse to
> overwrite any pre-existing archive file. This is an important
> safety feature to preserve the integrity of your archive in case
> of administrator error (such as sending the output of two
> different servers to the same archive directory).
> 
> It is advisable to test your proposed archive command to ensure
> that it indeed does not overwrite an existing file, and that it
> returns nonzero status in this case."
> 
> And section 26.2.9:
> 
> "When continuous WAL archiving is used in a standby, there
> are two different scenarios: the WAL archive can be shared
> between the primary and the standby, or the standby can
> have its own WAL archive.  When the standby has its own WAL
> archive, set archive_mode to always, and the standby will call
> the archive command for every WAL segment it receives, whether
> it's by restoring from the archive or by streaming replication.
> The shared archive can be handled similarly, but the
> archive_command must test if the file being archived exists
> already, and if the existing file has identical contents.
> This requires more care in the archive_command, as it must be
> careful to not overwrite an existing file with different contents,
> but return success if the exactly same file is archived twice.
> And all that must be done free of race conditions, if two
> servers attempt to archive the same file at the same time."
> 
> So you're saying that that's wrong, and that I must always
> handle the case when the same WAL segment is written twice.

Seems like an omission in section 25.3.1 rather than a problem in 26.2.9.

Duplicate WAL is possible in *all* cases.  A trivial example is that
Postgres calls archive_command and it succeeds but an error happens
(e.g. network) right before Postgres is notified.  It will wait a bit
and try the same WAL segment again.

> I'll file a bug against the documentation.

OK.

>> pgBackRest has done this for years and it saves a *lot* of headaches.
> 
> The system to which I am sending the WAL files is a rsync.net
> account.  I use it because of its reliability, but methods for
> transferring files are limited largely to things like scp and
> rsync.

Rsync and scp are not good tools to use for backup because there is no
guarantee of durability, i.e. the file is not synced to disk before
success is returned.  rsync.net may have durability guarantees but you
should verify that with them.

Even so, crafting a safe archive_command using these tools is going to
be very tricky.

Regards,
-- 
-David
da...@pgmasters.net



Re: During promotion, new master tries to archive same segment twice

2018-08-16 Thread Stephen Frost
Greetings,

* Phil Endecott (spam_from_pgsql_li...@chezphil.org) wrote:
> David Steele wrote:
> >pgBackRest has done this for years and it saves a *lot* of headaches.
> 
> The system to which I am sending the WAL files is a rsync.net
> account.  I use it because of its reliability, but methods for
> transferring files are limited largely to things like scp and
> rsync.

You might consider using a well-tested PG backup tool locally and then
simply rsync that backup over to rsync.net.  Certainly with pgbackrest,
we intend and expect people to use more routine "just copy all the
files" backup methods for backing up the repository.

Basically, you could point pgbackrest at a local repository under
/some/local/path and then rsync from there over to rsync.net.

Having an rsync or scp 'storage' option (similar to the s3 one) for
pgbackrest might be interesting..  We discussed having the ability to
start a backup from the PG server at one point but in that discussion
we were thinking pgbackrest would also be installed on the backup
server.  This would be different from that in that the remote side would
only need to support rsync or scp.  You'd have to accept that if the
backup server dies then you lose data though, since I don't believe
there's a way to ask for an fsync() through rsync or scp, which makes it
a much less compelling feature, unless rsync.net guarantees writes
somehow..?

Thanks!

Stephen




Re: Logical replication from standby

2018-08-16 Thread Andres Freund
On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
> Any plans to support $subject?

There's LOADS of discussion on this on the lists.



Sv: Re: Logical replication from standby

2018-08-16 Thread Andreas Joseph Krogh
On Thursday, 16 August 2018 at 14:18:12, Andres Freund <and...@anarazel.de> wrote:
On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
 > Any plans to support $subject?

 There's LOADS of discussion on this on the lists.
 
I couldn't find anything specifically regarding "from standby", other than 
previous posts by me, which didn't really result in anything.
 
--
 Andreas Joseph Krogh



Re: Logical replication from standby

2018-08-16 Thread Andres Freund
Hi,

On 2018-08-16 14:32:34 +0200, Andreas Joseph Krogh wrote:
> On Thursday, 16 August 2018 at 14:18:12, Andres Freund <and...@anarazel.de> wrote:
> On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
>  > Any plans to support $subject?
> 
>  There's LOADS of discussion on this on the lists.
>  
> I couldn't find anything specifically regarding "from standby", other than 
> previous posts by me, which didn't really result in anything.

See e.g.
http://archives.postgresql.org/message-id/CAMsr%2BYEVmBJ%3DdyLw%3D%2BkTihmUnGy5_EW4Mig5T0maieg_Zu%3DXCg%40mail.gmail.com
and also https://commitfest.postgresql.org/15/788/ etc.

Greetings,

Andres Freund



Sv: Re: Logical replication from standby

2018-08-16 Thread Andreas Joseph Krogh
On Thursday, 16 August 2018 at 14:50:47, Andres Freund <and...@anarazel.de> wrote:
Hi,

 On 2018-08-16 14:32:34 +0200, Andreas Joseph Krogh wrote:
 > On Thursday, 16 August 2018 at 14:18:12, Andres Freund <and...@anarazel.de> wrote:
 > On 2018-08-16 13:35:06 +0200, Andreas Joseph Krogh wrote:
 >  > Any plans to support $subject?
 >
 >  There's LOADS of discussion on this on the lists.
 >  
 > I couldn't find anything specifically regarding "from standby", other than
 > previous posts by me, which didn't really result in anything.

 See e.g.
 
http://archives.postgresql.org/message-id/CAMsr%2BYEVmBJ%3DdyLw%3D%2BkTihmUnGy5_EW4Mig5T0maieg_Zu%3DXCg%40mail.gmail.com
 and also https://commitfest.postgresql.org/15/788/ etc.
 
I saw the commitfest item but sadly nothing has happened.
Thanks for pointing out the thread.
 
Craig, if you are picking this up, are you planning to work more on it?
 
--
 Andreas Joseph Krogh



Re: Copying data from a CSV file into a table dynamically

2018-08-16 Thread pavan95
Thank you so much, Ron.  You saved me a lot of effort; after a slight
modification of the quoting in the command you suggested, it started working fine.

The modified command is as below:

cat /tmp/xyz/abc/postgresql-`date --date="0 days ago" +%Y-%m-%d`_*.csv | 
psql -U aaa -d mydb -c "COPY postgres_log1 FROM STDIN WITH csv;"

Team,

Thank you so much for your valuable time spent on this!!

Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: regex match and special characters

2018-08-16 Thread Adrian Klaver

On 08/16/2018 03:59 AM, Alex Kliukin wrote:

Hi,

Here is a simple SQL statement that gives different results on PostgreSQL 9.6 
and PostgreSQL 10+. The space character at the end of the string is actually 
U+2006 SIX-PER-EM SPACE 
(http://www.fileformat.info/info/unicode/char/2006/index.htm)

test=# select 'abcd ' ~ 'abcd\s';
  ?column?
--
  t
(1 row)

test=# select version();
  version
-
  PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 
p1.3) 6.4.0, 64-bit
(1 row)


On another server (running on the same system on a different port)

postgres=# select version();
 version
---
  PostgreSQL 9.6.9 on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 
p1.3) 6.4.0, 64-bit
(1 row)

postgres=# select 'abcd ' ~ 'abcd\s';
  ?column?
--
  f
(1 row)

For both clusters, the client encoding is UTF8, the database encoding and 
collation is UTF8 and en_US.utf8 respectively, and the lc_ctype is en_US.utf8. 
I am accessing the databases running locally by ssh-ing first to the host.

I observed similar issues with other Linux-based servers running Ubuntu, in all 
cases the regex resulted in true on PostgreSQL 10+ and false on earlier 
versions (down to 9.3). The query comes from a table check that suddenly 
stopped accepting rows valid in the older version during the migration. Making 
it  select 'abcd ' ~ E'abcd\\s' doesn't  modify the outcome, unsurprisingly.

Is it reproducible for others here as well? Given that it is, Is there a way to 
make both versions behave the same?


select version();
  version 



 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 
4.8.5, 64-bit



lc_collate  | en_US.UTF-8 


lc_ctype| en_US.UTF-8


test=# select 'abcd'||chr(2006) ~ E'abcd\s';
 ?column?
--
 f
(1 row)

In your example you are working on Postgres devel. Have you tried it on 
Postgres 10 and/or 11?




Cheers,
Alex





--
Adrian Klaver
adrian.kla...@aklaver.com



Copy over large data Postgresql 9.5

2018-08-16 Thread Vikas Sharma
Hello Experts,

I need to transfer 90GB of pgsql 9.5 data from one cloud provider to
another.  We have plenty of downtime to do this. I will be copying over the data
directory after shutting down pgsql services on the source.
The problem is how I can confirm that the data has been copied over
correctly. Is the pgsql service starting OK on the target enough to ensure
that, or is there anything more funky I should do?

I will create compressed archive of data and calculate SHA-256 checksum and
compare after copying over to target.

At source pgsql is in streaming replication using repmgr (3.5) so I think I
will have to recreate repmgr database after copying over and register
master again.

Should I vacuum (full) and reindex on the target to gather the statistics
again?

Could you advise on the above please?

Regards
Vikas


Re: regex match and special characters

2018-08-16 Thread Tom Lane
Alex Kliukin  writes:
> Here is a simple SQL statement that gives different results on PostgreSQL 9.6 
> and PostgreSQL 10+. The space character at the end of the string is actually 
> U+2006 SIX-PER-EM SPACE 
> (http://www.fileformat.info/info/unicode/char/2006/index.htm)

I think the reason for the discrepancy is that in v10 we fixed the regex
locale support so that it could properly classify code points above U+7FF,
cf

https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=c54159d44ceaba26ceda9fea1804f0de122a8f30

So 10 is giving the right answer (i.e. that \s matches U+2006).
9.x is not, but we're not going to back-patch such a large change.
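
A quick check, assuming standard_conforming_strings is on so the backslash
reaches the regex engine (chr(8198) is U+2006 in decimal):

  select chr(8198) ~ '\s' as six_per_em_is_whitespace;

This should return t on 10+ and f on 9.6 and earlier.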

regards, tom lane



Re: During promotion, new master tries to archive same segment twice

2018-08-16 Thread Adrian Klaver

On 08/16/2018 01:48 AM, Phil Endecott wrote:

Adrian Klaver wrote:

On 08/15/2018 01:25 PM, Phil Endecott wrote:

Dear Experts,




The above is not clear to me. My best guess:




It's not part of the error for the archive command; it's just the
next thing in the log file.  Y is (1) trying to write WAL segment
E8 to the Y-to-Z archive and failing because it is already there and
(2) trying to read WAL segment E9 from the X-to-Y archive and failing
because it doesn't exist (and never will).  (1) is the problem, (2) is
expected.


Hmm, still not sure why they are both showing up in the same LOG: entry 
as the entry was for the failure of the archive_command.


What is archive_mode set to on Y?


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Copy over large data Postgresql 9.5

2018-08-16 Thread Andreas Kretschmer
On 16 August 2018 15:41:31 CEST, Vikas Sharma  wrote:
>Hello Experts,
>
>I need to transfer pgsql 9.5 data of 90GB from one Cloud provider to
>other.  We have plenty of downtime to do this. I will be copying over
>data
>directory after shutting down pgsql services on the source.

Why not use streaming replication, without downtime?

Regards, Andreas


-- 
2ndQuadrant - The PostgreSQL Support Company



Sv: Re: regex match and special characters

2018-08-16 Thread Andreas Joseph Krogh
On Thursday, 16 August 2018 at 15:16:52, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
On 08/16/2018 03:59 AM, Alex Kliukin wrote:
 > Hi,
 >
 > Here is a simple SQL statement that gives different results on PostgreSQL 
9.6 and PostgreSQL 10+. The space character at the end of the string is 
actually U+2006 SIX-PER-EM SPACE 
(http://www.fileformat.info/info/unicode/char/2006/index.htm)
 >
 > test=# select 'abcd ' ~ 'abcd\s';
 >   ?column?
 > --
 >   t
 > (1 row)
 >
 > test=# select version();
 >                                               version
 > 
-
 >   PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 
6.4.0-r1 p1.3) 6.4.0, 64-bit
 > (1 row)
 >
 >
 > On another server (running on the same system on a different port)
 >
 > postgres=# select version();
 >                                              version
 > 
---
 >   PostgreSQL 9.6.9 on x86_64-pc-linux-gnu, compiled by gcc (Gentoo 6.4.0-r1 
p1.3) 6.4.0, 64-bit
 > (1 row)
 >
 > postgres=# select 'abcd ' ~ 'abcd\s';
 >   ?column?
 > --
 >   f
 > (1 row)
 >
 > For both clusters, the client encoding is UTF8, the database encoding and 
collation is UTF8 and en_US.utf8 respectively, and the lc_ctype is en_US.utf8. 
I am accessing the databases running locally by ssh-ing first to the host.
 >
 > I observed similar issues with other Linux-based servers running Ubuntu, in 
all cases the regex resulted in true on PostgreSQL 10+ and false on earlier 
versions (down to 9.3). The query comes from a table check that suddenly 
stopped accepting rows valid in the older version during the migration. Making 
it  select 'abcd ' ~ E'abcd\\s' doesn't  modify the outcome, unsurprisingly.
 >
 > Is it reproducible for others here as well? Given that it is, Is there a 
way to make both versions behave the same?

 select version();
                                        version

 

   PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux)
 4.8.5, 64-bit


 lc_collate                          | en_US.UTF-8

 lc_ctype                            | en_US.UTF-8


 test=# select 'abcd'||chr(2006) ~ E'abcd\s';
   ?column?
 --
   f
 (1 row)

 In your example you are working on Postgres devel. Have you tried it on
 Postgres 10 and/or 11?
 
char(2006) produces the wrong character as 2006 is the hex-value. You have to 
use 8198:
 
andreak@[local]:5433 10.4 andreak=# select version(); 
 
┌┐
 │    version 
│
 
├┤
 │ PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
7.3.0-16ubuntu3) 7.3.0, 64-bit │
 
└┘
 (1 row)

andreak@[local]:5433 10.4 andreak=# select 'abcd'||chr(8198) ~ 'abcd\s'; 
 ┌──┐
 │ ?column? │
 ├──┤
 │ t    │
 └──┘
 (1 row)
  
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: Sv: Re: regex match and special characters

2018-08-16 Thread Adrian Klaver

On 08/16/2018 07:04 AM, Andreas Joseph Krogh wrote:

char(2006) produces the wrong character as 2006 is the hex-value. You 
have to use 8198:

andreak@[local]:543310.4 andreak=# select version();
┌┐
│    version 
 │

├┤
│ PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 
7.3.0-16ubuntu3) 7.3.0, 64-bit │

└┘
(1 row)

andreak@[local]:543310.4 andreak=# select 'abcd'||chr(8198) ~ 'abcd\s';
┌──┐
│ ?column? │
├──┤
│ t    │
└──┘
(1 row)


Argh, read the wrong line. Thanks for the correction. Still:

test=# select version();
  version 



 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 
4.8.5, 64-bit

(1 row)


test=# select 'abcd'||chr(8198) ~ E'abcd\s';
 ?column?
--
 f
(1 row)



--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Sv: Re: Sv: Re: regex match and special characters

2018-08-16 Thread Andreas Joseph Krogh
On Thursday, 16 August 2018 at 16:32:40, Adrian Klaver <adrian.kla...@aklaver.com> wrote:
On 08/16/2018 07:04 AM, Andreas Joseph Krogh wrote:

 > char(2006) produces the wrong character as 2006 is the hex-value. You
 > have to use 8198:
 > andreak@[local]:543310.4 andreak=# select version();
 > 
┌┐
 > │    version
 >  │
 > 
├┤
 > │ PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
 > 7.3.0-16ubuntu3) 7.3.0, 64-bit │
 > 
└┘
 > (1 row)
 >
 > andreak@[local]:543310.4 andreak=# select 'abcd'||chr(8198) ~ 'abcd\s';
 > ┌──┐
 > │ ?column? │
 > ├──┤
 > │ t    │
 > └──┘
 > (1 row)

 Argh, read the wrong line. Thanks for the correction. Still:

 test=# select version();
                                        version

 

   PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux)
 4.8.5, 64-bit
 (1 row)


 test=# select 'abcd'||chr(8198) ~ E'abcd\s';
   ?column?
 --
   f
 (1 row)
 
When using E-syntax you need to double the backslash for escaping:
 
andreak@[local]:5433 10.4 andreak=# select 'abcd'||chr(8198) ~ E'abcd\\s'; 
 ┌──┐
 │ ?column? │
 ├──┤
 │ t    │
 └──┘
 (1 row)
  
 
-- Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 
 


 


Re: Sv: Re: Sv: Re: regex match and special characters

2018-08-16 Thread Adrian Klaver

On 08/16/2018 07:47 AM, Andreas Joseph Krogh wrote:
On Thursday, 16 August 2018 at 16:32:40, Adrian Klaver <adrian.kla...@aklaver.com> wrote:


On 08/16/2018 07:04 AM, Andreas Joseph Krogh wrote:

 > char(2006) produces the wrong character as 2006 is the hex-value. You
 > have to use 8198:
 > andreak@[local]:543310.4 andreak=# select version();
 >

┌┐
 > │    version
 >  │
 >

├┤
 > │ PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
 > 7.3.0-16ubuntu3) 7.3.0, 64-bit │
 >

└┘
 > (1 row)
 >
 > andreak@[local]:543310.4 andreak=# select 'abcd'||chr(8198) ~
'abcd\s';
 > ┌──┐
 > │ ?column? │
 > ├──┤
 > │ t    │
 > └──┘
 > (1 row)

Argh, read the wrong line. Thanks for the correction. Still:

test=# select version();
                                        version



   PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux)
4.8.5, 64-bit
(1 row)


test=# select 'abcd'||chr(8198) ~ E'abcd\s';
   ?column?
--
   f
(1 row)

When using E-syntax you need to double the backslash for escaping:
andreak@[local]:543310.4 andreak=# select 'abcd'||chr(8198) ~ E'abcd\\s';
┌──┐
│ ?column? │
├──┤
│ t    │
└──┘
(1 row)


Hmm:

test=# show standard_conforming_strings;
 standard_conforming_strings
-
 off
(1 row)

test=# show escape_string_warning;
 escape_string_warning
---
 on
(1 row)



test=# select 'abcd'||chr(8198) ~ 'abcd\s';
WARNING:  nonstandard use of escape in a string literal
LINE 1: select 'abcd'||chr(8198) ~ 'abcd\s';
   ^
HINT:  Use the escape string syntax for escapes, e.g., E'\r\n'.
 ?column?
--
 f
(1 row)

test=# select 'abcd'||chr(8198) ~ 'abcd\\s';
WARNING:  nonstandard use of \\ in a string literal
LINE 1: select 'abcd'||chr(8198) ~ 'abcd\\s';
   ^
HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
 ?column?
--
 t
(1 row)


test=# set escape_string_warning = 'off';
SET
test=# show escape_string_warning;
 escape_string_warning
---
 off
(1 row)

test=# select 'abcd'||chr(8198) ~ 'abcd\s';
 ?column?
--
 f
(1 row)

test=# set standard_conforming_strings = 'on';
SET
test=# show standard_conforming_strings;
 standard_conforming_strings
-
 on
(1 row)

test=# select 'abcd'||chr(8198) ~ 'abcd\s';
 ?column?
--
 t
(1 row)


Wonder if the OP has standard_conforming_strings='off' and
escape_string_warning='off'?




--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Sv: Re: Sv: Re: regex match and special characters

2018-08-16 Thread Tom Lane
Andreas Joseph Krogh  writes:
> When using E-syntax you need to double the backslash for escaping:
> andreak@[local]:5433 10.4 andreak=# select 'abcd'||chr(8198) ~ E'abcd\\s'; 

Another thing to keep in mind is that this is all locale-dependent
(specifically, LC_CTYPE, I believe).  In plain old C locale, nothing
beyond the standard ASCII whitespace characters will match \s.  I'm not
sure how universal it is for other locales to treat characters like
U+2006 as whitespace.
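
To see which ctype a given database is using, one option is:

  select datname, datcollate, datctype from pg_database where datname = current_database();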

regards, tom lane



Re: Sv: Re: Sv: Re: regex match and special characters

2018-08-16 Thread Adrian Klaver

On 08/16/2018 08:13 AM, Adrian Klaver wrote:




Wonder if the OP has standard_conforming_strings='off' and
escape_string_warning='off'?


In the above I am referring to the 9.6.9 instance.





--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Sv: Re: Sv: Re: regex match and special characters

2018-08-16 Thread Adrian Klaver

On 08/16/2018 08:19 AM, Adrian Klaver wrote:

On 08/16/2018 08:13 AM, Adrian Klaver wrote:




Wonder if the OP has standard_conforming_strings='off' and
escape_string_warning='off'?


In the above referring to 9.6.9 instance.


Well that theory is no good:

test=# select version();
   version 


-
 PostgreSQL 9.6.7 on x86_64-pc-linux-gnu, compiled by gcc (SUSE Linux) 
4.8.5, 64-bit

(1 row)


test=# set standard_conforming_strings='off';
SET
test=# show standard_conforming_strings;
 standard_conforming_strings
-
 off

test=# show escape_string_warning;
 escape_string_warning
---
 off


test=# select 'abcd'||chr(8198) ~ E'abcd\s';
 ?column?
--
 f
(1 row)

test=# select 'abcd'||chr(8198) ~ E'abcd\\s';
 ?column?
--
 f
(1 row)

test=# set escape_string_warning='on';
SET
test=# set standard_conforming_strings='on';
SET


test=# select 'abcd'||chr(8198) ~ 'abcd\s';
 ?column?
--
 f
(1 row)

test=# select 'abcd'||chr(8198) ~ E'abcd\\s';
 ?column?
--
 f
(1 row)









--
*Andreas Joseph Krogh*
CTO / Partner - Visena AS
Mobile: +47 909 56 963
andr...@visena.com 
www.visena.com 










--
Adrian Klaver
adrian.kla...@aklaver.com



pg_upgrade (and recovery) pitfalls

2018-08-16 Thread PO
Hello -general!

(didn't want to pester -hackers with this, as I don't know if I've been doing 
something terribly wrong ;-)


Consider the following scenario/setup:
- 4 DB servers in 2 DCs
  - 1 primary (in DC1)
  - 1 sync secondary (in other DC)
  - 2 async secondaries (distributed over DCs)
  - 1 server running BARMAN for backups/archiving

- Puppet takes care of the PG config, recovery.conf etc.

- ssh connections are only allowed and enabled to and from the BARMAN 
host/user, no ssh between the DB hosts.

- as we switch between servers pretty often, BARMAN provides a single archive 
for all 4 servers. Only the primary does WAL archiving.

- General requirements are:
  - *always* have one sync secondary online (no exceptions)
  - best possible availability ==> shortest possible downtime



The approach for major upgrades is to upgrade 2 servers and see if everything 
works as expected, then 
- either follow up with the remaining secondaries or
- promote one of them and carry on with the old version

Since we always have 2 spare secondaries to fall back to, pg_upgrade in 
"--link" mode is a viable approach, as it's nice and fast.


The "naive" idea was to shutdown all instances (starting with the primary to 
enable final syncs), run "pg_upgrade -k" on both the former primary and the 
former sync secondary, re-link the recovery.conf on the secondary, re-enable 
the "primary" IP and start both.

D'oh! The secondary is complaining about a different cluster identifier:
"FATAL:  database system identifier differs between the primary and standby"

(From looking at the code, I could not determine straight away when and how 
this identifier is generated, but I guess it is somewhere in initdb.c?)

So, as we can't rsync (no ssh...), which would probably finish in a few 
seconds, a pg_basebackup is due. Which can be a PITA when the DB size is 
scraping on a TB and you have a single 1GB/sec connection. Bye, bye, 
availability (remember the primary requirement?).

==> So by now, we're only pg_upgrade'ing the primary and follow up with a 
pg_basebackup to the secondary, planning for much longer downtimes. <==


After finishing the pg_basebackup, re-link the recovery.conf, start. 
The recovery finds history-files from higher timelines in the archive, starts 
to recover those (?) and then complains that the timeline doesn't match (don't 
take the numbers here too seriously, this is from a low-traffic test system, 
the fork off TL 1 was at least a year ago):

restored log file "0002.history" from archive
restored log file "0003.history" from archive
restored log file "0004.history" from archive
FATAL:  requested timeline 3 is not a child of this server's history
DETAIL:  Latest checkpoint is at 9C/36044D28 on timeline 1, but in the history 
of the requested timeline, the server forked off from that timeline at 
69/8800.

This mess can probably be cleaned up manually (delete the 00[234].history 
files etc. on both the secondary and the BARMAN archive); however, to be 100% safe 
(or when you're inexperienced), you take another basebackup :/

And - after moving the *.history files out of the way in the archive - the 
secondary finally starts and starts receiving WALs.

Sidenote: this second problem happened to us as well after a colleague promoted 
a secondary for some R/W tests w/out first disabling the archive_command in the 
morning and then re-built it using pg_basebackup in the afternoon.
I have to say that it took me quite a while to figure out what was going on 
there... sudden timeline jumps, presumably out of nowhere?!?


Now, questions/wishes:
- did I do something obvious(ly) wrong?

- why does a recovery, based on a recovery.conf that points to a reachable 
primary (which obviously communicates its own timeline), still look for higher 
timelines' history files in the archive and try to jump onto these timelines? 
This doesn't seem reasonable to me at all...

- is there a way to have pg_upgrade/initdb use a particular "database system 
identifier" or some kind of "IV" for the new instance, allowing for identical 
upgrades on primary and secondary? Apart from that number (and the system 
tables' OIDs), the data directories should be identical, as far as I understand 
pg_upgrade's functionality? 
  (- and where do I find that identifier?)

- is there a way to have pg_upgrade/initdb initiate the new cluster on the same 
(or a higher) timeline, to prevent f***ing up the archive? If not, I'd 
certainly appreciate such an option! (I'm well aware that I need a new 
basebackup after the upgrade anyway, but alas!)

- any other hints (apart from rsync or other ssh-based methods) on how I could get 
around the pg_basebackup would be highly appreciated. Something that only 
clones the system tables and not the bulk of the (identical) data files?
- is my situation (mandatory sync secondary) so unusual that nobody ever 
thought about the above? ;-)

Cheers & best regards,

Gunnar "Nick" Bluth



Re: During promotion, new master tries to archive same segment twice

2018-08-16 Thread Phil Endecott

Adrian Klaver wrote:

On 08/16/2018 01:48 AM, Phil Endecott wrote:

Adrian Klaver wrote:

On 08/15/2018 01:25 PM, Phil Endecott wrote:

Dear Experts,




The above is not clear to me. My best guess:




It's not part of the error for the archive command; it's just the
next thing in the log file.  Y is (1) trying to write WAL segment
E8 to the Y-to-Z archive and failing because it is already there and
(2) trying to read WAL segment E9 from the X-to-Y archive and failing
because it doesn't exist (and never will).  (1) is the problem, (2) is
expected.


Hmm, still not sure why they are both showing up in the same LOG: entry 
as the entry was for the failure of the archive_command.


It's not.  It's just the next thing in the log file.  It seems that
stderr output from scp appears raw in the log file.


What is archive_mode set to on Y?


archive_mode = always


Regards, Phil.







Re: using graph model with PostgreSQL

2018-08-16 Thread Data Ace
I think it's a forked PostgreSQL; try AgensGraph:

https://www.postgresql.org/download/products/8/

On Wed, Aug 15, 2018 at 7:09 AM, 김세훈  wrote:

> Hi there,
>
> currently I'm using PostgreSQL with PostGIS extension to handle geospatial
> data.
>
> In my project I need to apply some graph algorithms like MST for some
> network of GPS coordinates.
>
> I know there is some way of using Neo4j with PostgreSQL but is there any
> other way to construct
>
> graph model within PostgreSQL environment?
>
> Any external modules would be welcomed.
>
>
> Thanks.
>
>
>


Re: pg_upgrade (and recovery) pitfalls

2018-08-16 Thread Stephen Frost
Greetings,

* PO (gunnar.bl...@pro-open.de) wrote:
> Consider the following scenario/setup:
> - 4 DB servers in 2 DCs
>   - 1 primary (in DC1)
>   - 1 sync secondary (in other DC)
>   - 2 async secondaries (distributed over DCs)

I'm a bit surprised that you're ok with the latency imposed by using
sync replication to another data center.  I'm guessing they're pretty
close to each other?

> - General requirements are:
>   - *always* have one sync secondary online (no exceptions)

Well, you kind of have to or everything stops. ;)

> The "naive" idea was to shutdown all instances (starting with the primary to 
> enable final syncs), run "pg_upgrade -k" on both the former primary and the 
> former sync secondary, re-link the recovery.conf on the secondary, re-enable 
> the "primary" IP and start both.
> 
> D'oh! The secondary is complaining about a different cluster identifier:
> "FATAL:  database system identifier differs between the primary and standby"

No, you can't do that.

> (From looking at the code, I could not determine straight away when and how 
> this identifier is generated, but I guess it is somewhere in initdb.c?)

Yes.
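
On 9.6 and later you can also read it from SQL:

  SELECT system_identifier FROM pg_control_system();

pg_controldata on the old and new data directories reports the same value
as "Database system identifier".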

> So, as we can't rsync (no ssh...), which would probably finish in a few 
> seconds, a pg_basebackup is due. Which can be a PITA when the DB size is 
> scraping on a TB and you have a single 1GB/sec connection. Bye, bye, 
> availability (remember the primary requirement?).

The rsync *might* finish quickly but it depends a lot on the specifics
of your environment- for example, the rsync method doesn't do anything
for unlogged tables, so if you have large unlogged tables you can end up
with them getting copied over and that can take a long time, so, some
prep work should be done to make sure you nuke any unlogged tables
before you go through with the process (or do something similar).
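
A sketch for spotting unlogged tables up front (relpersistence = 'u' marks
unlogged relations):

  SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r' AND c.relpersistence = 'u';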

pg_basebackup has the unfortunate issue that it's single-threaded,
meaning that enabling compression probably will cause the system to
bottle-neck on the single CPU before reaching your 1Gb/s bandwidth
limit anyway.  You could parallelize the backup/restore using pgbackrest
or, in recent versions I think, with barman, and that should at least
get you to be able to fill the 1Gb/s pipe with compressed data for the
backup.  You're likely still looking at an hour or more though to get
all that data copied over that small a pipe.

> ==> So by now, we're only pg_upgrade'ing the primary and follow up with a 
> pg_basebackup to the secondary, planning for much longer downtimes. <==

I have to say that I probably would argue that you should really have at
least two replicas in the same DC as the primary and then use
quorum-based synchronous replication.  Presumably, that'd also increase
the bandwidth available to you for rebuilding the replica, reducing the
downtime associated with that.  That might also get you to the point
where you could use the rsync method that's discussed in the pg_upgrade
docs to get the replicas back online.
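
For reference, the quorum form on PostgreSQL 10 and later looks like this (the
standby names are placeholders):

  ALTER SYSTEM SET synchronous_standby_names = 'ANY 1 (standby_a, standby_b)';
  SELECT pg_reload_conf();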

> After finishing the pg_basebackup, re-link the recovery.conf, start. 
> The recovery finds history-files from higher timelines in the archive, starts 
> to recover those (?) and then complains that the timeline doesn't match 
> (don't take the numbers here too seriously, this is from a low-traffic test 
> system, the fork off TL 1 was at least a year ago):
> 
> restored log file "0002.history" from archive
> restored log file "0003.history" from archive
> restored log file "0004.history" from archive
> FATAL:  requested timeline 3 is not a child of this server's history
> DETAIL:  Latest checkpoint is at 9C/36044D28 on timeline 1, but in the 
> history of the requested timeline, the server forked off from that timeline 
> at 69/8800.
> 
> This mess can probably be cleaned up manually (delete the 00[234].history 
> etc. on both the secondary and the BARMAN archive), however to be 100% safe 
> (or when you're unexperienced), you take another basebackup :/

Whoa.  No, this isn't good - once you've done a pg_upgrade, you're
on a *new* cluster, really.  There's no playing forward between an old
PG server and a new one that's been pg_upgrade'd and you should really
be using a tool that makes sure you can't end up with a messed up
archive like that.  What seems to be happening here is that your restore
command is trying to pull from the *old* server's WAL and history files
and that's *wrong*.

pgbackrest has a way to handle this and keep the stanza name the same by
using a 'stanza-upgrade', but in no case should a restore command be
pulling WAL files (of any sort) from the archive of a server with a
different system identifier.  pgbackrest won't let that happen.

> And - after moving the *.history files out of the way in the archive - the 
> secondary finally starts and starts receiving WALs.

That's really grotty. :(

> Sidenote: this second problem happened to us as well after a colleague 
> promoted a secondary for some R/W tests w/out first disabling the 
> archive_command in the morning and then re-built it using pg_basebackup 
> in the afternoon.

PostgreSQL System Views or Dictionary Tables

2018-08-16 Thread Diego Grampin

Hello,

I'm new to PostgreSQL, but have known Oracle for ten years.

In PostgreSQL, where can I find system views or dictionary tables (v$*, 
dba_* in Oracle) or structural DBA tables? And command prompt 
parameters of the database (show param X in Oracle)?
The purpose of this is to familiarize myself with the structure of PostgreSQL 
databases.


_My test environment:_ a physical Windows server (Intel x86-64) running 
PostgreSQL 10.5 and pgAdmin 4 version 3.2.


Thanks so much for your help.
Regards,
Diego.


Re: PostgreSQL System Views or Dictionary Tables

2018-08-16 Thread Adrian Klaver

On 08/16/2018 02:06 PM, Diego Grampin wrote:

Hello,

I'm new at PostgreSQL, but known Oracle since ten years ago.

In PostgreSQL, where can i found system views or dictionary tables (v$*, 
dba_* at oracle) or structural dba tables ?. And comand prompt 
parameters of the database (show param X at Oracle) ?
The porpose of this is to familiarized with the structure of Postgresql 
databases.


https://www.postgresql.org/docs/10/static/catalogs.html

https://www.postgresql.org/docs/10/static/information-schema.html

https://www.postgresql.org/docs/10/static/sql-show.html
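
A few concrete starting points, loosely mapped to the Oracle views you mentioned
(a sketch, not an exhaustive mapping):

  -- server parameters (roughly SHOW PARAMETER in Oracle)
  SELECT name, setting, unit FROM pg_settings WHERE name LIKE 'autovacuum%';

  -- current sessions (roughly v$session)
  SELECT pid, usename, state, query FROM pg_stat_activity;

  -- user tables (roughly dba_tables)
  SELECT schemaname, tablename FROM pg_tables
  WHERE schemaname NOT IN ('pg_catalog', 'information_schema');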



_My test environment:_ Windows physical server machine intel x86-64 
architecture with PostgreSQL Database version 10.5 and pgAdmin 4 version 
3.2 .


Thanks so much for your help.
Regards,
Diego.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: PostgreSQL System Views or Dictionary Tables

2018-08-16 Thread David G. Johnston
On Thu, Aug 16, 2018 at 2:06 PM, Diego Grampin 
wrote:

> Hello,
>
> I'm new at PostgreSQL, but known Oracle since ten years ago.
>
> In PostgreSQL, where can i found system views or dictionary tables (v$*,
> dba_* at oracle) or structural dba tables ?. And comand prompt parameters
> of the database (show param X at Oracle) ?
> The porpose of this is to familiarized with the structure of Postgresql
> databases.
>
> *My test environment:* Windows physical server machine intel x86-64
> architecture with PostgreSQL Database version 10.5 and pgAdmin 4 version
> 3.2 .
>

https://www.postgresql.org/docs/10/static/index.html

Section VII is internals - the "structural dba tables" are termed "System
Catalogs" in PostgreSQL.

David J.


Difference between "autovacuum_naptime" and "autovacuum_vacuum_cost_delay"?

2018-08-16 Thread Raghavendra Rao J S V
Hi All,

I have gone through several documents but I still have confusion related
to "autovacuum_naptime" and "autovacuum_vacuum_cost_delay". Could you
clarify with an example?

When Auto vacuum worker process will start?

When Auto vacuum worker process will stop?

Does Auto vacuum worker process will sleep like Auto vacuum launcher
process ?

What is the difference between Auto vacuum launcher process and Auto vacuum
worker process?



-- 
Regards,
Raghavendra Rao J S V


Re: Difference between "autovacuum_naptime" and "autovacuum_vacuum_cost_delay"?

2018-08-16 Thread Joshua D. Drake

On 08/16/2018 06:10 PM, Raghavendra Rao J S V wrote:

Hi All,

I have gone through several documents but I am still have confusion 
related to "autovacuum_naptime" and "autovacuum_vacuum_cost_delay". 
Could you clarify me with an example.


When Auto vacuum worker process will start?



Autovacuum checks for relations that need to be vacuumed/analyzed every 
"naptime"



When Auto vacuum worker process will stop?


When it is done with the list of relations that needed work that were 
found at the launch of "naptime"




Does Auto vacuum worker process will sleep like Auto vacuum launcher 
process ?


The launcher process sleeps for naptime, then wakes up to check what 
needs to be worked on




What is the difference between Auto vacuum launcher process and Auto 
vacuum worker process?


The launcher is the process that spawns the worker processes (I think).
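
Roughly: autovacuum_naptime is how often the launcher wakes up to look for
work, while autovacuum_vacuum_cost_delay is how long a running worker pauses
each time it uses up its I/O cost budget. The current values are visible in
pg_settings, e.g.:

  SELECT name, setting, unit
  FROM pg_settings
  WHERE name IN ('autovacuum_naptime',
                 'autovacuum_vacuum_cost_delay',
                 'autovacuum_max_workers');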

JD




--
Regards,
Raghavendra Rao J S V



--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
***  A fault and talent of mine is to tell it exactly how it is.  ***
PostgreSQL centered full stack support, consulting and development.
Advocate: @amplifypostgres || Learn: https://postgresconf.org
* Unless otherwise stated, opinions are my own.   *



Re: Difference between "autovacuum_naptime" and "autovacuum_vacuum_cost_delay"?

2018-08-16 Thread Raghavendra Rao J S V
Thank you very much for your response.

Could you clarify the things below, please?

What is the difference between "autovacuum_naptime" and
"autovacuum_vacuum_cost_delay"?

What is the difference between "autovacuum launcher process" and
"autovacuum worker process"?

How to control the number of "autovacuum launcher process" and "autovacuum
worker process"?

Does "autovacuum launcher process" sleeps? If yes,which parameter controls
it?

Does "autovacuum worker process" sleeps? If yes,which parameter controls it?

Regards,
Raghavendra Rao


On 17 August 2018 at 09:30, Joshua D. Drake  wrote:

> On 08/16/2018 06:10 PM, Raghavendra Rao J S V wrote:
>
> Hi All,
>
> I have gone through several documents but I am still have confusion
> related to "autovacuum_naptime" and "autovacuum_vacuum_cost_delay". Could
> you clarify me with an example.
>
> When Auto vacuum worker process will start?
>
>
> Autovacuum checks for relations that need to be vacuumed/analyzed every
> "naptime"
>
> When Auto vacuum worker process will stop?
>
>
> When it is done with the list of relations that needed work that were
> found at the launch of "naptime"
>
>
> Does Auto vacuum worker process will sleep like Auto vacuum launcher
> process ?
>
>
> The launcher process sleeps for naptime, then wakes up to check what needs
> to be worked on
>
>
> What is the difference between Auto vacuum launcher process and Auto
> vacuum worker process?
>
>
> The launcher is the process that spawns the worker processes (I think).
>
> JD
>
>
>
>
> --
> Regards,
> Raghavendra Rao J S V
>
>
> --
> Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
> ***  A fault and talent of mine is to tell it exactly how it is.  ***
> PostgreSQL centered full stack support, consulting and development.
> Advocate: @amplifypostgres || Learn: https://postgresconf.org
> * Unless otherwise stated, opinions are my own.   *
>
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425