Re: Barman disaster recovery solution

2019-02-28 Thread David Steele

On 2/28/19 9:21 AM, Achilleas Mantzios wrote:

On 28/2/19 1:08 a.m., Ahmed, Nawaz wrote:


Hi,

I believe the "file copy" method (listed in the table) in pgbackrest 
is based on pg_basebackup, so i think it should be "pg_basebackup over 
ssh" as pgbackrest internally calls pg_basebackup. David Steele can 
correct me.



No, apparently pgbackrest does not rely on pg_basebackup.


pgBackRest implements file copy internally and does not rely on any 
command-line tools such as rsync, tar, pg_basebackup, etc.


Regards,
--
-David
da...@pgmasters.net



Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Andrew Gierth
> "Thomas" == Thomas Kellerer  writes:

[intarray woes]

 Thomas> Is this expected behaviour? Is this caused by the Postgres core
 Thomas> (e.g. the optimizer not taking the opclass into account) or is
 Thomas> it a "problem" in the way the intarray module defines its
 Thomas> operators?

It's basically a conflict between intarray (which is historically older)
and the built-in array indexing support.

The reason it happens is that the operator resolution logic matches an
(integer[] && integer[]) operator in preference to (anyarray && anyarray)
regardless of their relative position on the search_path. This
resolution happens before anything is known about any indexes that might
be applicable. Then later, at planning time, an index is chosen based on
the operator, not the reverse.

My own recommendation for most cases is to never install intarray on the
search path, and invoke its functions via explicit qualification or wrap
them in your own functions.
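
A minimal sketch of that approach (intarray is relocatable, so it can be
installed into a dedicated schema):

   -- Keep intarray's operators off the default search_path entirely.
   CREATE SCHEMA intarray_ext;
   CREATE EXTENSION intarray WITH SCHEMA intarray_ext;

   -- Call its functions with explicit qualification...
   SELECT intarray_ext.sort(ARRAY[3,1,2]);

   -- ...or wrap the ones you need in your own functions.
   CREATE FUNCTION my_sort(int[]) RETURNS int[]
     LANGUAGE sql IMMUTABLE
     AS 'SELECT intarray_ext.sort($1)';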

-- 
Andrew (irc:RhodiumToad)



Re: Optimizing Database High CPU

2019-02-28 Thread Scottix
Alright, will try the upgrade.

> Is it a few transactions updating a lot of rows each, or many transactions 
> updating a few rows each?
It is a lot of transactions updating a few rows each.

Then will look into a connection pooler.
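
For reference, a minimal PgBouncer sketch of that setup (the database name
and pool sizes here are made-up examples):

   ; pgbouncer.ini: accept many client connections, but keep only
   ; ~12 server connections active; the rest queue transparently.
   [databases]
   mydb = host=127.0.0.1 port=5432 dbname=mydb

   [pgbouncer]
   listen_port = 6432
   pool_mode = transaction
   max_client_conn = 100
   default_pool_size = 12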

Thanks for the response.

On Wed, Feb 27, 2019 at 2:01 PM Michael Lewis  wrote:
>>
>> If those 50-100 connections are all active at once, yes, that is high.  They 
>> can easily spend more time fighting each other over LWLocks, spinlocks, or 
>> cachelines rather than doing useful work.  This can be exacerbated when you 
>> have multiple sockets rather than all cores in a single socket.  And these 
>> problems are likely to present as high Sys times.
>>
>> Perhaps you can put up a connection pooler which will allow 100 connections 
>> to all think they are connected at once, but forces only 12 or so to 
>> actually be active at one time, making the others transparently queue.
>
>
> Can you expound on this or refer me to someplace to read up on this?
>
> Context (I don't want to thread jack, though): I think I am seeing similar 
> behavior in our environment at times, with queries that normally take 
> seconds taking 5+ minutes at times of high load. I see many queries showing 
> buffer_mapping as the LWLock type in snapshots but don't know if that may 
> be expected. In our environment PgBouncer will accept several hundred 
> connections and allow up to 100 at a time to be active on the databases, 
> which are VMs with ~16 CPUs allocated (some more, some less, multi-tenant 
> and manually sharded). It sounds like you are advocating for a connection 
> max very close to the number of cores. I'd like to better understand the 
> pros/cons of that decision.



-- 
T: @Thaumion
IG: Thaumion
scot...@gmail.com



Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Thomas Kellerer
Andrew Gierth wrote on 28.02.2019 at 10:29:
> [intarray woes]
> 
>  Thomas> Is this expected behaviour? Is this caused by the Postgres core
>  Thomas> (e.g. the optimizer not taking the opclass into account) or is
>  Thomas> it a "problem" in the way the intarray module defines its
>  Thomas> operators?
> 
> It's basically a conflict between intarray (which is historically older)
> and the built-in array indexing support.
> 
> The reason it happens is that the operator resolution logic matches an
> (integer[] && integer[]) operator in preference to (anyarray && anyarray)
> regardless of their relative position on the search_path. This
> resolution happens before anything is known about any indexes that might
> be applicable. Then later, at planning time, an index is chosen based on
> the operator, not the reverse.

That makes sense, thanks. 




Re: crosstab function

2019-02-28 Thread Morris de Oryx
Professor Mueller! I believe that we met, long ago. I graduated from your
department in 1984 where I worked closely with the wonderful, late Prof.
Dipple.

Postgres.app is a very easy way to work with Postgres, and it does include
support for tablefunc. If you ever want to check which extensions are
available (and which are installed), run this line:

select * from pg_available_extensions order by name;

Your code looks correct on the face of it:

CREATE EXTENSION IF NOT EXISTS tablefunc;

Or, if you have set up schemas other than the default "public", you can
install into a specific schema:

CREATE EXTENSION IF NOT EXISTS tablefunc WITH SCHEMA extensions;

If you aren't already using custom schemas...I'll leave it alone for now.

As noted, you're installing into a specific database, so make sure that
you've connected where you expect and are in the database you mean. It's
fairly easy for a tool to default to something other than your custom
database. If it's not clear from the UI, or you just feel like testing by
hand, run this line:

SELECT current_database();

It's worth knowing that a Postgres extension is a packaging system. An
extension may include C code, setup scripts, straight SQL, and a variety of
other resources. Sometimes you can open one up and harvest little bits of
SQL you want. For details:

https://www.postgresql.org/docs/10/extend-extensions.html
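
Since the thread began with crosstab, a minimal usage sketch (the grades
table and its columns are made up; crosstab() needs the source query
ordered by the row name):

   CREATE EXTENSION IF NOT EXISTS tablefunc;

   -- Pivot (student, subject, score) rows into one row per student.
   SELECT *
     FROM crosstab(
            $$ SELECT student, subject, score
                 FROM grades
                ORDER BY 1, 2 $$
          ) AS ct(student text, math int, physics int);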

After a quick googling, it looks like you may be interested in textual
analysis. If so, Postgres has a *lot* of tools that can be of assistance.
Within Postgres.app, I can see at least the following:

citext
If you haven't noticed, and care, Postgres' default varchar/text types are
case-sensitive. Ugh. The citext extension provides a type that is
searchable case-insensitively out of the box. I use it for alpha/text
fields when I don't need case-sensitive searches. For where that is, read
"everywhere".

fuzzystrmatch
https://www.postgresql.org/docs/10/fuzzystrmatch.html

Basic, name/word-matching fuzzy algorithms. The "phonetic" ones are not so
great, but Levenshtein is quite good, if a bit expensive to run.
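
For instance, assuming the extension is installed:

   SELECT levenshtein('Mueller', 'Muller');        -- 1 (one deletion)
   SELECT soundex('Mueller') = soundex('Muller');  -- t (same phonetic code)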

Full Text Search
Huge subject, lots of options, modern versions of Postgres are quite strong
here.

unaccent
The description reads, "text search dictionary that removes accents." I
haven't needed it, and wonder if specifying a collation might not work
better?

pg_trgm
https://www.postgresql.org/docs/10/pgtrgm.html

*N*-grams of length 3. This is a fantastic tool. N-grams have proven
themselves down the years for fuzzy string matching in multiple domains.
I've mostly used it historically on name data, but it works well on larger
text blocks as well. This holds up with many languages other than English.
It's pretty easy to use this extension.
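
A small sketch, with a hypothetical people table:

   CREATE EXTENSION IF NOT EXISTS pg_trgm;

   CREATE INDEX people_name_trgm ON people USING gin (name gin_trgm_ops);

   -- % matches above pg_trgm.similarity_threshold (default 0.3);
   -- similarity() returns a score between 0 and 1.
   SELECT name, similarity(name, 'Muller') AS score
     FROM people
    WHERE name % 'Muller'
    ORDER BY score DESC;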

There's another appealing extension named pg_similarity that includes a
huge range of text comparison and fuzzy ranking tools, but I do not know
how to compile it for macOS or get it to run with Postgres.app. If you are
interested in a specific algorithm, many are easily implemented in a SQL
statement or stored function. For example, Jaccard (and similar) ranking
metrics are produced arithmetically, so they're easy to reimplement.
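
For example, a Jaccard similarity over two text arrays fits in a small SQL
function (a sketch, not taken from pg_similarity):

   -- Jaccard(A, B) = |A intersect B| / |A union B|
   CREATE FUNCTION jaccard(a text[], b text[]) RETURNS float8
   LANGUAGE sql IMMUTABLE AS $$
     SELECT CASE WHEN u = 0 THEN 0 ELSE i::float8 / u END
       FROM (SELECT (SELECT count(DISTINCT x)
                       FROM unnest(a) x
                      WHERE x = ANY (b)) AS i,
                    (SELECT count(*)
                       FROM (SELECT unnest(a)
                             UNION
                             SELECT unnest(b)) t) AS u) s;
   $$;

   SELECT jaccard(string_to_array('the quick fox', ' '),
                  string_to_array('the slow fox', ' '));   -- 0.5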


Where **not** to use PostgreSQL?

2019-02-28 Thread Thomas Güttler

Hi experts,

where would you suggest someone to **not** use PostgreSQL?

Why would you do this?

What alternative would you suggest instead?


Regards,
  Thomas Güttler


--
Thomas Guettler http://www.thomas-guettler.de/
I am looking for feedback: https://github.com/guettli/programming-guidelines



Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Pavel Stehule
Hi

On Thu, 28 Feb 2019 at 12:47, Thomas Güttler <
guettl...@thomas-guettler.de> wrote:

> Hi experts,
>
> where would you suggest someone to **not** use PostgreSQL?
>
> Why would you do this?
>
> What alternative would you suggest instead?
>

Don't use Postgres as a cache, and don't use Postgres for
non-transactional, short-lived, frequently updated data.

Use in-memory databases instead.

Pavel


>
> Regards,
>Thomas Güttler
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
> I am looking for feedback:
> https://github.com/guettli/programming-guidelines
>
>


Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Chris Travers
On Thu, Feb 28, 2019 at 1:09 PM Pavel Stehule 
wrote:

> Hi
>
> On Thu, 28 Feb 2019 at 12:47, Thomas Güttler <
> guettl...@thomas-guettler.de> wrote:
>
>> Hi experts,
>>
>> where would you suggest someone to **not** use PostgreSQL?
>>
>
Hard question.  There are a lot of general places where PostgreSQL is not
by itself the best fit, and where a careful weighing of pros and cons would
need to be made before deciding to use it.

Having used PostgreSQL in place of ElasticSearch for PB-scale deployments,
as a high-throughput queue system, and for near-real-time OLAP workloads, I
am fairly aware of how hard it can be pushed.

So the answers here are not "don't use PostgreSQL here" but "think about it
first and consider alternatives."



>
>> Why would you do this?
>>
>
Replacing with "What would you consider to be the tradeoffs?"

>
>> What alternative would you suggest instead?
>>
>
So a few possibilities:

1.  a) TB-scale full text search systems.
 b) PostgreSQL's full text search is quite capable but not so powerful
that it can completely replace Lucene-based systems.  So you have to
consider complexity vs functionality if you are tying it in with other data
that is already in PostgreSQL.  Note further that my experience with at
least ElasticSearch is that it is easier to scale something built on
multiple PostgreSQL instances into the PB range than it is to scale
ElasticSearch into the PB range.
 c) Solr or ElasticSearch

2.  a) High performance job queues
 b) PostgreSQL index and table structures are not well suited to large
numbers of ordered deletes.  There are ways around these problems (a common
in-database pattern is sketched below), and again, if other data is in
PostgreSQL, the tradeoff is around transactional behavior and complexity
there vs ease of scaling performance.
 c) Redis if the job queue easily fits into a small enough amount of
memory, or Kafka if it does not
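
For completeness, the usual in-database pattern mentioned in 2b, assuming a
simple jobs table:

   -- Each worker claims one job; SKIP LOCKED (9.5+) lets concurrent
   -- workers pass over rows another worker already holds.
   DELETE FROM jobs
    WHERE id = (SELECT id
                  FROM jobs
                 ORDER BY id
                 LIMIT 1
                 FOR UPDATE SKIP LOCKED)
   RETURNING *;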

On to where you actually should never use PostgreSQL:

Don't use PostgreSQL for things where you do not want or cannot guarantee
transactional atomicity.
While it is possible to have untrusted languages have side effects in the
real world, the fact is that mixing transactions and non-transactional
behavior in this way adds a lot of really ugly complexity.
Use another development environment instead.


>
> Don't use Postgres like cache, don't use Postgres for non transactional
> short life often updated data.
>
> Use inmemory databases instead
>
> Pavel
>
>
>>
>> Regards,
>>Thomas Güttler
>>
>>
>> --
>> Thomas Guettler http://www.thomas-guettler.de/
>> I am looking for feedback:
>> https://github.com/guettli/programming-guidelines
>>
>>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Nicolas Grilly
On Thu, Feb 28, 2019 at 1:24 PM Chris Travers 
wrote:

> 1.  a) TB-scale full text search systems.
>  b) PostgreSQL's full text search is quite capable but not so powerful
> that it can completely replace Lucene-based systems.  So you have to
> consider complexity vs functionality if you are tying it in with other data
> is already in PostgreSQL.  Note further that my experience with at least
> ElasticSearch is that it is easier to scale something built on multiple
> PostgreSQL instances into the PB range than it is to scale ElasticSearch
> into the PB range.
>  c) Solr or ElasticSearch
>

One question about your use of PostgreSQL for a TB-scale full-text search
system: Did you order search results using ts_rank or ts_rank_cd? I'm
asking because in my experience, PostgreSQL full-text search is extremely
efficient, until you need ranking. It's because the indexes don't contain
the necessary information for ranking, and because of this the heap has to
be consulted, which implies a lot of random IO.

I'd be curious to know a bit more about your experience in this regard.

Regards,

Nicolas Grilly

PS: A potential solution to the performance issue I mentioned is this PG
extension: https://github.com/postgrespro/rum
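
For readers unfamiliar with it: RUM stores ranking (and position)
information in the index itself. Going by the project README, usage is
roughly the following (table and column names hypothetical):

   CREATE EXTENSION rum;

   CREATE INDEX docs_tsv_rum ON docs USING rum (tsv rum_tsvector_ops);

   -- <=> orders by rank distance straight from the index, avoiding
   -- the per-row heap visits that ts_rank ordering requires.
   SELECT id
     FROM docs
    WHERE tsv @@ to_tsquery('english', 'beautiful & place')
    ORDER BY tsv <=> to_tsquery('english', 'beautiful & place')
    LIMIT 10;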


Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Chris Travers
On Thu, Feb 28, 2019 at 1:50 PM Nicolas Grilly 
wrote:

> On Thu, Feb 28, 2019 at 1:24 PM Chris Travers 
> wrote:
>
>> 1.  a) TB-scale full text search systems.
>>  b) PostgreSQL's full text search is quite capable but not so
>> powerful that it can completely replace Lucene-based systems.  So you have
>> to consider complexity vs functionality if you are tying it in with other data
>> that is already in PostgreSQL.  Note further that my experience with at
>> least ElasticSearch is that it is easier to scale something built on
>> multiple PostgreSQL instances into the PB range than it is to scale
>> ElasticSearch into the PB range.
>>  c) Solr or ElasticSearch
>>
>

> One question about your use of PostgreSQL for a TB-scale full-text search
> system: Did you order search results using ts_rank or ts_rank_cd? I'm
> asking because in my experience, PostgreSQL full-text search is extremely
> efficient, until you need ranking. It's because the indexes don't contain
> the necessary information for ranking, and because of this the heap has to
> be consulted, which implies a lot of random IO.
>
> I'd be curious to know a bit more about your experience in this regard.
>


Where I did this on the TB scale, we had some sort of ranking but it was
not based on ts_rank.

On the PB scale systems I work on now, it is distributed, and we don't
order in PostgreSQL (or anywhere else, though if someone wants to write to
disk and sort, they can do this, I guess).

>
> Regards,
>
> Nicolas Grilly
>
> PS: A potential solution to the performance issue I mentioned is this PG
> extension: https://github.com/postgrespro/rum
>
>

-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Nicolas Grilly
On Thu, Feb 28, 2019 at 2:12 PM Chris Travers 
wrote:

> Where I did this on the TB scale, we had some sort of ranking but it was
> not based on ts_rank.
>
> On the PB scale systems I work on now, it is distributed, and we don't
> order in PostgreSQL (or anywhere else, though if someone wants to write to
> disk and sort, they can do this I guess)
>

Thanks!

>


Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Mark Moellering
I wish more people would ask this question; to me, it is the true mark of
experience.  In general, I think of PostgreSQL as the leading relational
database.  The farther you get away from relational data and relational
queries, the more I would say you should look for other products or
solutions.  But if you want to store relational data and then run queries
over it, then stick with PostgreSQL.

My 2 cents..

Mark

On Thu, Feb 28, 2019 at 8:28 AM Nicolas Grilly 
wrote:

> On Thu, Feb 28, 2019 at 2:12 PM Chris Travers 
> wrote:
>
>> Where I did this on the TB scale, we had some sort of ranking but it was
>> not based on ts_rank.
>>
>> On the PB scale systems I work on now, it is distributed, and we don't
>> order in PostgreSQL (or anywhere else, though if someone wants to write to
>> disk and sort, they can do this I guess)
>>
>
> Thanks!
>
>>


Re: create unique constraint on jsonb->filed during create table

2019-02-28 Thread Andy Fan
Got it, thank you!

On Thu, Feb 28, 2019 at 12:48 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, February 27, 2019, Andy Fan 
> wrote:
>
>>
>> The following way works with 2 commands:
>>
>> zhifan=# create table t1 (a jsonb);
>> CREATE TABLE
>> zhifan=# create unique index t1_a_name on t1 ((a->'name'));
>> CREATE INDEX
>>
>> but now I want to merge them into 1 command. Is it possible?
>>
>> zhifan=# create table t2 (a jsonb, constraint uk_t2_a_name
>> unique((a->'name')));
>> ERROR:  syntax error at or near "("
>> LINE 1: ...table t2 (a jsonb, constraint uk_t2_a_name unique((a->'name'...
>>
>
> Not according to the documentation.  Unique table constraints can only
> reference columns in the table as a whole.  An expression index must be
> created separately from the table to which it is attached.
>
> Or add a trigger to the table, populate an actual second column (making it
> unique), and add a table check constraint that that column and the
> expression are equal.  I suspect you’ll be happier having the PK as actual
> column data anyway.
>
> David J.
>
>
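
A sketch of both suggestions, for readers following along (t2/t3 and the
trigger are illustrative):

   -- Two statements, but atomic inside one transaction:
   BEGIN;
   CREATE TABLE t2 (a jsonb);
   CREATE UNIQUE INDEX uk_t2_a_name ON t2 ((a->'name'));
   COMMIT;

   -- Or materialize the key into a real column via trigger and put
   -- the unique constraint on that column:
   CREATE TABLE t3 (a jsonb, a_name jsonb UNIQUE);
   CREATE FUNCTION t3_sync_name() RETURNS trigger LANGUAGE plpgsql AS $$
   BEGIN
     NEW.a_name := NEW.a->'name';
     RETURN NEW;
   END $$;
   CREATE TRIGGER t3_sync_name BEFORE INSERT OR UPDATE ON t3
     FOR EACH ROW EXECUTE PROCEDURE t3_sync_name();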


Pgbackrest Compatibility issue

2019-02-28 Thread chiru r
Hi All,

I am using pgBackRest and have an issue while upgrading our PostgreSQL
9.5.10 to PostgreSQL 10.4 on the same physical Linux server.
As part of testing I have stopped the PostgreSQL 9.5 database on the
server; PostgreSQL 10 is up and running, and I am trying to configure and
execute backups for PostgreSQL 10.


I have changed the pgbackrest.conf file, and when trying to create the
stanza it throws *compilation errors*.

*Note:* PostgreSQL 9.5 backups were configured earlier on the same server
and they were successful.

*Details : *
1. instance
postgres  9711     1  0 09:06 pts/1    00:00:00 /u01/postgres/10/bin/postgres -D /u02/pgdata01/10/data
postgres  9712  9711  0 09:06 ?        00:00:00 postgres: logger process
postgres  9714  9711  0 09:06 ?        00:00:00 postgres: checkpointer process
postgres  9715  9711  0 09:06 ?        00:00:00 postgres: writer process
postgres  9716  9711  0 09:06 ?        00:00:00 postgres: wal writer process
postgres  9717  9711  0 09:06 ?        00:00:00 postgres: autovacuum launcher process
postgres  9719  9711  0 09:06 ?        00:00:00 postgres: stats collector process
postgres  9720  9711  0 09:06 ?        00:00:00 postgres: bgworker: logical replication launcher

2. Linux Server

*Redhat 7.2+*

3.  Pgbackrest conf file

[postgres@ITSUSRALSP03974 install]$ cat /etc/pgbackrest.conf
[online_backups10]
db-path=/u02/pgdata01/10/data
backup-user=postgres
db-socket-path=/tmp

#[online_backups]
#db-path=/u02/pgdata01/9.5/data
#backup-user=postgres
#db-socket-path=/tmp

[global]
retention-full=3
repo-path=/pgback
start-fast=y
stop-auto=y

*Error:* what we are getting.

[postgres@server1 install]$ pgbackrest --stanza=online_backups10
--log-level-console=info check
Bareword "CFGCMD_HELP" not allowed while "strict subs" in use at
/bin/pgbackrest line 39.
Bareword "CFGCMD_VERSION" not allowed while "strict subs" in use at
/bin/pgbackrest line 39.
Bareword "CFGCMD_VERSION" not allowed while "strict subs" in use at
/bin/pgbackrest line 46.
Bareword "CFGOPT_TEST" not allowed while "strict subs" in use at
/bin/pgbackrest line 51.
Bareword "CFGOPT_TEST" not allowed while "strict subs" in use at
/bin/pgbackrest line 51.
Bareword "CFGOPT_TEST" not allowed while "strict subs" in use at
/bin/pgbackrest line 51.
Bareword "CFGOPT_TEST_DELAY" not allowed while "strict subs" in use at
/bin/pgbackrest line 51.
Bareword "CFGOPT_TEST_POINT" not allowed while "strict subs" in use at
/bin/pgbackrest line 51.
Bareword "CFGCMD_ARCHIVE_PUSH" not allowed while "strict subs" in use at
/bin/pgbackrest line 59.
Bareword "CFGCMD_ARCHIVE_GET" not allowed while "strict subs" in use at
/bin/pgbackrest line 71.
Bareword "CFGCMD_REMOTE" not allowed while "strict subs" in use at
/bin/pgbackrest line 83.
Bareword "CFGOPT_LOG_LEVEL_STDERR" not allowed while "strict subs" in use
at /bin/pgbackrest line 86.
Bareword "CFGOPT_LOG_LEVEL_STDERR" not allowed while "strict subs" in use
at /bin/pgbackrest line 87.
Bareword "CFGOPT_TYPE" not allowed while "strict subs" in use at
/bin/pgbackrest line 89.
Bareword "CFGOPTVAL_REMOTE_TYPE_BACKUP" not allowed while "strict subs" in
use at /bin/pgbackrest line 89.
Bareword "CFGOPT_REPO_TYPE" not allowed while "strict subs" in use at
/bin/pgbackrest line 89.
Bareword "CFGOPTVAL_REPO_TYPE_S3" not allowed while "strict subs" in use at
/bin/pgbackrest line 89.
Bareword "CFGOPT_REPO_PATH" not allowed while "strict subs" in use at
/bin/pgbackrest line 89.
Bareword "CFGOPT_REPO_PATH" not allowed while "strict subs" in use at
/bin/pgbackrest line 89.
Bareword "CFGOPT_BUFFER_SIZE" not allowed while "strict subs" in use at
/bin/pgbackrest line 100.
Bareword "CFGOPT_PROTOCOL_TIMEOUT" not allowed while "strict subs" in use
at /bin/pgbackrest line 100.
Bareword "CFGOPT_COMMAND" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGCMD_ARCHIVE_GET" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGOPT_COMMAND" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGCMD_INFO" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGOPT_COMMAND" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGCMD_RESTORE" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGOPT_COMMAND" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGCMD_CHECK" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGOPT_COMMAND" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGCMD_LOCAL" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGOPT_COMMAND" not allowed while "strict subs" in use at
/bin/pgbackrest line 103.
Bareword "CFGCMD_LOCAL" not allowed while "strict subs" in use at
/bin/pgbackrest line 119.
Bareword "CFGOPT_LOG_LEVEL_STDERR" not allowed while "strict subs" in use
at /bin/pgbackrest line 122.
Bareword "CFGOPT_LOG_LEVEL_STDERR

Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Ron

On 2/28/19 5:47 AM, Thomas Güttler wrote:

Hi experts,

where would you suggest someone to **not** use PostgreSQL?


1. Small embedded systems.  SQLite is great for that.

2. Easy-to-implement Master-Master replication.  (The Percona fork of MySQL 
does that really well, if you can handle MySQL's limitations.)




--
Angular momentum makes the world go 'round.



Re: Pgbackrest Compatibility issue

2019-02-28 Thread Adrian Klaver

On 2/28/19 6:59 AM, chiru r wrote:

Hi All,

I am using pgBackRest and have an issue while upgrading our PostgreSQL 
9.5.10 to PostgreSQL 10.4 on the same physical Linux server.
As part of testing I have stopped the PostgreSQL 9.5 database on the 
server; PostgreSQL 10 is up and running, and I am trying to configure and 
execute backups for PostgreSQL 10.



I have changed the pgbackrest.conf file, and when trying to create the 
stanza it throws *compilation errors*.

What version of pgBackRest?

Was it recently updated?

How was it installed?




*Note:* PostgreSQL 9.5 backups were configured earlier on the same server 
and they were successful.


*Details : *
1. instance
postgres  9711     1  0 09:06 pts/1    00:00:00 
/u01/postgres/10/bin/postgres -D /u02/pgdata01/10/data

postgres  9712  9711  0 09:06 ?        00:00:00 postgres: logger process
postgres  9714  9711  0 09:06 ?        00:00:00 postgres: checkpointer 
process

postgres  9715  9711  0 09:06 ?        00:00:00 postgres: writer process
postgres  9716  9711  0 09:06 ?        00:00:00 postgres: wal writer process
postgres  9717  9711  0 09:06 ?        00:00:00 postgres: autovacuum 
launcher process
postgres  9719  9711  0 09:06 ?        00:00:00 postgres: stats 
collector process
postgres  9720  9711  0 09:06 ?        00:00:00 postgres: bgworker: 
logical replication launcher


2. Linux Server

*Redhat 7.2+*

3.  Pgbackrest conf file

[postgres@ITSUSRALSP03974 install]$ cat /etc/pgbackrest.conf
[online_backups10]
db-path=/u02/pgdata01/10/data
backup-user=postgres
db-socket-path=/tmp

#[online_backups]
#db-path=/u02/pgdata01/9.5/data
#backup-user=postgres
#db-socket-path=/tmp

[global]
retention-full=3
repo-path=/pgback
start-fast=y
stop-auto=y

*Error:* what we are getting.

[postgres@server1 install]$ pgbackrest --stanza=online_backups10 
--log-level-console=info check
Bareword "CFGCMD_HELP" not allowed while "strict subs" in use at 
/bin/pgbackrest line 39.
[... the rest of the identical "Bareword ... not allowed" errors snipped; 
see the full log in the original message above ...]

Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Michel Pelletier
On Thu, Feb 28, 2019 at 4:50 AM Nicolas Grilly 
wrote:

> On Thu, Feb 28, 2019 at 1:24 PM Chris Travers 
> wrote:
>
>> 1.  a) TB-scale full text search systems.
>>  b) PostgreSQL's full text search is quite capable but not so
>> powerful that it can completely replace Lucene-based systems.  So you have
>> to consider complexity vs functionality if you are tying it in with other data
>> that is already in PostgreSQL.  Note further that my experience with at
>> least ElasticSearch is that it is easier to scale something built on
>> multiple PostgreSQL instances into the PB range than it is to scale
>> ElasticSearch into the PB range.
>>  c) Solr or ElasticSearch
>>
>
> One question about your use of PostgreSQL for a TB-scale full-text search
> system: Did you order search results using ts_rank or ts_rank_cd? I'm
> asking because in my experience, PostgreSQL full-text search is extremely
> efficient, until you need ranking. It's because the indexes don't contain
> the necessary information for ranking, and because of this the heap has to
> be consulted, which implies a lot of random IO.
>
>
Check out the RUM index extension; it adds ranking information to the
index, addressing exactly the problem you pointed out:

https://github.com/postgrespro/rum


Re: automated refresh of dev from prod

2019-02-28 Thread Stephen Frost
Greetings,

* Julie Nishimura (juliez...@hotmail.com) wrote:
> Hello everybody, I am new to the postgresql environment, but trying to get
> up to speed.
> Can you please share your experience on how you automate the refresh of a
> dev environment on a regular basis (ideally weekly), taking into
> consideration that some prod dbs can be very large (like 20+ TB)?
> 
> Any suggestions?

The approach that I like to recommend is to have your backup/restore
solution be involved in this refreshing process, so that you're also
testing that your backup/restore process works correctly.  For dealing
with larger databases, using a backup tool which has parallel backup,
parallel restore, and is able to restore just the files which are
different from the backup can make the restore take much less time (this
is what the 'delta-restore' option in pgbackrest does, and it was
specifically written to support exactly this kind of prod->dev periodic
refresh, though other tools may also support that these days).
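
A sketch of the dev-side refresh (stanza name and data directory are
examples; connection details live in pgbackrest.conf):

   # Stop dev, rewrite only the files that differ from the backup, restart.
   pg_ctl stop -D /var/lib/pgsql/10/data
   pgbackrest --stanza=prod --delta restore
   pg_ctl start -D /var/lib/pgsql/10/data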

As mentioned elsewhere on this thread, using snapshots can also be a
good approach though you have to be sure that the snapshot is completely
atomic across all filesystems that PostgreSQL is using, or you have to
deal with running pg_start/stop_backup and putting a backup_label into
place for the restored snapshot and a recovery.conf to provide a way for
PG to get at any WAL which was generated while the snapshot (or
snapshots) was being taken.

Thanks!

Stephen




Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Tom Lane
Thomas Kellerer  writes:
> While testing a query on an integer array with a GIN index, I stumbled over a 
> behaviour which surprised me and which I would consider a bug - but maybe I 
> am wrong. 

It's not a bug --- the problem is that that operator is capturing your
query reference, and it's not a member of the opclass for the index
you have, so no index scan for you.

> The above happens even if the intarray extension is at the end of the search 
> path, e.g. "set search_path = public, intarray".

Yeah, because it's an exact datatype match while the core operator
is anyarray && anyarray which is not.

Ideally, perhaps, the extension could add its operator to the core
gin-arrays opclass, but we lack any reasonable way to manage that.

Something that's maybe more likely to happen is to remove that
operator from the extension altogether; but that will break things
for some people too, no doubt :-(

regards, tom lane



Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Raymond Brinzer
I often avoid PostgreSQL when using software for which PostgreSQL
support is secondary. Usually this is the case where MySQL is the
default, but PostgreSQL is on the "also supported" list.  "Also" is
too often a synonym for "badly", here, and unless I really want to be
able to approach the underlying database *as a database*, the better
choice is to go with the flow.  Even when I do need that, I'll
consider whether the features I need are exclusive to, or much better
on, PostgreSQL.

When developing something myself, I've also chosen MySQL because the
other technical people likely to be involved are at least somewhat
familiar with it.  A person who is marginally competent with databases
doesn't need the added difficulty of learning a new DBMS while
learning whatever I created.

It's always a pleasure when I don't have such issues, and I can use PostgreSQL.

On Thu, Feb 28, 2019 at 6:47 AM Thomas Güttler
 wrote:
>
> Hi experts,
>
> where would you suggest someone to **not** use PostgreSQL?
>
> Why would you do this?
>
> What alternative would you suggest instead?
>
>
> Regards,
>Thomas Güttler
>
>
> --
> Thomas Guettler http://www.thomas-guettler.de/
> I am looking for feedback: https://github.com/guettli/programming-guidelines
>


-- 
Ray Brinzer



Re: Where **not** to use PostgreSQL?

2019-02-28 Thread Michel Pelletier
Doh, sorry I missed your postscript!

On Thu, Feb 28, 2019 at 8:02 AM Nicolas Grilly 
wrote:

> On Thu, 28 Feb 2019 at 16:48, Michel Pelletier <
> pelletier.mic...@gmail.com> wrote:
>
>> Check out the RUM index extension, it adds ranking information to indexes
>> to speed up exactly the problem you pointed out:
>>
>> https://github.com/postgrespro/rum
>>
>
> I mentioned it at the end of my message :-)
>
> It would be great to have it integrated in the standard distribution.
>
>>


Re: Pgbackrest Compatibility issue

2019-02-28 Thread chiru r
We are using pgBackRest version 1.24.

Installed using RPMs:

yum -y install perl-parent
yum -y install perl-Time-HiRes
yum -y install perl-JSON
yum -y install perl-Digest-SHA
yum -y install perl-Digest
yum -y install perl-DBD-Pg


On Thu, Feb 28, 2019 at 10:15 AM Adrian Klaver 
wrote:

> On 2/28/19 6:59 AM, chiru r wrote:
> > Hi All,
> >
> > I am using pgBackRest and have an issue while upgrading our PostgreSQL
> > 9.5.10 to PostgreSQL 10.4 on the same physical Linux server.
> > As part of testing I have stopped the PostgreSQL 9.5 database on the
> > server; PostgreSQL 10 is up and running, and I am trying to configure
> > and execute backups for PostgreSQL 10.
> >
> >
> > I have changed the pgbackrest.conf file, and when trying to create the
> > stanza it throws *compilation errors*.
> What version of pgBackRest?
>
> Was it recently updated?
>
> How was it installed?
>
>
> >
> > *Note:* PostgreSQL 9.5 backups were configured earlier on the same
> > server and they were successful.
> >
> > *Details : *
> > 1. instance
> > postgres  9711 1  0 09:06 pts/100:00:00
> > /u01/postgres/10/bin/postgres -D /u02/pgdata01/10/data
> > postgres  9712  9711  0 09:06 ?00:00:00 postgres: logger process
> > postgres  9714  9711  0 09:06 ?00:00:00 postgres: checkpointer
> > process
> > postgres  9715  9711  0 09:06 ?00:00:00 postgres: writer process
> > postgres  9716  9711  0 09:06 ?00:00:00 postgres: wal writer
> process
> > postgres  9717  9711  0 09:06 ?00:00:00 postgres: autovacuum
> > launcher process
> > postgres  9719  9711  0 09:06 ?00:00:00 postgres: stats
> > collector process
> > postgres  9720  9711  0 09:06 ?00:00:00 postgres: bgworker:
> > logical replication launcher
> >
> > 2. Linux Server
> >
> > *Redhat 7.2+*
> >
> > 3.  Pgbackrest conf file
> >
> > [postgres@ITSUSRALSP03974 install]$ cat /etc/pgbackrest.conf
> > [online_backups10]
> > db-path=/u02/pgdata01/10/data
> > backup-user=postgres
> > db-socket-path=/tmp
> >
> > #[online_backups]
> > #db-path=/u02/pgdata01/9.5/data
> > #backup-user=postgres
> > #db-socket-path=/tmp
> >
> > [global]
> > retention-full=3
> > repo-path=/pgback
> > start-fast=y
> > stop-auto=y
> >
> > *Error:* what we are getting.
> > [postgres@server1 install]$ pgbackrest --stanza=online_backups10
> > --log-level-console=info check
> > Bareword "CFGCMD_HELP" not allowed while "strict subs" in use at
> > /bin/pgbackrest line 39.
> > [... identical "Bareword" errors snipped; see the full log above ...]

Re: Pgbackrest Compatibility issue

2019-02-28 Thread Adrian Klaver

On 2/28/19 10:06 AM, chiru r wrote:


We are using pgBackRest version 1.24.

Installed using RPMs:

yum -y install perl-parent
yum -y install perl-Time-HiRes
yum -y install perl-JSON
yum -y install perl-Digest-SHA
yum -y install perl-Digest
yum -y install perl-DBD-Pg


FYI, please do not top post. Inline posting is the preferred style on 
this list.


In the above I do not see anything that indicates how pgBackRest was 
installed or the version number.





On Thu, Feb 28, 2019 at 10:15 AM Adrian Klaver
<adrian.kla...@aklaver.com> wrote:


On 2/28/19 6:59 AM, chiru r wrote:
 > Hi All,
 >
 > I am using pgBackRest and have an issue while upgrading our PostgreSQL
 > 9.5.10 to PostgreSQL 10.4 on the same physical Linux server.
 > [... remainder of the earlier quoted exchange, including the config
 > file and the full "Bareword" error log, snipped; see above ...]

specifying table in function args

2019-02-28 Thread Rob Sargent
Using PG 10.7, I have a database per investigator, with one or more 
identical schemata per project.  So far I've gotten by defining public 
functions (as postgres) which reference tables /without/ schema 
qualification.  Each project has a role which sets the search_path such 
that the project-specific schema supplies the table definition at 
function call.  This seems to be working quite well.


I've written a function which needs a specific record:

   create or replace function public.pvr(seg segment, plus float
   default 1.0)
   returns float as $$
   declare
      retval float;
   begin
      select ((1.0*seg.events_equal) + seg.events_greater + plus)
             / (seg.events_less + seg.events_equal + seg.events_greater + plus)
        into retval;
      return retval;
   end;
   $$ language plpgsql;

but this fails in one of two ways:  either the create function call 
fails lacking a definition of "segment" or, if I create a public.segment 
table, create the function, and set search_path to include a project's 
schema, then dropping public.segment fails because pvr() relies on it.


This is all to avoid (the existing) pv() function, which requires all 
the events values as arguments (i.e. much typing).  Is there a way through 
this, or must I generate a function per schema in this case?  It's easy 
enough to do, but has a certain odor to it.  I suppose I could leave the 
public.segment table in place (revoke all inserts etc.) and let the 
search_path take over.







Re: automated refresh of dev from prod

2019-02-28 Thread Ben Chobot
On Feb 28, 2019, at 8:04 AM, Stephen Frost  wrote:
> 
> Greetings,
> 
> * Julie Nishimura (juliez...@hotmail.com) wrote:
>> Hello everybody, I am new to the postgresql environment, but trying to
>> get up to speed.
>> Can you please share your experience on how you automate the refresh of a
>> dev environment on a regular basis (ideally weekly), taking into
>> consideration that some prod dbs can be very large (like 20+ TB)?
>> 
>> Any suggestions?
> 
> The approach that I like to recommend is to have your backup/restore
> solution be involved in this refreshing process, so that you're also
> testing that your backup/restore process works correctly.  For dealing
> with larger databases, using a backup tool which has parallel backup,
> parallel restore, and is able to restore just the files which are
> different from the backup can make the restore take much less time (this
> is what the 'delta-restore' option in pgbackrest does, and it was
> specifically written to support exactly this kind of prod->dev periodic
> refresh, though other tools may also support that these days).
> 
> As mentioned elsewhere on this thread, using snapshots can also be a
> good approach though you have to be sure that the snapshot is completely
> atomic across all filesystems that PostgreSQL is using, or you have to
> deal with running pg_start/stop_backup and putting a backup_label into
> place for the restored snapshot and a recovery.conf to provide a way for
> PG to get at any WAL which was generated while the snapshot (or
> snapshots) was being taken.

Very much yes to everything Stephen says. Regularly refreshing nonprod via your 
normal backup/restore process is an efficient way to test your backups, and 
snapshots are a great way to do backups when your data volume is greater than 
your churn between backups. (And at 20+ TB, I hope that's the case for you.)


Re: specifying table in function args

2019-02-28 Thread David G. Johnston
On Thursday, February 28, 2019, Rob Sargent  wrote:

> but this fails in one of two ways:  either the create function call fails
> lacking a definition of "segment" or, if I create a public.segment table,
> create the function, set search_path to include a project's schema then
> drop public.segment fails because pvr() relies on it.
>

CREATE TYPE?
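
Presumably something along these lines, with field names taken from the
pvr() formula (a sketch; as Rob's follow-up shows, a mere shell type is
not enough for a plpgsql argument):

   CREATE TYPE public.segment AS (
       events_less    integer,
       events_equal   integer,
       events_greater integer
   );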

David J.


Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Michael Lewis
>
> Yeah, because it's an exact datatype match while the core operator
> is anyarray && anyarray which is not.


Can you dumb down how to change the index or column type such that an index
will be used for the && operator while the intarray extension is installed?
We have the intarray extension installed and I doubt that I can get it
removed. I've added a gin index on a column declared as type "integer
array" and had expected the index to be used, but I have not tested it yet,
since there is some bad data with NULLs stored as values in the integer
array and I am waiting on that data cleanup to be done before directly
using the && operator. I expect that when I do test it, I will be impacted
by this same concern. Is it simplest to change the column type from integer
array to anyarray? Is there a con to doing so?


Refresh Publication takes hours and doesn't finish

2019-02-28 Thread PegoraroF10
We use logical replication from PG version 10.6 to 11.2. Both are on Ubuntu
16.04. We have a hundred schemas, each with more or less a hundred tables,
so the number of tables is about 10,000. All replication is OK, but when we
try to do a REFRESH SUBSCRIPTION because we added a new schema, it takes
hours and doesn't finish. Then, if I go to our master server and do a
select * from pg_publication_tables, it doesn't respond either. Analysing
the source of the view pg_publication_tables:

create view pg_publication_tables as
  SELECT p.pubname,
         n.nspname AS schemaname,
         c.relname AS tablename
    FROM pg_publication p,
         (pg_class c JOIN pg_namespace n ON ((n.oid = c.relnamespace)))
   WHERE (c.oid IN (SELECT pg_get_publication_tables.relid
                      FROM pg_get_publication_tables((p.pubname)::text)
                           pg_get_publication_tables (relid)));

If we run both statements of that view separately:

SELECT string_agg(pg_get_publication_tables.relid::text, ',')
  FROM pg_get_publication_tables(('MyPublication')::text)
       pg_get_publication_tables (relid);

and put all those retrieved OIDs into the IN of the view:

select * from pg_class c JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.oid IN ( [OIDs list] );

then it responds immediately.

So, the question is: can we change this view to select faster? Will just
rewriting that view as a better select solve it? Is this view used by
REFRESH SUBSCRIPTION? We think yes, because if we run refresh subscription
or select from the view, neither responds.




Re: Pgbackrest Compatibility issue

2019-02-28 Thread chiru r
On Thu, Feb 28, 2019 at 1:28 PM Adrian Klaver 
wrote:

> On 2/28/19 10:06 AM, chiru r wrote:
> >
> > We are using pgBackRest version 1.24.
> >
> > Installed using RPMs:
> >
> > yum -y install perl-parent
> > yum -y install perl-Time-HiRes
> > yum -y install perl-JSON
> > yum -y install perl-Digest-SHA
> > yum -y install perl-Digest
> > yum -y install perl-DBD-Pg
>
> FYI, please do not top post. Inline posting is the preferred style on
> this list.
>
> In the above I do not see anything that indicates how pgBackRest was
> installed or the version number.
>
Hi Adrian,

We installed pgBackRest *1.24* initially, with PostgreSQL 9.5.10.
Now we have installed postgresql-10 on the server, and we are trying to run
backups for PostgreSQL 10 using pgbackrest as well, and we are getting
compilation errors.

These are the steps we followed:
yum -y install perl-parent
yum -y install perl-Time-HiRes
yum -y install perl-JSON
yum -y install perl-Digest-SHA
yum -y install perl-Digest
yum -y install perl-DBD-Pg

Downloaded pgbackrest-release-1.24.zip from github and unpacked.
https://github.com/pgbackrest/pgbackrest

cp -r pgbackrest-release-1.24/lib/pgBackRest  /usr/share/perl5/
cp -r pgbackrest-release-1.24/bin/pgbackrest  /usr/bin/pgbackrest

echo "Restarting Cluster..."
su -c "cd ${PGHOME}/bin; pg_ctl restart -D ${PGDATA}" -m "$DBUSER"
sleep 20

su -c "cd /usr/bin; pgbackrest --stanza=online_backups10
--log-level-console=info stanza-create" -m "$DBUSER"
su -c "cd /usr/bin; pgbackrest --stanza=online_backups10
--log-level-console=info check" -m "$DBUSER"

 Thanks,
chiru


> >
> >
> > On Thu, Feb 28, 2019 at 10:15 AM Adrian Klaver
> > <adrian.kla...@aklaver.com> wrote:
> >
> > On 2/28/19 6:59 AM, chiru r wrote:
> >  > Hi All,
> >  > I am using pgBackRest and have an issue while upgrading our
> >  > PostgreSQL 9.5.10 to PostgreSQL 10.4 on the same physical Linux
> >  > server.
> > [... remainder of the earlier quoted exchange, including the full
> > "Bareword" error log, snipped; see above ...]

Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Thomas Kellerer

Michael Lewis wrote on 28.02.2019 at 21:23:

Yeah, because it's an exact datatype match while the core operator
is anyarray && anyarray which is not.


Can you dumb down how to change the index or column type such that an
index will be used for the && operator while intarray extension is
installed? We have the intarray extension installed and I doubt that
I can get it removed. I've added gin index on a column declared as
type "integer array" and had expected the index to be used, but I did
not test it yet since there is some bad data with NULLs stored as a
value in the integer array and I am waiting on that data cleanup to
be done before directly using && operator. I expect that when I do
test it, I will be impacted by this same concern. It is simplest to
change the column type from integer array to anyarray? Is there a con
to doing so?


I see two options: create the GIN index with the opclass provided by
intarray:

   create index ... using gin (... gin__int_ops);

Or, if you use the regular opclass for the GIN index, make sure to use the
built-in operator, e.g. OPERATOR(pg_catalog.&&) instead of && or
OPERATOR(pg_catalog.@>) instead of @>.
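
For example (table and column hypothetical):

   -- Option 1: index with intarray's opclass, so intarray's && can use it.
   CREATE INDEX t_a_gin_int ON t USING gin (a gin__int_ops);

   -- Option 2: keep the default array_ops index and force the built-in
   -- operator at query time.
   SELECT * FROM t WHERE a OPERATOR(pg_catalog.&&) ARRAY[1,2,3];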

Thomas



Re: specifying table in function args

2019-02-28 Thread Rob Sargent


On 2/28/19 12:27 PM, David G. Johnston wrote:
On Thursday, February 28, 2019, Rob Sargent wrote:


but this fails in one of two ways:  either the create function
call fails lacking a definition of "segment" or, if I create a
public.segment table, create the function, and set search_path to
include a project's schema, then dropping public.segment fails
because pvr() relies on it.


CREATE TYPE?

David J.


Not yet.

Using just a shell type, from create or replace function public.pvr(seg
segment, plus float default 1.0) I get:


   psql:functions/pvr.sql:19: NOTICE:  argument type segment is only a
   shell
   psql:functions/pvr.sql:19: ERROR:  PL/pgSQL functions cannot accept
   type segment (same for SQL function)

Using a fleshed-out segment type I get

 \df public.pvr
                                      List of functions
  Schema | Name | Result data type |                 Argument data types                    |  Type
 --------+------+------------------+--------------------------------------------------------+--------
  public | pvr  | double precision | seg public.segment, plus double precision DEFAULT 1.0 | normal
 (1 row)

and the schema qualifier on the argument is a killer

   camp=# set search_path = base,mm,public;
   SET
   Time: 0.810 ms
   camp=# select pvr(s.*) from mm.segment s limit 5;
   ERROR:  function pvr(segment) does not exist
   LINE 1: select pvr(s.*) from mm.segment s limit 5;
   ^
   HINT:  No function matches the given name and argument types. You
   might need to add explicit type casts.





race condition when checking uniqueness between two tables

2019-02-28 Thread Győző Papp
Hi all,

First and foremost I admit what follows is quite awkward and a bit
long. Sorry for both.

However, I would still really like to understand what I am missing with
regard to transaction isolation levels and MVCC when CTEs are involved.

Given a sort of scheduler application with two tables:

CREATE TABLE available_jobs (id UUID PRIMARY KEY, url TEXT NOT NULL);
CREATE TABLE assigned_jobs (id UUID PRIMARY KEY, url TEXT NOT NULL);

New jobs are always inserted into `available_jobs`, but only if their id
(primary key) does not exist in either of the two tables, so for the
time being we used the INSERT below:

INSERT INTO available_jobs(id, url)
SELECT :id, :url
WHERE NOT EXISTS (
  SELECT id FROM assigned_jobs WHERE id = :id
)
ON CONFLICT DO NOTHING

A periodic task moves the jobs that have been assigned from
`available_jobs` to `assigned_jobs`:

WITH jobs_to_assign AS (
  DELETE FROM available_jobs
USING v_assignable_jobs
WHERE available_jobs.id = v_assignable_jobs.id
RETURNING available_jobs.*
)
INSERT
  INTO assigned_jobs(id, url)
  SELECT id, url
FROM jobs_to_assign
RETURNING *;

And a similar one that “unassigns” failed assigned jobs, that is,
pushes back rows from `assigned_jobs` to `available_jobs`. And it
sometimes fails with:

ERROR: duplicate key value violates unique constraint "jobs_pkey"
DETAIL: Key (id)=(1fd0626c-f953-3278-82a1-8e4320d28914) already
exists.

From the application logs we suspect that "duplicates" sneak in when
assignment takes a bit longer and overlaps with new job insertion with
a primary key already in use.

My first question is: how could it happen, which phenomenon are we
risking/experiencing in these cases?

Moreover, to my great surprise, when we added a condition to the INSERT
statement that at first blush looked superfluous (partly for fun, partly
out of cluelessness):

INSERT INTO available_jobs(id, url)
SELECT :id, :url
WHERE NOT EXISTS (
  SELECT id FROM assigned_jobs WHERE id = :id
) AND NOT EXISTS (
  SELECT id FROM available_jobs WHERE id = :id
)
ON CONFLICT DO NOTHING

we never spotted such duplicates, which otherwise always appeared, under
the same load and data distribution, over a fairly long test period. My
next question would be: what does this condition add to the full picture
that prevents creating a duplicate in a way ON CONFLICT DO NOTHING could
not?

Many thanks for any hints, insights or suggestions. Sorry again for
being so long.

Notes:
* We are using 10.6 in AWS RDS.
* Each transaction is in READ_COMMITTED.
* I simplified the original table layout a lot for brevity and
unfortunately had no time to reproduce the original phenomenon.
* `v_assignable_jobs` view does the job selection based on other
tables including `assigned_jobs` as well.
* A long time back we had only a single jobs table, but because job
selection had poor performance at high load we decided to split the
table into two. This explains the primary key name `jobs_pkey`.

--
Gyozo Papp




Re: specifying table in function args

2019-02-28 Thread David G. Johnston
On Thu, Feb 28, 2019 at 1:56 PM Rob Sargent  wrote:
> HINT:  No function matches the given name and argument types. You might need 
> to add explicit type casts.

Pretty sure you need to heed the advice to make it work.

select pvr(s::public.segment) from mm.segment AS s limit 5;

David J.



Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Tom Lane
Michael Lewis  writes:
> Can you dumb down how to change the index or column type such that an index
> will be used for the && operator while intarray extension is installed? We
> have the intarray extension installed and I doubt that I can get it
> removed.

There's no magic nice solution to this, or we'd have told you about it.

Possible options:

1. Remove intarray extension.
2. Move intarray extension to a schema that's not in your search path.
3. Create an index using intarray's opclass, instead of or in addition
   to the core-opclass index.
4. Rename intarray's && operator to something else (will bite you at
   next dump/reload, where the renaming will be lost).
5. Always schema-qualify references to the core && operator.

All of these have obvious downsides, especially if you're actively
using the intarray extension for other purposes.

regards, tom lane



Re: race condition when checking uniqueness between two tables

2019-02-28 Thread Tom Lane
Győző Papp  writes:
> A periodic task moves the jobs that have been assigned from
> `available_jobs` to `assigned_jobs`:

> WITH jobs_to_assign AS (
>   DELETE FROM available_jobs
> USING v_assignable_jobs
> WHERE available_jobs.id = v_assignable_jobs.id
> RETURNING available_jobs.*
> )
> INSERT
>   INTO assigned_jobs(id, url)
>   SELECT id, url
> FROM jobs_to_assign
> RETURNING *;

> And a similar one that “unassigns” failed assigned jobs, that is,
> pushes back rows from `assigned_jobs` to `available_jobs`. And it
> sometimes fails with:
> ERROR: duplicate key value violates unique constraint "jobs_pkey"
> DETAIL: Key (id)=(1fd0626c-f953-3278-82a1-8e4320d28914) already
> exists.

What is v_assignable_jobs?

If, as I suspect, it's a view involving these same tables, then
likely your issue is that you aren't reading that view with suitable
locking, so that sometimes it will return stale rows that describe
no-longer-assignable jobs.

DELETE doesn't have an option to apply FOR UPDATE to USING tables,
AFAIR, but maybe you could fix it along this line:

WITH jobs_to_assign AS (
  SELECT id FROM v_assignable_jobs
FOR UPDATE
), deleted_jobs AS (
  DELETE FROM available_jobs
USING jobs_to_assign
WHERE available_jobs.id = jobs_to_assign.id
RETURNING available_jobs.*
)
INSERT ... about as before, but use deleted_jobs ...
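
Spelled out with the same id/url columns as the original statement, that
last step might look like this (a sketch, not tested):

WITH jobs_to_assign AS (
  SELECT id FROM v_assignable_jobs
FOR UPDATE
), deleted_jobs AS (
  DELETE FROM available_jobs
USING jobs_to_assign
WHERE available_jobs.id = jobs_to_assign.id
RETURNING available_jobs.*
)
INSERT
  INTO assigned_jobs(id, url)
  SELECT id, url
FROM deleted_jobs
RETURNING *;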


> * Each transaction is in READ_COMMITTED.

Another line of thought is to use SERIALIZABLE mode and just retry
the inevitable serialization failures.  However, if there are a lot
of concurrent processes doing this, you'd probably get so many
serialization failures that it'd be quite inefficient.

regards, tom lane



Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Michael Lewis
On Thu, Feb 28, 2019 at 3:34 PM Tom Lane  wrote:

> Michael Lewis  writes:
> > Can you dumb down how to change the index or column type such that an
> index
> > will be used for the && operator while intarray extension is installed?
> We
> > have the intarray extension installed and I doubt that I can get it
> > removed.
>
> There's no magic nice solution to this, or we'd have told you about it.
>
> Possible options:
>
> 1. Remove intarray extension.
> 2. Move intarray extension to a schema that's not in your search path.
> 3. Create an index using intarray's opclass, instead of or in addition
>to the core-opclass index.
>

Thank you so much for your time in enumerating the options. What's the
concern/problem/cost of re-creating the index with intarray's opclass?
If that's what will be used by && with the extension installed, then what's
the downside?

I see a significant code refactor for options 1 and 2, as they would have
system-wide impact rather than being limited to the use of this particular
column.



> 4. Rename intarray's && operator to something else (will bite you at
>next dump/reload, where the renaming will be lost).
> 5. Always schema-qualify references to the core && operator.
>

Would a sixth option be to re-create the column as array type and keep the
index as is and ensure that in queries, I am using conditions like ARRAY[1]
&& table.column_name rather than '{1}'::integer[] && table.column_name? Or
would I still need to schema qualify references to the core && operator for
it to be used?

If I created a table and disabled sequential scan and such, I suppose I
could easily test whether the index gets used or not.
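
Something along these lines, say (a sketch using a hypothetical scratch
table):

CREATE TABLE arr_test (ids integer[]);
INSERT INTO arr_test SELECT ARRAY[g] FROM generate_series(1, 10000) g;
CREATE INDEX arr_test_gin ON arr_test USING gin (ids);  -- core array_ops
SET enable_seqscan = off;
-- schema-qualified core operator: should show a (bitmap) index scan
EXPLAIN SELECT * FROM arr_test WHERE ids OPERATOR(pg_catalog.&&) ARRAY[1];
-- bare && resolves to intarray's operator and cannot use this index
EXPLAIN SELECT * FROM arr_test WHERE ids && ARRAY[1];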


>
> All of these have obvious downsides, especially if you're actively
> using the intarray extension for other purposes.
>
> regards, tom lane
>


PostgreSQL (linux) configuration with GSSAPI to a Windows domain

2019-02-28 Thread Jean-Philippe Chenel
I'm trying to configure authentication between a PostgreSQL database server
on Linux and Windows Active Directory.

The first part of the configuration is working, but when I try to
authenticate from a Windows client it fails with the message: Can't obtain
database list from the server. SSPI continuation error. The specified
target is unknown or unreachable (80090303)

On Windows:

Domain is AD.CORP.COM

Host is: WIN.AD.CORP.COM, IP is 192.168.1.173

On Linux (Ubuntu 16.04)

hostname is UBUNTU.ad.corp.com, IP is 192.168.1.143

DNS is configured to reach the AD system (.173)

PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1), compiled 
by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit

I've created a service user called POSTGRES and a normal user in AD called
ubuntupg.

Finally I've created the SPN:

setspn -A POSTGRES/UBUNTU.ad.corp.com POSTGRES

Generated the keytab to put on the linux server:

ktpass -out postgres.keytab -princ POSTGRES/ubuntu.ad.corp@ad.corp.com 
-mapUser POSTGRES -pass 'thepassword' -crypto all -ptype KRB5_NT_PRINCIPAL

On the linux /etc/krb5.conf:

[libdefaults]
  debug=true
  default_realm = AD.CORP.COM
  dns_lookup_realm = false
  dns_lookup_kdc = false
  ticket_lifetime = 24h
  renew_lifetime = 7d
  forwardable = true

[realms]
  AD.CORP.COM = {

kdc = WIN.AD.CORP.COM
  }

[domain_realm]
  ad.corp.com = AD.CORP.COM

  .ad.corp.com = AD.CORP.COM

The following command works and klist returns a ticket:

kinit -V -k -t /etc/postgresql/9.6/main/postgres.keytab 
POSTGRES/ubuntu.ad.corp@ad.corp.com

klist -k /etc/postgresql/9.6/main/postgres.keytab

POSTGRES/ubuntu.ad.corp@ad.corp.com

Here is the added configuration in postgresql.conf:

krb_server_keyfile = '/etc/postgresql/9.6/main/postgres.keytab'

Here is the configuration of pg_hba.conf

host    all    all    0.0.0.0/0    gss

Up to here, all is working as expected; kinit with ubuntupg is also working
well. The users ubuntupg and ubunt...@ad.corp.com are also created in the
database. The problem is when I try to connect to the DB from a Windows
client.

psql.exe -h 192.168.1.143 -U ubuntupg

Can't obtain database list from the server. SSPI continuation error. The 
specified target is unknown or unreachable (80090303)

The PostgreSQL log file shows:

2019-02-28 14:02:54.178 EST [6747] [unknown]@[unknown] LOG:  0: connection 
received: host=192.168.1.176 port=57254
2019-02-28 14:02:54.178 EST [6747] [unknown]@[unknown] LOCATION:  
BackendInitialize, postmaster.c:4188
2019-02-28 14:02:54.331 EST [6747] ubuntupg@ubuntupg FATAL:  28000: GSSAPI 
authentication failed for user "ubuntupg"
2019-02-28 14:02:54.331 EST [6747] ubuntupg@ubuntupg DETAIL:  Connection 
matched pg_hba.conf line 92: "host    all    all    0.0.0.0/0    gss"
2019-02-28 14:02:54.331 EST [6747] ubuntupg@ubuntupg LOCATION:  auth_failed, 
auth.c:307

psql.exe -h 192.168.1.143 -U ubunt...@ad.corp.com

2019-02-28 14:06:35.992 EST [6866] [unknown]@[unknown] LOG:  0: connection 
received: host=192.168.1.176 port=57282

2019-02-28 14:06:35.992 EST [6866] [unknown]@[unknown] LOCATION:  
BackendInitialize, postmaster.c:4188

2019-02-28 14:06:36.148 EST [6866] ubunt...@ad.corp.com@ubunt...@ad.corp.com 
FATAL:  28000: GSSAPI authentication failed for user "ubunt...@ad.corp.com"

2019-02-28 14:06:36.148 EST [6866] ubunt...@ad.corp.com@ubunt...@ad.corp.com 
DETAIL:  Connection matched pg_hba.conf line 96: "host    all    all    0.0.0.0/0    gss"

2019-02-28 14:06:36.148 EST [6866] ubunt...@ad.corp.com@ubunt...@ad.corp.com 
LOCATION:  auth_failed, auth.c:307

Thank you very much for your help.

Best regards,


Re: Pgbackrest Compatibility issue

2019-02-28 Thread Adrian Klaver

On 2/28/19 12:28 PM, chiru r wrote:



On Thu, Feb 28, 2019 at 1:28 PM Adrian Klaver wrote:


On 2/28/19 10:06 AM, chiru r wrote:
 >
 > We are using pgBackRest version 1.24.


Hmm, that got cut off in the previous post, sorry for the noise. Seems to be
an email client issue on my end.



 >
 > Installing using rpms
 >
 > yum -y install perl-parent
 > yum -y install perl-Time-HiRes
 > yum -y install perl-JSON
 > yum -y install perl-Digest-SHA
 > yum -y install perl-Digest
 > yum -y install perl-DBD-Pg

FYI, please do not top post. Inline posting is the preferred style on
this list.

In the above I do not see anything that indicates how pgBackRest was
installed or the version number.

Hi Adrian,
We installed pgBackRest 1.24 initially, with PostgreSQL version 9.5.10.


In the 1.x series the latest is 1.28. The 1.x series has been superseded 
by 2.x it would seem.


Now we have installed postgresql-10 on the server, and we are trying to run
backups for PostgreSQL 10 using pgbackrest as well, but we are getting
compilation errors.


Is this the same server as the one that has the 9.5 instance?



The below steps we followed.
yum -y install perl-parent
yum -y install perl-Time-HiRes
yum -y install perl-JSON
yum -y install perl-Digest-SHA
yum -y install perl-Digest
yum -y install perl-DBD-Pg

Downloaded pgbackrest-release-1.24.zip from github and unpacked.
https://github.com/pgbackrest/pgbackrest

cp -r pgbackrest-release-1.24/lib/pgBackRest  /usr/share/perl5/
cp -r pgbackrest-release-1.24/bin/pgbackrest  /usr/bin/pgbackrest


Have you tried this:

https://pgbackrest.org/user-guide.html#build



echo "Restarting Cluster..."
su -c "cd ${PGHOME}/bin; pg_ctl restart -D ${PGDATA}" -m "$DBUSER"
sleep 20

su -c "cd /usr/bin; pgbackrest --stanza=online_backups10 
--log-level-console=info stanza-create" -m "$DBUSER"
su -c "cd /usr/bin; pgbackrest --stanza=online_backups10 
--log-level-console=info check" -m "$DBUSER"


  Thanks,
chiru




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Ron

On 2/28/19 4:53 PM, Michael Lewis wrote:
[snip]

Would a sixth option be to re-create the column as array type


Codd is spinning in his grave...

--
Angular momentum makes the world go 'round.


Mind of its own?

2019-02-28 Thread s400t
Hopefully not, but I feel so helpless.

What happened?

I am using Postgresql 9.6, and the latest phppgadmin. OS is Debian 9.6.

1. Created a database.
2. Created tables inside that database.
3. Dropped that database.


Now, what happens is that when I create another database with the same or a
different name, all those tables are already inside that database!!! I have
not run the table creation script.
This is a cruel joke.

What is happening?




Re: Mind of its own?

2019-02-28 Thread Ian Barwick

On 3/1/19 9:10 AM, s4...@yahoo.co.jp wrote:

Hopefully not, but I feel so helpless.

What happened?

I am using Postgresql 9.6, and the latest phppgadmin. OS is Debian 9.6.

1. Created a database.
2. Created tables inside that database.
3. Dropped that database.


Now, what happens is that when I create another database with the same or a
different name, all those tables are already inside that database!!! I have
not run the table creation script.
This is a cruel joke.

What is happening?


Have you tried using psql to connect directly to PostgreSQL to verify
what phppgadmin is telling you?
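
For instance, connect with psql and run something like this (a quick
check, assuming the new database is called newdb):

   \c newdb
   SELECT schemaname, tablename
     FROM pg_catalog.pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema');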


Regards

Ian Barwick


--
 Ian Barwick   https://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



RE: Barman disaster recovery solution

2019-02-28 Thread Ahmed, Nawaz
Thanks for clarifying guys.

Best Regards,

Nawaz Ahmed
Software Development Engineer

Fujitsu Australia Software Technology Pty Ltd
14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9027
na...@fast.au.fujitsu.com
fastware.com.au

-Original Message-
From: David Steele 
Sent: Thursday, 28 February 2019 7:41 PM
To: Achilleas Mantzios ; Ahmed, Nawaz 
; pgsql-general@lists.postgresql.org
Subject: Re: Barman disaster recovery solution

On 2/28/19 9:21 AM, Achilleas Mantzios wrote:
> On 28/2/19 1:08 π.μ., Ahmed, Nawaz wrote:
>>
>> Hi,
>>
>> I believe the "file copy" method (listed in the table) in pgbackrest
>> is based on pg_basebackup, so i think it should be "pg_basebackup
>> over ssh" as pgbackrest internally calls pg_basebackup. David Steele
>> can correct me.
>>
> No, apparently pgbackrest does not rely on pg_basebackup.

pgBackRest implements file copy internally and does not rely on any 
command-line tools such as rsync, tar, pg_basebackup, etc.

Regards,
--
-David
da...@pgmasters.net


Re: Mind of its own?

2019-02-28 Thread Tom Lane
s4...@yahoo.co.jp writes:
> I am using Postgresql 9.6, and the latest phppgadmin. OS is Debian 9.6.

> 1. Created a database.
> 2. Created tables inside that database.
> 3. Dropped that database.

> Now, what happens is that when I create another database with the same or a
> different name, all those tables are already inside that database!!! I have
> not run the table creation script.

I think you created some tables in the "template1" database, which is
(by default) what CREATE DATABASE clones to make a new DB.  Duplicating
its contents is considered a feature not a bug.

You could manually clean out template1, or if that seems painful,
you could drop it and recreate it by cloning template0 (see
CREATE DATABASE's TEMPLATE option).  template0 exists precisely
to be a virgin empty database for this sort of purpose.

(There is an interlock to prevent you from dropping either
of these template databases accidentally.  Overriding that is
left as an exercise for the student.)
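
For the record, a minimal sketch of that recipe (run as a superuser while
connected to some other database, e.g. postgres; ALTER DATABASE ...
IS_TEMPLATE needs 9.5 or later):

ALTER DATABASE template1 IS_TEMPLATE false;  -- lift the interlock
DROP DATABASE template1;
CREATE DATABASE template1 TEMPLATE template0 IS_TEMPLATE true;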

regards, tom lane



Re: PostgreSQL (linux) configuration with GSSAPI to a Windows domain

2019-02-28 Thread Andre Piwoni
I think setting up PAM authentication with AD, on a Linux server joined to
the domain via realm/SSSD, was much easier and more transparent.

Something like this worked for me to create SPN mapping and keytab in one
command without need to use UPPERCASE for POSTGRES:
ktpass -out postgres.keytab -princ POSTGRES/ubuntu.ad.corp@ad.corp.com
-mapUser AD\POSTGRES -pass 'thepassword' -mapOp add -crypto ALL -ptype
KRB5_NT_PRINCIPAL

pg_hba.conf
host all all 0.0.0.0/0 gss include_realm=0 krb_realm=AD.CORP.COM
krb_realm should not be needed since you have one in your krb5.conf

postgresql.conf
krb_server_keyfile = '/etc/postgresql/9.6/main/postgres.keytab'
#krb_caseins_users = off

kinit ubunt...@ad.corp.com
psql.exe -h 192.168.1.143 -U ubuntupg

klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: ubunt...@ad.corp.com

Valid starting   Expires  Service principal
08/03/2018 22:28:47  08/04/2018 08:28:47  krbtgt/ad.corp@ad.corp.com
renew until 08/10/2018 22:28:42
08/03/2018 22:29:00  08/04/2018 08:28:47  POSTGRES/ubuntu.ad.corp@ad.corp.com
renew until 08/10/2018 22:28:42

On Thu, Feb 28, 2019 at 2:54 PM Jean-Philippe Chenel 
wrote:

> I'm trying to configure authentication between a PostgreSQL database server
> on Linux and Windows Active Directory.
>
> *The first part of the configuration is working, but when I try to
> authenticate from a Windows client it fails with the message: Can't
> obtain database list from the server. SSPI continuation error. The
> specified target is unknown or unreachable (80090303)*
>
> *On Windows:*
>
> Domain is AD.CORP.COM
>
> Host is: WIN.AD.CORP.COM, IP is 192.168.1.173
>
> *On Linux (Ubuntu 16.04)*
>
> hostname is UBUNTU.ad.corp.com, IP is 192.168.1.143
>
> DNS is configured to reach the AD system (.173)
>
> PostgreSQL 9.6.9 on x86_64-pc-linux-gnu (Ubuntu 9.6.9-2.pgdg16.04+1),
> compiled by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.9) 5.4.0 20160609, 64-bit
>
> I've created a service user called POSTGRES and a normal user in AD called
> ubuntupg.
>
> Finally I've created the SPN:
>
> setspn -A POSTGRES/UBUNTU.ad.corp.com POSTGRES
>
> Generated the keytab to put on the linux server:
>
> ktpass -out postgres.keytab -princ POSTGRES/ubuntu.ad.corp@ad.corp.com 
> -mapUser POSTGRES -pass 'thepassword' -crypto all -ptype KRB5_NT_PRINCIPAL
>
> On the linux /etc/krb5.conf:
>
> [libdefaults]
>   debug=true
>   default_realm = AD.CORP.COM
>   dns_lookup_realm = false
>   dns_lookup_kdc = false
>   ticket_lifetime = 24h
>   renew_lifetime = 7d
>   forwardable = true
>
> [realms]
>   AD.CORP.COM = {
> kdc = WIN.AD.CORP.COM
>   }
>
> [domain_realm]
>   ad.corp.com = AD.CORP.COM
>   .ad.corp.com = AD.CORP.COM
>
> The following command works and klist returns a ticket:
>
> kinit -V -k -t /etc/postgresql/9.6/main/postgres.keytab 
> POSTGRES/ubuntu.ad.corp@ad.corp.com
>
> klist -k /etc/postgresql/9.6/main/postgres.keytab
>
> POSTGRES/ubuntu.ad.corp@ad.corp.com
>
> Here is the added configuration in postgresql.conf:
>
> krb_server_keyfile = '/etc/postgresql/9.6/main/postgres.keytab'
>
> Here is the configuration of pg_hba.conf
>
> host    all    all    0.0.0.0/0    gss
>
> Up to here, all is working as expected; kinit with ubuntupg is also
> working well. The users ubuntupg and ubunt...@ad.corp.com are also created
> in the database. The problem is when I try to connect to the DB from a
> Windows client.
>
> psql.exe -h 192.168.1.143 -U ubuntupg
>
> *Can't obtain database list from the server. SSPI continuation error. The
> specified target is unknown or unreachable (80090303)*
>
> The PostgreSQL log file shows:
>
> 2019-02-28 14:02:54.178 EST [6747] [unknown]@[unknown] LOG:  0: 
> connection received: host=192.168.1.176 port=57254
> 2019-02-28 14:02:54.178 EST [6747] [unknown]@[unknown] LOCATION:  
> BackendInitialize, postmaster.c:4188
> 2019-02-28 14:02:54.331 EST [6747] ubuntupg@ubuntupg FATAL:  28000: GSSAPI 
> authentication failed for user "ubuntupg"
> 2019-02-28 14:02:54.331 EST [6747] ubuntupg@ubuntupg DETAIL:  Connection 
> matched pg_hba.conf line 92: "host    all    all    0.0.0.0/0    gss"
> 2019-02-28 14:02:54.331 EST [6747] ubuntupg@ubuntupg LOCATION:  auth_failed, 
> auth.c:307
>
> psql.exe -h 192.168.1.143 -U ubunt...@ad.corp.com
>
> 2019-02-28 14:06:35.992 EST [6866] [unknown]@[unknown] LOG:  0: 
> connection received: host=192.168.1.176 port=57282
> 2019-02-28 14:06:35.992 EST [6866] [unknown]@[unknown] LOCATION:  
> BackendInitialize, postmaster.c:4188
> 2019-02-28 14:06:36.148 EST [6866] ubunt...@ad.corp.com@ubunt...@ad.corp.com 
> FATAL:  28000: GSSAPI authentication failed for user "ubunt...@ad.corp.com"
> 2019-02-28 14:06:36.148 EST [6866] ubunt...@ad.corp.com@ubunt...@ad.corp.com 
> DETAIL:  Connection matched pg_hba.conf line 96: "host    all    all    0.0.0.0/0    gss"
> 2019-02-28 14:06:36.148 EST [6866] ubunt...@ad.corp.com@ubunt...@ad.corp.com 
> LOCATION:  auth_failed, auth.c:307
>
> Thank you very much for your help.

Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Michael Lewis
On Thu, Feb 28, 2019 at 4:57 PM Ron  wrote:

> On 2/28/19 4:53 PM, Michael Lewis wrote:
> [snip]
>
> Would a sixth option be to re-create the column as array type
>
>
> Codd is spinning in his grave...
>

I'd hope he would be fine with people asking questions to learn. I'm open
to studying any suggested resources. I also love to learn from those with
experience who are further down this path, just as I love to share my
experience with people who don't know a WHERE from a HAVING. If there is
anything specific you can point me to, please do chime in with something
constructive.


validation of hot standby

2019-02-28 Thread Julie Nishimura
Hello,
We set up our hot standby by putting the master into archive mode and issuing
the pg_basebackup command, and we are about to start our hot standby
instance. What are the things you suggest for validating that it comes up
cleanly and that replication has caught up?

What do I need to check in addition to make sure the data is clean and not
corrupted?

thanks for your help


Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Ron

On 2/28/19 7:53 PM, Michael Lewis wrote:



On Thu, Feb 28, 2019 at 4:57 PM Ron wrote:


On 2/28/19 4:53 PM, Michael Lewis wrote:
[snip]

Would a sixth option be to re-create the column as array type


Codd is spinning in his grave...


I'd hope he would be fine with people asking questions to learn. I'm open 
to studying any suggested resources. I also love to learn from those with 
experience who are further down this path, just as I love to share my 
experience with people who don't know a WHERE from a HAVING. If there is 
anything specific you can point me to, please do chime in with something 
constructive.


Michael,

E. F. Codd developed the relational model, and Normal Forms to structure the 
attributes (columns), tuples (rows) and relations (tables).  They work to 
prevent insert, update and delete anomalies.


Quoting him in /The Relational Model for Database Management Version 2/ from 
1990: "values in the domains on which each relation is defined are required 
to be atomic with respect to the DBMS."


Arrays are -- by definition -- not atomic, and so they fundamentally break 
the model that relational databases are founded upon.  If you want to be a 
good database designer, don't use arrays.


(Darwen and Date deconstruct "atomic value" into meaninglessness by claiming 
that, for example, strings are arrays of characters and thus arrays are ok.  
I think that's bollocks.)


--
Angular momentum makes the world go 'round.


Re: validation of hot standby

2019-02-28 Thread Ron

On 2/28/19 8:00 PM, Julie Nishimura wrote:

Hello,
We set up our hot standby by putting the master into archive mode and issuing 
the pg_basebackup command, and we are about to start our hot standby instance. 
What are the things you suggest for validating that it comes up cleanly and 
that replication has caught up?


Run the program pg_controldata on the secondary server. Specifically look 
for the last checkpoint timestamp.  It will lag some, depending on how 
checkpoint frequency is configured on the master.
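
For the replication side, something like this can also help (a sketch for
9.x; on v10 and later the xlog functions and columns were renamed to their
wal/lsn equivalents):

-- on the primary: per-standby lag in bytes
SELECT client_addr, state,
       pg_xlog_location_diff(pg_current_xlog_location(),
                             replay_location) AS replay_lag_bytes
  FROM pg_stat_replication;

-- on the standby: time since the last replayed transaction
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;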




What do I need to check in addition to make sure the data is clean and not 
corrupted?


thanks for your help


--
Angular momentum makes the world go 'round.


Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Michael Lewis
>
> Arrays are -- by definition -- not atomic, and so they fundamentally break
> the model that relational databases are founded upon.  If you want to be a
> good database designer, don't use arrays.
>

Thanks. I was reading about Codd after your last email, but couldn't guess
which point was objectionable. I'll have to keep that in mind and always
question it when I come across an array or am ever tempted to use one.

While I can make minor modifications to the schema and suggest best
practices, I am not in a position to set or enforce policy within my
organization. The table and field in question are absolutely a mess, and
this field in particular is actually an array of IDs of other records on
the same table (that actually represent another object that is similar but
not the same as the one containing the array of IDs).

So, I am just looking to make the best of a bad situation currently and
understand how best to use the various tools available to me, with little
impact on the existing codebase but still getting performance benefits from
proper indexing. Thanks for sharing the additional context.

>


Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Ron

On 2/28/19 10:26 PM, Michael Lewis wrote:


Arrays are -- by definition -- not atomic, and so they fundamentally
break the model that relational databases are founded upon.  If you
want to be a good database designer, don't use arrays.


Thanks. I was reading about Codd after your last email, but couldn't guess 
which point was objectionable. I'll have to keep that in mind and 
always question it when I come across an array or am ever tempted to use 
one.


While I can make minor modifications to the schema and suggest best 
practices, I am not in a position to set or enforce policy within my 
organization. The table and field in question are absolutely a mess, and 
this field in particular is actually an array of IDs of other records on 
the same table (that actually represent another object that is similar but 
not the same as the one containing the array of IDs).


So, I am just looking to make the best of a bad situation currently and 
understand how best to use the various tools available to me, with little 
impact on the existing codebase but still getting performance benefits 
from proper indexing. Thanks for sharing the additional context.




Don't worry.  I'm just in an especially "geezer" mood today.

--
Angular momentum makes the world go 'round.


Re: Overloaded && operator from intarray module prevents index usage.

2019-02-28 Thread Michael Lewis
I'll try to stay off your lawn.

>