Re: Table Partitioning: Sequence jump issue 10 in 10 with serial datatype

2018-02-14 Thread Alban Hertroys

> On 14 Feb 2018, at 2:48, DrakoRod  wrote:

> CREATE OR REPLACE RULE inserts_customer_part1
> AS ON INSERT TO customers
> WHERE new.id < 1
> DO INSTEAD  INSERT INTO customers_part1 SELECT NEW.*;
> 
> CREATE OR REPLACE RULE inserts_customer_part2
> AS ON INSERT TO customers
> WHERE new.id >= 1 AND new.id < 2
> DO INSTEAD  INSERT INTO customers_part2 SELECT NEW.*;

Here's your problem. Rules substitute values. Since you didn't provide an id in 
your insert, the id column gets substituted by the default value, which happens 
to call nextval. You have 3 references to new.id in your rules, so each insert 
burns 3 extra sequence values on top of the one actually stored, which matches 
the gaps of 4 in your output below.

That's one of the reasons people usually advise using triggers & procedures 
instead of rules.
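For illustration, a trigger-based version of the same routing might look like this (a sketch only, reusing the partition names and boundary values from the quoted rules, not tested against your schema). With a trigger, NEW.id is evaluated exactly once, so nextval fires only for the row actually inserted:

```sql
-- Routing function: evaluates NEW.id once and redirects the row.
CREATE OR REPLACE FUNCTION customers_insert_router() RETURNS trigger AS $$
BEGIN
  IF NEW.id < 1 THEN
    INSERT INTO customers_part1 VALUES (NEW.*);
  ELSIF NEW.id < 2 THEN
    INSERT INTO customers_part2 VALUES (NEW.*);
  END IF;
  RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER inserts_customer_route
BEFORE INSERT ON customers
FOR EACH ROW EXECUTE PROCEDURE customers_insert_router();
```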

> dd=# SELECT * FROM customers; 
>  id | name | other_data 
> ----+------+------------
>   3 | XXx  | YY
>   7 | XXx  | YY
>  11 | XXx  | YY
>  15 | XXx  | YY
>  19 | XXx  | YY
>  23 | XXx  | YY
> (6 rows)

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Using standby for read-only queries in production and DML operations on primary.

2018-02-14 Thread Konstantin Evteev
Hello!

There is a problem connected with using a standby for read-only queries in
production together with DML operations on the primary.
On the primary we run ALTER TABLE commands with statement_timeout and
deadlock_timeout values of about 10 - 50 ms, retrying until the command
executes successfully.
But the same locks are replayed on the standby - without those timeouts.
So on the standby we see lock waits and our standby connection pools overflow -
an incident/lsr/problem for users.
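For reference, the primary-side pattern described above is roughly the following (the DDL statement is a made-up example and the timeout values are the ones mentioned; the retry loop lives in application or operator tooling):

```sql
-- Run on the primary; retried until it succeeds.
SET statement_timeout = '50ms';  -- give up quickly if the lock can't be taken
SET deadlock_timeout = '10ms';
ALTER TABLE some_table ADD COLUMN some_col integer;  -- hypothetical DDL
```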

There is a workaround to have 2 standbies:
Before altering table on primary pause 1st standby and switch read only
queries to it.
Then alter table and wait until it would be replicated to 2-nd standby.
Switch read only queries to 2-nd standby and remove pause from 1-st.

--
Konstantin Evteev


Re: execute block like Firebird does

2018-02-14 Thread Edson Carlos Ericksson Richter

Em 11/02/2018 03:57, PegoraroF10 escreveu:

We are migrating our databases from Firebird to PostGres. A useful feature
Firebird has is Execute Block.
What it does is just return a record set from that dynamic SQL, just like a
PostGres function, but without creating it.
It sound like ...
execute block returns(ID Integer, Name varchar(50), LastInvoice Date, ...)
as
begin
   select bla, bla, bla into ...;
   select bla, bla into ...;
   suspend;
end
I know we could create a function, but we have several hundred of these
blocks running, so it would be a huge amount of work to do.
So, is there a way to run a dynamic SQL statement which returns a set of records?


Can't you use "with ... select ..."?

Like:

with qry1 as (select bla, bla, bla from xyz), qry2 as (select bla, bla 
from ...)

select * from qry1
union all
select * from qry2


?

Regards,

Edson

--

Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html







RE: Barman 2.3 errors

2018-02-14 Thread GALLIANO Nicolas
Hi,

Just to say that I cannot use streaming replication, but shipping WAL over ssh 
works …
That's how I'm trying out the barman product.
Thanks
nico

De : GALLIANO Nicolas
Envoyé : mardi 13 février 2018 10:15
À : Ahmed, Nawaz ; pgsql-gene...@postgresql.org
Objet : RE: Barman 2.3 errors

Hi

1/ thanks for your answer
2/ you're right … the barman support list is better … sorry
3/ I've verified the pg replication slot:

postgres@postgres# SELECT slot_name, active, restart_lsn FROM 
pg_replication_slots WHERE slot_type = 'physical' AND slot_name = 'barman';
 slot_name | active | restart_lsn
-----------+--------+-------------
 barman    | t      | 1/7B000840


4/ i’ve tried this :


-  Receive-wal process not running :
barman@tcgipocgresql:/var/log/barman [2018/02/13-09:23:53]
$ps -ef |grep receive-wal
barman   48067  4398  0 09:58 pts/000:00:00 grep --color=auto receive-wal


-  Receive-wal process failed start :
$barman receive-wal tcgepg96ddm
Starting receive-wal for server tcgepg96ddm
ERROR: ArchiverFailure:replication slot 'barman' is already in use


-  Receive-wal reset :
$barman receive-wal --reset tcgepg96ddm
Starting receive-wal for server tcgepg96ddm
Resetting receive-wal directory status
Removing status file 
/var/lib/barman/tcgepg96ddm/streaming/00010001007B.partial


-  Switch-xlog :
$barman switch-xlog tcgepg96ddm
The WAL file 00010001007B has been closed on server 'tcgepg96ddm'

Then, if I try while the replication slot is not active, receive-wal crashes and 
I get this error:

$barman receive-wal tcgepg96ddm
Starting receive-wal for server tcgepg96ddm
EXCEPTION: 'utf8' codec can't decode byte 0xe0 in position 47: invalid 
continuation byte
See log file for more details.

And in the log i still have the UnicodeDecodeError …

In a post (https://sourceforge.net/p/pgbarman/tickets/88/) I found that 
it's a barman bug with a workaround … but the solution doesn't work for me ...
I'm still looking for a solution ☹

Thanks for your help
Have a good day
nicolas





De : Ahmed, Nawaz [mailto:na...@fast.au.fujitsu.com]
Envoyé : mardi 13 février 2018 08:07
À : GALLIANO Nicolas ; 
pgsql-gene...@postgresql.org
Objet : RE: Barman 2.3 errors


Hi Nicolas,

I would like to take a stab at this one, as I recently worked on a demo of 
barman. But as Michael Paquier said, it is better to check the information I 
provide, and the situation you are facing, with the maintainers of the project. 
Here is what I found.

The first thing is to check whether you can see the replication slot named "barman" 
on the target database using the query below. It should return a slot 
named "barman" with the slot_type "physical". The "replication slot: OK" 
line of the check command suggests the slot is available; however, please double 
check that it is true with the query below.


select * from pg_replication_slots;


Now let us target the line "WAL archive: FAILED". I faced this issue when I had 
killed the "receive-wal" process and restarted it. First, look for the 
receive-wal process with the ps command:

ps -ef|grep receive-wal

If it is not running, then start it up in the background using the command

$ barman receive-wal tcgepg96ddm &

If the receive-wal process is running but you still face that error in the 
check command, then I suggest you switch the xlog using the below command.

$ barman switch-xlog tcgepg96ddm

If the above command fails to switch the xlog, then try to force it with the 
below command.

$ barman switch-xlog --force tcgepg96ddm

If you still cannot get it to work, then try to reset the status of the 
receive-wal process using the --reset option as below.

$ barman receive-wal --reset tcgepg96ddm


If you can successfully run the above reset command, then try to switch the log 
file and run the check command again to see if everything looks fine.

$ barman switch-xlog tcgepg96ddm


Hope that helps. Again, these are the steps I took to resolve a similar 
issue. You might still want to check the validity of the above commands with 
the project maintainers.


Best Regards,

Nawaz Ahmed
Software Development Engineer

Fujitsu Australia Software Technology Pty Ltd
14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9027
na...@fast.au.fujitsu.com
fastware.com.au


From: GALLIANO Nicolas [mailto:nicolas.galli...@dsi.cnrs.fr]
Sent: Tuesday, 13 February 2018 2:23 AM
To: pgsql-gene...@postgresql.org
Subject: Barman 2.3 errors

Hi,

I’m trying to back up a remote DB (9.6.6) using barman 2.3, but the backup 
fails to start.
In barman.log I have errors such as:

2018-02-12 16:18:23,852 [57691] barman.server ERROR: Check 'replication slot' 
failed for server 'tcgepg96ddm'
2018-02-12 16:18:23,857 [57691] barman.server ERROR: Check 'receive-wal 
running' failed for server 'tcge

Re: Split read/write queries between 2 servers (one master and one slave with streaming replication)?

2018-02-14 Thread Venkateswaran
Hi,

I am also trying to split read and write queries, where reads should go to
the slave server (streaming replication) and writes to the master server.
Is there any tool available to perform this instead of two connection pools?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread pavan95
Hi all,

Is it possible to upgrade an existing postgresql 9.1 production system to
latest Postgres 10.0 version?

The main requirement is to get rid of downtime. Please help me out!

Thanks in Advance.

Regards,
Pavan



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread David G. Johnston
On Wednesday, February 14, 2018, pavan95 
wrote:

> Hi all,
>
> Is it possible to upgrade an existing postgresql 9.1 production system to
> latest Postgres 10.0 version?
>
> The main requirement is to get rid of downtime. Please help me out!
>

Zero downtime is only possible by standing up a hot standby and then failing
over to it.  A same-server upgrade you can do via pg_upgrade, but it does
involve downtime.  There is plenty of material and there are options online,
including the docs, for setting up hot-standby replication.

David J.


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread Melvin Davidson
On Wed, Feb 14, 2018 at 9:42 AM, pavan95 
wrote:

> Hi all,
>
> Is it possible to upgrade an existing postgresql 9.1 production system to
> latest Postgres 10.0 version?
>
> The main requirement is to get rid of downtime. Please help me out!
>
> Thanks in Advance.
>
> Regards,
> Pavan
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
> > Is it possible to upgrade an existing postgresql 9.1 production system
to latest Postgres 10.0 version?
This is specifically covered in the documentation:

8.6. Upgrading a PostgreSQL Cluster

https://www.postgresql.org/docs/current/static/upgrading.html

--
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread Pavan Teja
Thank you for your timely response 😊

On Feb 14, 2018 8:18 PM, "Melvin Davidson"  wrote:

>
>
> On Wed, Feb 14, 2018 at 9:42 AM, pavan95 
> wrote:
>
>> Hi all,
>>
>> Is it possible to upgrade an existing postgresql 9.1 production system to
>> latest Postgres 10.0 version?
>>
>> The main requirement is to get rid of downtime. Please help me out!
>>
>> Thanks in Advance.
>>
>> Regards,
>> Pavan
>>
>>
>>
>> --
>> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f184378
>> 0.html
>>
>> > Is it possible to upgrade an existing postgresql 9.1 production system
> to latest Postgres 10.0 version?
> This is specifically covered in the documention
>
> 8.6. Upgrading a PostgreSQL Cluster
>
> *https://www.postgresql.org/docs/current/static/upgrading.html
> *--
> *Melvin Davidson*
> I reserve the right to fantasize.  Whether or not you
> wish to share my fantasy is entirely up to you.
>


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread David G. Johnston
On Wednesday, February 14, 2018, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wednesday, February 14, 2018, pavan95 
> wrote:
>
>> Hi all,
>>
>> Is it possible to upgrade an existing postgresql 9.1 production system to
>> latest Postgres 10.0 version?
>>
>> The main requirement is to get rid of downtime. Please help me out!
>>
>
> Zero downtime is only possible by standing up a hot-standby then failing
> over to it.  Same-server upgrade you can do via pg_upgrade but it does
> involve downtime.  There are lots of material and options online, including
> the docs, for setting up hot-standby replication.
>
>
To clarify, you need to use logical replication here since the WAL format
is not usable across versions.

pg_upgrade is your simplest option if you can handle its downtime.

David J.
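For reference, the same-server route mentioned above (pg_upgrade) looks roughly like this as a command outline. All directories and version paths below are hypothetical examples; check the pg_upgrade documentation for your platform before running anything:

```shell
# Stop both clusters first, then run as the postgres OS user:
pg_upgrade \
  --old-bindir  /usr/lib/postgresql/9.1/bin \
  --new-bindir  /usr/lib/postgresql/10/bin \
  --old-datadir /var/lib/postgresql/9.1/main \
  --new-datadir /var/lib/postgresql/10/main \
  --check            # dry run; rerun without --check to do the upgrade
```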


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread Pavan Teja
Yeah David,

Even I'm thinking the same

Regards,
Pavan

On Feb 14, 2018 8:34 PM, "David G. Johnston" 
wrote:

>
>
> On Wednesday, February 14, 2018, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wednesday, February 14, 2018, pavan95 
>> wrote:
>>
>>> Hi all,
>>>
>>> Is it possible to upgrade an existing postgresql 9.1 production system to
>>> latest Postgres 10.0 version?
>>>
>>> The main requirement is to get rid of downtime. Please help me out!
>>>
>>
>> Zero downtime is only possible by standing up a hot-standby then failing
>> over to it.  Same-server upgrade you can do via pg_upgrade but it does
>> involve downtime.  There are lots of material and options online, including
>> the docs, for setting up hot-standby replication.
>>
>>
> To clarify, you need to use logical replication here since the WAL format
> is not usable across versions.
>
> pg_upgrade is your simplest option if you can handle its downtime.
>
> David J.
>


Re: Upgrading from Postgresql 9.1 to 10

2018-02-14 Thread Melvin Davidson
On Wed, Feb 14, 2018 at 10:04 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:

>
>
> On Wednesday, February 14, 2018, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wednesday, February 14, 2018, pavan95 
>> wrote:
>>
>>> Hi all,
>>>
>>> Is it possible to upgrade an existing postgresql 9.1 production system to
>>> latest Postgres 10.0 version?
>>>
>>> The main requirement is to get rid of downtime. Please help me out!
>>>
>>
>> Zero downtime is only possible by standing up a hot-standby then failing
>> over to it.  Same-server upgrade you can do via pg_upgrade but it does
>> involve downtime.  There are lots of material and options online, including
>> the docs, for setting up hot-standby replication.
>>
>>
> To clarify, you need to use logical replication here since the WAL format
> is not usable across versions.
>
> pg_upgrade is your simplest option if you can handle its downtime.
>
> David J.
>

> To clarify, you need to use logical replication here since the WAL format
is not usable across versions.
Slony replication also allows upgrading between versions without downtime.

http://www.slony.info/
-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


postgres connection with port option in shell script

2018-02-14 Thread Abhra Kar
Hi,

  I want to get a postgres connection in a script file. I am executing the
below command and successfully getting connected ---


psql postgresql://$USER:$PASSWORD@$HOST/$DATABASE <

Re: postgres connection with port option in shell script

2018-02-14 Thread Andreas Kretschmer



Am 14.02.2018 um 16:21 schrieb Abhra Kar:


Hi,

      I want to get postgres connection in script file. I am executing 
below command and successfully getting connected ---



psql postgresql://$USER:$PASSWORD@$HOST/$DATABASE 

Re: postgres connection with port option in shell script

2018-02-14 Thread David G. Johnston
On Wed, Feb 14, 2018 at 8:21 AM, Abhra Kar  wrote:

> Hi,
>
>   I want to get postgres connection in script file. I am executing
> below command and successfully getting connected ---
>
>
> psql postgresql://$USER:$PASSWORD@$HOST/$DATABASE <
>
>
If all you are going to do is substitute environment variables into a URI,
why not just identify the environment variables that psql uses directly,
set those, and call "psql" directly.

select * from abc;
>
>
> This is executing with the default 5432 port. If postgres is installed on any
> other port then the port option needs to be included here [I don't want to
> take the port from the .pgpass file].
>

You couldn't even if you wanted to.  Did you maybe mean the
.pg_service.conf file?

You should strongly consider using .pgpass instead of "$PASSWORD" -
especially depending upon where you perform the export.


> How I can modify this command with PORT option.
>

https://www.postgresql.org/docs/10/static/libpq-connect.html#LIBPQ-CONNSTRING
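To answer the port question concretely: in a libpq URI the port goes right after the host, separated by a colon. A small sketch (all values below are made-up placeholders, replace them with your own):

```shell
# Hypothetical connection parameters -- replace with real ones.
USER=app
PASSWORD=secret
HOST=db.example.com
PORT=5433
DATABASE=mydb

# The port goes between the host and the database name, after a colon:
CONNSTR="postgresql://$USER:$PASSWORD@$HOST:$PORT/$DATABASE"
echo "$CONNSTR"
```

With those values the printed string is postgresql://app:secret@db.example.com:5433/mydb, and `psql "$CONNSTR"` would connect to port 5433. Alternatively, export the variables libpq reads directly (PGHOST, PGPORT, PGUSER, PGDATABASE) and call psql with no connection string, as suggested above.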

​David J.


Regarding pg_multixact/members folder size cleanup in postgres 9.3.6.

2018-02-14 Thread Yogesh Sharma
Dear All,

Thanks for your support and guidance.
I am using postgres version 9.3.6, and I am running multiple INSERT/UPDATE SQL
commands with an explicit share lock.
I know the pg_multixact/members folder contains transactions of the share lock.

But these files continuously increase.
I have checked the PostgreSQL conf file and the default parameters are used on
my system.
When I checked the same behaviour in older versions of PostgreSQL, this
folder's size did not grow and was continuously cleaned up.
Could someone please clarify the points below? It will be really helpful for
understanding postgres behavior.
1. When will the pg_multixact/members folder be cleaned up?
2. How can I identify when the cleanup will start?
3. Does it depend on some postgres config parameters? Which parameters, and
what should their values be?
4. I want the older postgres behavior in newer versions. How can I get that
behavior through postgres parameters?

Thanks in advance.

Regards,
Yogesh
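For reference on question 3: the multixact freeze settings exist from 9.3 onward and govern when pg_multixact can be truncated. A postgresql.conf fragment with what I believe are the documented defaults (please verify against the docs of your exact minor version):

```
# postgresql.conf -- multixact freeze settings (believed 9.3 defaults)
vacuum_multixact_freeze_min_age = 5000000
vacuum_multixact_freeze_table_age = 150000000
autovacuum_multixact_freeze_max_age = 400000000
```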


Re: Regarding pg_multixact/members folder size cleanup in postgres 9.3.6.

2018-02-14 Thread David G. Johnston
On Wed, Feb 14, 2018 at 9:06 AM, Yogesh Sharma 
wrote:

> I am using postgres 9.3.6 version and i am using multiple INSERT/UPDATE
> SQL commands with explicit share lock.
>
​[...]​


> 4. I Want older postgres behavior in newer versions. So how to set this
> behavior through postgres parameters?
>

​You need to upgrade to 9.3.21.  Also, given the issues surrounding early
9.3 releases, did this cluster/database begin its life at 9.3.6 or was
there an upgrade path to get there?

I don't know whether this is configurable, but I suspect not - your
likely issue is software bugs that have since been fixed in one or more of
the 15 releases since 9.3.6.

David J.


Re: Table Partitioning: Sequence jump issue 10 in 10 with serial datatype

2018-02-14 Thread DrakoRod
Yep!!

Today I tested with triggers instead of rules and the sequence behaves well.

Thanks for your help!! 



-
Dame un poco de fe, eso me bastará.
Rozvo Ware Solutions 
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: Windows 10 Pro issue

2018-02-14 Thread Adrian Klaver

On 02/14/2018 09:28 AM, Dale Seaburg wrote:

CCing list so more eyes can see this



Is the pg_hba.conf file actually there?


Yes, the pg_hba.conf is in the "proper path" - C:\Program Files 
(x86)\PostgreSQL\8.4\data
Footnote: at the user level or system level I do not see any environment 
variables (EV) pointing to the above path.

Not sure whether an EV is even needed.



If it is there have the permissions changed on it or the directories 
above?




As far as I can tell permissions are OK to the above directory, and 
.conf files.


You might want to take a look at this post if you have not already:

https://www.postgresql.org/message-id/14113c03-cbd9-584a-9ec1-6412d5606404%40tpg.com.au



More importantly, how do I go about building a backup of the data, so I 
can do an upgrade to a much later release, like 9.6 or so without having 
the postgres service running?  Hate to sound so ignorant, but I've not 
had to "travel down this road" before.
What would be the best approach to upgrading, without the '.\postgres' 
service running?


Important, before you do any of this for real I would plan on creating a 
test area or doing this on another similar machine to get the process down.


Migrating from 8.4 to 9.6 is a major version to major version upgrade. 
It also represents a sizable number of changes. Not sure how much you 
know about what is going on in the database. Still it would not hurt to 
go here:


https://www.postgresql.org/docs/10/static/release.html

A versioning number note: prior to the latest version, 10, Postgres used a 
three-number system X.Y.z, where X.Y represented a major version and 
z represented bug-fix/security-fix releases. With 10 the system changed 
to X.y, where X is the major version and y is the bug-fix/security-fix 
release. I mention that because when going through the release notes you 
really only need to concentrate on the notes for X.Y (<10) and X (>=10). 
So for the next version after 8.4, which is 9.0:


https://www.postgresql.org/docs/10/static/release-9-0.html

You want to concentrate on:

E.136.2. Migration to Version 9.0
https://www.postgresql.org/docs/10/static/release-9-0.html#id-1.11.6.140.4

E.136.3. Changes
https://www.postgresql.org/docs/10/static/release-9-0.html#id-1.11.6.140.5

You will not actually be migrating to 9.0, but the above will tell you what 
changed relative to 8.4. Repeat for the other major releases to see what 
changed relative to the prior release. What is important to remember is 
that the changes are cumulative, so the current latest version, 10.2, will 
have all the preceding changes. What you are looking for is any change that 
may impact your code in the server or in client programs using the server.





BTW, I have done backups before using pgAdmin, but, not the psql tool.


Now this is where someone more familiar with running Postgres on 
Windows will need to chime in. The best practice is to install the new 
version in parallel with the old and use the pg_dump from the new 
version to dump the database from the old version, as pg_dump is 
backwards compatible, not forwards compatible. I just have not done that 
on Windows, so I am not going to be of much help. To help with 
answering this it would be useful to know where you got the Postgres 
program from and how you installed it. It would also be good to know 
what amount of data you are dealing with.





Dale Seaburg



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Split read/write queries between 2 servers (one master and one slave with streaming replication)?

2018-02-14 Thread Pierre Timmermans
pgpool does just that: it enables you to use the standby database as a read-only 
while other queries are sent to the primary database only 
(http://www.pgpool.net/). Good product and good support (on the list).
This looks like a very interesting possibility, although it is more related to 
automated failover than to load balancing of read-only queries: 
http://paquier.xyz/postgresql-2/postgres-10-libpq-read-write/. Depending on 
your client it might be supported. It is not supported (yet?) by the 
node-postgres driver.
Regards, 
Pierre 


Re: Split read/write queries between 2 servers (one master and one slave with streaming replication)?

2018-02-14 Thread Melvin Davidson
On Wed, Feb 14, 2018 at 2:27 PM, Pierre Timmermans 
wrote:

> pgpool does just that: it enables you to use the standby database as a
> read only while other queries are sent to the primary database only (
> http://www.pgpool.net/). Good product and good support (on the list°
>
> This looks a very interesting possibility, although it is more related to
> automated failover than to load balancing of read only queries :
> http://paquier.xyz/postgresql-2/postgres-10-libpq-read-write/, depending
> on your client it might be supported. It is not supported - yet ? - by the
> node-postgres driver.
>
> Regards,
>
> Pierre
>
>
> On Wednesday, February 14, 2018, 3:29:03 PM GMT+1, Venkateswaran <
> venkateswaran.s...@gmail.com> wrote:
>
>
> Hi,
>
> I am also trying to split read and write queries, where reads should go to
> the slave server (streaming replication) and writes to the master server.
> Any tool available to perform this activity instead of two connection pool?
>
>
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-general-
> f1843780.html
>
>



Actually, if you are looking for load balancing, PgBouncer is better than
PgPool. There is a nice article about that:
https://www.last.fm/user/Russ/journal/2008/02/21/zd_postgres_connection_pools:_pgpool_vs._pgbouncer

I used PgBouncer in a few of my previous positions and found it to be very
fast and efficient.

-- 
*Melvin Davidson*
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.


Re: Split read/write queries between 2 servers (one master and one slave with streaming replication)?

2018-02-14 Thread Pierre Timmermans
Thanks. Indeed pgBouncer is the usual candidate in front of postgres streaming 
replication, together with pgpool. Take care that your link on pgBouncer dates 
back to 2008!
I had a look at pgBouncer in the past, but it was not actively maintained at 
that time. So I settled on a combination of pgpool and repmgr. People on the 
pgpool mailing list are very reactive and helpful, and the project's 
documentation is continuously improving.
Pierre 


Re: Regarding pg_multixact/members folder size cleanup in postgres 9.3.6.

2018-02-14 Thread Yogesh Sharma
Dear David,

Thanks for the update.
I have also checked postgres versions 9.3.21 and 9.5.x, but the behavior is
the same as in 9.3.6.



Regards,
Yogesh

On Wednesday, February 14, 2018, David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Feb 14, 2018 at 9:06 AM, Yogesh Sharma 
> wrote:
>
>> I am using postgres 9.3.6 version and i am using multiple INSERT/UPDATE
>> SQL commands with explicit share lock.
>>
> ​[...]​
>
>
>> 4. I Want older postgres behavior in newer versions. So how to set this
>> behavior through postgres parameters?
>>
>
> ​You need to upgrade to 9.3.21.  Also, given the issues surrounding early
> 9.3 releases, did this cluster/database begin its life at 9.3.6 or was
> there an upgrade path to get there?
>
> I don't know whether this is configurable but I would suspect not - your
> likely issue is software bugs that have since been fixed in one or more of
> the 15 releases since 9.3.6
>
> David J.
>
>


Re: Regarding pg_multixact/members folder size cleanup in postgres 9.3.6.

2018-02-14 Thread Yogesh Sharma
Dear David,
>
>
> Thanks for update.
> I have also checked in postgres 9.3.21 and 9.5.x version. But this is same
> as 9.3.6 postres version.
> Some one please let me know below points. It will really helpful to
> understand postgres behavior.
> 1. When this folder pg_multixact/members folder size will cleanup?
> 2. How can i identify  that at that time it will start cleanup?
> 3. Does it depends on some postgres config parameters? What is config
> parameters and what will be value of that parameter?
> 4. I Want older postgres behavior in newer versions. So how to set this
> behavior through postgres parameters?
>
>
> Regards,
> Yogesh
>
> On Wednesday, February 14, 2018, David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Wed, Feb 14, 2018 at 9:06 AM, Yogesh Sharma 
>> wrote:
>>
>>> I am using postgres 9.3.6 version and i am using multiple INSERT/UPDATE
>>> SQL commands with explicit share lock.
>>>
>> ​[...]​
>>
>>
>>> 4. I Want older postgres behavior in newer versions. So how to set this
>>> behavior through postgres parameters?
>>>
>>
>> ​You need to upgrade to 9.3.21.  Also, given the issues surrounding early
>> 9.3 releases, did this cluster/database begin its life at 9.3.6 or was
>> there an upgrade path to get there?
>>
>> I don't know whether this is configurable but I would suspect not - your
>> likely issue is software bugs that have since been fixed in one or more of
>> the 15 releases since 9.3.6
>>
>> David J.
>>
>>


Re: I do not get the point of the information_schema

2018-02-14 Thread Eric Hanson
You might find Aquameta's meta module helpful; it reimplements the
information_schema in a more normalized layout, as updatable views:

http://blog.aquameta.com/2015/08/29/intro-meta/

https://github.com/aquametalabs/aquameta/tree/master/core/000-meta

Best,
Eric

On Mon, Feb 12, 2018 at 2:02 PM Thiemo Kellner 
wrote:

> I try to implement SCD2 on trigger level and try to generated needed
> code on the fly. Therefore I need to read data about the objects in the
> database. So far so good. I know of the information_schema and the
> pg_catalog. The documentation for the information_schema states that it
> 'is defined in the SQL standard and can therefore be expected to be
> portable and remain stable'. I can think of a sensible meaning of
> portable. One cannot port it to MariaDB, can one? Maybe to different
> PostgreSQL versions, but then a one-size-fits-all implementation would mean
> only parts of the catalogue that never ever change can be exposed by the
> information_schema. Coming from Oracle, I consider the information_schema
> the analogue of Oracle's data dictionary views, giving a stable interface
> on the database metadata that hides catalogue structure changes. But I
> dearly miss some information therein. I created the following query to get
> the index columns of an index. I fear breakage when not run on the
> specific version I developed it against. Is there a more elegant way via
> the information_schema?
>
> with INDEX_COLUMN_VECTOR as(
>select
>  i.indkey
>from
>  pg_catalog.pg_index i
>inner join pg_catalog.pg_class c on
>  i.indexrelid = c.oid
>where
>  c.relname = 'idiom_hist'
> ),
> COLUMNS as(
>select
>  a.attname,
>  a.attnum
>from
>  pg_catalog.pg_attribute a
>inner join pg_catalog.pg_class c on
>  a.attrelid = c.oid
>where
>  c.relname = 'idiom'
> ) select
>c.attname
> from
>COLUMNS c
> inner join INDEX_COLUMN_VECTOR v on
>c.attnum = any(v.indkey)
> order by
>c.attnum asc;
>
> An other simpler case.
>
>  select
>indexname
>  from
>pg_catalog.pg_indexes
>  where
>schemaname = 'act'
>and tablename = i_table_name
>and indexname = i_table_name || '_hist';
>
>
> --
> Öffentlicher PGP-Schlüssel:
> http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC
>
-- 
-- Eric Hanson CEO, Aquameta 503-929-1073


Slowly Changing Dimension implementation

2018-02-14 Thread Thiemo Kellner

Hi all

I am not aware of a database-based solution for the implementation of 
SCDs (https://en.wikipedia.org/wiki/Slowly_changing_dimension), be it type 
0, 1 or 2 - I consider all the others only implementation specifications 
of type 2. However, I am considering implementing a trigger-based 
solution and making it open source. Is there a consensus about how such 
PostgreSQL add-ons are best handled? Where to store code and 
documentation (with PostgreSQL modules, with SourceForge, github, ...), 
where to publish (mailing lists), and when to publish code and 
announcements (only beta-ware and more stable, as opposed to from the 
beginning, ...)?
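As a concrete starting point, a minimal sketch of the kind of trigger-based SCD2 in question might look like this (all table and column names are invented for illustration; validity of a version is marked by a timestamp range):

```sql
-- History table: one row per version; the open version has valid_to IS NULL.
CREATE TABLE item_hist (
  LIKE item,
  valid_from timestamptz NOT NULL DEFAULT now(),
  valid_to   timestamptz
);

CREATE OR REPLACE FUNCTION item_scd2() RETURNS trigger AS $$
BEGIN
  -- Close the currently open version of the row...
  UPDATE item_hist
     SET valid_to = now()
   WHERE id = OLD.id
     AND valid_to IS NULL;
  -- ...and open a new version on UPDATE (a DELETE only closes it).
  IF TG_OP = 'UPDATE' THEN
    INSERT INTO item_hist SELECT (NEW).*, now(), NULL;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER item_scd2_trg
AFTER UPDATE OR DELETE ON item
FOR EACH ROW EXECUTE PROCEDURE item_scd2();
```

An AFTER INSERT branch would also be needed to open the first version of each row, and generating this code per table is exactly where reading the catalogue metadata comes in.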


Kind regards

Thiemo

--
+49 (0)1578-772 37 37
+41 (0)78 947 36 21
Öffentlicher PGP-Schlüssel:  
http://pgp.mit.edu/pks/lookup?op=get&search=0x8F70EFD2D972CBEF



This message was sent using IMP, the Internet Messaging Program.