Using RETURNING with INSERT TRIGGER on 9.6 partitioned table

2018-10-04 Thread Satoshi Nagayasu
Hi,

I'm using partitioned tables with INSERT triggers on PostgreSQL 9.6,
and I have a question about using "RETURNING NEW" in those triggers.

I found a note describing a consideration and workaround
for using INSERT ... RETURNING on partitioned tables [1].

[1] INSERT RETURNING vs Partitioning - PostgreSQL wiki
https://wiki.postgresql.org/wiki/INSERT_RETURNING_vs_Partitioning

However, in my experience, using "RETURNING NEW"
in my insert trigger function seems to work fine without the workaround.

So, here is a question.
Do we still need the workaround mentioned in the note?
If so, why?
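For context, the wiki note concerns the classic 9.6-style redirect trigger. A sketch of that pattern follows (table and function names here are hypothetical, not from the original question):

```sql
create or replace function parent_insert_redirect()
returns trigger as $$
begin
    insert into child_2018 values (new.*);
    return null;  -- suppresses the insert into the parent, but then
                  -- INSERT ... RETURNING returns no row (the wiki's caveat)
end;
$$ language plpgsql;

create trigger parent_insert_redirect
before insert on parent
for each row execute procedure parent_insert_redirect();
```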

Regards,
--
Satoshi Nagayasu 



We are facing "PANIC: could not read from control file: Success" error while starting the database.

2018-10-04 Thread Raghavendra Rao J S V
Hi All,

Unfortunately, *archive_mode* is turned *on* in my Postgres 9.2 database.

Because of that, the disk became 100% full. We removed a few old xlog files,
and space is now available, but we still face the problem below when we try
to start the database.

*PANIC: could not read from control file: Success*

Please help me to resolve the above error.

-- 
Regards,
Raghavendra Rao J S V


Re: We are facing "PANIC: could not read from control file: Success" error while starting the database.

2018-10-04 Thread Andreas Kretschmer

Am 04.10.2018 um 17:29 schrieb Raghavendra Rao J S V:

Hi All,

*archive_mode *is turned *on *unfortunately in my Postgres 9.2 database.

Due to that disk space is full 100%. We have removed few old xlog 
files. Now space is available.But still we are facing below problem 
when we try to start the database.

Don't delete WAL files!

If you don't want archive_mode, you can set it to off (restart
required), or set archive_command to, for instance, "cd ." or
"/bin/true", which requires only a reload.
After some time (a checkpoint), the files will disappear and the space
should come back.
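A minimal postgresql.conf sketch of the reload-only option described above:

```
# postgresql.conf -- neutralize archiving without a restart;
# a reload (pg_ctl reload, or SELECT pg_reload_conf()) picks this up.
archive_mode = on               # turning this off would need a restart
archive_command = '/bin/true'   # archiving "succeeds" without copying anything
```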

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Postgres 11, partitioning with a custom hash function

2018-10-04 Thread Harry B
Thank you David! These helped me create an operator class.
However, there still seems to be an 'off-by-a-fixed-N' difference between
the hash value returned and the partition PG selects.

http://dpaste.com/381E6CF

Am I overlooking some endianness difference?

For this setup, values are always off by 1 - whatever I calculate, PG takes
the "next" partition.

For a similar setup with 32 partitions, the offset (between expected
and selected) is 3: http://dpaste.com/382NDBG


On Wed, Oct 3, 2018 at 8:42 PM David Rowley 
wrote:

> On 4 October 2018 at 16:22, Harry B  wrote:
> > I am still having trouble reconciling what happens under the HASH
> > partitioning!. If I have text column forming the basis of PARTITIONED BY
> > HASH, the HASH value used in the partitioning setup does not seem to
> match
> > to `hashtext()` of that value
>
> It won't match. The hash partition hash is seeded with a special const
> (HASH_PARTITION_SEED) see [1].
>
> You could likely roll your own hash ops. See [2] for an example. This
> can then be used to create a hash partitioned table like [3].
>
> [1]
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/partitioning/partbounds.c#l2056
> [2]
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/insert.sql#l241
> [3]
> https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/test/regress/sql/hash_part.sql#l10
>
> --
>  David Rowley   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>
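For readers following the links above: the regression tests David points at set up a custom hash opclass roughly like this (identifiers are from the test suite; a sketch, not production advice):

```sql
-- A no-op "hash" support function, as in src/test/regress/sql/insert.sql:
create or replace function part_hashint4_noop(value int4, seed int8)
returns int8 as $$ select value + seed; $$ language sql immutable;

create operator class part_test_int4_ops
for type int4 using hash as
    operator 1 =,
    function 2 part_hashint4_noop(int4, int8);

-- The opclass can then be named in the partition key:
create table mchash (a int, b text)
partition by hash (a part_test_int4_ops);
```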


-- 
Harry


Re: Postgres 11, partitioning with a custom hash function

2018-10-04 Thread David Rowley
On 5 October 2018 at 06:18, Harry B  wrote:
>
> Thank you David! These helped me create an operator class.
> However, there still seems to be a 'off-by-a-fixed-N' difference between the
> hash value returned and how PG selects the partition.

Hmm, actually, this is probably due to the hash_combine64() call in
compute_partition_hash_value(). This combines the hash value
with 0 regardless of whether there's another partition key column to hash.
If you could somehow apply the reverse of what hash_combine64() does
to your hash before returning it from the function, you might get
somewhere, but that does not look possible, since it appears to
throw away some bits.

It may not have been a great choice to have
compute_partition_hash_value() do this unnecessary combine, but it's
likely a few months too late to change that now.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Postgres 11, partitioning with a custom hash function

2018-10-04 Thread Harry B
Thanks for the quick response David! This has been really helpful.

Looking at the code, this step isn't totally unnecessary - with a
multi-column hash you would have to do this step anyway, because PG
hashes each column separately and combines them. True, it is unnecessary
for single-column hashes. It would have been better for the custom function
to handle all columns at the same time, but then the entire API surface
would have had to change. At least it makes sense to me why it is this way.

All hope is not lost, at least for my case... because the bit-shifting
is applied to 'a', which was zero. So the expression

a ^= b + UINT64CONST(0x49a0f4dd15e5a8e3) + (a << 54) + (a >> 7);

becomes

a = b + UINT64CONST(0x49a0f4dd15e5a8e3)

This also explains why I saw a constant offset from the expected value
regardless of the actual key being hashed.

It also works for the 32 partition example I showed
https://play.golang.org/p/kcD-JhyLYD6
(original session/data in http://dpaste.com/382NDBG )

Now the big question: how scared should I be of relying on this? I don't mind
it breaking on major version upgrades (which would mean I need to dump &
restore my entire set), but how likely is it to change unannounced in a
minor/security release? Unless, of course, you break it in a way that makes
a custom hash function impossible.

Thanks
--
Harry



On Thu, Oct 4, 2018 at 12:39 PM David Rowley 
wrote:

> On 5 October 2018 at 06:18, Harry B  wrote:
> >
> > Thank you David! These helped me create an operator class.
> > However, there still seems to be a 'off-by-a-fixed-N' difference between
> the
> > hash value returned and how PG selects the partition.
>
> hmm, actually, this is probably due to the hash_combine64() call in
> compute_partition_hash_value(). This seems to combine the hash value
> with 0 regardless of if there's another partition key column to hash.
> If you could somehow do the reverse of what hash_combine64() will do
> to you hash before returning it to the function then you might get
> somewhere, but that does not look possible since it appears to be
> throwing away some bits.
>
> It may not have been a great choice to decide to have
> compute_partition_hash_value() do this unnecessary combine, but it's
> likely a few months too late to change that now.
>
> --
>  David Rowley   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services
>


-- 
Harry


Re: Postgres 11, partitioning with a custom hash function

2018-10-04 Thread David Rowley
On 5 October 2018 at 09:43, Harry B  wrote:
> Now the big question: How scared should I be relying on this? I don't mind
> it breaking on major version upgrades (which would mean I need to dump &
> restore my entire set), but how likely is it to change unannounced in a
> minor/security release? Unless of course, you break it in a way that makes
> custom-hash function impossible.

I don't see how we could possibly change it once v11 is out the door.
Such a change would break pg_upgrade and I imagine we want that to
work for a long time to come yet, at least until there is some other
reason that is worthy of breaking it. The bar is likely set pretty
high for that.

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: We are facing "PANIC: could not read from control file: Success" error while starting the database.

2018-10-04 Thread Thomas Munro
On Fri, Oct 5, 2018 at 4:29 AM Raghavendra Rao J S V
 wrote:
> PANIC: could not read from control file:Success

That means that the pg_control file is the wrong size.  What size is
it?  What filesystem is this, that allowed an out-of-space condition
to result in a file being truncated?  Normally we only overwrite that
file, so after creation it should stay the same size.

-- 
Thomas Munro
http://www.enterprisedb.com



pg_pass and pg_service

2018-10-04 Thread Tim Uckun
Can I refer to a pg_service entry in the pgpass file? It seems silly to
repeat all the information in pgpass just to add the password.

Alternatively, can I put the user password in the pg_service file?


Re: pg_pass and pg_service

2018-10-04 Thread David G. Johnston
On Thursday, October 4, 2018, Tim Uckun  wrote:

> can I refer to a pg_service entry in the pgpass file?  It seems silly to
> repeat all the information in the pgpass just to add the password.
>

No


> Alternatively can I put the user password in the pg_service file?
>

Yes

David J.
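For reference, a service file entry can carry the password directly, since any libpq connection parameter is accepted there (service name and credentials below are made up):

```
# ~/.pg_service.conf
[mydb]
host=db.example.com
port=5432
dbname=appdb
user=appuser
password=secret
```

A client can then connect with, e.g., `psql "service=mydb"`.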


Re: We are facing "PANIC: could not read from control file: Success" error while starting the database.

2018-10-04 Thread Raghavendra Rao J S V
On Fri, 5 Oct 2018 at 07:06, Thomas Munro 
wrote:

> On Fri, Oct 5, 2018 at 4:29 AM Raghavendra Rao J S V
>  wrote:
> > PANIC: could not read from control file:Success
>
> That means that the pg_control file is the wrong size.  What size is
> it?  What filesystem is this, that allowed an out-of-space condition
> to result in a file being truncated?  Normally we only overwrite that
> file, so after creation it should stay the same size.
>

The size of the pg_control file is 42 kB. We are using "CentOS Linux
release 7.3.1611 (Core)".

Permissions and size of the file are as expected.

Kindly guide me on how to handle this kind of error.

The log file shows errors as below.



> --
> Thomas Munro
> http://www.enterprisedb.com
>


-- 
Regards,
Raghavendra Rao J S V
Mobile- 8861161425


Re: We are facing "PANIC: could not read from control file: Success" error while starting the database.

2018-10-04 Thread Laurenz Albe
Raghavendra Rao J S V wrote:
> archive_mode is turned on unfortunately in my Postgres 9.2 database. 
> 
> Due to that disk space is full 100%. We have removed few old xlog files. Now 
> space is available.But still we are facing below problem when we try to start 
> the database.
> 
> PANIC: could not read from control file:Success
> 
> Please help me to resolve the above error.

Time to restore from backup, and then upgrade to a more recent
PostgreSQL version.

Yours,
Laurenz Albe