Re: Create a logical and physical replication

2019-11-05 Thread Andreas Joseph Krogh

On Tuesday, November 5, 2019 at 12:15:20, Deepak Pahuja <deepakpah...@hotmail.com> wrote:
Yes it is possible. 


No it isn't. I think maybe this will address it for v13: 
https://www.postgresql.org/message-id/flat/20181212204154.nsxf3gzqv3gesl32%40alap3.anarazel.de
 



-- 
Andreas Joseph Krogh 


Hunspell as filtering dictionary

2019-11-05 Thread Bibi Mansione
Hi,
I am trying to create a ts_vector from a French text. Here are the
operations that seem logical to perform in that order:

1. remove stopwords
2. use hunspell to find words roots
3. unaccent

I first tried:

CREATE TEXT SEARCH CONFIGURATION fr_conf (copy='simple');

ALTER TEXT SEARCH CONFIGURATION fr_conf
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                  word, hword, hword_part
WITH unaccent, french_hunspell;


select * from to_tsvector('fr_conf', E'Pour découvrir et rencontrer l\'aventure.');

-- 'aventure':5 'aventurer':5 'rencontrer':3


But the verb "découvrir" is missing :(


If I try with french_hunspell only, I get it, but with the accent:


select * from to_tsvector('french_hunspell', E'Pour découvrir et rencontrer l\'aventure.');

-- 'aventure':6 'aventurer':6 'découvrir':2 'rencontrer':4

I also tried:

CREATE TEXT SEARCH CONFIGURATION fr_conf2 (copy='simple');

ALTER TEXT SEARCH CONFIGURATION fr_conf2
ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                  word, hword, hword_part
WITH french_hunspell, unaccent;


select * from to_tsvector('fr_conf2', E'Pour découvrir et rencontrer l\'aventure.');

-- 'aventure':5 'aventurer':5 'rencontrer':3


But I guess unaccent is never called.
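
(A hedged aside: ts_debug() shows which dictionary in the chain actually
recognizes each token, so it can confirm whether unaccent is ever consulted;
the configuration name below is the one created above.)

-- The "dictionary" column shows which entry of the chain produced the lexemes
-- (NULL means no dictionary matched the token).
SELECT alias, token, dictionaries, dictionary, lexemes
FROM ts_debug('fr_conf2', E'Pour découvrir et rencontrer l\'aventure.');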

I believe this is because french_hunspell is not a filtering dictionary,
but I might be wrong. So is there a way to get this result from any FTS
configuration (existing or custom):

-- 'aventure':6 'aventurer':6 'decouvrir':2 'rencontrer':4

Thanks,

Bertrand


Re: select view definition from pg_views feature request

2019-11-05 Thread Michael Shapiro
Can I set search_path='' (ie to a string that does not match any existing
schema)? Would that be the proper way to guarantee that the definition for
any view will always be fully-qualified?

On Sun, Nov 3, 2019 at 3:15 PM Tom Lane  wrote:

> Michael Shapiro  writes:
> > It seems that the definition of a view from pg_catalog.pg_views does not
> > qualify the tables used in the view if the tables are in the current
> search
> > path.
>
> > Is it possible to either have the definition always qualify all tables
> > independent of the search_path (or else provide a new column that does
> > that)?
>
> Why don't you just change the search path to empty before selecting?
>
> regards, tom lane
>


How to import Apache parquet files?

2019-11-05 Thread Softwarelimits
Hi, I need to come and ask here, I did not find enough information so I
hope I am just having a bad day or somebody is censoring my search results
for fun... :)

I would like to import (lots of) Apache parquet files to a PostgreSQL 11
cluster - yes, I believe it should be done with the Python pyarrow module,
but before digging into the possible traps I would like to ask here if
there is some common, well understood and documented tool that may be
helpful with that process?

It seems that the COPY command can import binary data, but I am not able to
allocate enough resources to understand how to implement a parquet file
import with that.

I really would like to follow a person with much more knowledge than me about
either PostgreSQL or the Apache Parquet format instead of inventing a bad
wheel.

Any hints very welcome,
thank you very much for your attention!
John


Re: select view definition from pg_views feature request

2019-11-05 Thread Adrian Klaver

On 11/5/19 6:43 AM, Michael Shapiro wrote:
Can I set search_path='' (ie to a string that does not match any 
existing schema)? Would that be the proper way to guarantee that the 
definition for any view will always be fully-qualified?


test=# show search_path;
                            search_path
-------------------------------------------------------------------
 public,accounting,history,main,utility,timeclock,table_templates

test=# select * from pg_views where viewname = 'up_view';
 schemaname | viewname | viewowner |          definition
------------+----------+-----------+-------------------------------
 public     | up_view  | aklaver   |  SELECT up_test.id AS up_id, +
            |          |           |     up_test.col1 AS bool_col,+
            |          |           |     up_test.col_2 AS col2    +
            |          |           |    FROM up_test;
(1 row)


test=# set search_path = '';
SET
test=# show search_path;
 search_path
-------------
 ""
(1 row)

test=# select * from pg_views where viewname = 'up_view';
 schemaname | viewname | viewowner |          definition
------------+----------+-----------+-------------------------------
 public     | up_view  | aklaver   |  SELECT up_test.id AS up_id, +
            |          |           |     up_test.col1 AS bool_col,+
            |          |           |     up_test.col_2 AS col2    +
            |          |           |    FROM public.up_test;
(1 row)



On Sun, Nov 3, 2019 at 3:15 PM Tom Lane wrote:

Michael Shapiro <mshapir...@gmail.com> writes:
 > It seems that the definition of a view from pg_catalog.pg_views does not
 > qualify the tables used in the view if the tables are in the current search
 > path.

 > Is it possible to either have the definition always qualify all tables
 > independent of the search_path (or else provide a new column that does
 > that)?

Why don't you just change the search path to empty before selecting?

                         regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: logical replication - negative bitmapset member not allowed

2019-11-05 Thread Peter Eisentraut

On 2019-10-25 17:38, Jehan-Guillaume de Rorthais wrote:

On Thu, 10 Oct 2019 15:15:46 +0200
Jehan-Guillaume de Rorthais  wrote:

[...]

Here is a script to reproduce it under version 10, 11 and 12:


I investigated this bug while coming back from pgconf.eu. Below is what I found
so far.


I have simplified your reproduction steps from the previous message to a 
test case, and I can confirm that your proposed fix addresses the issue. 
 A patch is attached.  Maybe someone can look it over.  I target next 
week's minor releases.
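
(For readers skimming the patch: a hedged SQL-level sketch of the reproduction
that the attached TAP test automates; the connection string is a placeholder.)

-- On the publisher:
CREATE TABLE tab1 (a int);
CREATE PUBLICATION pub1 FOR ALL TABLES;

-- On the subscriber:
CREATE TABLE tab1 (a int);
CREATE SUBSCRIPTION sub1 CONNECTION 'host=... dbname=postgres' PUBLICATION pub1;
ALTER SUBSCRIPTION sub1 DISABLE;
ALTER TABLE tab1 ADD COLUMN b serial PRIMARY KEY;

-- On the publisher again:
INSERT INTO tab1 VALUES (1);
ALTER TABLE tab1 ADD COLUMN b serial PRIMARY KEY;

-- Back on the subscriber: re-enabling the subscription triggers the
-- "negative bitmapset member not allowed" error without the fix.
ALTER SUBSCRIPTION sub1 ENABLE;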


--
Peter Eisentraut  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 12c3021110a1b30afbc5fddd1b3dc78f2010fb4e Mon Sep 17 00:00:00 2001
From: Peter Eisentraut 
Date: Tue, 5 Nov 2019 15:49:56 +0100
Subject: [PATCH] Fix negative bitmapset member not allowed error in logical
 replication

Reported-by: Tim Clarke 
Analyzed-by: Jehan-Guillaume de Rorthais 
Discussion: 
https://www.postgresql.org/message-id/flat/a9139c29-7ddd-973b-aa7f-71fed9c38d75%40minerva.info
---
 src/backend/replication/logical/relation.c |  3 +-
 src/test/subscription/t/100_bugs.pl| 54 +-
 2 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/src/backend/replication/logical/relation.c 
b/src/backend/replication/logical/relation.c
index 85269c037d..ab80d4b4e0 100644
--- a/src/backend/replication/logical/relation.c
+++ b/src/backend/replication/logical/relation.c
@@ -340,7 +340,8 @@ logicalrep_rel_open(LogicalRepRelId remoteid, LOCKMODE 
lockmode)
 
attnum = AttrNumberGetAttrOffset(attnum);
 
-   if (!bms_is_member(entry->attrmap[attnum], 
remoterel->attkeys))
+   if (entry->attrmap[attnum] < 0 ||
+   !bms_is_member(entry->attrmap[attnum], 
remoterel->attkeys))
{
entry->updatable = false;
break;
diff --git a/src/test/subscription/t/100_bugs.pl 
b/src/test/subscription/t/100_bugs.pl
index 366a7a9435..2bd07b1cf6 100644
--- a/src/test/subscription/t/100_bugs.pl
+++ b/src/test/subscription/t/100_bugs.pl
@@ -3,7 +3,7 @@
 use warnings;
 use PostgresNode;
 use TestLib;
-use Test::More tests => 3;
+use Test::More tests => 4;
 
 # Bug #15114
 
@@ -100,3 +100,55 @@
 );
 
 $node_publisher->stop('fast');
+
+
+# TODO: 
https://www.postgresql.org/message-id/flat/a9139c29-7ddd-973b-aa7f-71fed9c38d75%40minerva.info
+
+$node_publisher = get_new_node('publisher3');
+$node_publisher->init(allows_streaming => 'logical');
+$node_publisher->start;
+
+$node_subscriber = get_new_node('subscriber3');
+$node_subscriber->init(allows_streaming => 'logical');
+$node_subscriber->start;
+
+$publisher_connstr = $node_publisher->connstr . ' dbname=postgres';
+
+$node_publisher->safe_psql('postgres',
+   "CREATE TABLE tab1 (a int)");
+
+$node_subscriber->safe_psql('postgres',
+   "CREATE TABLE tab1 (a int)");
+
+$node_publisher->safe_psql('postgres',
+   "CREATE PUBLICATION pub1 FOR ALL TABLES");
+
+$node_subscriber->safe_psql('postgres',
+   "CREATE SUBSCRIPTION sub1 CONNECTION '$publisher_connstr' PUBLICATION 
pub1");
+
+$node_publisher->wait_for_catchup('sub1');
+
+$node_subscriber->safe_psql('postgres',
+   "ALTER SUBSCRIPTION sub1 DISABLE");
+
+$node_subscriber->safe_psql('postgres',
+   "ALTER TABLE tab1 ADD COLUMN b serial PRIMARY KEY");
+
+$node_publisher->safe_psql('postgres',
+   "INSERT INTO tab1 VALUES (1)");
+
+$node_publisher->safe_psql('postgres',
+   "ALTER TABLE tab1 ADD COLUMN b serial PRIMARY KEY");
+
+$node_subscriber->safe_psql('postgres',
+   "ALTER SUBSCRIPTION sub1 ENABLE");
+
+$node_publisher->wait_for_catchup('sub1');
+
+is($node_subscriber->safe_psql('postgres',
+  "SELECT count(*), 
min(a), max(a) FROM tab1"),
+   qq(1|1|1),
+   'check replicated inserts on subscriber');
+
+$node_publisher->stop('fast');
+$node_subscriber->stop('fast');
-- 
2.23.0



Re: How to import Apache parquet files?

2019-11-05 Thread Imre Samu
>I would like to import (lots of) Apache parquet files to a PostgreSQL 11
cluster

imho: you should check out and test the Parquet FDW (Foreign Data Wrapper)
- https://github.com/adjust/parquet_fdw

Imre




On Tue, Nov 5, 2019 at 15:57, Softwarelimits wrote:

> Hi, I need to come and ask here, I did not find enough information so I
> hope I am just having a bad day or somebody is censoring my search results
> for fun... :)
>
> I would like to import (lots of) Apache parquet files to a PostgreSQL 11
> cluster - yes, I believe it should be done with the Python pyarrow module,
> but before digging into the possible traps I would like to ask here if
> there is some common, well understood and documented tool that may be
> helpful with that process?
>
> It seems that the COPY command can import binary data, but I am not able
> to allocate enough resources to understand how to implement a parquet file
> import with that.
>
> I really would like follow a person with much more knowledge than me about
> either PostgreSQL or Apache parquet format instead of inventing a bad
> wheel.
>
> Any hints very welcome,
> thank you very much for your attention!
> John
>


Re: How to import Apache parquet files?

2019-11-05 Thread Softwarelimits
Hi Imre, thanks for the quick response - yes, I found that, but I was not
sure if it is already production ready - also I would like to use the data
with the timescale extension, that is why I need a full import.

Have a nice day!

On Tue, Nov 5, 2019 at 4:09 PM Imre Samu  wrote:

> >I would like to import (lots of) Apache parquet files to a PostgreSQL 11
> cluster
>
> imho: You have to check and test the Parquet FDW ( Parquet File Wrapper )
> - https://github.com/adjust/parquet_fdw
>
> Imre
>
>
>
>
> On Tue, Nov 5, 2019 at 15:57, Softwarelimits wrote:
>
>> Hi, I need to come and ask here, I did not find enough information so I
>> hope I am just having a bad day or somebody is censoring my search results
>> for fun... :)
>>
>> I would like to import (lots of) Apache parquet files to a PostgreSQL 11
>> cluster - yes, I believe it should be done with the Python pyarrow module,
>> but before digging into the possible traps I would like to ask here if
>> there is some common, well understood and documented tool that may be
>> helpful with that process?
>>
>> It seems that the COPY command can import binary data, but I am not able
>> to allocate enough resources to understand how to implement a parquet file
>> import with that.
>>
>> I really would like follow a person with much more knowledge than me
>> about either PostgreSQL or Apache parquet format instead of inventing a bad
>> wheel.
>>
>> Any hints very welcome,
>> thank you very much for your attention!
>> John
>>
>


Re: select view definition from pg_views feature request

2019-11-05 Thread Tom Lane
Adrian Klaver  writes:
> On 11/5/19 6:43 AM, Michael Shapiro wrote:
>> Can I set search_path='' (ie to a string that does not match any 
>> existing schema)? Would that be the proper way to guarantee that the 
>> definition for any view will always be fully-qualified?

> [ example ]

If you read the documentation about search_path, you'll find out that
setting it to empty means that only the pg_catalog schema is present
in the effective search path (and maybe your pg_temp schema, if you
have created any temp tables).  So system catalog references,
references to built-in functions and operators, and temp table names
will not be qualified.  Everything else will be.
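
(A hedged illustration of that behavior, reusing the view from Adrian's
example; the object names are assumptions:)

SET search_path = '';
-- pg_views still resolves unqualified because pg_catalog is always searched
-- implicitly; the user table inside the view body comes back schema-qualified.
SELECT definition FROM pg_views WHERE viewname = 'up_view';
RESET search_path;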

regards, tom lane




Re: How to import Apache parquet files?

2019-11-05 Thread Tomas Vondra

On Tue, Nov 05, 2019 at 04:21:45PM +0100, Softwarelimits wrote:

Hi Imre, thanks for the quick response - yes, I found that, but I was not
sure if it is already production ready - also I would like to use the data
with the timescale extension, that is why I need a full import.



Well, we're not in a position to decide if parquet_fdw is production
ready; that's something you need to ask the author of the extension (and
then also judge yourself).

That being said, I think FDW is probably the best way to do this. It's
explicitly designed to work with foreign data, so using it to access
parquet files seems somewhat natural.

The alternative is probably transforming the data into COPY format, and
then loading it into Postgres using COPY (either from a file or stdin).

Which of these options is the right one depends on your requirements.
FDW is more convenient, but row-based and probably significantly less
efficient than COPY. So if you have a lot of these parquet files, I'd
probably use the COPY. But maybe the ability to query the parquet files
directly (with FDW) is useful for you.
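
(Hedged sketches of both routes -- the foreign-table options follow the
parquet_fdw README and the conversion script name is a placeholder, so check
the extension's documentation for the exact syntax:)

-- FDW route (https://github.com/adjust/parquet_fdw):
CREATE EXTENSION parquet_fdw;
CREATE SERVER parquet_srv FOREIGN DATA WRAPPER parquet_fdw;
CREATE FOREIGN TABLE measurements_pq (ts timestamptz, value double precision)
    SERVER parquet_srv
    OPTIONS (filename '/path/to/file.parquet');

-- COPY route: convert Parquet to CSV externally (e.g. with pyarrow) and
-- stream it in:
CREATE TABLE measurements (ts timestamptz, value double precision);
COPY measurements FROM PROGRAM 'python3 parquet_to_csv.py /path/to/file.parquet'
    WITH (FORMAT csv);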

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: logical replication - negative bitmapset member not allowed

2019-11-05 Thread Jehan-Guillaume de Rorthais
On Tue, 5 Nov 2019 16:02:51 +0100
Peter Eisentraut  wrote:

> On 2019-10-25 17:38, Jehan-Guillaume de Rorthais wrote:
> > On Thu, 10 Oct 2019 15:15:46 +0200
> > Jehan-Guillaume de Rorthais  wrote:
> > 
> > [...]
> >> Here is a script to reproduce it under version 10, 11 and 12:
> > 
> > I investigated this bug while coming back from pgconf.eu. Below is what I
> > found so far.
> 
> I have simplified your reproduction steps from the previous message to a 
> test case, and I can confirm that your proposed fix addresses the issue. 

Thanks for the feedback and the test case. I wonder if ALTER SUBSCRIPTION
DISABLE/ENABLE is useful in the test case?

Is it something recommended during DDL on a logically replicated relation? If
yes, I suppose we should update the first point of the restrictions chapter in
the documentation:
https://www.postgresql.org/docs/11/logical-replication-restrictions

Regards,




Re: logical replication - negative bitmapset member not allowed

2019-11-05 Thread Andres Freund
Hi,

On 2019-11-05 16:02:51 +0100, Peter Eisentraut wrote:
>  $node_publisher->stop('fast');
> +
> +
> +# TODO: 
> https://www.postgresql.org/message-id/flat/a9139c29-7ddd-973b-aa7f-71fed9c38d75%40minerva.info
> +
> +$node_publisher = get_new_node('publisher3');
> +$node_publisher->init(allows_streaming => 'logical');
> +$node_publisher->start;
> +
> +$node_subscriber = get_new_node('subscriber3');
> +$node_subscriber->init(allows_streaming => 'logical');
> +$node_subscriber->start;

Do we really have to create a new subscriber for this test? The creation
of one isn't free. Nor is the amount of test code duplication
negligible.

Greetings,

Andres Freund




RE: Upgrade procedure

2019-11-05 Thread Kevin Brannen
>> From: rihad 
>>
>>> Hi, all. Why is it normally suggested to stop the server, upgrade it,
>>> then start it? Wouldn't it be easier & quicker to simply upgrade the
>>> package in-place and restart the service? On OSen that allow
>>> modification of currently running binaries, which is most Unix OS, M$
>>> Windows being a notable exception )
>>>
>> That might be possible on a minor upgrade, but quite probably not on a
>> major version upgrade. I'm reasonably sure I've read that a major
>> upgrade *can* change underlying data/structures for tables and other
>> things. I don't think you want version-X writing to the tables on disk
>> while version-Y writes a new layout to the same files at the same
>> time. ??
>>
>>
>
>Why would that matter if the server gets restarted after replacing the 
>binaries? Aren't previous version's binaries "hard-wired" into memory while 
>they are running? AFAIK on FreeBSD at least no attempt is made to stop the 
>corresponding server or restart it when a package is upgraded by pkg(8).

We may be talking past each other here a bit...

After you do an upgrade, of course you have to restart the *PG* server or
you won't be using the new code, will you? :)

The manual or others here are more knowledgeable than I, but I believe that
for a "minor" upgrade, you can just swap out the code and restart PG. For
major upgrades, the PG server is going to have to come down as the underlying
files might be changed/transformed during the upgrade, then you start the
PG server when that's done. Check out the -k option as it can significantly
speed up pg_upgrade. You might find it safer to do a "pg_upgrade -c" before
the real upgrade; something to look at. As always on things like this, test
on a non-production machine first.

For us, we always use pg_upgrade even for minor updates because it feels
safer to me. That being said, we rarely do minor updates and just do majors
because upgrading is just hard enough (lots of testing!) we tend to wait and
then jump further. Upgrading is known to take a maintenance window; we just
plan things that way. Your organization may have different needs.

HTH,
Kevin


RE: Upgrade procedure

2019-11-05 Thread Rich Shepard

On Tue, 5 Nov 2019, Kevin Brannen wrote:


The manual or others here are more knowledgeable than I, but I believe that
for a "minor" upgrade, you can just swap out the code and restart PG. For
major upgrades, the PG server is going to have to come down as the underlying
files might be changed/transformed during the upgrade, ...


Files might change even with a minor upgrade. The few seconds it takes to
stop, upgrade, and restart the system prevents all errors due to
someone/something accessing the database while it's being upgraded.

Rich




Re: Upgrade procedure

2019-11-05 Thread rihad

On 11/05/2019 10:05 PM, Kevin Brannen wrote:

From: rihad 
Hi, all. Why is it normally suggested to stop the server, upgrade it,
then start it? Wouldn't it be easier & quicker to simply upgrade the
package in-place and restart the service? On OSen that allow
modification of currently running binaries, which is most Unix OS, M$
Windows being a notable exception )


That might be possible on a minor upgrade, but quite probably not on a
major version upgrade. I'm reasonably sure I've read that a major
upgrade *can* change underlying data/structures for tables and other
things. I don't think you want version-X writing to the tables on disk
while version-Y writes a new layout to the same files at the same
time. ??



Why would that matter if the server gets restarted after replacing the binaries? Aren't 
previous version's binaries "hard-wired" into memory while they are running? 
AFAIK on FreeBSD at least no attempt is made to stop the corresponding server or restart 
it when a package is upgraded by pkg(8).

We may be talking past each other here a bit...

After you do an upgrade, of course you have to restart the *PG* server or
you won't be using the new code, will you? :)

The manual or others here are more knowledgeable than I, but I believe that
for a "minor" upgrade, you can just swap out the code and restart PG. For
major upgrades, the PG server is going to have to come down as the underlying
files might be changed/transformed during the upgrade, then you start the
PG server when that's done. Check out the -k option as it can significantly
speed up pg_upgrade. You might find it safer to do a "pg_upgrade -c" before
the real upgrade; something to look at. As always on things like this, test
on a non-production machine first.

For us, we always use pg_upgrade even for minor updates because it feels
safer to me. That being said, we rarely do minor updates and just do majors
because upgrading is just hard enough (lots of testing!) we tend to wait and
then jump further. Upgrading is known to take a maintenance window; we just
plan things that way. Your organization may have different needs.

Yeah, but that way you're almost guaranteed to run an unsupported & 
vulnerable release for quite some time, until the next major one is ready )





Re: select view definition from pg_views feature request

2019-11-05 Thread George Neuner
On Tue, 5 Nov 2019 14:29:00 +1300, David Rowley
 wrote:


>See https://www.postgresql.org/docs/current/sql-set.html
>
>"SET only affects the value used by the current session."
>
>Also:
>
>"The effects of SET LOCAL last only till the end of the current transaction"
>
>Neither affects other sessions.


Ok, so you need to "ALTER DATABASE ..." for the change to affect
everyone.  Thanks for the clarification.
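
(For the archives, the per-database form is something like the following;
the database name is a placeholder, and it only affects sessions started
after the change:)

ALTER DATABASE mydb SET search_path = '';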

George





RE: Upgrade procedure

2019-11-05 Thread Kevin Brannen
>> For us, we always use pg_upgrade even for minor updates because it
>> feels safer to me. That being said, we rarely do minor updates and
>> just do majors because upgrading is just hard enough (lots of
>> testing!) we tend to wait and then jump further. Upgrading is known to
>> take a maintenance window; we just plan things that way. Your organization 
>> may have different needs.
>
>Yeah, but that way you're almost guaranteed to run an unsupported & vulnerable 
>release for quite some time, until the next major one is ready )

If we ran into a true bug that affected us, we'd upgrade sooner. Thankfully,
the PG team is great about putting out quality software with a low bug rate.
Running a few minor versions back from the current is OK for us; we're always
on a supported major version (so we could upgrade to the current minor version
if really required). This is an organizational decision based on where it's
better to spend time and effort.

OTOH, the better partitioning of v12 is a feature that will get us to upgrade
sooner. :)

As for security, we run in a very protected environment. If we are compromised,
it'll be by an inside person and there's really no tech defense against that.

Upgrading the Pg software isn't all that hard, we even have it automated.
The upgrade process for our application is what's so hard -- again, lots of
testing/man-hours required.

HTH,
Kevin


Where does postgres take its timezone information from?

2019-11-05 Thread Chris Withers

Hi All,

Brazil recently abolished daylight savings time, resulting in updates to 
system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to see 
the updated zone info?


If not, how does postgres store/obtain its time zone information and 
how would this be updated?


cheers,

Chris




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Adrian Klaver

On 11/5/19 2:46 PM, Chris Withers wrote:

Hi All,

Brazil recently abolished daylight savings time, resulting in updates to 
system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to see 
the updated zone info?


If not, how does postgres store/obtain its timezone zone information and 
how would this be updated?


https://www.postgresql.org/about/news/1960/
PostgreSQL 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, and 12 Beta 3 Released!

"This update also contains tzdata release 2019b for DST law changes in 
Brazil, plus historical corrections for Hong Kong, Italy, and Palestine. 
This update also adds support for zic's new -b slim option to reduce the 
size of the installed zone files, though it is not currently being used 
by PostgreSQL."




cheers,

Chris





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Adrian Klaver

On 11/5/19 2:46 PM, Chris Withers wrote:

Hi All,

Brazil recently abolished daylight savings time, resulting in updates to 
system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to see 
the updated zone info?


If not, how does postgres store/obtain its timezone zone information and 
how would this be updated?




As to where it gets its timezone info:

https://www.postgresql.org/docs/11/datetime-config-files.html


cheers,

Chris





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Chris Withers

On 05/11/2019 22:54, Adrian Klaver wrote:

On 11/5/19 2:46 PM, Chris Withers wrote:

Hi All,

Brazil recently abolished daylight savings time, resulting in updates 
to system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to 
see the updated zone info?


If not, how does postgres store/obtain its timezone zone information 
and how would this be updated?


https://www.postgresql.org/about/news/1960/
PostgreSQL 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, and 12 Beta 3 Released!

"This update also contains tzdata release 2019b for DST law changes in 
Brazil, plus historical corrections for Hong Kong, Italy, and Palestine. 
This update also adds support for zic's new -b slim option to reduce the 
size of the installed zone files, though it is not currently being used 
by PostgreSQL."


Hmm. Is there any option to use the system timezone packages?

If not, why not?

Chris




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Adrian Klaver

On 11/5/19 3:00 PM, Chris Withers wrote:

On 05/11/2019 22:54, Adrian Klaver wrote:

On 11/5/19 2:46 PM, Chris Withers wrote:

Hi All,

Brazil recently abolished daylight savings time, resulting in updates 
to system timezone information packages.
Does postgres use these? If so, does it need a reload or restart to 
see the updated zone info?


If not, how does postgres store/obtain its timezone zone information 
and how would this be updated?


https://www.postgresql.org/about/news/1960/
PostgreSQL 11.5, 10.10, 9.6.15, 9.5.19, 9.4.24, and 12 Beta 3 Released!

"This update also contains tzdata release 2019b for DST law changes in 
Brazil, plus historical corrections for Hong Kong, Italy, and 
Palestine. This update also adds support for zic's new -b slim option 
to reduce the size of the installed zone files, though it is not 
currently being used by PostgreSQL."


Hmm. Is there any option to use the system timezone packages?



https://www.postgresql.org/docs/11/install-procedure.html

--with-system-tzdata=DIRECTORY



If not, why not?

Chris



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Thomas Munro
On Wed, Nov 6, 2019 at 12:02 PM Adrian Klaver  wrote:
> On 11/5/19 3:00 PM, Chris Withers wrote:
> > Hmm. Is there any option to use the system timezone packages?
>
> https://www.postgresql.org/docs/11/install-procedure.html
>
> --with-system-tzdata=DIRECTORY

By the way, you can see if your installation of PostgreSQL was built
to use system-provided tzdata by running the pg_config program that
was installed alongside it.  That could be useful if it was built by a
package maintainer (Debian etc) and you want to see how they
configured it.  You'll see something like CONFIGURE = '...
--with-system-tzdata=/usr/share/... ' if it's using OS vendor tzdata
files.  I hope that most distributions do that*, because otherwise you
could finish up with lots of out-of-sync copies of the tzdata database
inside your database, your JVM, your libc, etc etc, and you want a
single source of truth for that stuff.
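
(From SQL, the same information is exposed through the pg_config system view
in 9.6 and later, readable by superusers by default:)

SELECT setting FROM pg_config WHERE name = 'CONFIGURE';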

Once I was involved in rolling out a last minute DST rule change that
happened in Australia due to politicians and an international sporting
event, and we had to go hunting for copies of tzdata hiding on our
servers that had to agree on when the financial markets were
opening... we found many copies, and ever since then I complain
wherever I see packages shipping their own copies of this stuff...

Assuming you are using system tzdata, your other question was what you
need to do after the tzdata files have been updated.  I suspect that
new PostgreSQL database sessions (processes) will see the new rules,
but existing sessions may continue to see the old rules if they had
loaded them already, because we cache them in per-process memory (see
pg_tzset()).  It would probably be safest to restart the PostgreSQL
cluster.
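
(A hedged way to check what a particular session believes, using the zone from
this thread as an example -- run it in a long-lived session and in a freshly
started one and compare the offsets:)

SELECT '2019-11-03 12:00'::timestamptz AT TIME ZONE 'America/Sao_Paulo';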

If you're using PostgreSQL's built-in tzdata, then you'll need to
restart your cluster anyway once you install the version that shipped
with the new tzdata rules, and depending on your package manager, that
might happen automatically when you upgrade.

*It looks like FreeBSD's port uses the copy of tzdata from the
PostgreSQL source tree by default and thus that is what you get if you
install PostgreSQL with "pkg".  That's not a great default IMHO and
should be changed.




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Tom Lane
Thomas Munro  writes:
> On Wed, Nov 6, 2019 at 12:02 PM Adrian Klaver  
> wrote:
>> On 11/5/19 3:00 PM, Chris Withers wrote:
>>> Hmm. Is there any option to use the system timezone packages?

>> --with-system-tzdata=DIRECTORY

> I hope that most distributions do that*, because otherwise you
> could finish up with lots of out-of-sync copies of the tzdata database
> inside your database, your JVM, your libc, etc etc, and you want a
> single source of truth for that stuff.

Right.  Our recommendation is to use --with-system-tzdata if you're on
a platform where the vendor updates their copy of tzdata regularly.
The fact that we supply a copy of tzdata at all is really just a fallback
for folks on poorly-supported platforms.  (Naming no names here ...)

> Assuming you are using system tzdata, your other question was what you
> need to do after the tzdata files have been updated.  I suspect that
> new PostgreSQL database sessions (processes) will see the new rules,
> but existing sessions may continue to see the old rules if they had
> loaded them already, because we cache them in per-process memory (see
> pg_tzset()).  It would probably be safest to restart the PostgreSQL
> cluster.

Yeah, I think you need a restart typically.  The postmaster process will
not absorb any update to timezone data it's already loaded, and child
processes will mostly inherit that data via fork().  You might be able
to finagle it by hacks like changing postgresql.conf to a different
timezone name that happens to be equivalent, but on the whole a quick
restart after updating the zone data is the best bet.

(Really, if your active zone's rules have changed, you'd be well advised
to just reboot the whole darn box.  Postgres is *very far* from being
the only daemon that is going to give you issues with this.)

> *It looks like FreeBSD's port uses the copy of tzdata from the
> PostgreSQL source tree by default and thus that is what you get if you
> install PostgreSQL with "pkg".  That's not a great default IMHO and
> should be changed.

Ugh.  Who can we poke there?

regards, tom lane




Re: Where does postgres take its timezone information from?

2019-11-05 Thread Thomas Munro
On Wed, Nov 6, 2019 at 2:20 PM Tom Lane  wrote:
> Thomas Munro  writes:
> > On Wed, Nov 6, 2019 at 12:02 PM Adrian Klaver  
> > wrote:
> >> On 11/5/19 3:00 PM, Chris Withers wrote:
> >>> Hmm. Is there any option to use the system timezone packages?
>
> >> --with-system-tzdata=DIRECTORY
>
> > I hope that most distributions do that*, because otherwise you
> > could finish up with lots of out-of-sync copies of the tzdata database
> > inside your database, your JVM, your libc, etc etc, and you want a
> > single source of truth for that stuff.
>
> Right.  Our recommendation is to use --with-system-tzdata if you're on
> a platform where the vendor updates their copy of tzdata regularly.
> The fact that we supply a copy of tzdata at all is really just a fallback
> for folks on poorly-supported platforms.  (Naming no names here ...)

Incidentally, that's also why I don't want to give up on libc
collations quite as easily as some, despite their limitations.  It
should be possible to get all the software on your system to agree on
the ordering of two strings and the current time!

> > *It looks like FreeBSD's port uses the copy of tzdata from the
> > PostgreSQL source tree by default and thus that is what you get if you
> > install PostgreSQL with "pkg".  That's not a great default IMHO and
> > should be changed.
>
> Ugh.  Who can we poke there?

Maybe Palle?  (CCed).




PostgreSQL && data types in ESQL/C

2019-11-05 Thread Matthias Apitz

Hello,

On our project roadmap to port our LMS (Library Management System) from
Sybase/Oracle to PostgreSQL we are now in the phase of addressing the
ESQL/C and C++ code parts (some million lines of code).

I wrote a small ESQL/C test code to see how the various data types are
handled.

In general: Is there any good manual about ESQL/C in PostgreSQL?
Because, even if there are standards, any implementation has its own details.

In detail:

I've created a table with the most used data types:

$ cat mytypes.sql

create table mytypes (
  myint  integer,
  mychar char (4),
  mydate date,
  myvchar varchar(81),
  myblob bytea
  ) ;

and have loaded a row with some data, which is shown in psql as:

$ printf "select * from mytypes;\n" | psql -Usisis -d newsisis
 myint | mychar |   mydate   |   myvchar    |                        myblob
-------+--------+------------+--------------+------------------------------------------------------
     1 | char   | 08.05.1945 | освобождение | \xd0bed181d0b2d0bed0b1d0bed0b6d0b4d0b5d0bdd0b8d0b50a
(1 Zeile)

in the ESQL/C code the host variables are declared as:

EXEC SQL BEGIN DECLARE SECTION;
...
int  myint;
char mychar[8];
char mydate[10+1];
char myvchar[81];
char myblob[1024];
...
EXEC SQL END DECLARE SECTION;

and the FETCH into these is done with:

EXEC SQL FETCH IN c_statename INTO :myint, :mychar, :mydate, :myvchar, 
:myblob;

which gives with an ESQL/C test pgm which prints the above host
variables:

$ /usr/local/sisis-pap/pgsql/bin/ecpg embedded.pgc
$ gcc -m64 -o embedded embedded.c -I/usr/local/sisis-pap/pgsql/include 
-L/usr/local/sisis-pap/pgsql/lib/ -lpq -lecpg

$ ./embedded
stmt: SELECT myint, mychar, mydate, myvchar, myblob FROM mytypes;
myint   [1]
mychar  [char]
mydate  [08.05.1945]
myvchar [освобождение]
myblob  [\xd0bed181d0b2d0bed0b1d0bed0b6d0b4d0b5d0bdd0b8d0b50a]

It seems(!):

- an int appears as a binary integer
- all other types (even the column type 'date') appear as the C type char*
- 'date', 'char' and 'varchar' are delivered as '\0'-terminated strings
- 'bytea' appears as a '\0'-terminated string coded in hex with "\x" in front
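
(A hedged aside: the length and raw bytes can also be cross-checked on the
server side before any conversion in the DBCALL layer, e.g.:)

-- octet_length() gives the BLOB length the struct below needs;
-- encode(..., 'hex') shows the same hex digits that arrive in the host
-- variable, minus the leading "\x".
SELECT octet_length(myblob), encode(myblob, 'hex') FROM mytypes;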

Our DBCALL layer must convert these char strings into the data form the
application layer is expecting, for example a BLOB ('bytea') into a C struct
struct {

long blobLength;
char blobBytes[MAX_EXPECTED_BLOB];
} blob;

For example, Sybase handles 'date' and 'bytea' in another way: 'date' is
a struct of two longs, and for 'bytea' an additional host variable
for the length must be used in FETCH, INSERT, ...

The above was worked out a bit by trial and error. Is there any good manual
which describes the ESQL/C details for PostgreSQL?

Thanks

matthias

-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub

"Glaube wenig, hinterfrage alles, denke selbst: Wie man Manipulationen 
durchschaut"
"Believe little, scrutinise all, think by your own: How see through 
manipulations"
ISBN-10: 386489218X




Re: PostgreSQL && data types in ESQL/C

2019-11-05 Thread Pavel Stehule
Hi


> The above is a bit by try and error. Is there any good manual which
> describes the ESQL/C details for PostgreSQL.
>

I never used ESQL/C so I cannot evaluate the quality of this part of the
doc. I expect there will not be any other.

https://www.postgresql.org/docs/12/ecpg.html

Regards

Pavel


> Thanks
>
> matthias
>
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
> "Glaube wenig, hinterfrage alles, denke selbst: Wie man Manipulationen
> durchschaut"
> "Believe little, scrutinise all, think by your own: How see through
> manipulations"
> ISBN-10: 386489218X
>