Re: Handling time series data with PostgreSQL

2020-10-07 Thread Jayaram
Hi Adalberto,

Awesome! Thanks for your reply.

So, is TimescaleDB mandatory for handling time series data, or is there a way to
handle hourly, daily, monthly, and yearly data with PostgreSQL alone, without the
TimescaleDB add-on?
Ours is a new project and we are unsure whether we need both TimescaleDB and
PostgreSQL, or whether PostgreSQL alone is capable of handling this time series
data with the right indexes, tuning, etc.

What we are planning is to start with PostgreSQL alone (for hourly, daily, weekly,
monthly, and yearly calculations) and to add TimescaleDB later, when we move to
second- and minute-level data. Is this the right approach?
Is PostgreSQL alone capable of meeting this requirement?

Thank you in advance.

Regards,
Jayaram S.

On Wed, Oct 7, 2020 at 1:34 AM Adalberto Caccia  wrote:

> Hi,
> TimescaleDB as a Postgresql extension has been used in my firm for two
> years now, I've recently managed to upgrade it from pg10 to pg12 and from
> discrete VM's to Kubernetes as well.
> Frankly speaking, being new to TimescaleDB at that time, I've found it
> easy to manage, easy to scale (it's 100% compatible with pg replication,
> unfortunately not the logical one, yet...), easy to install, easy to
> upgrade... what else?
> From a developer's perspective, it just adds "superpowers" to ordinary PG
> tables, all under the hood. On disk, it features a "chunked" layout, where
> each chunk belongs to a definite "time" range; and of course the "time"
> column on which to index time data is just passed as a parameter to the
> call to TimescaleDB, for each table on which you need such power.
> At the moment, we're also using it for time aggregate calculations, but
> only for the coarse ones (30m --> 1h and 1h --> 1 day), while we're still
> handling the finer ones (1s --> 1m and so on) in Kafka+Flink, which is a
> common scenario for a streaming data platform, anyway.
>
> Regards,
> Adalberto
>
>
> On Tue, Oct 6, 2020 at 11:47 AM, Jayaram wrote:
>
>> Dear All,
>>
>> I'm Jayaram S, an Oracle DBA. We are currently planning to develop a stock
>> market application in which about 80% of the data is time series data. We are
>> in the process of choosing the right database for this requirement, especially
>> for the time series data. After multiple investigations, I found that PostgreSQL
>> with TimescaleDB works better than other databases.
>>
>> But I'm still new to PostgreSQL, and we want to use only open source technology
>> to handle our requirements. It would be helpful if anyone could suggest how to
>> implement time series concepts in a PostgreSQL database, ideally with proper
>> docs or links and explanations.
>>
>> Thanks in advance.,
>>
>> --
>>
>>
>>
>>
>> Thanks & Regards, Jayaram S, Bangalore, India.
>>
>

-- 
Thanks & Regards,
Jayaram S, Bangalore.
Mobile: 91-7760951366.


Which is the lowest-resource setup you know Postgres is used in?

2020-10-07 Thread Thorsten Schöning
Hi all,

I'm regularly reading that Postgres is often used with containers and
in cloud environments these days, even on some not too powerful NAS.

What are the lowest-resource setups you know of, or have even successfully hosted
Postgres on yourself? I'm especially interested in RAM and CPU, and in whether you
needed to configure anything special to make things work in your stripped-down
environment.

Is there any point at which one is most likely forced to switch to a more
specialized embedded database like SQLite, e.g. because Postgres requires more
resources due to its architecture? Or could Postgres in theory be used everywhere
SQLite is used, as long as one is allowed to e.g. start an additional process?

I would like to know whether there's any realistic chance of using Postgres in a
low-resource environment with little RAM and a somewhat slow CPU, like the
following:

http://ww1.microchip.com/downloads/en/DeviceDoc/ATSAMA5D27-WLSOM1-Datasheet-60001590b.pdf
http://ww1.microchip.com/downloads/en/DeviceDoc/SAMA5D2-Series-Data-sheet-ds60001476F.pdf

One point is that I most likely need somewhat concurrent access to the data,
because of web services exposing that data to clients, daemons storing data
locally, etc. OTOH, the number of concurrent accesses won't be too high and there
won't be much load most of the time; it heavily depends on the actual users of the
device. Postgres' architecture seems to fit that use case better than e.g. SQLite's.

Thanks for sharing your experiences and suggestions!

Best regards,

Thorsten Schöning

-- 
Thorsten Schöning   E-Mail: thorsten.schoen...@am-soft.de
AM-SoFT IT-Systeme  http://www.AM-SoFT.de/

Telefon: 05151-9468-55
Fax: 05151-9468-88
Mobil: 0178-8 9468-04

AM-SoFT GmbH IT-Systeme, Brandenburger Str. 7c, 31789 Hameln
AG Hannover HRB 207 694 - Geschäftsführer: Andreas Muchow





What version specification is used by PG community developers?

2020-10-07 Thread WanCheng
Is it the same as SemVer? (https://semver.org/)





Re: What version specification is used by PG community developers?

2020-10-07 Thread Pavel Stehule
On Wed, Oct 7, 2020 at 9:52 AM, WanCheng wrote:

> Is it the same as SemVer? (https://semver.org/)
>

no

https://www.postgresql.org/support/versioning/

Regards

Pavel


Re: Which is the lowest-resource setup you know Postgres is used in?

2020-10-07 Thread Dmitry Igrishin
On Wed, Oct 7, 2020 at 10:51 AM, Thorsten Schöning wrote:
>
> Hi all,
>
> I'm regularly reading that Postgres is often used with containers and
> in cloud environments these days, even on some not too powerful NAS.
>
> What are the lowest resource setups you know of or even host Postgres
> successfully with yourself? It's especially about RAM and CPU, if you
> needed to e.g. configure anything special to make things somewhat work
> in your stripped down environment etc.
AFAIK the default configuration of Postgres is fairly conservative and
may be a good starting point for such cases.
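
For illustration only, a deliberately small configuration for a board in that
class might look roughly like the following; every value here is an assumption
to be checked against the device's actual RAM, not a recommendation:

# Hypothetical low-memory postgresql.conf fragment (illustrative values only)
max_connections = 10          # only a handful of local clients expected
shared_buffers = 16MB         # the 128MB default is a lot for a small-RAM device
work_mem = 1MB                # per sort/hash node, per backend
maintenance_work_mem = 16MB
wal_buffers = 1MB
max_worker_processes = 2
autovacuum_max_workers = 1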

>
> Is there any point at which one is most likely forced to switch to
> more specialized embedded databases like SQLite? E.g. because
> Postgres requires a higher amount of resources because of it's
> architecture? Or could Postgres in theory be used everywhere where
> SQLite is used as well, as long as one is allowed to e.g. start an
> additional process?
For example, when you need to INSERT tens of thousands of rows per second on a
low-cost device, SQLite is a reasonable choice. Postgres is client-server, with
the related overheads. Postgres requires deployment and configuration, while
SQLite just works with zero configuration (which is a big advantage in the IoT
case).
Sure, in theory Postgres can be used instead of SQLite (and vice versa).

>
> I would like to know if there's any realistic chance to use Postgres
> in a low resources environment with little amount of RAM and somewhat
> slow CPU like the following:
>
> http://ww1.microchip.com/downloads/en/DeviceDoc/ATSAMA5D27-WLSOM1-Datasheet-60001590b.pdf
> http://ww1.microchip.com/downloads/en/DeviceDoc/SAMA5D2-Series-Data-sheet-ds60001476F.pdf
>
> One point is that I most likely need somewhat concurrent access to the
> data, because of having web services exposing that data to clients,
> daemons storing data locally only etc. OTOH, the number of concurrent
> accessed won't be too high, there won't be too much load most of the
> time. Things heavily depend on actual users of the device. Postgres'
> architecture seems to better fit that use case than e.g. SQLite.
In many cases concurrency is not a problem, and in fact SQLite may handle
concurrent requests faster than Postgres. Since SQLite is serverless and its
access overhead is near zero (compared to Postgres), each writer does its work
quickly and no lock lasts for more than a few dozen milliseconds.
On the other hand, Postgres is better in cases of really high concurrency.




How to migrate column type from uuid to serial

2020-10-07 Thread Hemil Ruparel
I was integrating a payment gateway for my app when I noticed that its maximum
customer id string length is 32. The size of a UUID is 36 (32 characters and 4
dashes), so I want to change the type of the customer id to serial. The problem is
that by now the column is used in many places. How do I migrate the column to
serial without dropping the data?


Re: What version specification is used by PG community developers?

2020-10-07 Thread Ron

On 10/7/20 2:52 AM, WanCheng wrote:

Is it the same as SemVer? (https://semver.org/)


It used to be, but starting with v10 it's MAJOR.PATCH

--
Angular momentum makes the world go 'round.




Re: What version specification is used by PG community developers?

2020-10-07 Thread Adrian Klaver

On 10/7/20 6:01 AM, Ron wrote:

On 10/7/20 2:52 AM, WanCheng wrote:

Is it the same as SemVer? (https://semver.org/)


It used to be, but starting with v10 it's
MAJOR
PATCH


Was it?

Pre-10 it was:

MAJOR.MAJOR.PATCH


--
Adrian Klaver
adrian.kla...@aklaver.com




undefined reference to `pg_snprintf when we upgraded libpq version from 10.3 to 12.3

2020-10-07 Thread M Tarkeshwar Rao
Hi all,

We upgraded the libpq version from 10.3 to 12.3. Now we are getting the following
linker error during compilation:
linux64/lib/libPostgreSQLClient.so: undefined reference to `pg_snprintf(char*, 
unsigned long, char const*, ...)'

When we analyzed it, we found the following difference in libpq. What could be
the solution to this error?

%/postgres/10.3/lib
>nm libpq.a | grep sprint
U sprintf
U sprintf
U sprintf
U sprintf
U sprintf
U sprintf

%:/postgres/12.3/lib >nm libpq.a | grep sprint
U pg_sprintf
U pg_sprintf
U pg_sprintf
U pg_sprintf

Regards
Tarkeshwar


Re: What version specification is used by PG community developers?

2020-10-07 Thread Magnus Hagander
On Wed, Oct 7, 2020 at 3:47 PM Adrian Klaver 
wrote:

> On 10/7/20 6:01 AM, Ron wrote:
> > On 10/7/20 2:52 AM, WanCheng wrote:
> >> Is it the same as SemVer? (https://semver.org/)
> >
> > It used to be, but starting with v10 it's
> > MAJOR
> > PATCH
>
> Was it?
>
> Pre-10 it was:
>
> MAJOR.MAJOR.PATCH
>


Yeah, the fact that it kind of looked like semver but *wasn't* semver is probably
one of the (many) things that confused people. It definitely wasn't semver.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: How to migrate column type from uuid to serial

2020-10-07 Thread Adrian Klaver

On 10/7/20 5:48 AM, Hemil Ruparel wrote:
I was integrating a payment gateway for my app when I noticed its 
maximum length of customer id string is 32. Size of a UUID is 36 (32 
characters and 4 dashes). So I want to change the type of customer id to 
serial. The problem is by now, the column is being used at many places. 
How to migrate the column to serial without dropping the data?


Changing the size of the column is not an option?

Your description of the customer id column above is somewhat confusing 
to me. Is the id actually stored as a UUID?


Why is the max length an issue?

If you were to migrate, I would say create an independent serial/identity 
column, then point the dependent objects at that. After all dependents have 
been converted, drop the old column.
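
A minimal sketch of that approach, with entirely hypothetical table and column
names:

-- 1. Add an integer identity column alongside the existing uuid key.
ALTER TABLE customer
    ADD COLUMN customer_no integer GENERATED ALWAYS AS IDENTITY;
ALTER TABLE customer
    ADD CONSTRAINT customer_customer_no_key UNIQUE (customer_no);

-- 2. For each referencing table: add the new column, backfill it through the
--    old uuid, then add the new foreign key.
ALTER TABLE payment ADD COLUMN customer_no integer;
UPDATE payment p
   SET customer_no = c.customer_no
  FROM customer c
 WHERE c.customer_id = p.customer_id;
ALTER TABLE payment
    ADD CONSTRAINT payment_customer_no_fkey
    FOREIGN KEY (customer_no) REFERENCES customer (customer_no);

-- 3. Once nothing references the uuid column any more, drop it:
-- ALTER TABLE customer DROP COLUMN customer_id;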






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to migrate column type from uuid to serial

2020-10-07 Thread Adrian Klaver

On 10/7/20 6:58 AM, Hemil Ruparel wrote:

Please reply to list also.
Ccing list


Yes. The id is stored as uuid. Thanks for the suggestion. Should work


On Wed, Oct 7, 2020 at 7:27 PM Adrian Klaver > wrote:


On 10/7/20 5:48 AM, Hemil Ruparel wrote:
 > I was integrating a payment gateway for my app when I noticed its
 > maximum length of customer id string is 32. SIze of UUID is 36 (32
 > characters and 4 dashes). So I want to change the type of
customer id to
 > serial. The problem is by now, the column is being used at many
places.
 > How to migrate the column to serial without dropping the data?

Changing the size of the column is not an option?

Your description of the customer id column above is somewhat confusing
to me. Is the id actually stored as a UUID?

Why is the max length an issue?

If you where to migrate I would say create an independent
serial/identity column. Then point the dependent objects at that. After
all have been converted drop the old column.





-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: How to migrate column type from uuid to serial

2020-10-07 Thread Hemil Ruparel
Yes the id is stored as a uuid. Thanks for the suggestion. Should work


On Wed, Oct 7, 2020 at 7:29 PM Adrian Klaver 
wrote:

> On 10/7/20 6:58 AM, Hemil Ruparel wrote:
>
> Please reply to list also.
> Ccing list
>
> > Yes. The id is stored as uuid. Thanks for the suggestion. Should work
> >
> >
> > On Wed, Oct 7, 2020 at 7:27 PM Adrian Klaver  > > wrote:
> >
> > On 10/7/20 5:48 AM, Hemil Ruparel wrote:
> >  > I was integrating a payment gateway for my app when I noticed its
> >  > maximum length of customer id string is 32. SIze of UUID is 36 (32
> >  > characters and 4 dashes). So I want to change the type of
> > customer id to
> >  > serial. The problem is by now, the column is being used at many
> > places.
> >  > How to migrate the column to serial without dropping the data?
> >
> > Changing the size of the column is not an option?
> >
> > Your description of the customer id column above is somewhat
> confusing
> > to me. Is the id actually stored as a UUID?
> >
> > Why is the max length an issue?
> >
> > If you where to migrate I would say create an independent
> > serial/identity column. Then point the dependent objects at that.
> After
> > all have been converted drop the old column.
> >
> >
> >
> >
> >
> > --
> > Adrian Klaver
> > adrian.kla...@aklaver.com 
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: How to migrate column type from uuid to serial

2020-10-07 Thread Thomas Kellerer
Hemil Ruparel wrote on 07.10.2020 at 16:02:
> Yes the id is stored as a uuid.

Then it should be declared with the data type uuid, which only needs 16 bytes.







Re: How to migrate column type from uuid to serial

2020-10-07 Thread Hemil Ruparel
Umm, it is declared as uuid. But how does it occupy only 16 bytes? Even if we
remove those 4 dashes, that's 32 bytes of text, right? I am not concerned about
the size at all. How do I send it as a string of no more than 32 bytes?

On Wed, Oct 7, 2020 at 7:37 PM Thomas Kellerer  wrote:

> Hemil Ruparel wrote on 07.10.2020 at 16:02:
> > Yes the id is stored as a uuid.
>
> Then it should be declared with the data type uuid, which only needs 16
> bytes.
>
>
>
>
>
>


Re: How to migrate column type from uuid to serial

2020-10-07 Thread Thomas Kellerer
Hemil Ruparel wrote on 07.10.2020 at 16:21:
> it is declared as uuid. But how does it occupy only 16 bytes?

Because a UUID is internally simply a 128-bit number; the dashes you see are 
just formatting.

But if you can only send the text representation, then yes, 32 characters aren't 
enough.




Re: Handling time series data with PostgreSQL

2020-10-07 Thread Stephen Frost
Greetings,

* Jayaram (jairam...@gmail.com) wrote:
> So, Do we need the timescaleDB as mandatory to handle time series data? Is
> there any way to handle hourly to days,months,yearly data with PGSQL alone
> without timescale addon?

Certainly there is, and a lot of people do it.  What isn't clear is what you
feel is missing from PG when it comes to handling time series data.  Generally
speaking there have been concerns about PG's ability to handle lots of partitions
(which comes from very large amounts of data being stored), but v12 and v13 have
made great improvements in that area and it's not nearly as much of an issue any
longer (and in quite a few cases it performs better than extensions).

> Ours is a new project and we are unsure about whether we should have both
> timescaleDB and PGSQL or PGSQL alone is capable of handling this time
> series data by tuning the right indexes.etc..

Partitioning and index tuning in PG (look at using BRIN if you haven't
already...) is important when you get to larger data volumes.
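
As a minimal sketch of what that can look like for time-series data (table,
column, and partition names here are purely illustrative):

CREATE TABLE ticks (
    symbol text        NOT NULL,
    ts     timestamptz NOT NULL,
    price  numeric     NOT NULL
) PARTITION BY RANGE (ts);

CREATE TABLE ticks_2020_10 PARTITION OF ticks
    FOR VALUES FROM ('2020-10-01') TO ('2020-11-01');

-- BRIN stays tiny and works well when rows arrive roughly in time order.
CREATE INDEX ticks_ts_brin ON ticks USING brin (ts);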

Thanks,

Stephen




Re: How to migrate column type from uuid to serial

2020-10-07 Thread Hemil Ruparel
Sorry if this is silly, but if it is a 128-bit number, why do we need 32
characters to represent it? Isn't 8 bits one byte?

On Wed, Oct 7, 2020 at 8:08 PM Thomas Kellerer  wrote:

> Hemil Ruparel wrote on 07.10.2020 at 16:21:
> > it is declared as uuid. But how does it occupy only 16 bytes?
>
> Because a UUID is internally simply a 128bit number - the dashes you see
> are just formatting.
>
> But if you can only send the text represnation, then yes 32 characters
> aren't enough.
>
>
>


Re: How to migrate column type from uuid to serial

2020-10-07 Thread Thomas Kellerer
>>> it is declared as uuid. But how does it occupy only 16 bytes?
>> Because a UUID is internally simply a 128bit number - the dashes you see are 
>> just formatting.

> Sorry if this is silly but if it is a 128 bit number, why do we need 32 
> characters to represent it?

The 36 (or 32 without the dashes) characters are just the default hex 
representation.

If you wanted to, you could convert it to a numeric (a single bigint is too small to hold all 128 bits).

See e.g. here for an example: https://stackoverflow.com/a/27286610
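
A quick way to see the different representations side by side (purely
illustrative; pg_column_size shows the 16 bytes the uuid type actually stores):

SELECT u                          AS uuid_value,
       replace(u::text, '-', '')  AS hex_32_chars,   -- 32 characters, no dashes
       pg_column_size(u)          AS bytes_stored    -- 16
FROM (VALUES ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid)) AS t(u);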





Re: How to migrate column type from uuid to serial

2020-10-07 Thread Hemil Ruparel
Is it because they are hex characters, so each character only encodes 4 bits,
but we display each group of 4 bits as a hex digit (0 to 9 and a-f), each of
which takes a full byte to represent in ASCII and UTF-8? Hence the length of 32
(or 36 with dashes)?

On Wed, Oct 7, 2020 at 8:10 PM Hemil Ruparel 
wrote:

> Sorry if this is silly but if it is a 128 bit number, why do we need 32
> characters to represent it? Isn't 8 bits one byte?
>
> On Wed, Oct 7, 2020 at 8:08 PM Thomas Kellerer  wrote:
>
>> Hemil Ruparel wrote on 07.10.2020 at 16:21:
>> > it is declared as uuid. But how does it occupy only 16 bytes?
>>
>> Because a UUID is internally simply a 128bit number - the dashes you see
>> are just formatting.
>>
>> But if you can only send the text represnation, then yes 32 characters
>> aren't enough.
>>
>>
>>


Re: What version specification is used by PG community developers?

2020-10-07 Thread Adrian Klaver

On 10/7/20 6:53 AM, Magnus Hagander wrote:



On Wed, Oct 7, 2020 at 3:47 PM Adrian Klaver > wrote:


On 10/7/20 6:01 AM, Ron wrote:
 > On 10/7/20 2:52 AM, WanCheng wrote:
 >> Is it the same as SemVer? (https://semver.org/)
 >
 > It used to be, but starting with v10 it's
 > MAJOR
 > PATCH

Was it?

Pre-10 it was:

MAJOR.MAJOR.PATCH



Yeah the fact that it kind of looked like semver, but *wasn't* semver, 
is probably one of the (many) things that confused people. It definitely 
wasn't semver.



Hmm, I wonder how people manage in the Python, RH, Linux kernel, etc. worlds?



--
  Magnus Hagander
  Me: https://www.hagander.net/ 
  Work: https://www.redpill-linpro.com/ 



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Handling time series data with PostgreSQL

2020-10-07 Thread Mark Johnson
I think the OP may be referring to Oracle's Temporal Validity feature.
This type of feature has yet to be implemented in PostgreSQL (see
https://www.postgresql.org/docs/13/unsupported-features-sql-standard.html
item T181).

Temporal Validity allows you to add a time dimension to any table, and only
display rows of data that are valid for the requested time period.
Oracle's implementation of Temporal Validity uses the PERIOD FOR clause in
CREATE TABLE, ALTER TABLE, and SELECT statements as illustrated below:

CREATE TABLE EMPLOYEE
(
   ID        NUMBER PRIMARY KEY,
   TAX_ID    VARCHAR2(10),
   HIRE_DATE TIMESTAMP,
   TERM_DATE TIMESTAMP,
   PERIOD FOR EMP_VALID_TIME (HIRE_DATE, TERM_DATE)
);

SELECT * FROM EMPLOYEE
   VERSIONS PERIOD FOR EMP_VALID_TIME
   BETWEEN TO_TIMESTAMP('06-OCT-2013', 'DD-MON-YYYY')
   AND TO_TIMESTAMP('31-OCT-2013', 'DD-MON-YYYY');

   ID  TAX_ID     HIRE_DATE                  TERM_DATE
-----  ---------  -------------------------  -------------------------
    1  123456789  06-OCT-13 12.00.00.00 AM   07-NOV-15 12.00.00.00 AM
    2  222456789  07-OCT-13 12.00.00.00 AM
    4  44400
    5  505050505  30-OCT-13 12.00.00.00 AM   31-OCT-13 12.00.00.00 AM
    6  666999666  30-SEP-13 12.00.00.00 AM   31-DEC-13 12.00.00.00 AM

The above feature requires Oracle 12 or higher.  SQL Server 2016 and later
also support it.  In earlier releases of each DBMS we tried to accomplish
the same by adding pairs of timestamp columns to each table and then
writing our own code to handle row filtering.  Partitioning isn't needed.
Certainly partitioning by range could be used, but it would still require
some manual effort.

-Mark

On Wed, Oct 7, 2020 at 10:41 AM Stephen Frost  wrote:

> Greetings,
>
> * Jayaram (jairam...@gmail.com) wrote:
> > So, Do we need the timescaleDB as mandatory to handle time series data?
> Is
> > there any way to handle hourly to days,months,yearly data with PGSQL
> alone
> > without timescale addon?
>
> Certainly there is and a lot of people do it- what isn't clear is what
> it is you feel is missing from PG when it comes to handling time series
> data..?  Generally speaking there's concerns about PG's ability to
> handle lots of partitions (which comes from there being very large
> amounts of data being stored), but v12 and v13 have made great
> improvements in that area and it's not nearly an issue any longer (and
> performs better in quite a few cases than extensions).
>
> > Ours is a new project and we are unsure about whether we should have both
> > timescaleDB and PGSQL or PGSQL alone is capable of handling this time
> > series data by tuning the right indexes.etc..
>
> Partitioning and index tuning in PG (look at using BRIN if you haven't
> already...) is important when you get to larger data volumes.
>
> Thanks,
>
> Stephen
>


Re: How to migrate column type from uuid to serial

2020-10-07 Thread Francisco Olarte
Hemil:

On Wed, Oct 7, 2020 at 2:49 PM Hemil Ruparel  wrote:
> I was integrating a payment gateway for my app when I noticed its maximum 
> length of customer id string is 32. SIze of UUID is 36 (32 characters and 4 
> dashes). So I want to change the type of customer id to serial. The problem 
> is by now, the column is being used at many places. How to migrate the column 
> to serial without dropping the data?

A UUID is just a 128-bit number, as noted by many. Your problem is that
the default encoding is 32 hex digits plus 4 hyphens. If your payment
gateway accepts a 32-character string ("maximum length of customer id
string is 32"), you can just use another encoding. Converting to an integer
and printing it in decimal won't cut it (you'd need up to 39 digits), but
plain hex without the hyphens brings the string representation down to 32
characters (you can produce it with a simple substitution, and Postgres
accepts UUID input without hyphens).
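
As a concrete sketch of that "drop the dashes" substitution on the Postgres
side (table and column names are hypothetical, and generated columns need
PG 12 or later):

-- Expose a dash-free, 32-character form of the stored uuid for the gateway,
-- without touching the uuid column itself.
ALTER TABLE customer
    ADD COLUMN gateway_id text
    GENERATED ALWAYS AS (replace(customer_id::text, '-', '')) STORED;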

If you want to buy a little more room for your own purposes, you can even
fit 128 bits into 22 base64 characters with a couple of bits to spare, and
IIRC you can do that with a creative decode/encode step after stripping the
dashes.

And if your payment gateway counts Unicode code points rather than ASCII
characters, you could probably use even more creative encodings ;-), but a
simple "drop the dashes" substitution at the interface will probably be
your simplest option.

Francisco Olarte.




Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Nick Aldwin
Hi folks,

Did something change recently with which versions of libpq-dev are published
to buster-pgdg?  We have a Dockerfile based on postgres:12.2 (which is based
on buster-slim) that installs "libpq-dev=$PG_MAJOR.*", and it just recently
(this week) started failing.  Running a brand-new postgres container and
checking the apt cache, I only see versions for 11 (coming from the Debian
sources) and 13 (coming from buster-pgdg main).  I have verified that the
sources list includes both main and 12 -- was the package removed from 12?

$ docker run --rm -it --entrypoint bash postgres:12.2
root@fb7c949f82a0:/# apt update && apt-cache policy libpq-dev
Get:1 http://deb.debian.org/debian buster InRelease [121 kB]
Get:2 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://apt.postgresql.org/pub/repos/apt buster-pgdg InRelease [103 kB]
Get:5 http://security.debian.org/debian-security buster/updates/main amd64 Packages [233 kB]
Get:6 http://deb.debian.org/debian buster/main amd64 Packages [7,906 kB]
Get:7 http://deb.debian.org/debian buster-updates/main amd64 Packages [7,868 B]
Get:8 http://apt.postgresql.org/pub/repos/apt buster-pgdg/12 amd64 Packages [861 B]
Get:9 http://apt.postgresql.org/pub/repos/apt buster-pgdg/main amd64 Packages [203 kB]
Fetched 8,694 kB in 2s (3,782 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
19 packages can be upgraded. Run 'apt list --upgradable' to see them.
libpq-dev:
  Installed: (none)
  Candidate: 13.0-1.pgdg100+1
  Version table:
     13.0-1.pgdg100+1 500
        500 http://apt.postgresql.org/pub/repos/apt buster-pgdg/main amd64 Packages
     11.9-0+deb10u1 500
        500 http://deb.debian.org/debian buster/main amd64 Packages
     11.7-0+deb10u1 500
        500 http://security.debian.org/debian-security buster/updates/main amd64 Packages
root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list
deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main 12


Thanks,
Nick


Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Adrian Klaver

On 10/7/20 11:01 AM, Nick Aldwin wrote:

Hi folks,

Did something change recently with which versions of libpq-dev are published 
to buster-pgdg?  We have a Dockerfile based on postgres:12.2 (which is based 
on buster-slim) that installs "libpq-dev=$PG_MAJOR.*", and it just recently 
(this week) started failing.  Running a brand-new postgres container and 
checking the apt cache, I only see versions for 11 (coming from the Debian 
sources) and 13 (coming from buster-pgdg main).  I have verified that the 
sources list includes both main and 12 -- was the package removed from 12?


See this FAQ item:

https://wiki.postgresql.org/wiki/Apt/FAQ#I_want_libpq5_for_version_X.2C_but_there_is_only_version_Y_in_the_repository

for how it works in the PGDG repos.

On the Debian side V11 is the supported version for Buster.






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Nick Aldwin
Hi Adrian,

The FAQ you linked to says the following:

> If you really want to use a different version, the packages are available
in separate archive components named after the PostgreSQL major version.
Append that version after "main" in your sources.list. For example, if you
wanted 9.0's libpq5 on Debian Squeeze, use this: deb
http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main *9.0*

In the postgres dockerfile, it _is_ appending the version 12 to the sources
list:

root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list

deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main 12


However I am still not seeing that version show up.  If I remove 'main',
leaving just 12, no versions show up as coming from the postgres repo at
all.  Am I missing something else here?

-Nick


On Wed, Oct 7, 2020 at 2:56 PM Adrian Klaver 
wrote:

> On 10/7/20 11:01 AM, Nick Aldwin wrote:
> > Hi folks,
> >
> > Did something change recently with what versions of libpq-dev are
> > published to buster-pgdg?  We have a dockerfile based on
> > |postgres:12.2| (which is based on |buster-slim|) that installs
> > |"libpq-dev=$PG_MAJOR.*"|  and it just recently (this week) started
> > failing.  running a brand new postgres container and checking the apt
> > cache, I only see versions for 11 (coming from debian sources) and 13
> > (coming from buster-pgdg main).  I have verified that the sources list
> > includes both |main| and |12| -- was it removed from |12|?
>
> See this FAQ item:
>
>
> https://wiki.postgresql.org/wiki/Apt/FAQ#I_want_libpq5_for_version_X.2C_but_there_is_only_version_Y_in_the_repository
>
> for how it works in the PGDG repos.
>
> On the Debian side V11 is the supported version for Buster.
>
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>
>


Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Adrian Klaver

On 10/7/20 12:02 PM, Nick Aldwin wrote:

Hi Adrian,

The FAQ you linked to says the following:

 > If you really want to use a different version, the packages are 
available in separate archive components named after the PostgreSQL 
major version. Append that version after "main" in your sources.list. 
For example, if you wanted 9.0's libpq5 on Debian Squeeze, use this: deb 
http://apt.postgresql.org/pub/repos/apt/ squeeze-pgdg main *9.0*


In the postgres dockerfile, it _is_ appending the version 12 to the 
sources list:


root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list

deb http://apt.postgresql.org/pub/repos/apt/  buster-pgdg main 12


However I am still not seeing that version show up.  If I remove 'main', 
leaving just 12, no versions show up as coming from the postgres repo at 
all.  Am I missing something else here?


Yeah, not working for me either. That probably means one of the packagers will 
need to chime in.


-Nick




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Nick Aldwin
Thanks for the reply.  Should I post to the separate hackers list, or wait
for someone to chime in here?

FWIW, I am able to access older v12 libpq-dev by using the archive apt
list: https://apt-archive.postgresql.org/ -- so we will do that going
forward until this is resolved.

-Nick

On Wed, Oct 7, 2020 at 3:23 PM Adrian Klaver 
wrote:

> On 10/7/20 12:02 PM, Nick Aldwin wrote:
> > Hi Adrian,
> >
> > The FAQ you linked to says the following:
> >
> >  > If you really want to use a different version, the packages are
> > available in separate archive components named after the PostgreSQL
> > major version. Append that version after "main" in your sources.list.
> > For example, if you wanted 9.0's libpq5 on Debian Squeeze, use this: deb
> >
> http://apt.postgresql.org/pub/repos/apt
> squeeze-pgdg main *9.0*
> >
> > In the postgres dockerfile, it _is_ appending the version 12 to the
> > sources list:
> >
> > root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list
> >
> > deb http://apt.postgresql.org/pub/repos/apt/  buster-pgdg main 12
> >
> >
> > However I am still not seeing that version show up.  If I remove 'main',
> > leaving just 12, no versions show up as coming from the postgres repo at
> > all.  Am I missing something else here?
>
> Yeah not working for me either. Probably means one of the packagers will
> need to chime in.
> >
> > -Nick
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>
>


Re: Missing libpq-dev version in buster-pgdg?

2020-10-07 Thread Adrian Klaver

On 10/7/20 12:34 PM, Nick Aldwin wrote:
Thanks for the reply.  Should I post to the separate hackers list, or 
wait for someone to chime in here?


There is the APT issue tracker (you will need a community account to access it):

https://redmine.postgresql.org/projects/pgapt/issues

or the APT packaging list:

https://www.postgresql.org/list/pgsql-pkg-debian/



FWIW, I am able to access older v12 libpq-dev by using the archive apt 
list: https://apt-archive.postgresql.org/ -- so we will do that going 
forward until this is resolved.


-Nick

On Wed, Oct 7, 2020 at 3:23 PM Adrian Klaver > wrote:


On 10/7/20 12:02 PM, Nick Aldwin wrote:
 > Hi Adrian,
 >
 > The FAQ you linked to says the following:
 >
 >  > If you really want to use a different version, the packages are
 > available in separate archive components named after the PostgreSQL
 > major version. Append that version after "main" in your
sources.list.
 > For example, if you wanted 9.0's libpq5 on Debian Squeeze, use
this: deb
 > http://apt.postgresql.org/pub/repos/apt squeeze-pgdg main *9.0*
 >
 > In the postgres dockerfile, it _is_ appending the version 12 to the
 > sources list:
 >
 > root@fb7c949f82a0:/# cat /etc/apt/sources.list.d/pgdg.list
 >
 > deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main 12
 >
 >
 > However I am still not seeing that version show up.  If I remove
'main',
 > leaving just 12, no versions show up as coming from the postgres
repo at
 > all.  Am I missing something else here?

Yeah not working for me either. Probably means one of the packagers
will
need to chime in.
 >
 > -Nick
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Handling time series data with PostgreSQL

2020-10-07 Thread Stephen Frost
Greetings,

* Mark Johnson (remi9...@gmail.com) wrote:
> I think the OP may be referring to Oracle's Temporal Validity feature.

Perhaps, but that's not the only way to manage time series data.

> [ ... ] In earlier releases of each DBMS we tried to accomplish
> the same by adding pairs of timestamp columns to each table and then
> writing our own code to handle row filtering.  Partitioning isn't needed.
> Certainly partitioning by range could be used, but it would still require
> some manual efforts.

I've found that using range data types, together with overlap queries, works
quite well for managing time-series data instead of using pairs of timestamp
columns.  With range data types you can also create exclusion constraints to
ensure that you don't end up introducing overlapping ranges.
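
A small sketch of that pattern (schema is assumed; btree_gist is needed so the
plain-equality column can take part in the GiST exclusion constraint):

CREATE EXTENSION IF NOT EXISTS btree_gist;

CREATE TABLE employee_validity (
    emp_id       integer   NOT NULL,
    valid_during tstzrange NOT NULL,
    EXCLUDE USING gist (emp_id WITH =, valid_during WITH &&)
);

-- Rows valid at some point during October 2013:
SELECT *
  FROM employee_validity
 WHERE valid_during && tstzrange('2013-10-01', '2013-11-01');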

Either way requires adjusting your queries though, no?  And inserting and
maintaining the data?  I can appreciate wanting to be standards-compliant, but
this specific use case doesn't really provide much justification for using this
particular feature.  Perhaps there are better ones.

Thanks,

Stephen




Re: Which is the lowest-resource setup you know Postgres is used in?

2020-10-07 Thread raf
On Wed, Oct 07, 2020 at 01:53:44PM +0300, Dmitry Igrishin  
wrote:

> In many cases concurrency is not a problem and in fact SQLite may
> handle concurrent requests faster than Postgres. Since SQLite is
> server-less and access overhead is near to zero (compared to Postgres)
> each writer does its work quickly and no lock lasts for more than a
> few dozen milliseconds.
> On the other hand, Postgres is better in cases of really high concurrency.

Presumably this is no longer a problem, but many years
ago (between 14 and 10 years ago) I was using SQLite
for a low-traffic website (probably no more than 40
users at a time), and the database became corrupted so
often that I had to automate rebuilding it from the
latest backup and my own SQL logs. I was very silly.
Switching to Postgres was the real solution.

cheers,
raf





Re: [SOLVED] Re: UUID generation problem

2020-10-07 Thread Rob Sargent




On 10/6/20 9:35 AM, James B. Byrne wrote:

Thank you all for the help.  This is what ultimately resolved the issue for me:


[root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere --username=postgres
--host=localhost
Password for user postgres:
psql (11.8)
Type "help" for help.

idempiere(5432)=#  select current_schemas(true);
current_schemas
-
  {pg_catalog,public}
(1 row)

idempiere(5432)=# ALTER ROLE idempiere_dbadmin SET search_path TO
adempiere,public;
ALTER ROLE

idempiere(5432)=# \q

[root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere
--username=idempiere_dbadmin --host=localhost
Password for user idempiere_dbadmin:
psql (11.8)
Type "help" for help.

idempiere(5432)=#  select current_schemas(true);
 current_schemas
---
  {pg_catalog,adempiere,public}


I wonder what effect installing uuid-ossp in template1 /before/ starting 
with the idempiere installation might have had, such that 'create 
database idempiere;' would have put all the related functions in place 
immediately.





Re: [SOLVED] Re: UUID generation problem

2020-10-07 Thread Adrian Klaver

On 10/7/20 2:24 PM, Rob Sargent wrote:



On 10/6/20 9:35 AM, James B. Byrne wrote:
Thank you all for the help.  This is what ultimate resolved the issue 
for me:



[root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere 
--username=postgres

--host=localhost
Password for user postgres:
psql (11.8)
Type "help" for help.

idempiere(5432)=#  select current_schemas(true);
    current_schemas
-
  {pg_catalog,public}
(1 row)

idempiere(5432)=# ALTER ROLE idempiere_dbadmin SET search_path TO
adempiere,public;
ALTER ROLE

idempiere(5432)=# \q

[root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere
--username=idempiere_dbadmin --host=localhost
Password for user idempiere_dbadmin:
psql (11.8)
Type "help" for help.

idempiere(5432)=#  select current_schemas(true);
 current_schemas
---
  {pg_catalog,adempiere,public}


I wonder what affect installing uuid-ossp in template1 /before/ starting 
with the idempiere installation might have had.  Such that 'create 
database idempiere;' would have put all the related functions in place 
immediately?


Well, the issue was not the extension install; the extension was there. The 
problem was the hide-and-seek with the search_path.  The idempiere_dbadmin role 
had a per-database setting that overrode the default search_path and caused 
non-schema-qualified calls to the functions to fail for that role.
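
For reference, one way to spot overrides like that (psql's \drds shows the same 
information):

SELECT r.rolname, d.datname, s.setconfig
  FROM pg_db_role_setting s
       LEFT JOIN pg_roles    r ON r.oid = s.setrole
       LEFT JOIN pg_database d ON d.oid = s.setdatabase;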




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [SOLVED] Re: UUID generation problem

2020-10-07 Thread Rob Sargent




On 10/7/20 3:28 PM, Adrian Klaver wrote:

On 10/7/20 2:24 PM, Rob Sargent wrote:



On 10/6/20 9:35 AM, James B. Byrne wrote:
Thank you all for the help.  This is what ultimate resolved the issue 
for me:



[root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere 
--username=postgres

--host=localhost
Password for user postgres:
psql (11.8)
Type "help" for help.

idempiere(5432)=#  select current_schemas(true);
    current_schemas
-
  {pg_catalog,public}
(1 row)

idempiere(5432)=# ALTER ROLE idempiere_dbadmin SET search_path TO
adempiere,public;
ALTER ROLE

idempiere(5432)=# \q

[root@accounting-2 ~ (master)]#  psql -E --dbname=idempiere
--username=idempiere_dbadmin --host=localhost
Password for user idempiere_dbadmin:
psql (11.8)
Type "help" for help.

idempiere(5432)=#  select current_schemas(true);
 current_schemas
---
  {pg_catalog,adempiere,public}


I wonder what affect installing uuid-ossp in template1 /before/ 
starting with the idempiere installation might have had.  Such that 
'create database idempiere;' would have put all the related functions 
in place immediately?


Well the issue was not the extension install. It was there. The problem 
was the hide and seek with the search_path.  The idempiere_dbadmin role 
had a database setting that overrode the default search_path and 
prevented non-schema qualified calls to the functions to fail for that 
role.




Agreed, but I wasn't sure that the interleaving of the idempiere_dbadmin role 
creation and the uuid-ossp install didn't have a hand in that effect.  But that's 
water under the bridge now.





Re: Which is the lowest-resource setup you know Postgres is used in?

2020-10-07 Thread Dmitry Igrishin
On Thu, Oct 8, 2020 at 00:14, raf wrote:
>
> On Wed, Oct 07, 2020 at 01:53:44PM +0300, Dmitry Igrishin  
> wrote:
>
> > In many cases concurrency is not a problem and in fact SQLite may
> > handle concurrent requests faster than Postgres. Since SQLite is
> > server-less and access overhead is near to zero (compared to Postgres)
> > each writer does its work quickly and no lock lasts for more than a
> > few dozen milliseconds.
> > On the other hand, Postgres is better in cases of really high concurrency.
>
> Presumably, this is no longer a problem, but many years
> ago (between 14 and 10 years ago) I was using sqlite
> for a low traffic website (probably no more than 40
> users at a time), and the database became corrupted so
> often that I had had to automate rebuilding it from the
> latest backup and my own sql logs. I was very silly.
> Switching to postgres was the real solution.
These days SQLite is a very robust solution if used properly.




Re: Which is the lowest-resource setup you know Postgres is used in?

2020-10-07 Thread raf
On Thu, Oct 08, 2020 at 01:14:02AM +0300, Dmitry Igrishin  
wrote:

> On Thu, Oct 8, 2020 at 00:14, raf wrote:
> >
> > On Wed, Oct 07, 2020 at 01:53:44PM +0300, Dmitry Igrishin 
> >  wrote:
> >
> > > In many cases concurrency is not a problem and in fact SQLite may
> > > handle concurrent requests faster than Postgres. Since SQLite is
> > > server-less and access overhead is near to zero (compared to Postgres)
> > > each writer does its work quickly and no lock lasts for more than a
> > > few dozen milliseconds.
> > > On the other hand, Postgres is better in cases of really high concurrency.
> >
> > Presumably, this is no longer a problem, but many years
> > ago (between 14 and 10 years ago) I was using sqlite
> > for a low traffic website (probably no more than 40
> > users at a time), and the database became corrupted so
> > often that I had had to automate rebuilding it from the
> > latest backup and my own sql logs. I was very silly.
> > Switching to postgres was the real solution.
>
> As for now SQLite is a very robust solution if used properly.

That's great to hear.





Re: How to migrate column type from uuid to serial

2020-10-07 Thread Hemil Ruparel
I was thinking that UUID was not a very good choice for the id; serial would be a
better one, because I don't have a billion customers, more like a thousand. So
when I saw that the payment gateway's customer ID cannot be longer than 32
characters, I decided UUID is overkill, and I want to migrate to a serial int as
the primary key instead.

On Wed, Oct 7, 2020 at 10:48 PM Francisco Olarte 
wrote:

> Hemil:
>
> On Wed, Oct 7, 2020 at 2:49 PM Hemil Ruparel 
> wrote:
> > I was integrating a payment gateway for my app when I noticed its
> maximum length of customer id string is 32. SIze of UUID is 36 (32
> characters and 4 dashes). So I want to change the type of customer id to
> serial. The problem is by now, the column is being used at many places. How
> to migrate the column to serial without dropping the data?
>
> An uuid is just a 128 bit number, as noted by many. Your problem is
> the default encoding is 32 hex digits plus 4 hyphens. If your payment
> gateway uses a 32 CHARs string  ( "maximum length of customer id
> string is 32" ) you can just use other encodings. Passing to integer
> and using decimal wont cut it ( you'll need 39 digits ), but using
> just hex (without hyphens) will drop the string representation to 32
> characters ( you can encode with a subst and postgres accepts it
> without hyphens ).
>
> If you want to buy a little more space for your own purposes you can
> even fit 128 bits in 22 base64 chars with a couple pairs of bits to
> spare, and IIRC you can do it with a creative decode/encode step after
> killing the dashes.
>
> And if your payment gateway uses unicode codepoints instead of ASCII
> chars as units you could probably use more creative encodings ;-) ,
> but probably using  a "drop the dashes" subst in the interface will be
> your simpler option.
>
> Francisco Olarte.
>


Both types of replication from a single server?

2020-10-07 Thread Srinivasa T N
Hi All,
   Is it possible to have both types of replication (streaming and logical)
from a single server?
   If I have 3 servers A, B, and C, I want streaming replication from A to B
and logical replication from A to C.  Is that possible?

Regards,
Seenu.


Re: Both types of replication from a single server?

2020-10-07 Thread Michael Paquier
On Thu, Oct 08, 2020 at 11:43:26AM +0530, Srinivasa T N wrote:
>Is it possible to have both type of replications (streaming and logical)
> from a single server?

Yes.

>If I have 3 servers A,B and C, then I want to have streaming replication
> from A to B whereas logical replication from A to C.  Is it possible?

And yes.
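
A rough sketch of one way to wire that up (object and host names are
illustrative):

-- On A (postgresql.conf): wal_level = logical covers both physical streaming
-- and logical decoding:
--   wal_level = logical

-- On A: publish what C should receive.
CREATE PUBLICATION app_pub FOR ALL TABLES;

-- On C: subscribe to A.
CREATE SUBSCRIPTION app_sub
    CONNECTION 'host=server_a dbname=appdb user=replicator'
    PUBLICATION app_pub;

-- B is built as an ordinary physical standby of A, e.g. with
--   pg_basebackup -h server_a -D /var/lib/postgresql/data -R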
--
Michael

