Re: Handling time series data with PostgreSQL

2020-10-06 Thread Adalberto Caccia
Hi,
We have been using TimescaleDB as a PostgreSQL extension in my firm for two
years now; I've recently managed to upgrade it from pg10 to pg12, and to move
it from discrete VMs to Kubernetes as well.
Frankly speaking, being new to TimescaleDB at the time, I've found it easy
to manage, easy to scale (it's 100% compatible with PG streaming replication,
though unfortunately not with logical replication yet...), easy to install,
easy to upgrade... what else?
From a developer's perspective, it just adds "superpowers" to ordinary PG
tables, all under the hood. On disk it uses a "chunked" layout, where each
chunk holds a well-defined time range; and of course the time column on which
to index the data is simply passed as a parameter to the TimescaleDB call,
for each table that needs such power.
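
For instance, converting a plain table into a hypertable is a single call
(just a minimal sketch; the table and column names below are made up for
illustration):

-- example schema, names are illustrative only
CREATE TABLE conditions (
    time       timestamptz      NOT NULL,
    device_id  text             NOT NULL,
    value      double precision
);
-- 'time' is the column TimescaleDB will chunk/partition on
SELECT create_hypertable('conditions', 'time');
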
At the moment we're also using it for time-based aggregations, but only for
the coarse ones (30m --> 1h and 1h --> 1 day), while we're still handling the
finer ones (1s --> 1m and so on) in Kafka+Flink, which is a common scenario
for a streaming data platform anyway.
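
As a rough sketch of such a coarse roll-up, using TimescaleDB's continuous
aggregates over the hypothetical table above (this is the 2.x syntax; 1.x
differs slightly, and it's only an illustration, not our exact setup):

-- hourly roll-up of the illustrative 'conditions' hypertable
CREATE MATERIALIZED VIEW conditions_hourly
WITH (timescaledb.continuous) AS
SELECT time_bucket('1 hour', time) AS bucket,
       device_id,
       avg(value) AS avg_value
FROM conditions
GROUP BY bucket, device_id;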

Regards,
Adalberto


On Tue 6 Oct 2020 at 11:47, Jayaram  wrote:

> Dear All,
>
> I'm Jayaram S, an Oracle DBA. We are currently planning to develop a stock
> market application where 80% of the data is time-series data. We are in
> the process of choosing the right database for this requirement, especially
> for time series data. After multiple investigations, I found that PostgreSQL
> with TimescaleDB works better than other DBs.
>
> But I'm still new to PGSQL, and we want only open source technology to
> handle our requirements. It would be helpful if anyone could suggest how to
> implement time series concepts in a PostgreSQL database.
> It would be better if I could get proper docs or links with explanations.
>
> Thanks in advance.
>
> --
> Thanks & Regards,
> Jayaram S,
> Bangalore, India.
>


Re: WAL-G shipping to the cloud

2021-03-19 Thread Adalberto Caccia
Hi,
I'm using Wal-G like this, in pg12:

> archive_mode = always
> archive_command = 'wal-g wal-push %p'
> archive_timeout = 60
> restore_command = 'wal-g wal-fetch %f %p'


The only change for pg10 is that the last line (restore_command) has to go
into a separate file, recovery.conf.
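
On pg10 that would look roughly like this (just a sketch; standby_mode only
applies if the node is meant to run as a standby):

# recovery.conf (pg10 and older only)
restore_command = 'wal-g wal-fetch %f %p'
standby_mode = 'on'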

Please note, however, that I'm using this in Kubernetes, so the relevant
Docker image already has all the relevant ENV config for WAL-G loaded
beforehand.
WAL-G is really straightforward to configure, though.
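
For reference, the ENV in question is just the usual set of WAL-G variables,
something like this for S3 (placeholder values only; the exact variables
depend on your storage backend):

# placeholder values, adjust for your bucket/credentials
WALG_S3_PREFIX=s3://my-bucket/my-cluster
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=eu-west-1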

Adalberto




On Fri 19 Mar 2021 at 00:30, asli cokay  wrote:

> Thank you, Bruce. Actually I mean I am already using WAL-G; what I am curious
> about is how to automate testing of the shipping process, and I am looking
> for ideas. But thanks for your help.
>
> Bruce Momjian  wrote the following on Thu, 18 Mar 2021 at 17:24:
>
>> On Thu, Mar 18, 2021 at 05:18:30PM -0400, aslı cokay wrote:
>> > Hi all,
>> >
>> > I'd like to get Postgres log (WAL) files to the cloud, but before this
>> > process goes live I want to automate a test process, and after that I
>> > want it to go live.
>> >
>> > What I have in mind is setting archive_mode to on and archive_command to
>> > a script. After that I want to run pgbench to generate more WAL files
>> > and trigger pg_switch_wal() to switch WAL files.
>> >
>> > Do you have any ideas about this, or has anyone shipped their WAL to the
>> > cloud who could give me an idea?
>>
>> Well, there are WAL-E and WAL-G, which archive to the cloud.
>> pgBackRest also supports cloud WAL storage.
>>
>> --
>>   Bruce Momjian  https://momjian.us
>>   EDB  https://enterprisedb.com
>>
>>   If only the physical world exists, free will is an illusion.
>>
>>


Re: Looking for some help with HA / Log-Shipping

2021-04-01 Thread Adalberto Caccia
Hi,
PostgreSQL 12 already moved the replication (recovery) parameters into the
main postgresql.conf file.
So on the standby node, instead of a recovery.conf file, just an empty
standby.signal file is needed in the $PGDATA directory to start PostgreSQL
as a standby replica on that node.
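
So the whole standby setup boils down to something like this (a minimal
sketch; host and user are placeholders):

# postgresql.conf on the standby (primary_conninfo replaces the old recovery.conf entry)
primary_conninfo = 'host=primary.example.com port=5432 user=replicator'

# plus an empty signal file in the data directory:
#   touch $PGDATA/standby.signal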

However, as directed by the official documentation you've mentioned,
section 26.2 (https://www.postgresql.org/docs/12/warm-standby.html) really
advises that we set up log shipping to better support the standby recovery
process, and make sure we can easily recover from any transient failure.
I'm a very happy user of wal-g for this; I'm currently running 4 different
PostgreSQL clusters, some pg10 and some pg12, and wal-g works great for any
of them. I'm running them in Kubernetes, so I'm configuring wal-g directly
via the environment of the PostgreSQL containers, making sure wal-g is
installed in my custom PostgreSQL image, of course. Then it is configured
just like this:

> archive_mode = always
> archive_command = 'wal-g wal-push %p'
> archive_timeout = 60
> restore_command = 'wal-g wal-fetch %f %p'
Adalberto



On Wed 31 Mar 2021 at 17:39, Laurent FAILLIE <l_fail...@yahoo.com> wrote:

> Replying to myself :)
>
> It seems pg_basebackup did all the tricks, even restarting the slave, and
> it is now in standby mode.
> Is there anything else I have to do?
>
> Thanks
>
>
> On Wednesday 31 March 2021 at 12:51:29 UTC+2, Laurent FAILLIE <
> l_fail...@yahoo.com> wrote:
>
>
> Hello,
>
> We are running PostgreSQL 12 and I'm trying to set up streaming WAL
> replication.
>
> I followed the official documentation (
> https://www.postgresql.org/docs/12/warm-standby.html ) as well as this
> tutorial:
>
> https://scalegrid.io/blog/managing-high-availability-in-postgresql-part-1/
>
> What I did is:
>
> * created the master db
> * created the replication role
> * updated postgresql.conf
> * finally replicated everything on the slave node using pg_basebackup (with
> the -R option)
>
> But now I'm lost with §26.2.4 Setting up a standby server.
>
> In which file do I have to put
>
> primary_conninfo
>
> and the other options?
>
> Thanks
>
> Laurent
>
> ps: my goal is to install the PAF
>