RE: pg_dump to a remote server

2018-04-17 Thread Brent Wood
from the pg_dump docs...

...
-Z 0..9
--compress=0..9

   Specify the compression level to use. Zero means no compression. For the 
custom archive format, this specifies compression of individual table-data 
segments, and the default is to compress at a moderate level. For plain text 
output, setting a nonzero compression level causes the entire output file to be 
compressed, as though it had been fed through gzip; but the default is not to 
compress.
...

so perhaps running (on host2):

pg_dump -h host1 -Z 9 dbname | zcat | psql -h host2 dbname

will generate compressed text output which is uncompressed & fed into psql.
One caveat: pg_dump runs on whichever host starts the pipeline, so in the
command above the compression happens on host2, after the data has already
crossed the wire. To compress the traffic in transit, pg_dump itself needs to
run on host1.
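A sketch of that variant, run from host2, assuming ssh access to host1 and that
pg_dump on host1 can connect to the database locally:

# run on host2: compress on host1, decompress locally, restore locally
ssh host1 "pg_dump -Z 9 dbname" | zcat | psql dbname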

Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529



From: Christoph Moench-Tegeder [c...@burggraben.net]
Sent: Tuesday, April 17, 2018 18:00
To: pgsql-general@lists.postgresql.org
Subject: Re: pg_dump to a remote server

## Ron (ronljohnso...@gmail.com):

> > pg_dump -h host1 dbname | psql -h host2 dbname
>
> But that assumes --format=plain which will send a whole lot of
> uncompressed text across the wire.

You can also use pg_restore with standard input, i.e. pg_dump | pg_restore.
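For example (a sketch; the custom format written by -Fc is compressed by
default, so no plain text crosses the pipe):

pg_dump -h host1 -Fc dbname | pg_restore -h host2 -d dbname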

Regards,
Christoph

--
Spare Space.






pgloader question - postgis support

2018-06-29 Thread Brent Wood
Hi,


I'm looking at pgloader to automate data loading into a Postgis enabled 
Postgres database.


I have seen in the tutorial how the internal point type is supported, and how 
the postgis extension can be pre-installed by pgloader if necessary, but I 
can't see how I might take x & y values & save as a postgis point. I guess I 
could do this via a trigger, but that is hopefully an unnecessary workaround.


If I read data in (say from CSV):


1,tan0104,1,173.567,-43.678

...


to a Postgis table:

create table station (
  id         int primary key,
  trip       char(7),
  station_no int,
  lon_s      decimal(7,4),
  lat_s      decimal(6,4),
  startp     geometry(POINT,4326)
);



the startp column is populated by the SQL:

startp = ST_SetSRID(ST_MakePoint(lon_s, lat_s), 4326)

This creates a point feature from the x (lon) & y (lat) coordinates, and
specifies the coordinate reference system as lat/lon degrees WGS84 (EPSG:4326)
to match the column specification.


How can I implement that in the pgloader command to load the CSV file?
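Something along these lines is what I'm after (a sketch only - the file path
and connection string are hypothetical, and I'm assuming pgloader's
AFTER LOAD DO clause can carry the transformation):

LOAD CSV
     FROM '/path/to/stations.csv'
     INTO postgresql:///mydb?station (id, trip, station_no, lon_s, lat_s)
     WITH fields terminated by ','

  AFTER LOAD DO
  $$ update station
        set startp = ST_SetSRID(ST_MakePoint(lon_s, lat_s), 4326); $$;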



Thanks


Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529








Re: Ways to deal with large amount of columns;

2018-08-31 Thread Brent Wood
You need to normalise your data model... the basic dataset you describe will 
probably fit in one table with three columns, but that might be simplistic...

All your aggregate queries are pretty basic SQL statements, which could be 
represented as views or made simpler to invoke by turning them into functions.

So, one table describing agents, with columns for agent, date and projected
cashflow, will store all your data (index agent & date for performance).
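A sketch of such a table (names assumed; the example queries below keep
"table" as a placeholder for its name):

create table cashflow (
  agent    text not null,
  date     date not null,
  cashflow numeric,
  primary key (agent, date)
);
create index on cashflow (date);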
eg: for a summary for all agents for the last 12 months:

select agent, sum(cashflow)
from table
where date >= now() - interval '1 year'
group by agent
order by agent;

or a cashflow aggregate summary for the latest month:

select sum(cashflow), avg(cashflow), min(cashflow), max(cashflow)
from table
where date = (select max(date) from table);

or get the agent with the highest projected cashflow for the latest month:

select agent
from table
where date = (select max(date) from table)
  and cashflow = (select max(cashflow) from table
                  where date = (select max(date) from table));


From: a <372660...@qq.com>
To: pgsql-general
Sent: Thursday, August 30, 2018 9:14 PM
Subject: Ways to deal with large amount of columns;

Hi all:

I need to make a table containing projected monthly cashflow for multiple
agents (around 10,000). Therefore, the column number would be 1000+.

I would need to perform simple aggregate functions such as count, sum or
average on each projected cashflow.

Is there any way of doing this? Would there be anything like defining a macro
in C, so that I can manipulate multiple columns by a simple word representing
them?

Thanks so much!
Shore

   

Re: FDW to postGIS Database

2022-03-06 Thread Brent Wood
 In line below...

On Monday, March 7, 2022, 05:36:34 AM GMT+13, Susan Hurst 
 wrote:  
 
 Let say that I have a database named stp. I also have another database 
named geo that is a PostGIS database for storing geopolitical and 
geospatial data. I want to set up a foreign data wrapper in stp to 
connect to geo, so that for each address in stp, I only have to store 
the primary key value of the lowest level geo unit, such as a city. I 
can always display the city's county, state, country, whatever in views 
in stp. I plan to have other databases that need geo data so I want a 
single source of truth for geopolitical and geospatial data.

My questions are:
1. Do I have to install PostGIS in stp? ...or is it sufficient to
install PostGIS only in geo?

If you want to use Postgis functions & datatypes in stp, you will need Postgis
there.

2. Do I need the postgis extension in stp?  (I'm thinking yes, but I
don't want to guess.)

Isn't this the same as (1) above? (Postgis is installed via "create extension
postgis;")

3. I want to geocode addresses in stp. Do I need to install any
geocoding software in stp? ...or can it be used from geo?

If you can geocode in geo & access the resulting data via fdw in stp, you
should not need geocoding tools in stp. If you need to access spatial data
from geo in stp & geocode in stp, you will need the geocoding tools in stp.

Thanks for your help!

Sue

-- 
--
Susan E Hurst
Email:  susan.hu...@brookhurstdata.com
Mobile: 314-486-3261


  

Re: Best Open Source OS for Postgresql

2023-02-01 Thread Brent Wood
I would suggest any major Ubuntu-based distro; I have a personal preference
for Mint.

I'd also suggest you set up the official Postgres repo, rather than using a 
distro repo, so your installs come directly from there.

https://wiki.postgresql.org/wiki/Apt


Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529

From: Tony Shelver 
Sent: Wednesday, February 1, 2023 19:19
To: pgsql-general 
Subject: Re: Best Open Source OS for Postgresql


Copied to the list
On Wed, 1 Feb 2023 at 08:18, Tony Shelver <tshel...@gmail.com> wrote:

On Wed, 1 Feb 2023 at 08:04, Tony Shelver <tshel...@gmail.com> wrote:

On Tue, 31 Jan 2023 at 15:10, Marc Millas <marc.mil...@mokadb.com> wrote:
Sorry for the inappropriate "reply".

If you check the debian postgis repo, you'll find that it's NOT possible to
choose a postgis version. It's possible for postgis 2.4 and 2.5, but then ALL
3.x versions are inaccessible except one, and which one changed from time to
time. (You MUST ask for postgis 3 without being able to ask for 3.0 or 3.1
or... it's like asking for postgres 9 without .5 or .6.)

Two of my customers reverted to a RH-family linux because they were unable to
install the requested postgres/postgis version on debian. When I asked the
team, the reply was: we cannot package for all cross possibilities (i.e. 5
postgres x 6 postgis, less some impossibilities according to the postgis
compatibility matrix).


Maybe I am not understanding this, but have you checked the UbuntuGIS source?
I know it's not pure Debian, but...

Sent before complete:

UbuntuGIS stable shows PostGIS 2.4 and 2.5 available, as well as 3.1 and 3.2:
https://launchpad.net/~ubuntugis/+archive/ubuntu/ppa/+index?batch=75&memo=75&start=75
Got this link from https://wiki.ubuntu.com/UbuntuGIS via
https://postgis.net/install/
Also note that UbuntuGIS is based off the DebianGIS project:
https://wiki.debian.org/DebianGis

We run both Ubuntu and Centos servers.  The change to Centos licensing has led 
us to support it only for one specific application that is embedded in Centos 
that we are trying to end-of-life ASAP.  At least Ubuntu server has a 5 year 
support window, and Ubuntu has now announced a limited 'free' Pro option for 
smaller businesses that provides a 10 year window.


Regards




Re: Re-2: New to PostgreSQL - looking for query writing tools

2023-02-02 Thread Brent Wood
The free version of Valentina Studio and DBeaver are the two I prefer.

If you use Postgis for spatial data, DBeaver understands geometry data & can
display it on a simple map or as tabular output, with a built-in Leaflet
facility.

https://valentina-db.com/en/studio/download
https://dbeaver.io/


Brent Wood


From: Sacha Kerres
Sent: Thursday, February 02, 2023 20:51
To: Giovanni Biscontini; pgsql-general@lists.postgresql.org
Subject: Re-2: New to PostgreSQL - looking for query writing tools

Hi,
we use sqlmanager.net tools: Query, DBCompare, Data Compare, Automatic Tasks,
Backup+Restore, etc...

There is a free version, but for professional work you have to buy a license.
And there is only a Windows version.


Kind regards

Sacha Kerres
Kerres-EDV Beratung
Deglhof 45-47
93142 Maxhütte-Haidhof

Tel. +49 9471 60 14 14
Mobil. +49 171 210 48 46
E-Mail. sker...@kerres-edv.de



Re: New to PostgreSQL - looking for query writing tools - 2 February 2023, 08:34
From: Giovanni Biscontini <biscontin...@es2000.it>
To: pgsql-general@lists.postgresql.org
Hi Troy,

We use both pgAdmin4 and DBeaver (https://dbeaver.io/) Community version, but
mainly because we need control of other functions of the database server;
querying is only a part of the job. Another interesting tool is Aquafold
(https://www.aquafold.com): Aqua Data Studio is interesting (only tested), but
it's a bit pricey...

Hope it helps, bye Giovanni

On Wed, 1 Feb 2023 at 18:00, Robin Tang <ro...@artie.so> wrote:
There's a bunch listed here: https://wiki.postgresql.org/wiki/PostgreSQL_Clients

I personally use Postico, but it's only available on Mac.

On Wed, Feb 1, 2023 at 8:43 AM Troy Sherven <tsher...@mcd.net> wrote:

Good morning,



First of all, I am not a database professional.  I work for a small company, 
and a portion of my job includes report writing (crystal, i-Net, etc), Excel 
analysis, etc.  We recently migrated to a new ERP that runs on a PostgreSQL 
database.  Our previous system was on a MSSQL database.  I have fairly basic 
experience writing SQL queries.  In the past I would use Microsoft SQL Server 
Management Studio to help in writing my SQL queries (I’d use the diagram, 
criteria, and SQL panes) instead of just writing SQL statements.



I’m looking for recommendations for tools that work similarly to Microsoft SQL
Server Management Studio.  So far I’ve played around with pgAdmin (doesn’t
seem to have the same query writing abilities) and EMS SQL Query (seems more
cumbersome than the Microsoft tool).  Are there any other tools I should be
looking at for query writing help? I’d like something visual, with a grid, to
help write queries.



Any recommendations would be appreciated!



Troy


--





Best regards






Dott. Giovanni Biscontini
Divisione Software

Str. Ponte Alto Sud, 74
41123 Modena (MO)

Phone: 059 452094
Fax: 059 8672171
E-mail: biscontin...@es2000.it
Skype: g.biscontini.eurosystem2000
WEB: https://www.es2000.it












Re: Backup schema without data

2023-04-06 Thread Brent Wood
Looking at the result of:

pg_dump --help

Usage:
  pg_dump [OPTION]... [DBNAME]
...
  -n, --schema=PATTERN     dump the specified schema(s) only
  -s, --schema-only        dump only the schema, no data
...

So: pg_dump -s -n <schema> <dbname>

It works for me...

Cheers
Brent Wood


On Friday, April 7, 2023 at 12:40:58 PM GMT+12, Atul Kumar 
 wrote:  
 
Hi,

Please help me with how I can take a backup of one single schema, without its
data, using the pg_dump utility?

So far, I could not find anything suitable for doing so.

Regards,
Atul
  

Re: Relocatable Binaries (RPMs) : custom installation path for PostgreSQL

2019-01-01 Thread Brent Wood
Have you considered symlinking the default paths to where you want them to
reside? That approach allows updates & upgrades to work as they expect,
without changing default paths or config files.

Much as described here:
https://help.cloud66.com/rails/how-to-guides/databases/shells/change-psql-directory.html
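For example (a sketch only; paths assume the PGDG RPM defaults and the custom
locations from your option file, and the server should be stopped first):

mkdir -p /u01/postgres /u02/pgdata01/11

# relocate the binaries, leaving a symlink at the default path
mv /usr/pgsql-11 /u01/postgres/11
ln -s /u01/postgres/11 /usr/pgsql-11

# same idea for the data directory
mv /var/lib/pgsql/11/data /u02/pgdata01/11/data
ln -s /u02/pgdata01/11/data /var/lib/pgsql/11/data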



Cheers,
  Brent Wood

 
On 12/31/18 8:19 AM, chiru r wrote:
> Hi All,
> 
> I wanted to install the PostgreSQL-11 Software with Custom 
> installation/binary paths.
> 
> Currently I am using EDB one click installer software with option file 
> like below to achieve, However EDB stopped and no longer provides one 
> click installer for Linux.
> 
> *Example:*
> 
> PostgreSQL install software file : *postgresql-9.6.9-1-linux-x64.run*
> 
> *cat Optionsfile.txt*
> mode=unattended
> datadir=/u02/pgdata01/9.6/data
> serverport=5432
> prefix=/u01/postgres/9.6
> I have installed CommunityPostgreSQL RPMs and are going into 
> "/usr/pgsql-11/" by default.
> 

> #-> pwd
> /usr/pgsql-11
> 
> [root@user:/usr/pgsql-11]#
> 
> #-> ls -l
> total 16
> 
> drwxr-xr-x 2 root root 4096 Dec 21 13:49 bin
> drwxr-xr-x 3 root root 4096 Dec 21 13:49 doc
> drwxr-xr-x 3 root root 4096 Dec 21 13:49 lib
> drwxr-xr-x 8 root root 4096 Dec 21 13:49 share
> 
> Please let us know how to get the PostgreSQL-11 installed in above 
> custom paths using RPMs? .
> 
> 
> Thanks,
> 
> Chiranjeevi> 

   

Re: Static PostgreSQL Binaries (Linux + Windows)

2019-01-13 Thread Brent Wood
Does this support extensions such as Postgis & Timescale?

Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529






From: Zach van Rijn 
Sent: Sunday, January 13, 2019 16:57
To: pgsql-general@lists.postgresql.org
Subject: Static PostgreSQL Binaries (Linux + Windows)

Hi all,


I've been busy porting popular open-source software to various
platforms (Linux + Windows) via musl- and/or MinGW- based tools,
as part of a project (and future distro?) called 'xstatic' [1].

All packages are statically linked and have zero dependencies,
they can be easily reproduced and audited, they are best suited
for use in environments where software must behave consistently,
and may have additional performance / safety benefits.

I am pleased to announce the immediate availability of binaries
(and source code / build scripts) for the following releases:

release  datelocation
---  --  --
latest   n/a https://xstatic.musl.cc/postgresql/
11.1 2018-11-08  https://xstatic.musl.cc/postgresql-11.1/
10.6 2018-11-08  https://xstatic.musl.cc/postgresql-10.6/
 9.6.11  2018-11-08  https://xstatic.musl.cc/postgresql-9.6.11/
 9.5.15  2018-11-08  https://xstatic.musl.cc/postgresql-9.5.15/
 9.4.20  2018-11-08  https://xstatic.musl.cc/postgresql-9.4.20/
 9.3.25  2018-11-08  https://xstatic.musl.cc/postgresql-9.3.25/


Within each top-level directory, you will find, for each architecture/ABI
combination such as 'riscv32-linux-musl' (the target platform where the
binaries should run), either:

(1) Directory tree (browse / download individual binaries); or
   e.g., https://xstatic.musl.cc/postgresql/riscv32-linux-musl/

(2) Tarball containing the above contents, with a sha512sum that
   is verifiable against '//SHA512SUMS'. Just extract
   and run (or build/link your own software against libraries).


PostgreSQL has been built for the following platforms, however,
not all platforms have been tested (please feel free to help):

 * aarch64-linux-musleabi
 * aarch64_be-linux-musl
 * arm-linux-musleabi
 * arm-linux-musleabihf
 * armeb-linux-musleabi
 * armeb-linux-musleabihf
 * armel-linux-musleabi
 * armel-linux-musleabihf
 * armv5l-linux-musleabihf
 * armv7l-linux-musleabihf
 * armv7m-linux-musleabi
 * armv7r-linux-musleabihf
 * i486-linux-musl
 * i686-linux-musl
 * i686-w64-mingw32
 * m68k-linux-musl
 * microblaze-linux-musl
 * microblazeel-linux-musl
 * mips-linux-musl
 * mips-linux-musln32sf
 * mips-linux-muslsf
 * mips64-linux-musl
 * mips64-linux-musln32
 * mips64-linux-musln32sf
 * mips64el-linux-musl
 * mips64el-linux-musln32
 * mips64el-linux-musln32sf
 * mipsel-linux-musl
 * mipsel-linux-musln32
 * mipsel-linux-musln32sf
 * mipsel-linux-muslsf
 * or1k-linux-musl
 * powerpc-linux-musl
 * powerpc-linux-muslsf
 * powerpc64-linux-musl
 * powerpc64le-linux-musl
 * powerpcle-linux-musl
 * powerpcle-linux-muslsf
 * riscv32-linux-musl
 * riscv64-linux-musl
 * s390x-linux-musl
 * sh2-linux-musl
 * sh2-linux-muslfdpic
 * sh2eb-linux-musl
 * sh2eb-linux-muslfdpic
 * sh4-linux-musl
 * sh4eb-linux-musl
 * x86_64-linux-musl
 * x86_64-linux-muslx32
 * x86_64-w64-mingw32


Quickly testing on Ubuntu 14.04 LTS (GNU/Linux 3.4.98 armv7l):

 $ file ./armv7l-linux-musleabihf/bin/psql
 psql: ELF 32-bit LSB  executable, ARM, EABI5 version 1 (SYSV),
 statically linked, stripped

 $ ./armv7l-linux-musleabihf/bin/psql --version
 psql (PostgreSQL) 11.1

 $ ./armv7l-linux-musleabihf/bin/psql \
   -h pellefant.db.elephantsql.com\
   -U abcdefgh
 Password:
 psql (11.1, server 9.5.2)
 Type "help" for help.

 abcdefgh=>


The directory listing looks something like: http://ix.io/1yaV

That said, if you find bugs or encounter issues, please file a
bug report here [2]. Windows support may need tweaking.


Regards,

ZV


[1]: https://xstatic.musl.cc/

[2]: https://git.zv.io/xstatic/builder/issues







Re: Can anyone please provide me list of customers using postgreSQL

2019-01-16 Thread Brent Wood
One comment I thought valuable, after a large New Zealand Govt agency
implemented a national GIS suite. This was their first install using Postgis
instead of the usual MS SQL Server.

I asked how they found Postgis/Postgres as a platform, and got a very succinct
reply:

"It's so easy to connect to, and sh** it's fast!"

In general, you don't need to know much more...
Brent Wood

  From: "Ramamoorthi, Meenakshi" 
 To: pgsql-general  
 Sent: Wednesday, January 16, 2019 8:42 AM
 Subject: Can anyone please provide me list of customers using postgreSQL
   
 Dear folks:    1)Can someone please send me a link 
of all companies using PostgreSQL ? 2)Both government and private companies 
using PostgreSQL 3)Any security issues found earlier and the steps taken 
for resolution or how it was mitigated. 4)Advantages of PostgreSQL compared 
to other databases like Oracle, MySQL etc.       Thanks and best regards 
Meenakshi Ramamoorthi             From: Alexander Farber 

Sent: Tuesday, January 15, 2019 11:24 AM
Cc: pgsql-general 
Subject: Re: aggregate functions are not allowed in UPDATE    And I should 
better change the avg_time column from TEXT to TIMESTAMPTZ (and use TO_CHAR on 
it later down the road) so that I can compare my players    Regards Alex 

   

Re: Subquery to select max(date) value

2019-02-14 Thread Brent Wood
I have not really followed this thread, but would not a query along the lines
of:

select *
from activity
where person_id = n
  and timestamp = (select max(timestamp) from activity
                   where person_id = n);

give the required answer, i.e. always return the latest result for the
specified person_id?
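(An equivalent using DISTINCT ON - a sketch, assuming the same activity table
and column names:)

select distinct on (person_id) *
from activity
where person_id = n
order by person_id, timestamp desc;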


Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529






From: Rich Shepard 
Sent: Thursday, February 14, 2019 12:13
To: pgsql-generallists.postgresql.org
Subject: Re: Subquery to select max(date) value

On Wed, 13 Feb 2019, Adrian Klaver wrote:

> Given a sufficiently large date range that may not be true as you may have
> contacted a given person multiple times during that range and generated
> multiple activities records.

Adrian,

This is true as it has occurred. I want only the most recent activity row
associated with that person_id. (NB: while I'm really comfortable with DDL
statements my DML experience is seriously lacking and that's what I need to
improve now.)

I've just read a couple of blog posts on the LATERAL join added in 9.3 and
understand it in theory. Properly applying it to my application is now my
focus (and I need to re-read Andrew's example very closely.)

Best regards,

Rich






Re: Postgres for SQL Server users

2019-05-06 Thread Brent Wood
 Hi Igal,

One relevant comment I found interesting a couple of years ago...

A New Zealand Govt agency was installing an institutional GIS system (several
thousand potential users). It supported different back-end spatial databases.
Previous installs of this system for other clients had used MS SQL Server;
this was the first installed on a Postgres/Postgis back end.

As it was about to go into action as a live, production system, I was speaking
to one of the team doing the installation, and asked how they had found
working with Postgres instead of SQL Server. The reply was worth a few formal
benchmarks:

"It's so easy to connect to and sh*t it's fast!"

Might be a useful comment for your presentation :-) A few years on & it is
still working fine...


Cheers...
  Brent Wood


On Tuesday, May 7, 2019, 7:21:40 AM GMT+12, Igal Sapir  
wrote:  
 
 Next month I'll be making a presentation about Postgres to a SQL Server crowd 
in L.A. at their SQL Saturday event.
I was wondering if anyone has any tips that are specific for SQL Server users?  
Best features?  Known issues?  Common rebuttals?

Thanks,
Igal

  

Re: Postgres for SQL Server users

2019-05-07 Thread Brent Wood
I have not used SS for spatial data, but I don't have a Postgres database
without Postgis installed. The OSGEO ecosystem and the synergies with other
FOSS GIS tools are fantastic.

And it does not stop with the Postgis extension. For time series data
(anything from fleet management to sensor data) Postgres has the (new)
TimescaleDB extension. I ran this very effectively with a 600,000,000 record
database of sensor readings from a research vessel - on a $400 laptop (with an
SSD) for testing/prototyping. The sensor data was stored in TimescaleDB
hypertables & the location data in Postgis geometry columns in those tables.
Significantly better performance than native Postgres.
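For instance, a sketch of that sort of hybrid table (names are hypothetical;
create_hypertable is TimescaleDB's partitioning call):

create extension if not exists timescaledb;
create extension if not exists postgis;

create table sensor_readings (
  ts      timestamptz not null,
  sensor  text not null,
  value   double precision,
  pos     geometry(POINT,4326)   -- Postgis geometry inside a hypertable
);
select create_hypertable('sensor_readings', 'ts');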
Also consider language support for database functions... pl/R supports some
very nice capabilities, especially for supporting websites. Instead of running
a Postgres query to return the data to plot via the web page, or storing
static plots in your CMS that need updating when you get new data, you can use
Postgres functions in pl/R to render the plot of the data to a file and return
the name of the file. The web site does no rendering; it just invokes the SQL
& displays the file that is returned. So the DB can return the data and/or the
graphic, and backing up your database backs up your functions. This paradigm
can work very effectively...
Generally, the FOSS ecosystem around Postgres offers an incredible array of 
tools and capabilities that I don't think any other db - FOSS or not - can 
provide. I have had limited exposure to Oracle, SQL Server, Sybase, Empress, 
Teradata, Netezza, DB2, Sqlite/Spatialite, Interbase & Informix. Of these, 
Postgres & Sqlite3 (which one depends on use cases) are all I use these days.


On Tuesday, May 7, 2019, 5:36:00 PM GMT+12, Tony Shelver 
 wrote:  
 
I have to agree on the geospatial (GIS) features.

I converted from SQL Server to Postgresql for our extended tracking database.
The SS geospatial feature set doesn't seem nearly as robust, complete or
performant as that supplied by PostGIS.

The PostGIS ecosystem of open source / 3rd party tools is also far bigger, for
anything to do with mapping. Openstreetmaps.org stores their world dataset on
Postgresql / PostGIS, and there are a ton of mostly open-source tools and
organizations that work with it, or any other PostGIS data, to provide a
complete GIS solution.

My first SS implementation had me backing out of storing geographic points in
the relevant SQL Server datatype, as the performance hit during loading was
just too big. Doing the same thing in Postgresql / PostGIS is hardly
noticeable.

Another feature of Postgres is that you are not restricted to just plpgsql as
an internal procedural language.

I am not an expert, but it also seems far easier to create, install and work
with major extensions in Postgresql than in SQL Server. I found installing the
GIS feature set in SS to be a bit of a pain back in the day.

On Tue, 7 May 2019 at 00:53, Michel Pelletier  
wrote:

On Mon, May 6, 2019 at 2:49 PM Adam Brusselback  
wrote:

I think the main "gotcha" when I moved from SQL Server to Postgres was I didn't 
even realize the amount of in-line t-sql I would use to just get stuff done for 
ad-hoc analysis. Postgres doesn't have a good way to emulate this. DO blocks 
cannot return resultsets, so short of creating a function and dropping it, it's 
not possible to get the same workflow.

Just ruminating here, and this has probably already been discussed in the
past, but I've always wanted something like a 'SELECT DO [LANGUAGE ...]
RETURNS rettype | TABLE (...) $$ RETURN [NEXT | QUERY] ... $$;' - but I
haven't had any serious problem with creating/dropping functions like you
mentioned.

-Michel
The lack of GUI tooling was also a huge "whoa" moment for me, which I still 
grapple with.

  

Re: Need a DB layout gui

2019-06-25 Thread Brent Wood
I use Valentina Studio these days...
https://www.valentina-db.com/en/get-free-valentina-studio


Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529






From: Adrian Klaver 
Sent: Tuesday, June 25, 2019 10:38
To: David Gauthier; pgsql-gene...@postgresql.org
Subject: Re: Need a DB layout gui

On 6/24/19 3:25 PM, David Gauthier wrote:
> Hi:
>
> I've been poking around
> https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools to
> see if there is anything that can read PG metadata in and display it
> graphically in a gui.  You know, the kind of picture that is 1 block per
> table with 1->many arrows connecting up the primary/foreign keys of
> different tables.  SQL Power Architect looked promising, but fails to do
> the linkage for some PK-FK relationships (buggy?) .
>
> Any suggestions on what might do this?

After my previous post I realized you said GUI:(

Do you want something that you can interact with?

Also SQL Workbench/J will display FK relationships:

http://www.sql-workbench.eu/FkDisplay_png.html


--
Adrian Klaver
adrian.kla...@aklaver.com








Re: Strategies for converting UTC data to local windows for arbitrary resolutions and timezones

2023-10-04 Thread Brent Wood
Slightly off topic, but has anyone tried TimescaleDB for timeseries databases?

The issues discussed here are still present there, as they apply to the
underlying Postgres ORDBMS.

We solve the problem (around 4 billion records of instrument sensor readings)
by using UTC for the "native" timestamp, and working in that, even though we
are half way around the world. The local times can easily be determined &
applied if desired, but by standardising on the reference time zone at the
start, things have "just worked" for around 15 years now.
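(For example - a sketch with assumed table/column names, where ts_utc is a
timestamptz column - local time can be derived on the fly:)

select ts_utc,
       ts_utc at time zone 'Pacific/Auckland' as ts_local
from readings;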


Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529

From: Lincoln Swaine-Moore 
Sent: Thursday, October 5, 2023 08:30
To: Alban Hertroys 
Cc: Marian Wendt ; pgsql-general 

Subject: Re: Strategies for converting UTC data to local windows for arbitrary 
resolutions and timezones

> What I do in such cases is to add an extra column with the UTC timestamp to 
> serve as a linear scale to the local timestamps. That also helps with 
> ordering buckets in reports and such during DST changes (especially the ones 
> where an hour repeats).

> For hours and quarter hours I found it to be fairly convenient to base a view 
> on a join between a date calendar and an (quarter of an) hour per UTC day 
> table, but materialising that with some indexes may perform better (at the 
> cost of disk space). I do materialise that currently, but our database server 
> doesn’t have a lot of memory so I’m often not hitting the cache and 
> performance suffers a bit (infrastructure is about to change for the better 
> though).

That's an interesting idea, but I'm not sure I fully understand. Assuming 
you're aggregating data: what do you group by? For instance, at an hourly 
resolution, if you group by both the UTC timestamp and the local one, you might 
end up, say, dividing an hour-long bucket in two for time zones with 
half-hour-based offsets, no?

Thanks for the detailed writeup! Definitely helpful to learn more about what 
people are using in production to handle this sort of thing.

--
Lincoln Swaine-Moore


Re: Postgres limitation in user management

2023-11-04 Thread Brent Wood
>> We have 2 sets of database user groups –
>>
>>  1. App – who owns the application schemas (and tables)
>>  2. Support – who provides db support
>>
>> We want Support users to have no SELECT or DML privilege but only ALTER TABLE
>> to perform any troubleshooting in the database.

>This seems strange to me. What kind of troubleshooting requires to
>ability to ALTER TABLE but not to do DML?

Where your db admin & data admin roles are separated. Data security
requirements can mean minimal access to data, which a DBA does not necessarily
require - especially when the DBA role is contracted out.

Somewhat along this line, we have offloaded user management to AD, so our DB
user management is now carried out by in-house IT, who are not DBAs and have
no access to data.

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Peter J. Holzer
Sent: Sunday, November 05, 2023 10:33
To: pgsql-general@lists.postgresql.org
Subject: Re: Postgres limitation in user management

On 2023-11-03 06:26:21 +, Kar, Swapnil (TR Technology) wrote:
> We have 2 sets of database user groups –
>
>  1. App – who owns the application schemas (and tables)
>  2. Support – who provides db support
>
> We want Support users to have no SELECT or DML privilege but only ALTER TABLE
> to perform any troubleshooting in the database.

This seems strange to me. What kind of troubleshooting requires to
ability to ALTER TABLE but not to do DML?

hp

--
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"


Re: Foreign Data Wrappers

2024-09-06 Thread Brent Wood
The most interesting (IMHO) FDW implementation I have used is an Australian
WFS service mounted locally as a Postgis table.

Not wonderful performance, but full spatial query functionality. If
performance is an issue, "create table as select * from FDW;" creates a local
cache...

Very useful functionality.
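(A refreshable variant of that caching idea - a sketch, with a hypothetical
foreign table name:)

create materialized view wfs_cache as
  select * from remote_wfs_layer;    -- foreign table served via the FDW

refresh materialized view wfs_cache; -- re-pull from the remote source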

Brent Wood

On Saturday, September 7, 2024 at 12:55:37 PM GMT+12, Gus Spier 
 wrote:  
 
I find myself in new territory: Foreign Data Wrappers (FDW). I stumble closer
to success through the documentation, YouTube videos, and various Google
resources. But I've come across a concept that intrigues me very deeply.

If I understand the concepts correctly, FDW not only makes other databases
available; it also offers access to .csv files, plain text, or just about
anything that can be bullied into some kind of query-able order. Has anyone
ever tried to connect to redis or elasticache? If so, how did it work out?

Regards,
Gus


--
Gus
gus.spier@gmail.com | 540-454-3074
"Characteropathic individuals adopt ideologies created by doctrinaire, often
schizoidal people, recast them into an active propaganda form, and disseminate
it with their characteristic pathological egotism and paranoid intolerance for
any philosophies which may differ from their own." (Political Ponerology,
Andrew Lobaczewski, 1984)

  

Re: Overlapping values (?) in multi-column partitioned tables

2024-09-11 Thread Brent Wood
I'm not sure of the less-than vs less-than-or-equals in this one, so it may be
my ignorance... but which partition gets records with a date of '2024-02-01'?
It seems that without a more explicit definition there is ambiguity for dates
at the partition limits, when those dates are common to multiple partitions.

To avoid this, shouldn't the definition be:

> xof=# create table t (pk bigint not null, ts timestamp not null) partition by 
> range (ts, pk);
> CREATE TABLE
> xof=# create table t1 partition of t for values from 
> ('2024-01-01'::timestamp, minvalue) to ('2024-01-31'::timestamp, maxvalue);
> CREATE TABLE
> xof=# create table t2 partition of t for values from 
> ('2024-02-01'::timestamp, minvalue) to ('2024-02-29'::timestamp, maxvalue);



Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Christophe Pettus 
Sent: Wednesday, September 11, 2024 10:13
To: pgsql-general 
Subject: Overlapping values (?) in multi-column partitioned tables

Hi,

I am clearly not understanding something.  Consider:

> xof=# create table t (pk bigint not null, ts timestamp not null) partition by 
> range (ts, pk);
> CREATE TABLE
> xof=# create table t1 partition of t for values from 
> ('2024-01-01'::timestamp, minvalue) to ('2024-02-01'::timestamp, maxvalue);
> CREATE TABLE
> xof=# create table t2 partition of t for values from 
> ('2024-02-01'::timestamp, minvalue) to ('2024-03-01'::timestamp, maxvalue);
> ERROR:  partition "t2" would overlap partition "t1"
> LINE 1: ...on of t for values from ('2024-02-01'::timestamp, minvalue) ...
>  ^
> xof=#

In what way do those partitions overlap?  In other words, there does not appear 
to be a value of (ts, pk) having '2024-01-01'::timestamp <= ts < 
'2024-03-01'::timestamp for any pk where it would be ambiguous which partition 
that row would go into.  (I'm imagining it on a cartesian plane, and there 
isn't any overlap between the rectangles those partition boundaries define.)

I'm obviously missing something, but... what?  Thank you!



Re: Effects of REVOKE SELECT ON ALL TABLES IN SCHEMA pg_catalog FROM PUBLIC

2024-09-12 Thread Brent Wood
Could you use FDWs in another, completely separate db for them to access, so
they have no direct access to the source data (or database) - only the linked
tables, which have no local data, other users, etc. present at all?

Which is sort of what was suggested: "Put some kind of restrictive app in
front of the database." This other db could be that app?

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Tom Lane 
Sent: Thursday, September 12, 2024 16:36
To: Andreas Joseph Krogh 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: Effects of REVOKE SELECT ON ALL TABLES IN SCHEMA pg_catalog FROM 
PUBLIC

Andreas Joseph Krogh  writes:
> Motivation: I have PowerBI users, with a separate ‘reporting’-role, accessing
> a database and I want to prevent them from listing all tables, users, 
> databases
> and view-definitions (to not see the underlying query).

Postgres is not designed to support this requirement.

> I'm evaluating this:
> REVOKE SELECT ON ALL TABLES IN SCHEMA pg_catalog FROM PUBLIC; REVOKE SELECT ON
> ALL TABLES IN SCHEMA information_schema FROM PUBLIC;
> Will this affect “normal behaviour”, ie. prevent the planner, or other
> internal mechanisms, from working properly for sessions logged in with the
> ‘reporting’-role?

Probably 95% of that stuff will still work.  By the same token, there
are plenty of information-leaking code pathways that will still leak.
For instance, your restricted user will have no trouble discovering
the OIDs and names of all extant tables, using something like

do $$ begin
  for tid in 1..100 loop
    if tid::regclass::text != tid::text then
      raise notice 'tid % is %', tid, tid::regclass;
    end if;
  end loop;
end $$;

Functions such as pg_describe_object still work fine, too.

Experimenting with psql, a lot of stuff is broken as expected:

busted=> \d mytable
ERROR:  permission denied for table pg_class

but some things still work:

busted=> \sf sin
CREATE OR REPLACE FUNCTION pg_catalog.sin(double precision)
 RETURNS double precision
 LANGUAGE internal
 IMMUTABLE PARALLEL SAFE STRICT
AS $function$dsin$function$

This is pretty much the other side of the same coin.
The reason you can still parse and plan a query is that
it does not occur to large parts of the backend that there
should be any reason to refuse to read a system catalog.
That carries over to these operations as well.

This recent thread might be enlightening:

https://www.postgresql.org/message-id/flat/18604-04d64b68e981ced6%40postgresql.org

If you have a requirement like this, I think the only safe
way to meet it is to not give those users direct SQL access.
Put some kind of restrictive app in front of the database.

regards, tom lane




Re: Need help with clarification on stored procedure support in PostGreSQL database

2021-03-09 Thread Brent Wood
They are discussed here:
https://kb.objectrocket.com/postgresql/function-vs-stored-procedure-602

(From that page:)

The main difference between a function and a stored procedure is that
user-defined functions do not execute transactions. This means that inside a
given function you cannot open a new transaction, nor can you commit or roll
back the current transaction. It is important to note that stored procedures
are just functions that can support transactions, and were introduced in
PostgreSQL 11. PostgreSQL functions can be created in a variety of languages
including SQL, PL/pgSQL, C and Python.
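To illustrate the transactional difference (a sketch; the table name is
hypothetical, and the same COMMIT inside a plain function raises an error):

create procedure log_in_batches() language plpgsql as $$
begin
  insert into audit_log values (1);
  commit;   -- allowed in a procedure; implicitly starts a new transaction
  insert into audit_log values (2);
end $$;

call log_in_batches();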

Perhaps add some detail from here to the official docs?

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Guyren Howe 
Sent: Tuesday, March 9, 2021 17:31
To: Tom Lane ; raf 
Cc: Adrian Klaver ; Bysani, Ram 
; pgsql-general@lists.postgresql.org 

Subject: Re: Need help with clarification on stored procedure support in 
PostGreSQL database

This seems like an important consideration. I’ve spent 10 minutes searching the 
documentation for PG 11 and can’t find where it is documented. Perhaps it 
should be made more prominent?
On Mar 8, 2021, 16:53 -0800, raf , wrote:
On Mon, Mar 08, 2021 at 06:31:56PM -0500, Tom Lane  wrote:

raf writes:
Is there any important difference between a stored procedure and
a stored function that returns void? I've been using functions
for everything but calling them procedures in my head (even though
most of them do return a value or result set).

A procedure can commit or rollback a transaction (and then start
a new one).

regards, tom lane

thanks.

cheers,
raf





hstore each() function - returned order??

2021-03-12 Thread Brent Wood
Hi,

I'm using the following in an SQL query:

select (EACH(value)).key as measurement_key,
   (EACH(value)).value as value
from t_reading_hstore;

I'm assuming this will give two columns containing the key/value pairs in the
hstore record.

The docs suggest the order of the keys/values returned is undetermined, which
is a bit ambiguous to me. The order is not a problem in this case: as long as
the keys and values are returned in the SAME order, I don't care what that
order is - just that the key is always returned in the same row as its value.

It does seem to work, at least in my test cases, but I need to know if that is
just luck, or if it is a safe assumption to make.


Can anyone confirm this is a robust assumption for me?


Thanks

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


Re: hstore each() function - returned order??

2021-03-12 Thread Brent Wood
From: David G. Johnston 
Sent: Friday, March 12, 2021 21:19
To: Brent Wood 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: hstore each() function - returned order??


Thanks David,

Is this what you mean?

select measurement_key,
   value
from t_reading_hstore,
 lateral ((EACH(value)).key as measurement_key,
 lateral ((EACH(value)).value as value;

I'll try it with \timing & explain to look at performance, but I don't see how
this provides any greater assurance that the keys & values are returned in the
same order, so that the correct value is in the same row as its key.

This still seems to depend on just how the EACH() function returns the values,
rather than on any use of lateral joins - which was the question I was asking.

Cheers

Brent Wood

On Thursday, March 11, 2021, Brent Wood <brent.w...@niwa.co.nz> wrote:
Hi,

I'm using the following in an SQL :

select (EACH(value)).key as measurement_key,
   (EACH(value)).value as value
from t_reading_hstore;


Just move the each call to the from clause as a lateral join and treat the 
result as a two-column table.

David J.






Re: hstore each() function - returned order??

2021-03-12 Thread Brent Wood
Ahh, light dawns!!!
Thanks guys, much appreciated!!!  (as always)

Brent

On Saturday, March 13, 2021, 3:52:33 AM GMT+13, Tom Lane  wrote:

 "David G. Johnston"  writes:
> On Fri, Mar 12, 2021 at 3:18 AM Brent Wood  wrote:
>> Is this what you mean?
>> 
>> select measurement_key,
>> value
>> from t_reading_hstore,
>> lateral ((EACH(value)).key as measurement_key,
>> lateral ((EACH(value)).value as value;

> The definition of "each" shows that it returns a record type.  That whole
> type can be considered a table and referenced in the select list.

Yeah.  To my mind, the main point here is to run each() just once per
t_reading_hstore row, not twice.  So something like

SELECT e.key, e.value FROM t_reading_hstore AS t, each(t.value) AS e;

(Writing LATERAL is optional.)

            regards, tom lane
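
For concreteness, a minimal sketch of that pattern (toy table and data, purely 
illustrative):

    -- each() runs once per row, and every output row is one complete
    -- key/value pair, so key and value cannot get out of step:
    CREATE TABLE demo_hstore (value hstore);
    INSERT INTO demo_hstore VALUES ('a=>1, b=>2'), ('c=>3');

    SELECT e.key, e.value
    FROM demo_hstore AS t, each(t.value) AS e;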


  

Timestamp/hstore query?

2021-04-14 Thread Brent Wood
Hi,


I have a database with instrument readings stored in hstore key-value pairs, so 
a record has a timestamp attribute and an hstore attribute with all the sensor 
readings for that time. The key identifies the sensor, the value is the reading.

Not all sensors have a reading at every timestamp.
Timestamps are 1 sec intervals.
The db uses Timescaledb hypertables to partition the data

How can I select the LAST (hstore) value for each instrument (hstore key) at 
one minute intervals? So I get an output of 1 minute timestamps & an hstore 
containing the last key-value pair for each key present in that 1 minute 
interval.

Essentially this provides a snapshot saying "at this time (1 min interval) the 
latest (current) reading for each sensor was this value".

I'm OK with a user defined pl/pgsql function if this is the best approach.


Thanks

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529
[https://www.niwa.co.nz/static/niwa-2018-horizontal-180.png][https://niwa.co.nz/sites/niwa.co.nz/files/ETNZ_Official-supplier-logo-75h.png]<https://www.niwa.co.nz>
Brent Wood
Principal Technician - GIS and Spatial Data Management
Programme Leader - Environmental Information Delivery
+64-4-386-0529
National Institute of Water & Atmospheric Research Ltd (NIWA)
301 Evans Bay Parade Hataitai Wellington New Zealand
Connect with NIWA: niwa.co.nz<https://www.niwa.co.nz> 
Facebook<https://www.facebook.com/nzniwa> 
LinkedIn<https://www.linkedin.com/company/niwa> 
Twitter<https://twitter.com/niwa_nz> 
Instagram<https://www.instagram.com/niwa_science>
To ensure compliance with legal requirements and to maintain cyber security 
standards, NIWA's IT systems are subject to ongoing monitoring, activity 
logging and auditing. This monitoring and auditing service may be provided by 
third parties. Such third parties can access information transmitted to, 
processed by and stored on NIWA's IT systems


Re: Timestamp/hstore query?

2021-04-14 Thread Brent Wood
Thanks for your reply,
The table is essentially:

create table readings (timer     timestamp primary key,
                       readings  hstore);

the hstore comprises (sensor id, reading) key/value pairs for readings taken 
at the time specified in the timestamp.
eg:  "67"=>"-45.67436", "68"=>"176.5424" could be key/value pairs representing 
latitude & longitude, with a timestamp in the timer column.

There would be several lat/lon hstore pairs in a given minute, the query I want 
would return the last one in the timeseries for that minute (for each key).

I don't think your examples will give me the separate hstore key-value pairs, 
extracted as the last in the interval for each key & reassembled as an hstore 
list in the result. The sensor id is the hstore key, as described above, not a 
separate attribute. That said, the keys can be extracted from the hstore much 
like a column, but I'm not sure that is the best approach.
Treating each set of hstores in an interval as an array & extracting the last 
elements may be viable. But I'm not sure how...

 
 
On Thursday, April 15, 2021, 6:33:08 AM GMT+12, Michael Lewis  wrote:

If you share example schema and desired output (like a dummy table or even 
pseudo code SQL), then I'm sure many people could help you. Right now, the 
description of your desired result seems a bit unclear, at least to me.


If you wanted to run this hourly for the last 1 hour, it sounds a bit like you 
want this-

select sensor_id,
       date_trunc( 'minute', timestamptz_field_name_here ),
       last( value_from_hstore ) over ( partition by sensor_id, date_trunc( 'minute', timestamptz_field_name_here ) ) as last_value_recorded
from data_table_here
where timestamptz_field_name_here BETWEEN NOW() - interval '1 hour' and NOW()
group by sensor_id, date_trunc( 'minute', timestamptz_field_name_here )


You could also use the left join "where is null" pattern to check that a prior 
record in the minute period of time does not exist for that same key. Something 
like this-

select d1.sensor_id,
       date_trunc( 'minute', d1.timestamptz_field_name_here ),
       d1.timestamptz_field_name_here as last_value_recorded
from data_table_here as d1
left join data_table_here as prior_d1
       on prior_d1.sensor_id = d1.sensor_id
      and prior_d1.timestamptz_field_name_here < d1.timestamptz_field_name_here
      and prior_d1.timestamptz_field_name_here >= date_trunc( 'minute', d1.timestamptz_field_name_here )
where d1.timestamptz_field_name_here BETWEEN NOW() - interval '1 hour' and NOW()
  and prior_d1.sensor_id is null





Re: Timestamp/hstore query?

2021-04-15 Thread Brent Wood
Thanks Mike,

That approach seems to work very well!!!

There are on the order of 40 billion readings in the table. This returns an 
arbitrary one minute's data, as required, in under 30ms:
SELECT distinct date_trunc('minute', _timer) as timer,
       key::int,
       first_value(value) OVER (partition by key order by _timer desc)
FROM (SELECT timer as _timer,
             (each(v.values)).*
      FROM v_sec v
      WHERE timer between '2020-04-15 08:10:00'
                      and '2020-04-15 08:10:59') as x
ORDER BY key::int;

v_sec is a view on a table with 30+ years of 1 second readings from dozens of 
sensors, columns are timer (timestamp) & values (hstore).
I used "between timestamps" rather than "= date_trunc()" in the where clause as 
this uses the index on timestamps, so is MUCH faster.
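
A sketch of the difference (same names as above; the second form would need an 
expression index to avoid scanning everything):

    EXPLAIN SELECT * FROM v_sec
    WHERE timer BETWEEN '2020-04-15 08:10:00' AND '2020-04-15 08:10:59';  -- can use the btree on timer

    EXPLAIN SELECT * FROM v_sec
    WHERE date_trunc('minute', timer) = '2020-04-15 08:10:00';  -- cannot, as written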

FYI, we are using Timescaledb to manage the underlying data and data structures.


Appreciated!!!
Brent Wood


On Friday, April 16, 2021, 12:28:41 AM GMT+12, Mike Rylander  wrote:

 On Wed, Apr 14, 2021 at 5:33 PM Brent Wood  wrote:
>
> Thanks for your reply,
>
> The table is essentially:
> create table readings (timer    timestamp primary key,
>                        readings  hstore);
>
> the hstore comprises (sensor id, reading) key/value pairs for readings
> taken at the time specified in the timestamp.
>
> eg:  "67"=>"-45.67436", "68"=>"176.5424" could be key/value pairs representing
> latitude & longitude, with a timestamp in the timer column.
>
> There would be several lat/lon hstore pairs in a given minute, the query I 
> want would return the last one in the timeseries for that minute (for each 
> key).
>
> I don't think your examples will give me the separate hstore key-value pairs, 
> extracted as the last in the interval for each key & reassembled as an hstore 
> list in the result. The sensor id is the hstore key, as described above, not 
> a separate attribute. That said, the keys can be extracted from the hstore 
> much like a column, but I'm not sure that is the best approach.
>
> Treating each set of hstores in an interval as an array & extracting the last
> elements may be viable. But I'm not sure how...
>

Hi Brent,

With the table structure like that, you'll need to project all the
sensor data onto each timestamp and then collapse it.  Something like
this:

  SELECT date_trunc('minute', timer), key,
         first_value(value) OVER (partition by key order by timer desc)
  FROM (SELECT timer, (each(readings)).*
        FROM table WHERE date_trunc('minute', timer) = '2021-04-15 08:10:00-04') x;

If your queries will always target a specific timestamp truncated to
the minute, you'll find an index on date_trunc('minute',timer) will be
useful, I think.  If your intervals are more complex than date_trunc()
can handle then you'll have to get more creative, but that's the basic
structure -- normalize the primary key, project the sensor data to
make it table-ish, then use regular SQL to pull out what you want.
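
For what it's worth, a sketch of such an index against the table definition 
quoted earlier (timer is timestamp without time zone, so date_trunc('minute', ...) 
is immutable and indexable):

    CREATE INDEX readings_minute_idx ON readings (date_trunc('minute', timer));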

--
Mike Rylander
 | Research and Development Manager
 | Equinox Open Library Initiative
 | phone:  1-877-OPEN-ILS (673-6457)
 | email:  mi...@equinoxoli.org
 | web:  https://equinoxOLI.org


RE: Multi-master replication

2021-08-13 Thread Brent Wood
BDR
Only using the free v1.0...

On Sat, 14 Aug 2021 at 2:07, Laura Smith wrote:

On Friday, August 13th, 2021 at 12:35 PM, Zahir Lalani 
 wrote:

> Confidential
>
> Thx Laura
>
> So here is the dilemma - everything in the cloud world tends toward 
> horizontal scaling. We do that with PG using single master and multiple 
> slaves. But we are write heavy and of course the load on the master is quite 
> high. It does seem that multi-master has risks (everyone seems to talk about 
> conflict resolution!) so I wonder whether the sensible route is simply to 
> scale vertically on the master?
>


Scaling on the master remains a popular choice these days even in the trendy 
world of cloud and horizontal scaling.  See, for example Let's Encrypt - 
admittedly a MySQL/InnoDB infrastructure, but still master/replica - 
https://letsencrypt.org/2021/01/21/next-gen-database-servers.html

I suppose (depending on the nature of your data), you could perhaps use 
sharding, so multiple master/slaves with data (and hence load) spread across. 
Most likely you would need something like Redis at the frontend to push clients 
into the right shard.

Conflict resolution (whether due to split brain or otherwise) is a big worry of 
mine with multi-master setups. The Googles and Facebooks of this world seem 
to be solving the problem on the basis of having super precise time sync set up 
on their database servers, using high-resolution timestamps to act as 
tie-break arbitrator. But of course that comes with significant expense as it 
needs PTP infrastructure.


  


Re: Choosing an index on partitioned tables.

2021-09-07 Thread Brent Wood
Hi Tim,

I've had good success with TimescaleDB for large timeseries databases (40 
billion readings).
https://www.timescale.com/

You turn your timestamped table into a Timescale hypertable and it looks after 
the indexing and partitioning automatically; the table is accessed like a 
normal Postgres table, but very quickly.

It also adds some SQL functions for time-based query functionality.
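
A minimal sketch of the setup (table and column names are illustrative, 
assuming the timescaledb extension is installed):

    CREATE EXTENSION IF NOT EXISTS timescaledb;

    CREATE TABLE metrics (
        ts          timestamp NOT NULL,
        metric_type text,
        value       double precision
    );
    SELECT create_hypertable('metrics', 'ts');  -- time partitioning handled from here on

    -- one of the added time-based functions:
    SELECT time_bucket('1 minute', ts) AS minute, avg(value)
    FROM metrics
    GROUP BY 1;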


Cheers

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Tim Uckun 
Sent: Tuesday, September 7, 2021 15:44
To: pgsql-general 
Subject: Choosing an index on partitioned tables.

I have a series of tables which are going to be queried mostly on two
columns: a timestamp column and a metric type column.

My plan is to partition by date ranges, which means the primary key has
to include the timestamp column and the id column. As far as I know
there is no way to specify an index type for those columns.

The metric type is a text column and will not be very selective. It
will have somewhere around 200 types of metrics and they will all be
short, less than ten characters.

Given that there will be a lot of records I was wondering what type of
index would be ideal for that column. It seems like hash indexes would be
ideal, because the only comparison will be = and they are smaller than
btrees, but for a while they were not recommended.

Would hash be the best or would something work better?
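
For reference, the two candidates look like this (hypothetical names; hash 
indexes support only = lookups, and are only crash-safe and replicated from 
PostgreSQL 10 onwards):

    CREATE TABLE metrics (ts timestamp NOT NULL, metric_type text NOT NULL);
    CREATE INDEX metrics_type_hash  ON metrics USING hash (metric_type);  -- equality only
    CREATE INDEX metrics_type_btree ON metrics (metric_type);             -- equality, range, sort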




Re: SQL queries as sets: was The tragedy of SQL

2021-09-15 Thread Brent Wood
I agree.

I've always thought SQL was a great example of set theory put into practice. 
You specify the elements (select items) and conditions (where clause) to return 
the (sub)sets you want.

Spatial data is also about sets - a polygon is theoretically defined as the set 
of points contained by a perimeter. A line is the set of points between 
specified points (complicated somewhat by coordinate systems). I have found the 
set based approach of SQL & relational databases is a very comfortable fit with 
spatial data queries - Postgres + Postgis is an awesome combination, then add 
hstore, jsonb & timescale to the mix and I have no need for a NoSQL db for 
key/value rather than normalised data.

I also suggest that as a simple language to retrieve stored data, SQL works 
well. But as we try to do more complex retrievals, the difference between a 
query to ask for data and a query to ask for a somewhat complex analysis of the 
data becomes very apparent. Yet these extensions, as is so often the case, are 
where the true power and the limitations of SQL can be found.

From a pragmatic perspective, the useability & value to the global community 
of relational databases and SQL is self evident. They have been and continue 
to be incredibly successful and useful. Whether you like it or not, they work, 
and have solved many problems for people and organisations for many years.

I have yet to see the complainers achieve what Codd & Date did, many years ago.

So from me, if no-one else, a heartfelt thank you to everyone who has 
contributed to the SQL/Postgres space which enables me to do so much!!


In appreciation,

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Steve Litt 
Sent: Wednesday, September 15, 2021 16:54
To: pgsql-general@lists.postgresql.org 
Subject: SQL queries as sets: was The tragedy of SQL

Rich Shepard said on Tue, 14 Sep 2021 05:49:07 -0700 (PDT)

>On Mon, 13 Sep 2021, Guyren Howe wrote:
>
>> They are making a decent decision. SQL is a *fucking terrible*
>> language, which I don’t blame them for not wanting to learn.
>
>>> SQL is not the problem. Problem are the devs. I love SQL. I hate
>>> orms. The problem with databases is people refuse to treat it as
>>> the entity it is and want to use their beautiful OO system. Problem
>>> is databases are not OO. We need to recognize that and treat
>>> databases as databases.
>
>Guyren/Hemil,
>
>As a non-SQL expert who's used postgres since 1997 I've come to
>believe the basic issue is that SQL is based on sets, neither
>procedural or object oriented. Few people think in sets so they try to
>fit SQL into what they know rather than understand the how sets work.

Rich, could you please elaborate on SQL queries being based on sets? I
never thought of it that way, and would like to hear your related
thoughts.

SteveT

Steve Litt
Spring 2021 featured book: Troubleshooting Techniques of the Successful
Technologist 
http://www.troubleshooters.com/techniques




help implementing OGR Postgres foreign data wrapper

2021-10-04 Thread Brent Wood
Hi,

Apologies, this is not strictly a Postgres question, but Postgres is so 
fundamental to what I'm trying to do and this list is generally so helpful, so 
here I go...

I'm using FDW's to create virtual tables providing access to data from external 
data sources. This is working well for data in other Postgres databases using 
the Postgres FDW.

I also want to be able to link to data from WFS services via the OGR FDW 
(https://www.pgxn.org/dist/ogr_fdw/)
The first service I'm trying to connect to is:
https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer

I can retrieve the capabilities doc in a browser (test 1 passed!) and plot the 
data as a layer in QGIS (test 2 passed). My next step is to test OGR access via 
ogrinfo. This I can't seem to get working. I'm assuming I need to sort this out 
before trying to get the OGR_FDW working.

I've tried:
ogrinfo 
https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer
ogrinfo 
https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer?SERVICE=WFS&REQUEST=GetCapabilities

I can only get the following response
ogrinfo 
https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer?SERVICE=WFS&REQUEST=GetCapabilities
[1] 3121502
ERROR 1: HTTP error code : 400
ERROR 1: HTTP error code : 400
FAILURE:
Unable to open datasource 
`https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer?SERVICE=WFS'
 with the following drivers.
  -> ESRIC
...
  -> OGR_PDS
  -> WFS
  -> OAPIF
...

Can anyone suggest what might be the problem?

Thanks.

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


Re: help implementing OGR Postgres foreign data wrapper

2021-10-05 Thread Brent Wood
Thanks Ben,

How is life treating the AusVet these days? Been quite a few versions under the 
bridge since we discussed R and Postgres stuff. 2009 was it??

That kicked some quiescent brain cells into gear. I just tried it from home, 
where it also works fine, if a bit sluggish, which seems to be how ESRI 
supports open standards.

So it may be some network settings from NIWA causing the failure. At least I 
can kick that problem elsewhere!!!


Much appreciated.


Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Ben Madin 
Sent: Tuesday, October 5, 2021 20:57
To: Brent Wood 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: help implementing OGR Postgres foreign data wrapper

Hi Brent,

not sure why, but it is working for me using ogrinfo:

~ $ ogrinfo https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer\?SERVICE\=WFS\&REQUEST\=GetCapabilities
INFO: Open of `https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer?SERVICE=WFS&REQUEST=GetCapabilities'
  using driver `WFS' successful.
Metadata:
  ABSTRACT=To develop a marine environment classification for waters in depths 
shallower than 3000 m within the outer boundary of the NZ 200 naut. mile EEZ  
(an OGC WFS Layer).
  TITLE=WFS
1: BOMEC_15_Class_WFS:BOMEC_15_Class (title: BOMEC_15_Class) (Multi Surface)

not very helpful, but a 400 error might be some kind of encoding problem with 
the request?
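
One concrete possibility, going by the "[1] 3121502" job number in the failing 
run: an unquoted & makes the shell background the command and truncate the URL 
at SERVICE=WFS (which matches the error message), so quoting the whole URL 
should help, e.g.:

    ogrinfo 'https://dservices3.arcgis.com/fp1tibNcN9mbExhG/arcgis/services/BOMEC_15_Class_WFS/WFSServer?SERVICE=WFS&REQUEST=GetCapabilities'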

cheers

Ben




Re: Zero Downtime Upgrade

2021-11-08 Thread Brent Wood
You might try ogr2ogr - it can do Postgres to Postgres as easily as Postgres 
to something else, and I think it can do databases, schemas or individual 
tables in a single command.

Though your original stays up, I'd recommend not writing to it while it is 
being copied...
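
Something along these lines, I think (connection strings abbreviated; 
destination comes first, source second):

    ogr2ogr -f PostgreSQL PG:"host=newhost dbname=mydb" PG:"host=oldhost dbname=mydb"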


Brent Wood


On Tuesday, November 9, 2021, 01:51:46 PM GMT+13, Akheel Ahmed 
 wrote:  
 
 Hi Group

What tools exist besides bucardo, londiste and rubyrep for logical replication 
to do a zero/near zero downtime upgrade?

Thanks in advance
  

Re: DELETING then INSERTING record with same PK in the same TRANSACTION

2022-02-09 Thread Brent Wood
My take on this...

Because both statements are in the transaction, the delete is not fully 
actioned until the commit. So it still exists in the table when you try to 
insert the record with the duplicate key.

Check whether the error is generated during the transaction or at the commit 
stage: run it without the commit, rolling back instead, to check this.

I don't see how you can do this within a transaction, someone else might?
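
A quick way to test is with a toy table (sketch only; names are illustrative):

    BEGIN;
    CREATE TABLE t (id int PRIMARY KEY);
    INSERT INTO t VALUES (1);
    DELETE FROM t WHERE id = 1;
    INSERT INTO t VALUES (1);  -- does this raise the duplicate key error, or not?
    ROLLBACK;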



Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529


From: Andrew Hardy 
Sent: Thursday, February 10, 2022 07:11
To: pgsql-general@lists.postgresql.org 
Subject: DELETING then INSERTING record with same PK in the same TRANSACTION

Hi,

When I:

Begin a transaction
DELETE from  where id (PK) = 
INSERT INTO  VALUES ()
...
...
...
COMMIT

I get

insert into "itinerary" ("dk", "locator") values ($1, $2) - duplicate key value 
violates unique constraint "itinerary_pkey"

Do I need some particular kind of settings on my transaction to be able to 
delete and insert afresh in the same transaction?

In case it is relevant - the first delete will lead to cascaded deletes on 
children.

Alternatively I wonder if I change the PK column value to 
"-FORDELETION" will I be free to insert under the same original PK 
value in the same transaction, then delete the FORDELETE item just before 
committing or will I hit the same issue?

Thanks,

Andrew




Re: Does PostgreSQL check database integrity at startup?

2017-12-28 Thread Brent Wood
Some thoughts

A tool to calculate a checksum of sorts based on the table (file) content would 
provide a better surety of duplication than simply checking file size - e.g. 
differently vacuumed tables in each copy could have the same content but be 
different file sizes.

Something like these could be adapted to compare database content by filesystem 
checks rather than db queries, following tablespaces, etc as well:

http://www.commandlinefu.com/commands/view/3555/find-duplicate-files-based-on-size-first-then-md5-hash
or other similar tools.

Yes, there is some overhead, especially for large databases, but it would be 
worth that to robustly ensure genuine and complete duplication.

I do wonder though - given the order of records in a table (file) is not 
necessarily identical (or is it?), even this may be problematic. Perhaps a 
checksum based on the result of a query output ordered by primary key could 
work?
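
As a sketch, something like this (illustrative table with primary key id) gives 
a checksum of the content that does not depend on physical row order, though it 
does read the whole table:

    SELECT md5(string_agg(t::text, '' ORDER BY t.id)) AS content_checksum
    FROM my_table AS t;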

Brent Wood

  From: Edson Carlos Ericksson Richter 
 To: pgsql-general@lists.postgresql.org 
 Sent: Friday, December 29, 2017 6:47 AM
 Subject: Re: Does PostgreSQL check database integrity at startup?
   
Em 28/12/2017 10:16, Stephen Frost escreveu:
> Alvaro,
>
> * Alvaro Herrera (alvhe...@alvh.no-ip.org) wrote:
>> For context: this was first reported in the Barman forum here:
>> https://groups.google.com/forum/#!msg/pgbarman/3aXWpaKWRFI/weUIZxspDAAJ
>> They are using Barman for the backups.
> A, I see.  I wasn't aware of that history.
>
>> Stephen Frost wrote:
>>
>>>> But at some point in time, slave became corrupt (one of the base
>>>> files are zero size where it should be 16Mb in size), and IMHO a
>>>> "red alert" should arise - Slave server shall not even startup at
>>>> all.
>>> How do you know it should be 16Mb in size...?  That sounds like you're
>>> describing a WAL file, but you should be archiving your WAL files during
>>> a backup, not just using whatever is in pg_xlog/pg_wal..
>> It's not a WAL file -- it's a file backing a table.
> Interesting.
>
>>>> Since backups are taken from slave server, all backups are also corrupt.
>>> If you aren't following the appropriate process to perform a backup
>>> then, yes, you're going to end up with corrupt and useless/bad backups.
>> A few guys went over the backup-taking protocol upthread already.
>>
>> But anyway the backup tool is a moot point.  The problem doesn't
>> originate in the backup -- it originates in the standby, from where the
>> backup is taken.  The file can be seen as size 0 in the standby.
>> Edson's question is: why wasn't the problem detected in the standby?
>> It seems a valid question to me, to which we currently we don't have any
>> good answer.
> The last message on that thread seems pretty clear to me- the comment is
> "I think this is a failure in standby build."  It's not clear what that
> failure was but I agree it doesn't appear related to the backup tool
> (the comment there is "I'm using rsync"), or, really, PostgreSQL at all
> (a failure during the build of the replica isn't something we're
> necessairly going to pick up on..).
>
> As discussed on this thread, zero-byte files are entirely valid to
> appear in the PostgreSQL data directory.
>
> To try and dig into what happened, I'd probably look at what forks there
> are of that relation, the entry in pg_class, and try to figure out how
> it is that replication isn't complaining when the file on the primary
> appeared to be modified well after the last modify timestamp on the
> replica.  If it's possible to replica this into a test environment,
> maybe even do a no-op update of a row of that table and see what happens
> with replication.  One thing I wonder is if this table used to be
> unlogged and it was later turned into a logged table but something
> didn't quite happen correctly with that.  I'd also suggest looking for
> other file size mismatches between the primary and the replica.
>
> Thanks!
>
> Stephen
The table was never unlogged. From the very beginning, it was always logged. 
I have dozens of databases with exactly the same setup - and right now, I'm 
rebuilding the slave server. Instead of investigating something where I 
probably will not find the cause, I would like to have an alert for the future.

Would it be possible to include in future versions:
1) After starting the standby, the standby runs all WAL files until it is 
synchronized with master (current behavior)
2) Before getting into "accept read only queries", check if all base 
files have same size as master server (new behavior). In case something 
is different, throw an error and stop database startup?
3) Then start "accept read only queries" (current behavior)
???

Thanks,

Edson



   

concatenating hstores in a group by?

2025-01-19 Thread Brent Wood
Hi,

I have a table with a timestamp column and an hstore column, with times at millisecond precision.

I want to concatenate the hstores, grouped by timestamp, with the timestamps 
truncated to whole seconds: date_trunc('seconds', timer).

How can I concatenate all the hstores within a one second interval,

 ie, something like:

select timer_sec,
       <the concatenated hstores>
from (select date_trunc('seconds', timer) as timer_sec, hstore_col from ...) as sub
group by timer_sec;


Thanks


Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529



Re: concatenating hstores in a group by?

2025-01-21 Thread Brent Wood
Thanks for the replies, appreciated...

My current solution is:

select trip_code,
station_no,
timer_sec + interval '12 hour' as NZST,
timer_sec as utc,
hstore_to_json(string_agg(values_sec::text, ', ')::hstore) as 
values_sec
 from (select '$TRIP' as trip_code,
  $STATION as station_no,
  date_trunc('second', timer) as timer_sec,
  values_sec
   from t_reading_hstore_sec
   where timer >= '$ISO_S'::timestamp - interval '12 hour'
 and timer <= '$ISO_F'::timestamp - interval '12 hour') as foo
group by timer_sec, trip_code, station_no;

Convert the hstore to text, aggregate the text with string_agg(), convert back 
to hstore (which seems to remove duplicate keys, OK for my purpose)
and group by timer truncated to whole seconds. I also provide UTC & local 
timezone times for each set of readings. It is run in a bash script which 
passes the trip & station values to the query, as well as the start/finish 
times as ISO format strings.

The output is going to a Sqlite3 (Spatialite) database, which does not have 
hstore, or all the hstore functionality that Postgres has, but does have a json 
datatype which is adequate for our purposes, hence the hstore_to_json in the 
query.


Thanks again,

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529
____
From: David G. Johnston 
Sent: Monday, January 20, 2025 04:28
To: Brent Wood 
Cc: pgsql-general@lists.postgresql.org 
Subject: concatenating hstores in a group by?

On Friday, January 17, 2025, Brent Wood  wrote:

I want to concatenate the hstores,


There are no hstore aggregate functions.  You'll want to convert them to json 
first, then use the json_objectagg aggregate function.

David J.
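
For instance, one shape that suggestion could take, using the table names from 
the reply above (jsonb_object_agg shown here; the ORDER BY inside the aggregate 
makes the last reading win when a key is duplicated within a second):

    SELECT date_trunc('second', t.timer) AS timer_sec,
           jsonb_object_agg(kv.key, kv.value ORDER BY t.timer) AS values_sec
    FROM t_reading_hstore_sec AS t, each(t.values_sec) AS kv(key, value)
    GROUP BY 1;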



Re: concatenating hstores in a group by?

2025-01-21 Thread Brent Wood
Yes, a Timescale hypertable with 500,000,000 rows of about 15 key/value pairs 
per record.

I'm not sure why there is both a gin & gist index on the hstore, or the merits 
of each.

Thanks

 \d t_reading_hstore_sec
                                   Table "public.t_reading_hstore_sec"
   Column   |            Type             | Collation | Nullable |                      Default
 -----------+-----------------------------+-----------+----------+---------------------------------------------------
 key        | bigint                      |           | not null | nextval('t_reading_hstore_sec_key_seq'::regclass)
 timer      | timestamp without time zone |           | not null |
 values_sec | hstore                      |           |          |
Indexes:
    "t_reading_hstore_sec_pkey" PRIMARY KEY, btree (key, timer)
    "t_reading_hstore_sec_timer_idx" btree (timer)
    "t_reading_hstore_sec_timer_key" UNIQUE CONSTRAINT, btree (timer)
    "t_reading_hstore_sec_values_idx_gin" gin (values_sec)
    "t_reading_hstore_sec_values_idx_gist" gist (values_sec)




On Wednesday, January 22, 2025 at 06:34:38 AM GMT+13, Adrian Klaver  wrote:

On 1/19/25 12:09, Brent Wood wrote:
> Thanks for the replies, appreciated...
> 
> My current solution is:
> 
> select trip_code,
>        station_no,
>        timer_sec + interval '12 hour' as NZST,
>        timer_sec as utc,
>        hstore_to_json(string_agg(values_sec::text, ', ')::hstore) as values_sec
> from (select '$TRIP' as trip_code,
>              $STATION as station_no,
>              date_trunc('second', timer) as timer_sec,
>              values_sec
>       from t_reading_hstore_sec
>       where timer >= '$ISO_S'::timestamp - interval '12 hour'
>         and timer <= '$ISO_F'::timestamp - interval '12 hour') as foo
> group by timer_sec, trip_code, station_no;
> 
> Convert the hstore to text, aggregate the text with string_agg(), 
> convert back to hstore (which seems to remove duplicate keys, OK for my 
> purpose)

To be clear values_sec in t_reading_hstore_sec is the hstore field?

If so, what is its structure?

> and group by timer truncated to whole seconds. I also provide UTC & 
> local timezone times for each set of readings. It is run in a bash 
> script which passes the trip & station values to the query, as well as 
> the start/finish times as ISO format strings.
> 
> The output is going to a Sqlite3 (Spatialite) database, which does not 
> have hstore, or all the hstore functionality that Postgres has, but does 
> have a json datatype which is adequate for our purposes, hence the 
> hstore_to_json in the query.
> 
> 
> Thanks again,
> 
> Brent Wood
> 

-- 
Adrian Klaver
adrian.kla...@aklaver.com



  

Re: timescaledb vs NULL vs pg_timeseries vs partman + pgcron + pg_ivm

2025-04-09 Thread Brent Wood
I also use the free community edition on internal servers, but under Ubuntu. No 
issues and very good performance.
Brent Wood...

On Wed, 9 Apr 2025 at 13:51, Amitabh Kant wrote:

> On Tue, Apr 8, 2025 at 11:29 PM Achilleas Mantzios  wrote:
>
>> On 8/4/25 20:37, Amitabh Kant wrote:
>>
>>> On Tue, Apr 8, 2025 at 9:40 PM Achilleas Mantzios - cloud  wrote:
>>>
>>>> Hi
>>>>
>>>> timescaledb seemed mature, but also exotic, allow me the term. No way to
>>>> use native logical replication, shortage of options to run on premise or
>>>> self hosted, which leaves us with those options:
>>>
>>> I cannot comment on the applicability of timescaledb in your context, but
>>> running it on premise/self-hosted has posed no problems, at least on Debian.
>>> If I understood your query incorrectly, please ignore.
>>
>> Thank you, I meant the paid/supported service not the community version.
>> Which of the two do you use?
>
> I use the community version.


Re: timescaledb vs NULL vs pg_timeseries vs partman + pgcron + pg_ivm

2025-04-12 Thread Brent Wood
Our Timescale dbs are on individual hosts, separate from our main Postgres 
servers.

So standalone systems to be upgraded individually as required.

Brent Wood

Principal Technician, Fisheries
NIWA
DDI:  +64 (4) 3860529

From: Achilleas Mantzios 
Sent: Friday, April 11, 2025 00:17
To: pcr...@yahoo.com 
Cc: pgsql-general@lists.postgresql.org 
Subject: Re: timescaledb vs NULL vs pg_timeseries vs partman + pgcron + pg_ivm


Hi Brent

how do you plan to cope with future upgrades based on logical replication? Do 
you run timescale on a dedicated/separate system from the rest of your 
PostgreSQL cluster(s)?

On 10/4/25 08:25, Amitabh Kant wrote:
On Wed, Apr 9, 2025 at 11:50 AM Achilleas Mantzios - cloud  wrote:


On 4/9/25 04:50, Amitabh Kant wrote:
Thank you, I meant the paid/supported service not the community version. Which 
of the two do you use?

I use the community version.


On 4/9/25 05:23, Brent Wood wrote:

I also use the free community edition on internal servers, but under Ubuntu. No 
issues and very good performance.

Brent Wood...


Thanks Amitabh and Brent, how do you plan to cope with future upgrades based on 
logical replication? Do you run timescale on a dedicated/separate system from 
the rest of your PostgreSQL cluster(s)?

I don't use the inbuilt logical replication. I depend upon pgBackRest for my 
backups. My database is mainly time series data, and runs on the same systems.
thanks