Re: Set COLLATE on a session level

2020-11-30 Thread Dirk Mika

> > On Fri, 20 Nov 2020 at 15:28, Dirk Mika wrote:
> >
> > > Let's assume there is an app that accesses the same database from
> > > different countries. And in this app data should be displayed ordered. And
> > > the sort order is not identical in all countries.
> > >
> > > Does the app have to send different SQL commands depending on the country?
> > > Not nice.
> > > Do the data have to be sorted in the app? Not nice either.
> > >
> >
> > the query is the same - you just use a different COLLATE clause. For
> > Postgres there is not any other way.
>
> One might use a function producing a SELECT taking the locale as a parameter.
>
> Or views in schemas per locale. Selecting the search path
> per locale pulls in the right view.

Suppose the SQL statements are generated by a database layer such as Hibernate. 
It seems to me that it is not possible to use a function that adds the COLLATE 
clause.

And one view per locale would mean that I would have to create a whole schema 
including all views for each locale I want to support. I would have to roll out 
a new version of the data model, just for an additional locale.

This all seems unnecessarily complicated to me.

Dirk


--
Dirk Mika
Software Developer

mika:timing GmbH
Strundepark - Kürtener Str. 11b
51465 Bergisch Gladbach
Germany

fon +49 2202 2401-1197
dirk.m...@mikatiming.de
www.mikatiming.de

AG Köln HRB 47509 * WEEE-Reg.-Nr. DE 90029884
Geschäftsführer: Harald Mika, Jörg Mika




Re: Set COLLATE on a session level

2020-11-30 Thread Karsten Hilbert
On Mon, Nov 30, 2020 at 10:11:38AM +, Dirk Mika wrote:

> > Or views in schemas per locale. Selecting the search path
> > per locale pulls in the right view.
>
> And one view per locale would mean that I would have to
> create a whole schema including all views for each locale I
> want to support. I would have to roll out a new version of
> the data model, just for an additional locale.

Or a "smart" view. Set a session variable before running the
query and have the (one) view return the locale'd data based
on the session variable ...

set session "mika.current_locale" = 'locale@2_use';

and use

select current_setting('mika.current_locale')

as needed inside the view definition
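
A minimal sketch of that "smart view" idea (table, column, and collation names are made up for illustration; note that mixing two explicit collations inside one CASE raises a collation mismatch, so each ORDER BY key carries exactly one COLLATE):

```sql
-- One CASE expression per supported locale; rows fall through to NULL
-- for the keys that don't match, so only one collation is ever applied.
CREATE VIEW mytable_localized AS
SELECT *
FROM mytable
ORDER BY
  CASE WHEN current_setting('mika.current_locale', true) = 'de_DE'
       THEN name COLLATE "de_DE" END,
  CASE WHEN current_setting('mika.current_locale', true) = 'en_US'
       THEN name COLLATE "en_US" END;
```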

> This all seems unnecessarily complicated to me.

No one said it is going to be particularly convenient...  You
asked for possible, I guess. :-)

Best,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Storage and querying of filesystem paths

2020-11-30 Thread Laura Smith
Hi,
How have others approached the problem of storing and querying filesystem
paths?
I'm talking about *nix/*nux style paths, e.g. /home/foo/bar
A text column with a GIN index is an obvious possibility, but I'm interested
to hear what approach others have taken?
Laura




Re: Storage and querying of filesystem paths

2020-11-30 Thread Laurenz Albe
On Mon, 2020-11-30 at 12:05 +, Laura Smith wrote:
> How have others approached the problem of storing and querying filesystem 
> paths ?
> I'm talking about *nix/*nux style paths, e.g. /home/foo/bar
> A text column with a GIN index is an obvious possibility, but I'm interested 
> to hear what approach others have taken ?

I have not personally taken that approach, but you could give the
ltree extension a go.
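
For reference, a minimal ltree sketch (an editorial aside; ltree labels are
dot-separated and restricted to letters, digits, and underscores, so real
filesystem paths may need escaping — this assumes simple components):

```sql
CREATE EXTENSION IF NOT EXISTS ltree;

CREATE TABLE files (
    path ltree PRIMARY KEY  -- '/home/foo/bar' stored as 'home.foo.bar'
);
-- GiST index supports the ancestor/descendant operators <@ and @>.
CREATE INDEX files_path_gist ON files USING gist (path);

INSERT INTO files VALUES ('home.foo.bar');

-- Everything under /home/foo:
SELECT * FROM files WHERE path <@ 'home.foo';
```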

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Migration from SQL Server to PostgeSQL

2020-11-30 Thread Thibaut Madelaine


On 30/11/2020 at 06:11, Muthukumar.GK wrote:

Hi Team,

We need to migrate tables, records, procedures, and functions from SQL
Server to PostgreSQL. Kindly let me know whether there is a tool or an
easy way to migrate from SQL Server to PostgreSQL.
Please also let me know if there are any limitations in PostgreSQL
compared to SQL Server.


Regards
Muthu.


Hello Muthukumar,

We maintain a tool to migrate from SQL Server to PostgreSQL:
sqlserver2pgsql

You can use it to convert your SQL schema.
It can generate a Pentaho Data Integrator (Kettle) job to migrate all the
data.


Unfortunately, it does not migrate procedures and functions.
Transact-SQL is very different from PostgreSQL's PL languages.
These would need a manual migration.


Hope this tool can be of some help!

Best regards,

Thibaut

--
Thibaut Madelaine
Dalibo Support PostgreSQL





concurrent re-partitioning of declarative partitioned tables

2020-11-30 Thread Nick Cleaton
I want to set up a large table on PostgreSQL 12.4, using declarative
partitioning to partition by record creation date. I'd like to have recent
records in small partitions but old records in a few larger partitions, so
I want merges. The merges should be concurrent, in the sense that they lock
out readers or writers only for very short intervals, if at all.

I'm looking at adding an extra boolean column and partitioning on that at
the top level, with two parallel date-based partition trees underneath, so
that I can effectively have overlapping date partitions:


create table mytable (
  record_date timestamp with time zone not null,
  _partition_channel boolean,
  ...
)
partition by list (_partition_channel);

create table mytable_chan_null
  partition of mytable for values in (null)
  partition by range (record_date);

create table mytable_chan_true
  partition of mytable for values in (true)
  partition by range (record_date);

create table mytable_day_20200101
  partition of mytable_chan_null
  for values from ('2020-01-01') to ('2020-01-02');

...

create table mytable_day_20200107
  partition of mytable_chan_null
  for values from ('2020-01-07') to ('2020-01-08');


Then to merge several day-partitions into a week-partition:

create table mytable_week_20200101
  partition of mytable_chan_true
  for values from ('2020-01-01') to ('2020-01-08');

... and migrate rows in batches by updating _partition_channel to true,
then finally drop the empty day partitions.
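
A hedged sketch of one such batch (the `id` primary key column is an
assumption; adapt to your schema). Updating the partition key via the root
table makes PostgreSQL 12 move the rows into the matching
mytable_chan_true partition automatically:

```sql
WITH batch AS (
    -- pick up to 1000 not-yet-migrated rows from one day partition;
    -- the batch size is an arbitrary assumption
    SELECT id
    FROM mytable_day_20200101
    LIMIT 1000
)
UPDATE mytable m
SET _partition_channel = true
FROM batch b
WHERE m.id = b.id
  AND m._partition_channel IS NULL;
```

Re-run until it touches zero rows, then move on to the next day partition.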

Since record_date is an insertion timestamp, I don't mind that after this
merge updating the record_date of a merged row could fail due to a missing
partition. Likewise there's no need for new rows to be inserted with
record_date values in previously merged ranges.

Questions:

Are there any hidden pitfalls with this approach?

Have I missed a simpler way?

Is there a project out there that will manage this for me?


Re: Storage and querying of filesystem paths

2020-11-30 Thread Laura Smith
‐‐‐ Original Message ‐‐‐
On Monday, November 30, 2020 12:36 PM, Laurenz Albe  
wrote:

> On Mon, 2020-11-30 at 12:05 +, Laura Smith wrote:
>
> > How have others approached the problem of storing and querying filesystem 
> > paths ?
> > I'm talking about *nix/*nux style paths, e.g. /home/foo/bar
> > A text column with a GIN index is an obvious possibility, but I'm 
> > interested to hear what approach others have taken ?
>
> I have not personally taken that approach, but you could give the
> ltree extension a go.
>
> Yours,
> Laurenz Albe
>
> ---
>
> Cybertec | https://www.cybertec-postgresql.com


Thanks Laurenz. Will take a look at ltree




How much shared memory does Postgresql need per max_locks_per_transaction?

2020-11-30 Thread David Tinker
I have a PostgreSQL 10 database with about 300k tables in 23k schemas. I am
trying to upgrade to PostgreSQL 13 using pg_upgradecluster. This is failing
while attempting to dump all the schemas:

pg_dump: error: query failed: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE "a45119740"."activity_hrc" IN ACCESS
SHARE MODE

Is setting max_locks_per_transaction to 300k something that can be done? I
haven't been able to find anything explaining how much shared memory this
might need. The machine has 64G of RAM.

(I understand that I need to change my db design ... I have been backing up
one schema at a time until now, so I wasn't aware of this problem.)

Thanks
David


PostgreSQL 12.4 Parallel Query doesn't work while EXPLAIN is OK

2020-11-30 Thread mobigroup
Hi,

I have strange behaviour where EXPLAIN returns a parallel execution plan but
the query execution is not parallel. The query looks like:

SELECT
plpgsql_function(...parameters...)
FROM table as t
WHERE id=1
UNION ALL
SELECT
plpgsql_function(...parameters...)
FROM table as t
WHERE id=2
UNION ALL
...

EXPLAIN output looks OK:

Append  (cost=0.00..10.25 rows=8 width=32)
  ->  Gather  (cost=0.00..1.27 rows=1 width=32)
Workers Planned: 8
->  Parallel Seq Scan on table t  (cost=0.00..1.27 rows=1 width=32)
  Filter: (id = 1)
  ->  Gather  (cost=0.00..1.27 rows=1 width=32)
Workers Planned: 8
->  Parallel Seq Scan on table t_1  (cost=0.00..1.27 rows=1 width=32)
  Filter: (id = 2)
  ->  Gather  (cost=0.00..1.27 rows=1 width=32)
...


But the query still uses just one core and I see sequential RAISE NOTICE 
outputs from the function plpgsql_function().
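
As a hedged aside (not from the thread itself): one thing that can keep such
a plan from actually running in parallel is the function's parallel-safety
marking, which can be inspected in pg_proc. `plpgsql_function` is the
placeholder name used in the query above:

```sql
-- proparallel: 's' = safe, 'r' = restricted, 'u' = unsafe;
-- calling a parallel-unsafe function forces serial execution.
SELECT proname, proparallel
FROM pg_proc
WHERE proname = 'plpgsql_function';
```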

select version();
                                   version
--------------------------------------------------------------------------------
 PostgreSQL 12.4 (Ubuntu 12.4-1.pgdg18.04+1) on x86_64-pc-linux-gnu, compiled
 by gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0, 64-bit