Re: Postgres 10 temp tablespace question

2019-03-09 Thread Andreas Kretschmer




On 09.03.19 at 02:05, Joseph Dunleavy wrote:


I am building a multi-tenant deployment with multiple databases - 1 
tenant per database.


I would like to be able to dedicate a specific temp tablespace to 
a specific database or user/schema.


I understand how to define temp_tablespace in postgresql.conf.


I can't find anything in the documentation on how to dedicate a 
specific temp_tablespaces setting to a specific database, user, or schema.


I also thought maybe I could create a trigger on logon to set a 
specific temp tablespace per user, but I can't do that either.



Is it possible in PostgreSQL to dedicate a specific temp tablespace to 
a specific database or user/schema?





yes:

test=*# create tablespace tmp_tbsp1 location '/tmp/tbsp1';
ERROR:  CREATE TABLESPACE cannot run inside a transaction block
test=*# commit;
COMMIT
test=# create tablespace tmp_tbsp1 location '/tmp/tbsp1';
CREATE TABLESPACE
test=# create tablespace tmp_tbsp2 location '/tmp/tbsp2';
CREATE TABLESPACE
test=# create tablespace tmp_tbsp3 location '/tmp/tbsp3';
CREATE TABLESPACE
test=# create user usr1;
CREATE ROLE
test=*# create user usr2;
CREATE ROLE
test=*# create user usr3;
CREATE ROLE
test=*# alter user usr1 set temp_tablespaces = 'tmp_tbsp1';
ALTER ROLE
test=*# alter user usr2 set temp_tablespaces = 'tmp_tbsp2';
ALTER ROLE
test=*# alter user usr3 set temp_tablespaces = 'tmp_tbsp3';
ALTER ROLE
test=*#


test=*# show temp_tablespaces;
 temp_tablespaces 
------------------
 
(1 row)

test=*# commit;
COMMIT
test=# \c - usr2;
psql (11.1 (Ubuntu 11.1-3.pgdg16.04+1), server 9.5.15)
You are now connected to database "test" as user "usr2".
test=> show temp_tablespaces;
 temp_tablespaces 
------------------
 tmp_tbsp2
(1 row)

test=*>
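
Since the question also asked about the per-database case: the same GUC can be 
attached to a database instead of a role. A minimal sketch (tenant1_db is a 
placeholder database name, not from the thread):

ALTER DATABASE tenant1_db SET temp_tablespaces = 'tmp_tbsp1';

New sessions connecting to that database then default to tmp_tbsp1 for 
temporary objects, unless overridden per role or per session.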



Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Postgres 9.6 Slave Creation

2019-03-09 Thread Eric Katchan
Hello,


We have recently upgraded our Postgres cluster from 9.1 to 9.6. We are having 
a difficult time creating our slave server. We have encountered the 
"contrecord is requested" error, which is preventing the slave from starting up. 
We have recreated the slave, still no luck. Our master is apparently running 
without issue. We did not use the pg_basebackup command, but rather the 
pg_start_backup / tar / pg_stop_backup sequence.



How can we debug this? I am afraid to stop and start our master server in case 
there is data corruption and it will not start. I have, however, successfully 
restored the database from a pg_dump onto another machine (I understand that 
this cannot be used for slave creation; I just wanted to verify that I could 
restore the newly upgraded db). Would it be better to use the streaming 
pg_basebackup command?
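
For context, the streaming variant we are considering would look something 
like this (only a sketch; the host, replication user and target data directory 
are placeholders):

pg_basebackup -h master_host -U replication_user \
    -D /var/lib/postgresql/9.6/main \
    -X stream -P -R --checkpoint=fast

-X stream pulls the required WAL over the replication connection while the 
copy runs, and -R writes a starter recovery.conf for the slave.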


Please advise.

Eric


Re: How to model data in DB (PostgreSQL) for SNMP-like multiple configurations

2019-03-09 Thread Rene Romero Benavides
Try partitioning your table based on your device_id; that will give you a
considerable boost for queries whose WHERE clause includes it. For 9.6
(that's the one you're using, right?) there's pg_partman for that kind of
thing; in this case you would partition by ranges, and if the ids are
sequential it's pretty straightforward. Any chance of upgrading to a newer
PG version? Partitioning becomes native from PG10 onwards, so you don't
have to rely on particular plugins, and there are always significant
performance improvements for several use cases in newer versions (like
improved parallelism).
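
For illustration, a minimal sketch of native range partitioning (PG10+), using 
the configuration table quoted further down; the partition bounds and the 
lookup value are placeholders, not values from the thread:

CREATE TABLE configuration (
  id     integer NOT NULL,
  config json    NOT NULL
) PARTITION BY RANGE (id);

-- one partition per device_id range
CREATE TABLE configuration_p0 PARTITION OF configuration
  FOR VALUES FROM (1) TO (100001);
CREATE TABLE configuration_p1 PARTITION OF configuration
  FOR VALUES FROM (100001) TO (200001);

-- a query that filters on the partition key only touches one partition
SELECT config FROM configuration WHERE id = 12345;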



On Fri, Mar 8, 2019 at 10:40 AM Alexandru Lazarev <
alexandru.laza...@gmail.com> wrote:

> For now I do not see a strong reason, but I inherited this project from
> other developers. Originally there was MongoDB and the structure was more
> complex, having SNMP-like nested tables with OID.Instance1.Instance2.Instance3,
> and in JSON it looked like:
> {
>   "1.3.6.1.4.1..3.10.2.2.25.4.1.43.1": {
>     "1": {
>       "24": "vlan24",
>       "25": "vlan25"
>     },
>     "2": {
>       "24": "127.0.0.1",
>       "25": "8.8.8.8"
>     }
>   }
> }
>
> Here we have a table within a table - how would I model this relationally, with
> separate tables and JOINs only?
> I am not excluding that I'll have such a requirement in the future.
>
> The other reason is that the devices request their config, and some other tools
> request device configs, as a single document/file - this creates a bit of
> overhead for composing a document in JSON, XML or CSV format from
> relational tables (I understand it is doable, but...)
>
> BTW in PG documentation:
> "
> *8.14.2. Designing JSON documents effectively*
>
>
>
>
> *Representing data as JSON can be considerably more flexible than the
> traditional relational data model, which is compelling in environments
> where requirements are fluid. It is quite possible for both approaches to
> co-exist and complement each other within the same application. However,
> even for applications where maximal flexibility is desired, it is still
> recommended that JSON documents have a somewhat fixed structure. The
> structure is typically unenforced (though enforcing some business rules
> declaratively is possible), but having a predictable structure makes it
> easier to write queries that usefully summarize a set of "documents"
> (datums) in a table.JSON data is subject to the same concurrency-control
> considerations as any other data type when stored in a table. Although
> storing large documents is practicable, keep in mind that any update
> acquires a row-level lock on the whole row. Consider limiting JSON
> documents to a manageable size in order to decrease lock contention among
> updating transactions. Ideally, JSON documents should each represent an
> atomic datum that business rules dictate cannot reasonably be further
> subdivided into smaller datums that could be modified independently."
> https://www.postgresql.org/docs/9.6/datatype-json.html
> *
>
>
>
> On Fri, Mar 8, 2019 at 5:15 PM Alban Hertroys  wrote:
>
>> Is there a reason not to use a relational model instead of json(b) here?
>> I think that is in fact considered best practice.
>>
>> On Fri, 8 Mar 2019 at 15:40, Alexandru Lazarev <
>> alexandru.laza...@gmail.com> wrote:
>>
>>> I am working on a product managing and monitoring networks (NMS-like
>>> products).
>>>
>>> The product manages the configuration of network devices; for now each device
>>> has its configuration stored in a simple table - this was the original design.
>>>
>>> CREATE TABLE public.configuration(
>>>   id integer NOT NULL,
>>>   config json NOT NULL,
>>>   CONSTRAINT configuration_pkey PRIMARY KEY (id)
>>> );
>>>
>>> A config looks like:
>>>
>>> {
>>>   "_id": 20818132,
>>>   "type": "Modem",
>>>   "data": [
>>>     {
>>>       "oid": "1.3.6.1.4.1..3.10.2.2.25.4.1.40",
>>>       "instance": "24",
>>>       "value": "null"
>>>     },
>>>     {
>>>       "oid": "1.3.6.1.4.1..3.5.10.1.86",
>>>       "instance": "0",
>>>       "value": "562"
>>>     },
>>>     {
>>>       "oid": "1.3.6.1.4.1..3.5.10.3.92.4.1",
>>>       "instance": "0",
>>>       "value": "0"
>>>     },
>>>     {
>>>       "oid": "1.3.6.1.4.1..3.10.2.2.25.4.1.43",
>>>       "instance": "24",
>>>       "value": "vlan24"
>>>     },
>>>     {
>>>       "oid": "1.3.6.1.4.1..3.10.2.2.25.4.1.43",
>>>       "instance": "25",
>>>       "value": "vlan25"
>>>     }
>>>   ]
>>> }
>>>
>>> And there are many plv8 (the JavaScript procedural language extension for
>>> PostgreSQL) stored procedures working on bulks of such configs, reading some
>>> OIDs, changing them conditionally, removing some of them and adding others,
>>> especially in use cases like: there are some upper-level META-configurations
>>> of different levels, which 

delete on table with many partitions uses a lot of ram

2019-03-09 Thread reg_pg_stefanz

Hi,

I noticed that a DELETE on a table with many partitions seems to be 
using a lot of RAM.
It seems to occur during the planning phase, as EXPLAIN behaves the same 
as the actual execution of the DELETE.


In the simplified test below with 4000 partitions, it seems to use, 
for a short time, over 5 GB of memory, as if more than 1 MB of RAM were 
allocated for each partition.
If a few concurrent sessions are doing this, the server quickly runs 
out of memory.


Only a DELETE shows this behaviour; INSERT and SELECT do not. I have 
not tested UPDATE.

Is this a known behaviour, or is it related to my setup?

Versions 10, 11, and even 12 compiled from GitHub source showed similar 
behaviour.


Regards
Stefan


a simplified test with 4000 partitions:

drop table if exists big;

CREATE TABLE big (i int, j int)
 PARTITION BY RANGE (i);

CREATE TABLE big_0 PARTITION OF big
  FOR VALUES FROM (-1) TO (0);
CREATE INDEX ON big_0 (i);

do $$
DECLARE
  v_part varchar(100);
  v_sql  TEXT;
  r      record;
  dt     date;
begin
  for r in (select generate_series(1,4000,1) nr)
  loop
    v_part := 'big_'||r.nr;
    v_sql := format('CREATE TABLE %s PARTITION OF %s
                       FOR VALUES FROM (''%s'') TO (''%s'');
                     CREATE INDEX ON %s (i);',
                    v_part, 'big',
                    r.nr-1, r.nr,
                    v_part);
    EXECUTE v_sql;
  end loop;
end;
$$ LANGUAGE plpgsql;

select name, setting, short_desc from pg_settings
  where name in ('max_connections','max_locks_per_transaction');

begin;

select locktype, virtualtransaction, pid, mode, granted, fastpath, count(*)
  from pg_locks
 group by locktype, virtualtransaction, pid, mode, granted, fastpath;

-- delete from big where i=3 and j=0;
explain delete from big where i=3 and j=0;

select locktype, virtualtransaction, pid, mode, granted, fastpath, count(*)
  from pg_locks
 group by locktype, virtualtransaction, pid, mode, granted, fastpath;

rollback;

\q







Re: delete on table with many partitions uses a lot of ram

2019-03-09 Thread David Rowley
On Sun, 10 Mar 2019 at 10:20,  wrote:
> I noticed that a delete on a table with many partitions seems to be
> using a lot of ram.

> Is this a known behaviour or related to my setup?

Yeah, it's known. There's a warning against what you're doing in
https://www.postgresql.org/docs/10/ddl-partitioning.html

I see that note has been changed in v11's documentation, but I really
don't think v11 should have changed that. The memory problem still
exists in v11. The only thing that was improved on that front was
how partition pruning works, which only saves CPU, not RAM.

It's down to how DELETE and UPDATE plans are generated for
partitioned tables and inheritance parents. The memory growth is
basically quadratic with the number of partitions. It's possible we
may have a solution for this by the time PostgreSQL 13 is out, but it
won't be fixed for 12. However, nothing is entirely certain that far
out.

It's probably best to reduce the number of partitions.
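
As a stopgap, and assuming the simplified test from the previous message, 
addressing the leaf partition directly keeps the DELETE plan to a single 
relation, so only that one partition is planned and locked (big_4 covers the 
range [3,4), i.e. i = 3):

DELETE FROM big_4 WHERE i = 3 AND j = 0;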

-- 
 David Rowley   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services