Re: Connection pooling for differing databases?

2019-03-08 Thread Moreno Andreo

  
  
On 07/03/2019 21:19, Arjun Ranade wrote:
> I'm looking at pgbouncer and it does most of what I need. I'm wondering
> about clients connecting via pgAdmin: is there a way for users using
> pgAdmin or another tool to see all the databases that are part of the
> configs?

It's an issue I ran into when I set up my pgbouncer architecture, but
since all the servers are reachable on the same private network as the
pgAdmin host, there's no security issue in connecting to them directly
instead of going through pgbouncer, so I did not spend time (which I
didn't have :-)) investigating it.
If you resolve this (or someone already has), sharing the solution would
be much appreciated.
Cheers,
Moreno.-


  
On Thu, Mar 7, 2019 at 2:39 PM Moreno Andreo wrote:

> On 07/03/2019 20:27, Arjun Ranade wrote:
> > Hi all,
> >
> > I'm wondering if there's a tool like pgpool that can provide a single
> > origin point (host/port) that will proxy/direct connections to the
> > specific servers that contain the db needing to be accessed.
>
> Yes, I think there are many, but I'm encouraging you to take a look at
> pgbouncer:
>
> https://pgbouncer.github.io/
>
> In pgbouncer.ini you enter database configuration values of the form
>
> database = host=hostname port=xyzk
>
> for example:
>
> mydb1 = host=cluster1 port=6543
> mydb2 = host=cluster1 port=9876
> mydb3 = host=cluster2 port=6543
>
> but there are many other parameters to refine your config (like
> "proxying" database names, so if you share names across clusters you
> can easily avoid conflicts).
>
> Pgbouncer can be installed on the same server as the databases or on
> another one, and it listens on a different port than Postgres (say
> 5431 while Postgres is on 5432).
> I'm actively using it in my environment with 2 clusters and about 500
> databases, and it works flawlessly.
>
> One thing you have to consider, if under heavy workload (say hundreds
> of connections), is raising the kernel limit on the maximum number of
> open files.
>
> Cheers
>
> Moreno.-
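
For reference, a minimal pgbouncer.ini along the lines quoted above might
look like this (a sketch: the listener settings, auth file path, pool sizes
and the "proxied" database name example are illustrative assumptions, not
the actual configuration described above):

[databases]
mydb1 = host=cluster1 port=6543
mydb2 = host=cluster1 port=9876
mydb3 = host=cluster2 port=6543
; a "proxied" name: exposed to clients as salesdb, but pointing at mydb3
salesdb = host=cluster2 port=6543 dbname=mydb3

[pgbouncer]
listen_addr = *
listen_port = 5431
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
max_client_conn = 500
default_pool_size = 20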
  
  
How to model data in DB (PostgreSQL) for SNMP-like multiple configurations

2019-03-08 Thread Alexandru Lazarev
I am working on a product for managing and monitoring networks (an NMS-like
product).

The product manages the configuration of network devices; for now each device
stores its configuration in a simple table - this was the original design.

CREATE TABLE public.configuration(
  id integer NOT NULL,
  config json NOT NULL,
  CONSTRAINT configuration_pkey PRIMARY KEY (id)
);

A config looks like:

{
    "_id": 20818132,
    "type": "Modem",
    "data": [{
            "oid": "1.3.6.1.4.1..3.10.2.2.25.4.1.40",
            "instance": "24",
            "value": "null"
        },
        {
            "oid": "1.3.6.1.4.1..3.5.10.1.86",
            "instance": "0",
            "value": "562"
        },
        {
            "oid": "1.3.6.1.4.1..3.5.10.3.92.4.1",
            "instance": "0",
            "value": "0"
        },
        {
            "oid": "1.3.6.1.4.1..3.10.2.2.25.4.1.43",
            "instance": "24",
            "value": "vlan24"
        },
        {
            "oid": "1.3.6.1.4.1..3.10.2.2.25.4.1.43",
            "instance": "25",
            "value": "vlan25"
        }
    ]
}

There are many plv8 (JavaScript procedural language extension for PostgreSQL)
stored procedures working on bulks of such configs: reading some OIDs,
changing them conditionally, removing some of them and adding others. A
typical use case: there are some upper-level META-configurations at different
levels which, when changed, have to push all their updated parameters down to
all affected leaf configs. A simple test example (without touching the 'data'
node):

CREATE OR REPLACE FUNCTION public.process_jsonb_plv8()
  RETURNS void AS
$BODY$
var CFG_TABLE_NAME = "configurations";
var selPlan = plv8.prepare("select c.config from " + CFG_TABLE_NAME +
    " c where c.id = $1", ['int']);
var updPlan = plv8.prepare("update " + CFG_TABLE_NAME +
    " set config = $1 where id = $2", ['jsonb', 'int']);

try {
    var ids = plv8.execute('select id from devices');

    for (var i = 0; i < ids.length; i++) {
        var db_cfg = selPlan.execute(ids[i].id); // get current JSON config from DB
        var cfg = db_cfg[0].config;
        cfg["key0"] = 'plv8_json';               // add some dummy key
        updPlan.execute(cfg, ids[i].id);         // put updated JSON config back in DB
        plv8.elog(NOTICE, "UPDATED = " + ids[i].id);
    }
} finally {
    selPlan.free();
    updPlan.free();
}
return;
$BODY$
  LANGUAGE plv8 VOLATILE
  COST 100;
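
As an aside, for a simple top-level change like the dummy key above, a
set-based SQL UPDATE avoids the per-row round trips entirely. A sketch,
assuming the config column is jsonb and the same table/column names as in
the function above:

-- add/overwrite "key0" in every device's config in one statement
UPDATE configurations
   SET config = jsonb_set(config, '{key0}', '"plv8_json"', true)
 WHERE id IN (SELECT id FROM devices);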

The real-use-case plv8 SPs are more complicated: they FOR-LOOP through ALL
OID objects of the 'data' array, check whether each one is the one being
looked for, and update its value and/or remove it and/or add a new one if
necessary.

Since the number of devices in the DB has increased from several hundred to
40K or even 70K, and the number of OID+instance combinations within a config
has also increased from several hundred to ~1K and sometimes up to 10K, we
have started facing slowness in bulk updates/searches (especially global
ones that update ALL devices).

In order to get rid of the FOR-LOOP step over each configuration, I converted
the 'data' node from an array to an object (key-value model), something like:

{
    "_id": 20818132,
    "type": "Modem",
    "data": {
        "1.3.6.1.4.1..3.10.2.2.25.4.1.40": {
            "24": "null"
        },
        "1.3.6.1.4.1..3.5.10.1.86": {
            "0": "562"
        },
        "1.3.6.1.4.1..3.5.10.3.92.4.1": {
            "0": "0"
        },
        "1.3.6.1.4.1..3.10.2.2.25.4.1.43": {
            "24": "vlan24",
            "25": "vlan25"
        }
    }
}

Now, in order to get a concrete OID (e.g. "1.3.6.1.4.1..3.10.2.2.25.4.1.43")
and/or one of its instances, I do 1-2 *O(1)* operations instead of *O(n)*,
and it became a bit faster. However, after I changed the column type from
json to jsonb I got a lot of memory issues with the plv8 stored procedures,
so now the question is:

*What are the best practices for storing such data and serving such use
cases in the DB?* Taking into consideration the following:
- Bulk and global updates are frequent enough (a user-driven operation) -
several times per week - and they take a long time (several minutes), which
is an annoying user experience.
- Consulting some OIDs from one concrete config is a medium-frequency use
case.
- Finding ALL devices that have some specific OID (SNMP parameter) set to a
specific value is a medium-frequency use case.
- Consulting (reading) a configuration for a specific device as a whole
document is a frequent use case (it is sent to the device as JSON or as
converted CSV, it is sent in a modified JSON format to other utilities,
etc.).
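
For the read-oriented cases, with the key-value layout the queries I have in
mind look roughly like this (a sketch: it uses the public.configuration table
from above, assumes the config column is jsonb, and the index name is made
up):

-- one OID/instance from one device's config:
SELECT config #>> '{data,1.3.6.1.4.1..3.10.2.2.25.4.1.43,24}'
  FROM configuration
 WHERE id = 20818132;

-- all devices where that OID/instance is set to a given value:
SELECT id
  FROM configuration
 WHERE config @> '{"data": {"1.3.6.1.4.1..3.10.2.2.25.4.1.43": {"24": "vlan24"}}}';

-- a GIN index can back the containment search above:
CREATE INDEX configuration_config_gin
    ON configuration USING gin (config jsonb_path_ops);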

One suggestion from others is to move ALL configurations to a simple plain
relational table:

CREATE TABLE public.configuration_plain(
  device_id integer,
  oid text,
  instance text,
  value text)

Looking like:

 id       | oid                         | instance | value
----------+-----------------------------+----------+-------
 20818132 | 1.3.6.1.4.1..2.13           | 0        | VSAT
 20818132 | 1.3.6.1.4.1..3.10.2.2.10.15 | 0        | 0
 20818132 | 1.3.6.1.4.1..3.10.2.2.10.17 | 0        | 0
 20818132 | 1.3.6.1.4.1..3.10.2.2.10.18 | 0        | 1
 20818132 | 1.3.6.1.4.1..3.10.2.2.10.19 | 0        | 2
 20818132 | 1.3.6.1.4.1..3.10.
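
With that plain layout, the use cases above map to fairly ordinary SQL,
roughly along these lines (a sketch; the index names are made up):

-- one OID/instance for one device:
SELECT value
  FROM configuration_plain
 WHERE device_id = 20818132
   AND oid = '1.3.6.1.4.1..3.10.2.2.25.4.1.43'
   AND instance = '24';

-- all devices that have a given OID/instance set to a given value:
SELECT device_id
  FROM configuration_plain
 WHERE oid = '1.3.6.1.4.1..3.10.2.2.25.4.1.43'
   AND instance = '24'
   AND value = 'vlan24';

-- bulk/global update of one parameter across ALL devices:
UPDATE configuration_plain
   SET value = 'vlan24'
 WHERE oid = '1.3.6.1.4.1..3.10.2.2.25.4.1.43'
   AND instance = '24';

-- plausible supporting indexes:
CREATE UNIQUE INDEX configuration_plain_dev_oid_inst
    ON configuration_plain (device_id, oid, instance);
CREATE INDEX configuration_plain_oid_inst_value
    ON configuration_plain (oid, instance, value);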

Re: How to model data in DB (PostgreSQL) for SNMP-like multiple configurations

2019-03-08 Thread Alban Hertroys
Is there a reason not to use a relational model instead of json(b) here? I
think that is in fact considered best practice.


Re: How to model data in DB (PostgreSQL) for SNMP-like multiple configurations

2019-03-08 Thread Alexandru Lazarev
For now I do not see a strong reason, but I inherited this project from
other developers. Originally there was MongoDB and the structure was more
complex, with SNMP-like nested tables using OID.Instance1.Instance2.Instance3,
which in JSON looked like:
{
    "1.3.6.1.4.1..3.10.2.2.25.4.1.43.1": {
        "1": {
            "24": "vlan24",
            "25": "vlan25"
        },
        "2": {
            "24": "127.0.0.1",
            "25": "8.8.8.8"
        }
    }
}

Here we have a table inside a table - how would this be modeled relationally,
with separate tables and JOINs only?
I am not excluding that I'll have such a requirement in the future.
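
One way it might still flatten into a single plain table is to key each row
by the full instance path instead of a single instance - only a sketch of
the idea, with made-up names, not something from this project:

CREATE TABLE public.configuration_nested(
  device_id     integer,
  oid           text,
  instance_path text[],   -- e.g. '{1,24}' for ...43.1 -> "1" -> "24"
  value         text);

-- the vlan/IP example above would then become rows like:
--   (dev, '1.3.6.1.4.1..3.10.2.2.25.4.1.43.1', '{1,24}', 'vlan24')
--   (dev, '1.3.6.1.4.1..3.10.2.2.25.4.1.43.1', '{2,24}', '127.0.0.1')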

The other reason for staying with a single JSON document is that devices
request their config, and some other tools request device configs, as a
single document/file - this creates a bit of overhead for composing the
document in JSON, XML or CSV format from a relational table (I understand it
is doable, but...).
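
For what it's worth, reassembling a whole per-device document from the plain
table can be done in one query with the jsonb aggregate functions, roughly
like this (a sketch based on the configuration_plain layout from my previous
mail):

-- rebuild { "_id": ..., "data": { oid: { instance: value, ... }, ... } }
SELECT jsonb_build_object('_id', t.device_id,
                          'data', jsonb_object_agg(t.oid, t.inst_map)) AS config
  FROM (SELECT device_id, oid,
               jsonb_object_agg(instance, value) AS inst_map
          FROM configuration_plain
         WHERE device_id = 20818132
         GROUP BY device_id, oid) t
 GROUP BY t.device_id;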

BTW, from the PG documentation
(https://www.postgresql.org/docs/9.6/datatype-json.html):

"8.14.2. Designing JSON documents effectively

Representing data as JSON can be considerably more flexible than the
traditional relational data model, which is compelling in environments
where requirements are fluid. It is quite possible for both approaches to
co-exist and complement each other within the same application. However,
even for applications where maximal flexibility is desired, it is still
recommended that JSON documents have a somewhat fixed structure. The
structure is typically unenforced (though enforcing some business rules
declaratively is possible), but having a predictable structure makes it
easier to write queries that usefully summarize a set of "documents"
(datums) in a table.

JSON data is subject to the same concurrency-control considerations as any
other data type when stored in a table. Although storing large documents is
practicable, keep in mind that any update acquires a row-level lock on the
whole row. Consider limiting JSON documents to a manageable size in order to
decrease lock contention among updating transactions. Ideally, JSON
documents should each represent an atomic datum that business rules dictate
cannot reasonably be further subdivided into smaller datums that could be
modified independently."




Postgres 10 temp tablespace question

2019-03-08 Thread Joseph Dunleavy
I am building a multi-tenant deployment with multiple databases - 1 tenant
per database.

I would like to be able to dedicate a specific temp tablespace to a specific
database or user/schema.

I understand how to define temp_tablespaces in postgresql.conf.


I can't find anything in the documentation on how to dedicate a specific 
temp_tablespaces to a specific database, user or schema.

I also thought maybe I could create a trigger on logon to set a specific temp 
tablespace per user, but I can't do that either.


Is it possible in PostgreSQL to dedicate a specific temp tablespace to a
specific database or user/schema?
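
In other words, what I'm hoping exists is something along these lines (just
a sketch of the intent; the object names are made up):

ALTER DATABASE tenant1_db SET temp_tablespaces = 'tenant1_temp_ts';
-- or per role:
ALTER ROLE tenant1_user SET temp_tablespaces = 'tenant1_temp_ts';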


Thank you for the help.

JD





DDL for database creation

2019-03-08 Thread Julie Nishimura
Hello psql friends,
Is there a way to reconstruct DDL for creation of existing databases? For 
example, I need to create dev environment for some (not all) of our existing 
databases, and I would like to keep the same encoding, connection limit (if 
anything was specified), etc. Is there a way to create this DDL?

Thank you!


Re: DDL for database creation

2019-03-08 Thread Bruce Momjian
On Sat, Mar  9, 2019 at 02:13:57AM +, Julie Nishimura wrote:
> Hello psql friends,
> Is there a way to reconstruct DDL for creation of existing databases? For
> example, I need to create dev environment for some (not all) of our existing
> databases, and I would like to keep the same encoding, connection limit (if
> anything was specified), etc. Is there a way to create this DDL?

pg_dump --schema-only maybe?

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+  Ancient Roman grave inscription +



Re: DDL for database creation

2019-03-08 Thread Tom Lane
Bruce Momjian  writes:
> On Sat, Mar  9, 2019 at 02:13:57AM +, Julie Nishimura wrote:
>> Is there a way to reconstruct DDL for creation of existing databases? For
>> example, I need to create dev environment for some (not all) of our existing
>> databases, and I would like to keep the same encoding, connection limit (if
>> anything was specified), etc. Is there a way to create this DDL?

> pg_dump --schema-only maybe?

The specific settings the OP mentions are database-level settings;
so she'd need pg_dumpall (perhaps with -g), or pg_dump with -C.

regards, tom lane



Re: DDL for database creation

2019-03-08 Thread Ron

On 3/8/19 9:20 PM, Tom Lane wrote:
> Bruce Momjian writes:
>> On Sat, Mar  9, 2019 at 02:13:57AM +, Julie Nishimura wrote:
>>> Is there a way to reconstruct DDL for creation of existing databases? For
>>> example, I need to create dev environment for some (not all) of our existing
>>> databases, and I would like to keep the same encoding, connection limit (if
>>> anything was specified), etc. Is there a way to create this DDL?
>>
>> pg_dump --schema-only maybe?
>
> The specific settings the OP mentions are database-level settings;
> so she'd need pg_dumpall (perhaps with -g), or pg_dump with -C.


Why not "pg_dumpall --schema-only" in combination with "pg_dumpall -g"?
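
In other words, roughly (an untested sketch; the output file names are made
up):

pg_dumpall -g > globals.sql             # global objects: roles, tablespaces
pg_dumpall --schema-only > schemas.sql  # object definitions for all databases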

https://www.postgresql.org/docs/9.6/app-pg-dumpall.html
-s
--schema-only
    Dump only the object definitions (schema), not data.


--
Angular momentum makes the world go 'round.