Remove default privilege from DB

2018-02-12 Thread Durumdara
Hello!

I need to remove the default privileges from a database.
After that, some of them remain.

          Default access privileges
   Owner    | Schema |   Type   | Access privileges
------------+--------+----------+-------------------
 postgres   |        | function | =X/postgres
 postgres   |        | sequence |
 postgres   |        | table    |
 postgres   |        | type     | =U/postgres
 suser      |        | function | =X/suser
 suser      |        | sequence |
 suser      |        | table    |
 suser      |        | type     | =U/suser

How can I completely remove the last items?
Could you send me an example?

Thank you for it!

Best regards
  DD


RE: Remove default privilege from DB

2018-02-12 Thread Charles Clavadetscher
Hi

 

From: Durumdara [mailto:durumd...@gmail.com]
Sent: Monday, 12 February 2018 09:32
To: Postgres General
Subject: Remove default privilege from DB

 

> Hello!
>
> I need to remove default privileges from a Database.
> After that some of them remains.
>
>           Default access privileges
>    Owner    | Schema |   Type   | Access privileges
> ------------+--------+----------+-------------------
>  postgres   |        | function | =X/postgres
>  postgres   |        | sequence |
>  postgres   |        | table    |
>  postgres   |        | type     | =U/postgres
>  suser      |        | function | =X/suser
>  suser      |        | sequence |
>  suser      |        | table    |
>  suser      |        | type     | =U/suser
>
> How to completely remove the last items?
> Could you send me one example?

I assume

ALTER DEFAULT PRIVILEGES FOR ROLE suser REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES FOR ROLE suser REVOKE USAGE ON TYPES FROM PUBLIC;
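
A fuller sketch, based on the \ddp output above (the GRANT statements for tables and sequences are an assumption: an empty entry in the default-privilege display usually means the owner's own default privileges were revoked earlier, and re-granting them restores the built-in default so the row should disappear):

```sql
-- Revoke the PUBLIC grants that show up as =X/suser and =U/suser
-- (note the plural object names FUNCTIONS and TYPES):
ALTER DEFAULT PRIVILEGES FOR ROLE suser REVOKE EXECUTE ON FUNCTIONS FROM PUBLIC;
ALTER DEFAULT PRIVILEGES FOR ROLE suser REVOKE USAGE ON TYPES FROM PUBLIC;

-- The empty "table" and "sequence" entries suggest the owner's own default
-- privileges were revoked; granting them back restores the built-in
-- defaults, after which those rows should no longer appear in \ddp:
ALTER DEFAULT PRIVILEGES FOR ROLE suser GRANT ALL ON TABLES TO suser;
ALTER DEFAULT PRIVILEGES FOR ROLE suser GRANT ALL ON SEQUENCES TO suser;
```

Verify with \ddp in psql afterwards.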

 

Bye

Charles

 

> Thank you for it!
>
> Best regards
>   DD



Re: execute block like Firebird does

2018-02-12 Thread PegoraroF10
We can change all those execute blocks, but it would be a huge amount of work
to rewrite them all.
Today, just as a test, I replaced one Firebird execute block with a Postgres
CTE. It worked, but I spent 40 minutes on it, and the problem is that we have
hundreds of these execute blocks and for each one we need to rethink,
rewrite, and retest.

When we changed all our triggers and procedures from Firebird to PostGres we
only needed to rewrite "first 1" to "limit 1" and "starting with" to "like",
for example. So a simple search and replace solved it.

And now, if PostGres doesn't have something similar to Execute Block, we have
to change lots of things. As you may know, changing a function body into a
CTE is not trivial.

Another approach to solve my problem would be a function that receives
dynamic SQL, runs it, and returns XML or JSON, and on the client side I
convert that back to a recordset. Is that possible?



--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html



Re: New Copy Formats - avro/orc/parquet

2018-02-12 Thread Magnus Hagander
On Sun, Feb 11, 2018 at 11:48 PM, Tom Lane  wrote:

> Andres Freund  writes:
> > So, I think making COPY extensible would be quite beneficial. I'm
> > however quite doubtful that we want to add core code to handle all of
> > the above. I think we should make the COPY input/output formatting
> > extensible by extensions.
>
> +1.  I can't see carrying code for these formats in-core, but I've
> no objection to making it possible for someone else to maintain them.
>

+1. And bonus points if an API can also be defined so that such an
extension's parsing also becomes useful to file_fdw automatically (or at
least optionally).

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: New Copy Formats - avro/orc/parquet

2018-02-12 Thread Tom Lane
Magnus Hagander  writes:
> +1. And bonus points if an API can also be defined so such an extension
> parsing also becomes useful to file_fdw automatically (or at least
> optionally).

Hm, well, file_fdw already goes through COPY FROM, so it seems like it'd
almost just work.  "Almost" because there'd need to be a way for it to
support the appropriate options.  So this means that whatever the
mechanism is for extension-specific options, it has to be introspectable
enough for file_fdw.c to understand what to expose as FDW options.

regards, tom lane



Trying to test the behavior of a parallel query with a change in the transaction isolation mode

2018-02-12 Thread Luis Carril
Hi all,



 I am trying to verify whether, during a transaction, a parallel plan can
be generated but later only a sequential one is executed.

 The documentation says the following (in 
https://www.postgresql.org/docs/9.6/static/when-can-parallel-query-be-used.html):

---
Even when parallel query plan is generated for a particular query, there are 
several circumstances under which it will be impossible to execute that plan in 
parallel at execution time. If this occurs, the leader will execute the portion 
of the plan below the Gather node entirely by itself, almost as if the Gather 
node were not present. This will happen if any of the following conditions are 
met:
 [...]

The transaction isolation level is serializable. This situation does not 
normally arise, because parallel query plans are not generated when the 
transaction isolation level is serializable. However, it can happen if the 
transaction isolation level is changed to serializable after the plan is 
generated and before it is executed.

---


 I have tried the following:

BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED, READ ONLY;
EXPLAIN (COSTS OFF) SELECT avg(a) FROM parallel_big;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY;
EXPLAIN ANALYZE SELECT avg(a) FROM parallel_big;
COMMIT;


 But Postgres complains that after the first SELECT (even if it is inside
an EXPLAIN) the isolation level cannot be changed, so the transaction is
aborted and the SELECT is never executed (even sequentially).


 Is there any way to test the possible behavior described in the 
documentation?


Thank you very much

Luis M Carril




Re: Trying to test the behavior of a parallel query with a change in the transaction isolation mode

2018-02-12 Thread Alvaro Herrera
Luis Carril wrote:

> The transaction isolation level is serializable. This situation does not 
> normally arise, because parallel query plans are not generated when the 
> transaction isolation level is serializable. However, it can happen if the 
> transaction isolation level is changed to serializable after the plan is 
> generated and before it is executed.

> BEGIN;
> SET TRANSACTION ISOLATION LEVEL READ COMMITTED, READ ONLY;
> EXPLAIN (COSTS OFF) SELECT avg(a) FROM parallel_big;
> SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, READ ONLY;
> EXPLAIN ANALYZE SELECT avg(a) FROM parallel_big;
> COMMIT;
> 
> 
>  But complains that after the first SELECT (even if it is in an EXPLAIN) 
> the isolation level cannot be changed, so the transaction is aborted  and the 
> SELECT is never executed (even sequentially).
> 
> 
>  Is there any way to test the possible behavior described in the 
> documentation?

I think you would do a PREPARE in a regular transaction, then open a
transaction changing the isolation level to serializable and try the
EXPLAIN EXECUTE there.
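
Something along these lines (untested; whether the cached parallel plan is actually reused also depends on the plan cache's custom-vs-generic plan choice, since a fresh custom plan built inside the serializable transaction would not be parallel):

```sql
-- Plan the query at the default isolation level, where parallel plans
-- are allowed:
PREPARE q AS SELECT avg(a) FROM parallel_big;

-- Execute it under serializable; if the cached parallel plan is reused,
-- the leader should run the portion below the Gather node by itself:
BEGIN ISOLATION LEVEL SERIALIZABLE, READ ONLY;
EXPLAIN (ANALYZE) EXECUTE q;
COMMIT;
```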

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Barman 2.3 errors

2018-02-12 Thread GALLIANO Nicolas
Hi,

I'm trying to back up a remote DB (9.6.6) using barman 2.3, but the backup
fails to start.
In barman.log I have errors like these:

2018-02-12 16:18:23,852 [57691] barman.server ERROR: Check 'replication slot' 
failed for server 'tcgepg96ddm'
2018-02-12 16:18:23,857 [57691] barman.server ERROR: Check 'receive-wal 
running' failed for server 'tcgepg96ddm'
2018-02-12 16:18:30,815 [57779] barman.wal_archiver INFO: No xlog segments 
found from streaming for tcgepg96ddm.
2018-02-12 16:18:30,824 [57781] barman.server INFO: Starting receive-wal for 
server tcgepg96ddm
2018-02-12 16:18:30,905 [57781] barman.wal_archiver INFO: Synchronous WAL 
streaming for barman_receive_wal: True
2018-02-12 16:18:30,906 [57781] barman.wal_archiver INFO: Activating WAL 
archiving through streaming protocol
2018-02-12 16:18:30,922 [57781] barman.cli ERROR: 'utf8' codec can't decode 
byte 0xe0 in position 47: invalid continuation byte
See log file for more details.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/barman/cli.py", line 1126, in main
p.dispatch(pre_call=global_config)
  File "/usr/lib/python2.7/site-packages/argh/helpers.py", line 55, in dispatch
return dispatch(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/argh/dispatching.py", line 174, in 
dispatch
for line in lines:
  File "/usr/lib/python2.7/site-packages/argh/dispatching.py", line 277, in 
_execute_command
for line in result:
  File "/usr/lib/python2.7/site-packages/argh/dispatching.py", line 231, in 
_call
result = function(namespace_obj)
  File "/usr/lib/python2.7/site-packages/barman/cli.py", line 792, in 
receive_wal
server.receive_wal(reset=args.reset)
  File "/usr/lib/python2.7/site-packages/barman/server.py", line 1708, in 
receive_wal
archiver.receive_wal(reset)
  File "/usr/lib/python2.7/site-packages/barman/wal_archiver.py", line 751, in 
receive_wal
receive.execute()
  File "/usr/lib/python2.7/site-packages/barman/command_wrappers.py", line 418, 
in execute
self.pipe_processor_loop(processors)
  File "/usr/lib/python2.7/site-packages/barman/command_wrappers.py", line 484, 
in pipe_processor_loop
eof = stream.process()
  File "/usr/lib/python2.7/site-packages/barman/command_wrappers.py", line 82, 
in process
self._buf += data.decode('utf-8')
  File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe0 in position 47: invalid 
continuation byte
2018-02-12 16:18:32,373 [58491] barman.server ERROR: Check 'WAL archive' failed 
for server 'tcgepg96ddm'
2018-02-12 16:18:32,468 [58491] barman.wal_archiver INFO: Synchronous WAL 
streaming for barman_receive_wal: True
2018-02-12 16:18:32,485 [58491] barman.server ERROR: Check 'receive-wal 
running' failed for server 'tcgepg96ddm'


My remote DB server conf is:

[myclient]
conninfo = host=myclient user=barman dbname=dbclient
streaming_conninfo = host= myclient user=streaming_barman
backup_method = postgres
streaming_backup_name = barman_streaming_backup
streaming_archiver = on
slot_name = barman

A barman check gives :

$barman check tcgepg96ddm
Server tcgepg96ddm:
WAL archive: FAILED (please make sure WAL shipping is setup)
PostgreSQL: OK
is_superuser: OK
PostgreSQL streaming: OK
wal_level: OK
replication slot: OK
directories: OK
retention policy settings: OK
backup maximum age: OK (no last_backup_maximum_age provided)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 0 backups, expected at least 
0)
pg_basebackup: OK
pg_basebackup compatible: OK
pg_basebackup supports tablespaces mapping: OK
pg_receivexlog: OK
pg_receivexlog compatible: OK
receive-wal running: FAILED (See the Barman log file for more details)
archiver errors: OK


Have you ever had the same problem with barman?
Thanks for your help.
Nicolas


Re: execute block like Firebird does

2018-02-12 Thread Adrian Klaver

On 02/12/2018 05:48 AM, PegoraroF10 wrote:

> We can change all those execute blocks, but it would be a huge work if we
> need to rewrite them all.
> Today, just for a test, I replaced a Firebird execute block to a Postgres
> CTE. OK, worked but I spend 40 minutes and the problem is that we have
> hundreds of these execute blocks and on each one we need to rethink,
> rewrite, retest.
>
> When we changed all our triggers and procedures from Firebird to PostGres we
> needed only to rewrite "first 1" to "limit 1", "starting with" to "like" as
> examples. So, just a Search and Replace will do solve it.
>
> And now if PostGres doesn´t have something similar to Execute Block we have
> to change lots of things. As you may know, change a function body to a CTE
> is not so trivial.


I do not see a direct correspondence between Execute Block and anything 
in Postgres. This means one way or another you will be rewriting code.




> Another approach to solve my problem would be a function that receives a
> dynamic SQL, runs it and returns a XML or JSON and on client side I convert
> that XML back to a recordset. Is that possible ?


Why not just return a recordset directly?

https://www.postgresql.org/docs/10/static/plpgsql-overview.html#PLPGSQL-ARGS-RESULTS

"PL/pgSQL functions can also be declared to return a “set” (or table) of 
any data type that can be returned as a single instance. Such a function 
generates its output by executing RETURN NEXT for each desired element 
of the result set, or by using RETURN QUERY to output the result of 
evaluating a query."
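
For instance, a minimal set-returning function might look like this (the table and column names here are made up for illustration):

```sql
-- A plpgsql function that returns a set of rows via RETURN QUERY:
CREATE OR REPLACE FUNCTION report_rows(p_status integer)
RETURNS TABLE (id integer, name text) AS $$
BEGIN
  RETURN QUERY
    SELECT c.id, c.name
    FROM customers AS c          -- hypothetical table
    WHERE c.status = p_status;
END;
$$ LANGUAGE plpgsql;

-- The client then reads it like any other query:
-- SELECT * FROM report_rows(1);
```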


Evaluating what you are trying to do would be helped by a complete 
working example of one of your Execute Blocks.











--
Adrian Klaver
adrian.kla...@aklaver.com



Re: execute block like Firebird does

2018-02-12 Thread David G. Johnston
On Mon, Feb 12, 2018 at 6:48 AM, PegoraroF10  wrote:

> Another approach to solve my problem would be a function that receives a
> dynamic SQL, runs it and returns a XML or JSON and on client side I convert
> that XML back to a recordset. Is that possible ?


Yes, you can pass "text" SQL into a pl/pgsql function and "EXECUTE" it.
That text must be plain SQL though, not pl/pgsql.
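
A sketch of that idea (the function name is made up, and executing caller-supplied SQL this way is an obvious SQL-injection surface):

```sql
-- Run an arbitrary SQL string and return its rows aggregated as JSON:
CREATE OR REPLACE FUNCTION run_dynamic(p_sql text)
RETURNS json AS $$
DECLARE
  result json;
BEGIN
  -- Wrap the caller's query in a subselect and aggregate each row to JSON;
  -- json_agg returns NULL when the query yields no rows.
  EXECUTE format('SELECT json_agg(t) FROM (%s) AS t', p_sql) INTO result;
  RETURN result;
END;
$$ LANGUAGE plpgsql;

-- Example call:
-- SELECT run_dynamic('select 1 as a, ''x'' as b');
```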

Converting pl/pgsql into plain SQL and executing it as a CTE seems like an
over-solution.  It should be reasonably possible to rewrite the "execute
block" as a "create function" and then modify your clients to send "select *
from function();" instead of "execute block ...".

If I were you I'd even be curious enough to see if maybe there is an
external third-party "pl/firebird" language extension out there which would
let you comfortably copy-paste the block text into the function body with
minimal or no editing.

David J.

p.s. reading PostGres is hard on our (mine at least) eyes.  It's either
Postgres, or PostgreSQL - neither with a capital G.


Re: execute block like Firebird does

2018-02-12 Thread PegoraroF10
I know I need to think of a different approach to what execute blocks do on
Firebird.
What I was trying to do was just leave them the way they were written,
because their results are correct, and replace them with a new approach when
there is more time.

But, if that way cannot be used, I'll probably write some hundreds of
functions right now and use them, because that way works for sure. And then,
with more time to spend, I'll replace them one by one.








Re: execute block like Firebird does

2018-02-12 Thread Adrian Klaver

On 02/12/2018 08:57 AM, PegoraroF10 wrote:

> I know I need to think a different approach of what execute blocks does on
> Firebird.
> What I was trying to was just let them the way they were wrote, because
> their results are correct and with more time replace them to a new way.


That may not be necessary. As someone mentioned upstream, you may be able
to replace the EXECUTE BLOCK with CREATE OR REPLACE FUNCTION and then do
some clean-up/additions to the enclosed code. To be more certain about
this we would need to see a complete example of one of your EXECUTE BLOCKs.
If that is not possible for security reasons, then a made-up example that
does the same thing.




> But, if that way cannot be used, I´ll probably write some hundred of
> functions right now and use them, because this way works for sure. And then,
> with more time to spend, I´ll replace them one by one.


Why? Once you create the functions and they do the task, what is the
purpose of replacing them, and more to the point, how would you?













--
Adrian Klaver
adrian.kla...@aklaver.com



Re: persistent read cache

2018-02-12 Thread Steve Atkins

> On Feb 11, 2018, at 5:14 PM, Sand Stone  wrote:
> 
> 
> Hi. I wonder if there is such a thing or extension in the PG world.
> 
> Here is my use case. I am using PG (PG10 to be more specific) in a
> cloud VM environment. The tables are stored in RAID0 managed SSD
> backed attached storage. Depending on the VM I am using, I usually
> have 256GB local SSD unused.
> 
> I wonder if PG could leverage this local SSD as a read (page/block)
> cache, to complement/extend  the DRAM by used by shared_buffer today.

It seems like something that PostgreSQL could take advantage of, but
it's probably the wrong layer to implement it. If your VM infrastructure
doesn't have any way to use it directly, maybe you could do it at the
drive / filesystem level with something like bcache, lvmcache or
enhanceio?

Adding that sort of complexity to something that needs solid data
integrity makes me nervous, but those solutions have been in the
field for years.

Cheers,
  Steve





Re: execute block like Firebird does

2018-02-12 Thread Daniel Verite
PegoraroF10 wrote:

> Another approach to solve my problem would be a function that receives a
> dynamic SQL, runs it and returns a XML or JSON and on client side I convert
> that XML back to a recordset. Is that possible ?

Yet another tool that can be handy for transferring polymorphic
results is a cursor, via plpgsql REFCURSORs:

https://www.postgresql.org/docs/10/static/plpgsql-cursors.html

Here's an actual example with a DO block:

  BEGIN;

  DO $$
  DECLARE
    c1 refcursor := 'cursor1';
    c2 refcursor := 'cursor2';
  BEGIN
    OPEN c1 FOR select 'result #1 column 1', 'result #1 column 2';
    OPEN c2 FOR select 'result #2 column 1', 'result #2 column 2';
  END;
  $$ LANGUAGE plpgsql;

  FETCH cursor1;
  FETCH cursor2;

  COMMIT;

The interface is a bit weird because the value of the refcursor variable
is the name of the underlying SQL cursor object. The name can
also be auto-generated by Postgres; the above code uses fixed
names instead. Anyway, that cursor, once instantiated in the
DO block, can be fetched from with FETCH statements initiated
client-side or by other server-side code.

The above code will retrieve two independent result sets:

postgres=# FETCH cursor1;
      ?column?      |      ?column?
--------------------+--------------------
 result #1 column 1 | result #1 column 2
(1 row)

postgres=# FETCH cursor2;
      ?column?      |      ?column?
--------------------+--------------------
 result #2 column 1 | result #2 column 2
(1 row)

These cursors disappear at transaction end, or they can be explicitly
closed with CLOSE statements.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite



Re: execute block like Firebird does

2018-02-12 Thread PegoraroF10
Explaining my problem better: all the reports our customers use are
customizable. So, when a customer runs a report it just runs all the SQL
inside that report, whether plain SQL or Execute Blocks. But because they are
completely customizable, one customer has 80 reports with 300 Execute Blocks
on them and another one has just 10 reports with 100 execute blocks, and they
can be used for different purposes. If, instead of rewriting them, we just
create a function for each one, imagine that Customer A will have 300
functions and Customer B will have only 100. And worse, some of those
functions have the same header but a different body. Can you imagine a mess
like that?

95% of those execute blocks are simple and can be replaced by a well-written
SQL statement or a CTE. But we have hundreds of these blocks, and we need to
not just recompile them but rewrite them using a different approach.

Here are some examples of our execute blocks.
This one is easy to understand and can be easily replaced.
execute block returns(CargaTotal Moeda) as
  declare variable Aluno_ID I32;
  declare variable Turma_ID I32;
  declare variable MateriasIn t1;
  declare variable Presente I16;
  declare variable JustificativaHistorico_ID I32;
  declare variable qtdeAulas i32;
  declare variable qtdePresencas i32;
begin
  select Pessoa_Id From mov_Contrato Where Contrato_Id = %d Into :Aluno_Id;
  Turma_Id = %d;
  qtdeAulas = 0;
  qtdePresencas = 0;
  for select Presente, JustificativaHistorico_ID from col_Aula A inner join
col_Frequencia F on F.Aula_ID = A.Aula_ID where
a.Materia_Id in (select distinct a.Materia_Id from col_aula a where
a.Turma_Id = :Turma_Id) and f.Aluno_ID = :Aluno_Id
  into :Presente, :JustificativaHistorico_ID do begin
qtdeAulas = :qtdeAulas + 1;
if ((:Presente=1) or (:JustificativaHistorico_ID is not Null)) then
  qtdePresencas = :qtdePresencas + 1;
  end
  if (:qtdeAulas > 0) then
CargaTotal = (Cast(:qtdePresencas as Moeda) / Cast(:qtdeAulas as Moeda)
* 100.00);
  else
CargaTotal = 0;
  Suspend;
end

But other ones need to be read carefully to be translated.

execute block returns (
  Curso_ID type of column col_Curso.Curso_ID,
  Turma_ID type of column col_Turma.Turma_ID,
  Cursotype of column col_Curso.Descricao,
  Turmatype of column col_Turma.Nome,
  IniciandoLogico,
  PeriodoSequencia I32,
  Periodo  T50,
  OrdemI32,
  DescricaoSemana  varchar(15),
  SemanaInicio type of column col_Aula.Data,
  SemanaFimtype of column col_Aula.Data,
  AulaData Data,
  Contrato_ID  type of column mov_Contrato.Contrato_ID,
  Contrato type of column mov_Contrato.NumeroContrato,
  Aluno_ID type of column rel_AlunoTurma.Aluno_ID,
  AlunoDaAula  type of column rel_AlunoTurma.lkAluno,
  StatusAtual  type of column mov_Contrato.lkStatus,
  StatusNoPeriodo  type of column mov_Contrato.lkStatus,
  Presente type of column col_Frequencia.Presente
) as
  declare variable Semanas  I32 = %0:d;
  declare variable II32;
  declare variable tmpData  Data;
  declare variable PrevIni  Data = '%1:s'; --Execute block doesn´t
have IN Param, so we change this variable using Format();
  declare variable PrevFim  Data = '%2:s'; --This one too.
  declare variable HoraInicio   VarChar(6) = ' 00:00';
  declare variable HoraFinalVarChar(6) = ' 23:59';
  declare variable PeriodoManha type of column sys_LookUp.Descricao =
'Matutino';
  declare variable PeriodoTarde type of column sys_LookUp.Descricao =
'Vespertino';
  declare variable PeriodoNoite type of column sys_LookUp.Descricao =
'Noturno';
  declare variable StatusPauta  Memo;
  declare variable StatusDesistente I32;
  declare variable sqlTemp   Memo;
  declare variable Turmas Memo = ':ListaTurma';
  declare variable sqlPeriodo Memo = 'select :Numero, '':Descricao'',
'':DataIni'', '':DataFim'' from rdb$database:where';
  declare variable sqlAulas Memo;
  declare variable sqlLista Memo = 'select distinct
col_Curso.Curso_ID,
col_Curso.Descricao,
col_Turma.Turma_ID,
col_Turma.Nome,
case when extract(hour from col_Aula.Data) < 12 then 1 when extract(hour
from col_Aula.Data) between 12 and 18 then 2 when extract(hour from
col_Aula.Data) > 18 then 3 end,
case when extract(hour from col_Aula.Data) < 12 then '':PeriodoManha''
when extract(hour from col_Aula.Data) between 12 and 18 then
'':PeriodoTarde'' when extract(hour from col_Aula.Data) > 18 then
'':PeriodoNoite'' end
  from
col_Turma inner join col_Curso using(Curso_ID) inner join col_Aula
using(Turma_ID)
  where 1=1
and col_Turma.Turma_ID in (:Turmas)
and col_Aula.Data between '':PrevIni'' and '':PrevFim''
  order by
col_Curso.Descricao,/*Iniciando  */ 5,  /* PeriodoSequencia */6,
col_Turma.Nome';
begin
  I= 2;
  tmpData = dateADD(-1 Day to :prevIni);
  sqlAulas = '';
  while (:I < :Semanas + 2) do begi

Re: execute block like Firebird does

2018-02-12 Thread David G. Johnston
On Monday, February 12, 2018, PegoraroF10  wrote:

> Explaining better my problem. All reports our customer use are
> customizable.
> So, when a customer runs a report it just runs all SQLs that are inside
> that
> report, being SQL or Execute Blocks. But because they are completelly
> customizable, one customer has 80 reports with 300 Execute Blocks on them
> and other one has just 10 reports with 100 execute blocks and they can be
> used to different purposes.
>

I love PostgreSQL...but why are you migrating away from something with this
extent of dependency on Firebird?

I'd probably be trying to figure out some kind of hybrid platform here and
not a clean cut-over.

David J.


Re: execute block like Firebird does

2018-02-12 Thread Adrian Klaver

On 02/12/2018 10:02 AM, PegoraroF10 wrote:

> Explaining better my problem. All reports our customer use are customizable.
> So, when a customer runs a report it just runs all SQLs that are inside that
> report, being SQL or Execute Blocks. But because they are completelly
> customizable, one customer has 80 reports with 300 Execute Blocks on them
> and other one has just 10 reports with 100 execute blocks and they can be
> used to different purposes. If, instead of rewriting them, just create a
> function on each one, imagine that Customer A will have 300 hundred
> functions and Customer B will have only 100. And worse, some of those
> functions have same header but different body. Can you imagine a mess like
> that ?


I can, but I am not sure why it would have to happen. While Postgres supports
function overloading, there is no requirement that you use it. Since you are
in charge of naming the functions, you could create distinct names for each
function, for instance by appending the customer name or id.




> 95% of those execute blocks are simple and can be replaced by a well done
> SQL or a CTE. But we have hundreds of these blocks and we need to not just
> recompile them but rewrite them using a different approach.


That is going to happen whatever path you choose. I would start with
some of the easier blocks and see how difficult it is to transform them
to pl/pgsql. If it is not that bad (from below I would think not) then
you can knock out the 95% in a reasonable time. Then you can turn your
attention to the 5%.


For below see:

https://www.postgresql.org/docs/10/static/plpgsql-control-structures.html#PLPGSQL-RECORDS-ITERATING
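
For illustration, the first ("easy") execute block from the message above might translate to pl/pgsql roughly as follows. This is a sketch only, untested: the Firebird domain types I32 and Moeda are mapped to integer and numeric here, and the table definitions are assumed from the original.

```sql
CREATE OR REPLACE FUNCTION carga_total(p_contrato_id integer, p_turma_id integer)
RETURNS numeric AS $$
DECLARE
  v_aluno_id       integer;
  v_qtde_aulas     integer := 0;
  v_qtde_presencas integer := 0;
  r                record;
BEGIN
  SELECT Pessoa_Id INTO v_aluno_id
  FROM mov_Contrato WHERE Contrato_Id = p_contrato_id;

  -- The Firebird "for select ... into ... do" loop becomes FOR ... IN ... LOOP:
  FOR r IN
    SELECT f.Presente, f.JustificativaHistorico_ID
    FROM col_Aula a
    JOIN col_Frequencia f ON f.Aula_ID = a.Aula_ID
    WHERE a.Materia_Id IN (SELECT DISTINCT Materia_Id
                           FROM col_Aula WHERE Turma_Id = p_turma_id)
      AND f.Aluno_ID = v_aluno_id
  LOOP
    v_qtde_aulas := v_qtde_aulas + 1;
    IF r.Presente = 1 OR r.JustificativaHistorico_ID IS NOT NULL THEN
      v_qtde_presencas := v_qtde_presencas + 1;
    END IF;
  END LOOP;

  IF v_qtde_aulas > 0 THEN
    RETURN 100.0 * v_qtde_presencas / v_qtde_aulas;
  END IF;
  RETURN 0;
END;
$$ LANGUAGE plpgsql;
```

In this particular case the loop could likely be collapsed into a single aggregate query, but a mechanical translation like the above keeps the structure recognizable.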



> Show you some examples of our execute blocks.
> This one is easy to understand and can be easily replaced.
> [execute block examples snipped; see the original message above]
Re: execute block like Firebird does

2018-02-12 Thread PegoraroF10
I loved Firebird, but now we need some cool things that Postgres has
and Firebird doesn't.
Firebird has just three cool features that Postgres doesn't: computed-by
columns, position for fields and triggers, and execute blocks; that's it.

Postgres has replication, PITR, JSON and JSONB, XML, inherited tables,
arrays, grouping sets, user-defined datatypes, SELECT without a FROM clause,
parallel queries, tuple comparison, transactional DDL, and a lot of other
useful things that Firebird has never heard of.






oracle_fdw Question

2018-02-12 Thread chiru r
Hi,

I am trying to install and configure oracle_fdw on our PostgreSQL DB
servers.
Rather than running the install (make, make install) of oracle_fdw on every
DB server, we want to compile it on one server and build a package/tar file
to copy the library file and the required SQL scripts onto all the other
servers.
That would reduce the installation effort.

Can we copy the oracle_fdw.so file into "/9.5/lib/postgresql",
and the oracle_fdw.control, oracle_fdw--1.1.sql and oracle_fdw--1.0--1.1.sql
files into the "/9.5/share/postgresql/extension" directory?
Is this the right approach?

Is there any process to create re-locatable binary files for oracle_fdw?

Thanks,
chiru


Re: oracle_fdw Question

2018-02-12 Thread Venkata B Nagothi
On Tue, 13 Feb 2018 at 5:55 am, chiru r  wrote:

> Hi,
>
> I am trying to install and configure the oracle_fdw on PostgreSQL DB
> servers.
> What we are trying is, inorder to install (make, make install) oracle_fdw
> on all DB servers,we want to compile on one server and want build a
> package/tar file to copy the library files and required SQL scripts into
> all other servers.
> So that it will reduce the installation efforts.
>
> can we copy the oracle_fdw.so file into "/9.5/lib/postgresql"
> And  oracle_fdw.control,oracle_fdw--1.1.sql and oracle_fdw--1.0--1.1.sql
> files into "/9.5/share/postgresql/extension" directory.
> Is this the right approach?
>
> Is there any process to create re-locatable binary files for oracle_fdw?
>

If you are installing the same Postgres version across all the DB servers,
then I would suggest building re-locatable Postgres binaries that include
oracle_fdw as well.

Thanks,
Venkata B

>
> Thanks,
> chiru
>
-- 

Regards,

Venkata B N
Database Consultant


I do not get the point of the information_schema

2018-02-12 Thread Thiemo Kellner
I am trying to implement SCD2 at the trigger level and to generate the
needed code on the fly. For that I need to read data about the objects in
the database. So far so good. I know of the information_schema and the
pg_catalog. The documentation for the information_schema states that it
'is defined in the SQL standard and can therefore be expected to be
portable and remain stable'. I cannot think of a sensible meaning of
portable here. One cannot port it to MariaDB, can one? Maybe to different
PostgreSQL versions, but then a one-size-fits-all implementation would mean
that only the parts of the catalogue that never ever change can be exposed
by the information_schema. Coming from Oracle, I consider the
information_schema the analogue of Oracle's data dictionary views, giving a
stable interface to the database metadata and hiding catalogue structure
changes. But I dearly miss some information therein. I created the
following query to get the index columns of an index. I fear breakage when
it is not run on the specific version I developed it against. Is there a
more elegant way via the information_schema?


with INDEX_COLUMN_VECTOR as(
  select
i.indkey
  from
pg_catalog.pg_index i
  inner join pg_catalog.pg_class c on
i.indexrelid = c.oid
  where
c.relname = 'idiom_hist'
),
COLUMNS as(
  select
a.attname,
a.attnum
  from
pg_catalog.pg_attribute a
  inner join pg_catalog.pg_class c on
a.attrelid = c.oid
  where
c.relname = 'idiom'
) select
  c.attname
from
  COLUMNS c
inner join INDEX_COLUMN_VECTOR v on
  c.attnum = any(v.indkey)
order by
  c.attnum asc;

Another, simpler case:

select
  indexname
from
  pg_catalog.pg_indexes
where
  schemaname = 'act'
  and tablename = i_table_name
  and indexname = i_table_name || '_hist';


--
Öffentlicher PGP-Schlüssel:
http://pgp.mit.edu/pks/lookup?op=get&search=0xCA167FB0E717AFFC
<>

pglogical in postgres 9.6

2018-02-12 Thread greigwise
Hello.  I'm trying to get pglogical set up between two postgres 9.6 instances.
I can get everything replicating over fine; my problem is that I can't seem
to get it to work in the mode where it does not try to copy all the data
over initially.  On the source side, I'm doing this:

  psql -U hireology -d $SRCDB -c "SELECT pglogical.create_node( node_name :=
'${SRCNODE}', dsn := 'host=localhost port=${SRCPORT} dbname=${SRCDB}
user=logical_replication')"
  
  psql -U hireology -d $SRCDB -c "select
pglogical.create_replication_set('dw_repl', true, true, true, true)"
  
  psql -U hireology -d $SRCDB -c "select pglogical.replication_set_add_table
('dw_repl', 'accounts', false)"
  psql -U hireology -d $SRCDB -c "select pglogical.replication_set_add_table
('dw_repl', 'organizations', false)"
  psql -U hireology -d $SRCDB -c "select
pglogical.replication_set_add_sequence ('dw_repl', 'accounts_id_seq',
false)"
  psql -U hireology -d $SRCDB -c "select
pglogical.replication_set_add_sequence ('dw_repl', 'organizations_id_seq',
false)"

Note on the replication_set_add_table command, I'm sending false as the
third parameter which should tell it to not copy over the initial data.  But
when I create the subscription on the target side, I can see in the logs
where it is issuing the copy command.  And since I have the data already
there, I get duplicate key errors and it bombs out.  Any suggestions?

Thanks,
Greig Wise




--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
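A note on the subscriber side: in pglogical 2.x the initial COPY is also controlled by the subscription itself, via the synchronize_data parameter of pglogical.create_subscription() (default true). A sketch of a subscription that skips the copy; the subscription name and DSN below are placeholders:

```sql
SELECT pglogical.create_subscription(
    subscription_name     := 'dw_sub',   -- placeholder
    provider_dsn          := 'host=srchost port=5432 dbname=hireology user=logical_replication',
    replication_sets      := ARRAY['dw_repl'],
    synchronize_structure := false,
    synchronize_data      := false       -- do not COPY existing rows
);
```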



Re: I do not get the point of the information_schema

2018-02-12 Thread Tom Lane
Thiemo Kellner  writes:
> I am trying to implement SCD2 at trigger level and to generate the needed 
> code on the fly. For that I need to read data about the objects in the 
> database. So far so good. I know of the information_schema and the 
> pg_catalog. The documentation for the information_schema states that it 
> 'is defined in the SQL standard and can therefore be expected to be 
> portable and remain stable'. I cannot think of a sensible meaning of 
> 'portable' here. One cannot port it to MariaDB, can one?

If MariaDB implements information_schema according to the spec, then
yes.  (If they don't, that's something to complain about to them,
not us.)

> I created following query to get 
> the index columns of an index. I fear breakage when not run on the 
> specific version I developed it against. Is there a more elegant way by 
> the information_schema?

No, because indexes are not a part of the SQL standard.  (I'm not here
to debate the wisdom of that choice; we didn't make it.)  You can get
information about constraints out of the information_schema, so to the
extent that what you're interested in is the indexes underlying PK or
UNIQUE constraints, that's an option.  Otherwise, you're dealing with an
implementation-specific feature and you shouldn't be surprised that the
way of finding out about it is likewise implementation-specific.

FWIW, the stuff used in your sample query has all been there for a
very long time; we don't like to break plausible client queries lightly.

regards, tom lane
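To illustrate the constraint route, the columns behind a PRIMARY KEY or UNIQUE constraint can be listed from the information_schema alone. This is a sketch; the schema and table names are placeholders:

```sql
SELECT kcu.column_name,
       kcu.ordinal_position
  FROM information_schema.table_constraints tc
  JOIN information_schema.key_column_usage kcu
    ON kcu.constraint_schema = tc.constraint_schema
   AND kcu.constraint_name   = tc.constraint_name
 WHERE tc.table_schema    = 'act'     -- placeholder
   AND tc.table_name      = 'idiom'   -- placeholder
   AND tc.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
 ORDER BY kcu.ordinal_position;
```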



Re: pglogical in postgres 9.6

2018-02-12 Thread Adrian Klaver

On 02/12/2018 02:11 PM, greigwise wrote:

Hello.  I'm trying to get pglogical setup between 2 postgres 9.6 instances.
I can get everything replicating over fine, my problem is that I can't seem
to get it to work in the mode where it does not try to copy all the data
over initially.  On the source side, I'm doing this:

   psql -U hireology -d $SRCDB -c "SELECT pglogical.create_node( node_name :=
'${SRCNODE}', dsn := 'host=localhost port=${SRCPORT} dbname=${SRCDB}
user=logical_replication')"
   
   psql -U hireology -d $SRCDB -c "select

pglogical.create_replication_set('dw_repl', true, true, true, true)"
   
   psql -U hireology -d $SRCDB -c "select pglogical.replication_set_add_table

('dw_repl', 'accounts', false)"
   psql -U hireology -d $SRCDB -c "select pglogical.replication_set_add_table
('dw_repl', 'organizations', false)"
   psql -U hireology -d $SRCDB -c "select
pglogical.replication_set_add_sequence ('dw_repl', 'accounts_id_seq',
false)"
   psql -U hireology -d $SRCDB -c "select
pglogical.replication_set_add_sequence ('dw_repl', 'organizations_id_seq',
false)"

Note on the replication_set_add_table command, I'm sending false as the
third parameter which should tell it to not copy over the initial data.  But
when I create the subscription on the target side, I can see in the logs
where it is issuing the copy command.  And since I have the data already
there, I get duplicate key errors and it bombs out.  Any suggestions?


I don't know what is going on, but for those that might, knowing the pglogical 
version you are using would be helpful.




Thanks,
Greig Wise









--
Adrian Klaver
adrian.kla...@aklaver.com



Re: persistent read cache

2018-02-12 Thread Sand Stone
Thanks for the info on bcache etc.; this is the first time I have heard of it.
But it seems quite coarse-grained for the needs of a database.

The closest thing might be this implemented by the RocksDB engine:
https://github.com/facebook/rocksdb/wiki/Persistent-Read-Cache

On Mon, Feb 12, 2018 at 9:18 AM, Steve Atkins  wrote:
>
>> On Feb 11, 2018, at 5:14 PM, Sand Stone  wrote:
>>
>>
>> Hi. I wonder if there is such a thing or extension in the PG world.
>>
>> Here is my use case. I am using PG (PG10 to be more specific) in a
>> cloud VM environment. The tables are stored in RAID0 managed SSD
>> backed attached storage. Depending on the VM I am using, I usually
>> have 256GB local SSD unused.
>>
>> I wonder if PG could leverage this local SSD as a read (page/block)
>> cache, to complement/extend  the DRAM by used by shared_buffer today.
>
> It seems something that PostgreSQL could take advantage of, but
> it's probably the wrong layer to implement it. If your VM infrastructure
> doesn't have any way to use it directly, maybe you could do it at the
> drive / filesystem level with something like bcache, lvmcache or
> enhanceio?
>
> Adding that sort of complexity to something that needs solid data
> integrity makes me nervous, but those solutions have been in the
> field for years.
>
> Cheers,
>   Steve
>
>
>
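For context on the bcache option mentioned above: the cache is attached at the block-device level, entirely below PostgreSQL. Roughly, the setup looks like the following; the device names are hypothetical and these commands destroy the contents of the named devices, so treat this as an illustration only:

```
# /dev/sdX = attached RAID0 volume (backing), /dev/nvme0n1 = local SSD (cache)
make-bcache -B /dev/sdX          # register the backing device
make-bcache -C /dev/nvme0n1      # format the local SSD as a cache set
# attach the cache set (UUID from `bcache-super-show /dev/nvme0n1`):
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
echo writethrough > /sys/block/bcache0/bcache/cache_mode   # read caching only
mkfs.ext4 /dev/bcache0           # filesystem that will hold the tablespace
```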




Re: pglogical in postgres 9.6

2018-02-12 Thread greigwise
I downloaded from git and compiled from source.  Based on the
pglogical.control file, it appears to be 2.0.0.  Is there a better way to
determine the version?

Thanks,
Greig






Re: pglogical in postgres 9.6

2018-02-12 Thread Adrian Klaver

On 02/12/2018 04:46 PM, greigwise wrote:

I downloaded from git and compiled from source.  Based on the
pglogical.control file, it appears to be 2.0.0.  Is there a better way to
determine the version?


In psql do:

\dx
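
Equivalently, the catalog table behind \dx can be queried directly, which works from any client, not just psql:

```sql
SELECT extname, extversion
  FROM pg_catalog.pg_extension
 WHERE extname = 'pglogical';
```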



Thanks,
Greig








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pglogical in postgres 9.6

2018-02-12 Thread Adrian Klaver

On 02/12/2018 04:46 PM, greigwise wrote:

I downloaded from git and compiled from source.  Based on the
pglogical.control file, it appears to be 2.0.0.  Is there a better way to
determine the version?


Should have added that you need to run \dx in the database you 
installed pglogical into.




Thanks,
Greig








--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pglogical in postgres 9.6

2018-02-12 Thread greigwise
hireology_tmp=# \dx
                                 List of installed extensions
       Name       | Version |      Schema      |                    Description
------------------+---------+------------------+----------------------------------------------------
 pglogical        | 2.0.0   | pglogical        | PostgreSQL Logical Replication
 pglogical_origin | 1.0.0   | pglogical_origin | Dummy extension for compatibility when upgrading from Postgres 9.4






Re: Barman 2.3 errors

2018-02-12 Thread Michael Paquier
On Mon, Feb 12, 2018 at 03:23:22PM +, GALLIANO Nicolas wrote:
> I'm trying to back up a remote DB (9.6.6) using barman 2.3, but the backup
> fails to start.
> In barman.log i've such errors :

It would be a better idea to contact directly the maintainers of the
project here:
http://www.pgbarman.org/support/
--
Michael




RE: Barman 2.3 errors

2018-02-12 Thread Ahmed, Nawaz

Hi Nicolas,

I would like to take a stab at this one, as I recently worked on a demo of 
barman. But as Michael Paquier said, it is better to check the information I 
provide, and the situation you are facing, with the maintainers of the project. 
Here is what I found.

The first thing is to check that you can see the replication slot named "barman" 
on the target database, using the query below. It should return a slot named 
"barman" with slot_type "physical". The "replication slot: OK" line of the 
check command indicates the slot is available; however, please double-check 
with the query below.


select * from pg_replication_slots;


Now let us target the line "WAL archive: FAILED". I faced this issue when I had 
killed the "receive-wal" process and restarted it. First, look for the 
receive-wal process with the ps command:

ps -ef|grep receive-wal

If it is not running, start it up in the background using the command:

$ barman receive-wal tcgepg96ddm &

If the receive-wal process is running but you still face that error in the 
check command, then I suggest you switch the xlog using the below command.

$ barman switch-xlog tcgepg96ddm

If the above command fails to switch the xlog, try to force it with the 
below command:

$ barman switch-xlog --force tcgepg96ddm

If you still cannot get it to work, try to reset the status of the 
receive-wal process using the --reset option, as below:

$ barman receive-wal --reset tcgepg96ddm


If the above reset command succeeds, try to switch the log file and run the 
check command again to see if everything looks fine:

$ barman switch-xlog tcgepg96ddm


Hope that helps. Again, these are the steps I took to resolve a similar issue. 
You might still want to get in touch with the project maintainers about the 
validity of the above commands.


Best Regards,

Nawaz Ahmed
Software Development Engineer

Fujitsu Australia Software Technology Pty Ltd
14 Rodborough Road, Frenchs Forest NSW 2086, Australia
T +61 2 9452 9027
na...@fast.au.fujitsu.com
fastware.com.au


From: GALLIANO Nicolas [mailto:nicolas.galli...@dsi.cnrs.fr]
Sent: Tuesday, 13 February 2018 2:23 AM
To: pgsql-gene...@postgresql.org
Subject: Barman 2.3 errors

Hi,

I’m trying to back up a remote DB (9.6.6) using barman 2.3, but the backup 
fails to start.
In barman.log i’ve such errors :

2018-02-12 16:18:23,852 [57691] barman.server ERROR: Check 'replication slot' 
failed for server 'tcgepg96ddm'
2018-02-12 16:18:23,857 [57691] barman.server ERROR: Check 'receive-wal 
running' failed for server 'tcgepg96ddm'
2018-02-12 16:18:30,815 [57779] barman.wal_archiver INFO: No xlog segments 
found from streaming for tcgepg96ddm.
2018-02-12 16:18:30,824 [57781] barman.server INFO: Starting receive-wal for 
server tcgepg96ddm
2018-02-12 16:18:30,905 [57781] barman.wal_archiver INFO: Synchronous WAL 
streaming for barman_receive_wal: True
2018-02-12 16:18:30,906 [57781] barman.wal_archiver INFO: Activating WAL 
archiving through streaming protocol
2018-02-12 16:18:30,922 [57781] barman.cli ERROR: 'utf8' codec can't decode 
byte 0xe0 in position 47: invalid continuation byte
See log file for more details.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/barman/cli.py", line 1126, in main
p.dispatch(pre_call=global_config)
  File "/usr/lib/python2.7/site-packages/argh/helpers.py", line 55, in dispatch
return dispatch(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/argh/dispatching.py", line 174, in 
dispatch
for line in lines:
  File "/usr/lib/python2.7/site-packages/argh/dispatching.py", line 277, in 
_execute_command
for line in result:
  File "/usr/lib/python2.7/site-packages/argh/dispatching.py", line 231, in 
_call
result = function(namespace_obj)
  File "/usr/lib/python2.7/site-packages/barman/cli.py", line 792, in 
receive_wal
server.receive_wal(reset=args.reset)
  File "/usr/lib/python2.7/site-packages/barman/server.py", line 1708, in 
receive_wal
archiver.receive_wal(reset)
  File "/usr/lib/python2.7/site-packages/barman/wal_archiver.py", line 751, in 
receive_wal
receive.execute()
  File "/usr/lib/python2.7/site-packages/barman/command_wrappers.py", line 418, 
in execute
self.pipe_processor_loop(processors)
  File "/usr/lib/python2.7/site-packages/barman/command_wrappers.py", line 484, 
in pipe_processor_loop
eof = stream.process()
  File "/usr/lib/python2.7/site-packages/barman/command_wrappers.py", line 82, 
in process
self._buf += data.decode('utf-8')
  File "/usr/lib64/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xe0 in position 47: invalid 
continuation byte
2018-02-12 16:18:32,373 [58491] barman.server ERROR: Check 'WAL arch

Re: Connection losing at some places - caused by firewall

2018-02-12 Thread Durumdara
Hello!

2017-11-14 12:09 GMT+01:00 Durumdara :

>
> Windows 10, PGSQL 9.4 and 9.6 (Debian with SSL, and Windows without it -
> it doesn't matter).
>
> When I (or my boss) work(s) at home, I got connection lost errors from
> PGAdmin (3/4) or from other applications too.
>
>
> server closed the connection unexpectedly
>  This probably means the server terminated abnormally
>  before or while processing the request.
>
>
> PGAdmin, EMS PostgreSQL manager, Navicat for PGSQL, and psql too.
>
> When at least 5 minutes have passed since the last request (select, insert,
> or any other communication), the connection is lost.
>
>
>

For pgAdmin III I found something of a workaround:

https://superuser.com/questions/729034/any-way-to-keep-connection-alive-in-pgadmin-without-setting-it-on-the-server

dd
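
A more general fix for this class of problem is TCP keepalives, so that a stateful firewall never sees the session as idle. The parameters below are real server GUCs and libpq connection options; the values are only illustrative:

```
# postgresql.conf (server side)
tcp_keepalives_idle = 120        # start probing after 2 minutes of idle time
tcp_keepalives_interval = 30     # re-probe every 30 seconds
tcp_keepalives_count = 3         # drop after 3 unanswered probes

# or from the client, via libpq connection parameters:
#   psql "host=db.example.com dbname=mydb keepalives_idle=120 \
#         keepalives_interval=30 keepalives_count=3"
```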