could not read block 0 in file : read only 0 of 8192 bytes when doing nasty on immutable index function

2018-06-27 Thread Luca Ferrari
I've found this strange (to me) behavior when doing nasty things with
indexes and immutable functions:

create table t( pk serial, t text );
insert into t( t ) values( 'hello' ), ('world');
create or replace function f_fake( i int )
returns text
as $body$
declare
  v_t text;
begin
   select t into strict v_t
   from t where pk = i limit 1;
   return v_t;
exception
  when no_data_found then return 'a';
end
$body$
language plpgsql immutable;

Of course, f_fake is not immutable.
When, on 10.4 or 11 beta 1, I try to create an index on this nasty
function:

create index idx_fake on t ( f_fake( pk ) );

ERROR:  could not read block 0 in file "base/16392/16444": read only 0 of 8192 bytes
CONTEXT:  SQL statement "select t from t where pk = i limit 1"
PL/pgSQL function f_fake(integer) line 5 at SQL statement

which is somehow correct (because the function cannot be used to build
an index), but then it gets worse:

select * from t;
ERROR:  could not open relation with OID 16444

If I then disconnect and reconnect I'm able to issue the select and
get back the results. But if I issue a reindex I get the same error
and the table becomes unreadable for the whole session.
On 10.3 the table is never locked for the session: I can create the
index, I can query the table and get the results, but I cannot
reindex. However, even after a failed reindex, 10.3 still allows me to
select data from the table.

So my question is: why this behavior in the later PostgreSQL versions?



Re: could not read block 0 in file : read only 0 of 8192 bytes when doing nasty on immutable index function

2018-06-27 Thread Luca Ferrari
On Wed, Jun 27, 2018 at 10:44 PM Andres Freund  wrote:
> But I also can't reproduce it either on 10.4, 10-current, master.  Did
> you build from source? Packages? Any extensions? Is there anything
> missing from the above instruction to reproduce this?

Somehow today I cannot reproduce it by myself; I must have missed
something, since I cannot get locked out of the table.
However, I've tested that on 10.3 (the one I have at the moment):
1) create table, insert, create index, reindex causes the problem but
does not lock the further selects
2) create table, create index, insert, reindex does not show the
problem (i.e., no complaint at all)

while on the following version both 1 and 2 show the reading problem
once the reindex is issued (but further selects are still allowed):
testdb=> select version();

  version
-
 PostgreSQL 11beta1 on x86_64-unknown-freebsd11.1, compiled by FreeBSD
clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on LLVM
4.0.0), 64-bit


It seems not to depend on max_parallel_maintenance_workers.
Sorry, I cannot provide more help at the moment.
I will try to reproduce it again.

Luca



Re: could not read block 0 in file : read only 0 of 8192 bytes when doing nasty on immutable index function

2018-06-27 Thread Luca Ferrari
Got it: it happens if you drop and recreate the index. It shows up
whether max_parallel_maintenance_workers is set to zero or to a
greater value.

testdb=> create table t( pk serial, t text );
CREATE TABLE
testdb=> insert into t( t ) values( 'hello' ), ('world');
INSERT 0 2
testdb=> create or replace function f_fake( i int )
returns text
as $body$
declare
  v_t text;
begin
   select t into strict v_t
   from t where pk = i limit 1;
   return v_t;
exception
  when no_data_found then return 'a';
end
$body$
language plpgsql immutable;
CREATE FUNCTION
testdb=> create index idx_fake on t ( f_fake( pk ) );
CREATE INDEX
testdb=> drop index idx_fake;
DROP INDEX

testdb=> create index idx_fake on t ( f_fake( pk ) );
2018-06-28 10:23:18.275 CEST [892] ERROR:  could not read block 0 in
file "base/16392/16538": read only 0 of 8192 bytes
2018-06-28 10:23:18.275 CEST [892] CONTEXT:  SQL statement "select t
 from t where pk = i limit 1"
PL/pgSQL function f_fake(integer) line 5 at SQL statement
2018-06-28 10:23:18.275 CEST [892] STATEMENT:  create index idx_fake
on t ( f_fake( pk ) );
ERROR:  could not read block 0 in file "base/16392/16538": read only 0
of 8192 bytes
CONTEXT:  SQL statement "select t from t where pk = i limit 1"
PL/pgSQL function f_fake(integer) line 5 at SQL statement

testdb=> select * from t;
2018-06-28 10:23:23.642 CEST [892] ERROR:  could not open relation
with OID 16538
2018-06-28 10:23:23.642 CEST [892] STATEMENT:  select * from t;
ERROR:  could not open relation with OID 16538

This has been tested on

testdb=> select version();

  version
-
 PostgreSQL 11beta1 on x86_64-unknown-freebsd11.1, compiled by FreeBSD
clang version 4.0.0 (tags/RELEASE_400/final 297347) (based on LLVM
4.0.0), 64-bit

testdb=> show max_parallel_maintenance_workers ;
 max_parallel_maintenance_workers
--
 2



help understanding create statistic

2018-06-28 Thread Luca Ferrari
Hi all,
in order to better understand this feature of 10, I've created a table
like this:

CREATE TABLE expenses(
   ...
   day date,
   year int,
   CHECK( year = EXTRACT( year FROM day ) )
);

so that I can ensure 'year' and 'day' are tied together:

SELECT
count(*) FILTER( WHERE year = 2016 ) AS by_year,
count(*) FILTER( WHERE EXTRACT( year FROM day ) = 2016 ) AS by_day
FROM expenses;
-[ RECORD 1 ]-
by_year | 8784
by_day  | 8784

Then I created a statistic:

CREATE STATISTICS stat_day_year ( dependencies )
ON day, year
FROM expenses;

select * from pg_statistic_ext ;
-[ RECORD 1 ]---+-
stxrelid| 42833
stxname | stat_day_year
stxnamespace| 2200
stxowner| 16384
stxkeys | 3 5
stxkind | {f}
stxndistinct|
stxdependencies | {"3 => 5": 1.00}

Now, having an index on the extracted year of day as follows:

CREATE INDEX idx_year
ON expenses ( EXTRACT( year FROM day ) );

why is the planner not choosing to use such an index on a raw 'year' query?

EXPLAIN SELECT * FROM expenses
WHERE year = 2016;
  QUERY PLAN
---
 Gather  (cost=1000.00..92782.34 rows=8465 width=32)
   Workers Planned: 2
   ->  Parallel Seq Scan on expenses  (cost=0.00..90935.84 rows=3527 width=32)
 Filter: (year = 2016)

The number of rows is correct, but I was expecting it to use the same
index that a query like "WHERE EXTRACT( year FROM day ) = 2016" triggers.
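For comparison, the expression form that should match idx_year (a
minimal illustration on the same table; the expected plan here is an
index or bitmap scan on idx_year):

EXPLAIN SELECT * FROM expenses
WHERE EXTRACT( year FROM day ) = 2016;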

Even altering the year column to not null does not show any change, and
this is the plan obtained after turning off seq_scan (to see the costs):

EXPLAIN ANALYZE SELECT * FROM expenses
WHERE year = 2016;
QUERY PLAN
--
 Seq Scan on expenses  (cost=100.00..1127402.44 rows=8451
width=32) (actual time=972.734..2189.300 rows=8784 loops=1)
   Filter: (year = 2016)
   Rows Removed by Filter: 4991216


Am I misunderstanding this functional dependency?

Thanks,
Luca



Re: plperl and plperlu language extentsions

2018-06-28 Thread Luca Ferrari
On Thu, Jun 28, 2018 at 9:26 AM Niles Oien  wrote:
>
> Yet plperl would seem to be installed :
>
> # yum list | grep postgres | grep perl
> postgresql-plperl.x86_64  9.2.23-3.el7_4   base
> postgresql10-plperl.x86_6410.4-1PGDG.rhel7 pgdg10
>

Have you tried reinstalling the package?
Is it the correct package for the version 10 distribution?

Luca



select version() with internal number version?

2018-08-09 Thread Luca Ferrari
Hi all,
I see a lot of external tools that do a 'SELECT version()' and then
parse the output to get the version number (e.g., 10.4).
My opinion is that this is not a good approach, since the output of
version() includes a lot of information and can mismatch a poorly
written regular expression. It would be better to use pg_config
--version, but again the output needs mangling.
Is there any way to directly get the PostgreSQL version number?

Thanks,
Luca



Re: select version() with internal number version?

2018-08-10 Thread Luca Ferrari
On Thu, Aug 9, 2018 at 3:51 PM Tom Lane  wrote:
> Also, if you're using libpq, see PQserverVersion() which
> (a) avoids a round trip to the server, and (b) works further back
> than server_version_num, though that issue is probably academic
> to most folk at this point (server_version_num appeared in 8.2).

Thanks, but server_version_num is what I was looking for, since I need
to get the version number from outside PostgreSQL in foreign languages
(Python, Perl, etc.).
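For the record, a minimal way to fetch it as a plain integer from any
client (e.g., 100004 for 10.4):

SHOW server_version_num;
-- or, usable inside a larger query:
SELECT current_setting( 'server_version_num' )::int;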
I don't believe going back further than 8.2 is a matter in this case, since
this is for supporting tools that should not...ehm...support
EOL versions any more.

Luca



help defining a basic type operator

2018-08-20 Thread Luca Ferrari
Hi all,
I'm trying to define a custom data type that would represent a number
of bytes in a lossy human way.
The type is defined as:

typedef struct HFSize
{
double  size;
int   scaling;
} HFSize;

and the operator function is defined as:

Datum
hfsize_add(PG_FUNCTION_ARGS)
{

  HFSize *first  = (HFSize *) PG_GETARG_POINTER(0);
  HFSize *second = (HFSize *) PG_GETARG_POINTER(1);
  HFSize *sum= new_HFSize();

  to_bytes( first );
  to_bytes( second );

  elog( DEBUG1, "sum %s + %s ", to_string( first ), to_string( second ) );
  sum->size = first->size + second->size;

  elog( DEBUG1, "Final sum %s ", to_string( sum ) );
PG_RETURN_POINTER( sum );
}



The problem is that, even if the last elog shows a correct result, the
final value returned via PG_RETURN_POINTER is something totally
different with the double value set to zero and an apparently random
'scaling':

# set client_min_messages to debug;
SET
testdb=# SELECT '1030'::hfsize + '2030'::hfsize;
DEBUG:  Converting to human text format 1030.00-bytes
LINE 1: SELECT '1030'::hfsize + '2030'::hfsize;
   ^
DEBUG:  Converting to human text format 2030.00-bytes
LINE 1: SELECT '1030'::hfsize + '2030'::hfsize;
^
DEBUG:  sum 1030.00-bytes + 2030.00-bytes
DEBUG:  Final sum 3060.00-bytes
 ?column?
--
 0.00-64


I've tried also to return one of the two operands from the add
function, so something like:
Datum
hfsize_add(PG_FUNCTION_ARGS)
{

  HFSize *first  = (HFSize *) PG_GETARG_POINTER(0);
  HFSize *second = (HFSize *) PG_GETARG_POINTER(1);
  PG_RETURN_POINTER( first );
}

but again the result has a zero value and a random scaling, and most
notably is not the first operand. Also memory addresses (used with %x)
are different from inside and outside the add function.
Is there something I'm missing?

Thanks,
Luca



Re: help defining a basic type operator

2018-08-20 Thread Luca Ferrari
On Mon, Aug 20, 2018 at 4:51 PM Tom Lane  wrote:
>
> Luca Ferrari  writes:
> > I'm trying to define a custom data type that would represent a number
> > of bytes in a lossy human way.
>
> You did not show us the SQL definition of the type.  I don't see anything
> obviously wrong in what you showed (other than hfsize_add not setting the
> result's scaling), so the problem is somewhere else.  Given this C
> declaration, the type probably needs to be size 16, double alignment,
> pass-by-reference; maybe you messed up part of that?
>

Shame on me: when I issued the CREATE TYPE I didn't realize that I had
miswritten the length attribute, typing 'internalsize' instead of
'internallength', and while an error was reported, the type was created anyway.
Fixing the type creation into:

CREATE TYPE hfsize (
   internallength = 16,
   input  = hfsize_input_function,
   output = hfsize_output_function
);

solved the problem, so it was a length mismatch.

> >   HFSize *sum= new_HFSize();
>
> What is new_HFSize?

A helper function to allocate a new object (and that was why scaling
did not get referenced in the add function):


HFSize*
new_HFSize()
{
  HFSize *size = (HFSize*) palloc( sizeof( HFSize ) );
  size->scaling = 0;
  size->size = 0.0f;
  return size;
}


Thanks,
Luca



help understanding pgbench results

2019-07-12 Thread Luca Ferrari
Hi all,
I'm trying to understand some simple benchmarks but I need a hint.

=# select version();
 version
-
 PostgreSQL 11.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-28), 64-bit


shared_buffers = 1 GB
checkpoint_timeout = 5 min

I've created a pgbench database as follows (around 4.5 GB):
% pgbench -i -s 300 -F 100 --foreign-keys --unlogged-tables -h
127.0.0.1 -U luca pgbench

and I've tested three times (each time after a restart) with the following:
% pgbench  -T 600  -j 4 -c 4  -h 127.0.0.1 -U luca -P 60  pgbench


Since tables are unlogged, I was expecting not much difference when
setting checkpoint_completion_target, but I got (average results):
- checkpoint_completion_target = 0.1  ==> 755 tps
- checkpoint_completion_target = 0.5 ==> 767 tps
- checkpoint_completion_target = 0.9 ==> 681 tps

so while there is not a big difference in the first two cases, it seems
throttling I/O reduces the tps, and I don't get why. Please note that
there is some small activity while benchmarking, and that's why I ran
at least three tests for each setting.
Am I looking at wrong (or meaningless) numbers?

Thanks for any hint.
Luca




Re: FATAL: invalid page in block 0 of relation global/1262

2019-07-12 Thread Luca Ferrari
On Fri, Jul 12, 2019 at 5:22 PM Ibrahim Edib Kokdemir
 wrote:
>
> Hi,
> I upgraded my cluster from 9.6 to 11 with pg_upgrade hardlink parameter two 
> days ago. Since then it has been working great as v11. Today while running 
> heavy update queries on it, I got the  "FATAL:  invalid page in block 0 of 
> relation global/1262" message and server crushed.
> Hard reset and trying to start, it start but shows the same error message and 
> do not let me to connect. "psql: FATAL:  invalid page in block 0 of relation 
> global/1262". I read that the filenode 1262 is for pg_database table.
>


The error message is indicating there is a page checksum error, i.e.,
the data on disk got corrupted.
I'm not aware of any way of turning it off without re-initdbing.
Sorry, but it seems to me a good backup is required to fix it.

Luca




Re: Issue: Creating Symlink for data directory of postgresql in CentOS7

2019-07-12 Thread Luca Ferrari
On Fri, Jul 12, 2019 at 4:47 PM Laurenz Albe  wrote:
> So there is some "postgresql-check-db-dir" (not part of PostgreSQL) that 
> complains
> that there is nothing in /data/postgresql/data.  Is that accurate?
>

According to this,
the script should check that PGDATA exists and contains the PG_VERSION
file, or complain as shown in the logs.

> It looks like you succeeded in getting the startup process to look for the
> PostgreSQL data directory in the new location, but - alas - there is no data
> directory there yet.  What's the content of


What's the content of $PGDATA?

Luca




Re: Issue: Creating Symlink for data directory of postgresql in CentOS7

2019-07-13 Thread Luca Ferrari
On Sat, Jul 13, 2019 at 5:12 AM Chatterjee, Shibayan
 wrote:
> For sure there's all the necessary files in '/data/postgresql/data'. The 
> startup process cannot read it, because of sym link.
> Executing init-db return the below message:
>

I suspect this could be a systemd problem. What if you try to manually
start the postgres instance?

pg_ctl -D /data/postgresql/data start

What happens then?


>  # sudo postgresql-setup initdb
> Data directory is not empty!

This is of course to prevent you from destroying your system.
I would remove the link, recreate it pointing to an empty directory, and
then initdb that directory as if it were your new database. I imagine
that would start; in that case your original data directory is the
problem. Can you show us the content of /data/postgresql/data?

Luca




Re: Testing an extension against multiple versions of Postgresql

2019-07-13 Thread Luca Ferrari
On Sat, Jul 13, 2019 at 8:03 AM Ian Barwick  wrote:
> > So the questions are:
> > Can I have make install & make installcheck run against multiple versions 
> > at the same time or how do I tell the installcheck to run against the 11 
> > server.
>
> Try setting relevant environment variables [*] for the target version before 
> running "make installcheck".
>
> [*] https://www.postgresql.org/docs/current//libpq-envars.html
>

Aside from that, a possible easy way to test something against different
PostgreSQL instances is to use something like pgenv
, which is for instance used to test
sqitch.

Luca




Re: Issue: Creating Symlink for data directory of postgresql in CentOS7

2019-07-14 Thread Luca Ferrari
On Sun, Jul 14, 2019 at 5:05 PM Laurenz Albe  wrote:
> Since you say that there was a regular data directory there, that would point
> to permission problems.

I'm not a systemd expert, but since we are _before_ starting the
cluster, the checks should be run as superuser, so I don't see how a
permission problem could cause this. However, I would like to see the
output of an `ls -l` on such link and directory, because I believe
there is rather a nesting problem (i.e., the PGDATA is somewhere under
another directory).

Luca




Re: help understanding pgbench results

2019-07-14 Thread Luca Ferrari
On Fri, Jul 12, 2019 at 12:04 PM Luca Ferrari  wrote:
> Since tables are unlogged, I was expecting not much difference when
> setting checkpoint_completion_target, but I got (average results):
> - checkpoint_completion_target = 0.1  ==> 755 tps
> - checkpoint_completion_target = 0.5 ==> 767 tps
> - checkpoint_completion_target = 0.9 ==> 681 tps
>

I've repeated the test with normal (logged) tables, same
configuration, and the results are:
- checkpoint_completion_target = 0.1 ==> 560 tps
- checkpoint_completion_target  = 0.5 ==> 624 tps
- checkpoint_completion_target = 0.9 ==> 619 tps

so here forcing I/O at checkpoints reduces the tps, as I would
expect. However, I'm still unable to come up with an interpretation of
the unlogged table results.
That also makes me think these tests could have no meaning at all.

Luca




Re: How to run a task continuously in the background

2019-07-15 Thread Luca Ferrari
On Fri, Jul 12, 2019 at 7:06 AM Dirk Mika  wrote:
>
>
>
> A cron job will only run once a minute, not wake up every second.
>
>
>
> I would like to avoid external programs if possible. In the current Oracle 
> environment, there are potentially multiple schemas on a server in which 
> processing can be active. And processing can be started, monitored and 
> stopped from a client application. And only for the schema with which the 
> application is connected.
>

Creating a background worker that invokes a stored procedure once per
second? 
But this is not so simple to put in place.

Otherwise pg_cron with a function that performs a pg_sleep of one
second in a loop.
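A minimal sketch of such a looping function, assuming pg_cron is
installed and f_process_new_rows() is a hypothetical routine doing the
real work (all names are illustrative):

CREATE OR REPLACE FUNCTION f_poll_for_a_minute()
RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
   FOR i IN 1 .. 59 LOOP
      PERFORM f_process_new_rows();   -- hypothetical processing routine
      PERFORM pg_sleep( 1 );
   END LOOP;
END;
$$;

-- scheduled once per minute, e.g.:
-- SELECT cron.schedule( '* * * * *', 'SELECT f_poll_for_a_minute()' );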

Anyway, it seems to me you are better off refactoring your solution: it
seems you need to process data when _new data_ comes in, not once per
second, so it sounds to me like a trigger could solve the problem.

Luca




Re: help understanding pgbench results

2019-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2019 at 1:35 PM Fabio Pardi  wrote:
> unlogged tables are not written to WAL, therefore checkpoints do not fit into 
> the picture (unless something else is writing data..).

That's my thought, and I was not expecting any big change in tps due
to checkpoint_completion_target on unlogged tables.

> It is not a good idea to have anything running in the background.

Yes, I know, but the activity in the database is a task importing data
on a per-schedule basis, always importing the same number of tuples
(and therefore the same data size). In other words, it is a very
constant and predictable workload.

>
> Also is always a good idea to run tests multiple times, and I think that 3 is 
> the bare minimum.
> You want to make sure your tests are as reliable as possible, means having 
> similar results between each other, therefore you might post all the results, 
> not only the average, so people can give their interpretation of the data.
>

I'm trying to prepare a virtual machine to run more tests in a
completely isolated environment.
But I was not trying to benchmark the database, rather to guess
what caused the different tps in such an environment.


> Assuming that the 'background activity' writes data, a value of 
> (checkpoint_completion_target) 0.9 means that when your test starts, the 
> system might be still busy in writing data from the previous checkpoint 
> (which started before your pgbench test was launched). That is less likely to 
> happen with a value of 0.1

Uhm...but in the logged table tests a value of 0.9 increases the tps,
which as far as I understand is in contrast with what you are stating.

Anyway, I'll test more and report back some more results.

Thanks,
Luca




after restore the size of the database is increased

2019-07-15 Thread Luca Ferrari
Hi all,
this should be trivial, but if I dump and restore the very same
database the restored one is bigger than the original one.
I vacuumed the database foo, then dumped and restored it into bar,
and the latter, even when vacuumed, remains bigger than the original
one.
No other activity was running on the cluster.

What am I missing here?

% vacuumdb --full foo
vacuumdb: vacuuming database "foo"

% pg_dump -Fd -f backup_foo.d -U postgres foo

% createdb bar
% pg_restore -Fd backup_foo.d -U postgres -d bar

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2686571167
pg_database_size | 2690212355

% vacuumdb --full bar
vacuumdb: vacuuming database "bar"

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2686571167
pg_database_size | 2688193183

% psql -c 'select version();' -U postgres template1

version
-
 PostgreSQL 11.3 on amd64-portbld-freebsd12.0, compiled by FreeBSD
clang version 6.0.1 (tags/RELEASE_601/final 335540) (based on LLVM
6.0.1), 64-bit
(1 row)




Re: after restore the size of the database is increased

2019-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2019 at 7:07 PM Adrian Klaver  wrote:
> What does \l+ show?

The same as pg_size_pretty:

foo=# \l+
   List of databases
   Name|  Owner   | Encoding  | Collate | Ctype |   Access
privileges   |  Size   | Tablespace |Description
---+--+---+-+---+---+-++
 bar   | luca | SQL_ASCII | C   | C |
 | 2566 MB | pg_default |
 foo   | luca | SQL_ASCII | C   | C |
 | 2562 MB | pg_default |


foo=# SELECT pg_size_pretty( pg_database_size( 'foo' ) ) AS foo,
pg_size_pretty( pg_database_size( 'bar' ) ) AS bar;
-[ RECORD 1 ]
foo | 2562 MB
bar | 2566 MB

Luca




Re: after restore the size of the database is increased

2019-07-15 Thread Luca Ferrari
On Mon, Jul 15, 2019 at 7:21 PM Peter Geoghegan  wrote:
> Sometimes B-Tree indexes can be *larger* after a REINDEX (or after
> they're recreated with a CREATE INDEX). It's not that common, but it
> does happen. There isn't actually a very large size difference here,
> so it seems worth comparing index size in detail.

A very good guess: indeed, after reindexing the databases, the size
of the _restored_ one has shrunk and is now less than the original one
(as I would have expected in the first place):

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2685776543
pg_database_size | 2690269699

% vacuumdb --full foo
vacuumdb: vacuuming database "foo"
% psql -U postgres -c "REINDEX DATABASE foo;" foo
REINDEX
% vacuumdb --full bar
vacuumdb: vacuuming database "bar"
% psql -U postgres -c "REINDEX DATABASE bar;" bar
REINDEX

% psql -U postgres -c '\x' -c "SELECT pg_database_size( 'foo' ),
pg_database_size( 'bar' );" template1
Expanded display is on.
-[ RECORD 1 ]+---
pg_database_size | 2685817503
pg_database_size | 2685624835


However, I still don't get why the size should not be the same after
such a vacuum+reindex. If my brain serves me well, in this case we have
less than 0.2 MB of difference, which compared to the database size
(~2.5 GB) is more than acceptable. Nevertheless, I would have thought
that a restored database would always be smaller than the original
one.

Luca




Re: How to run a task continuously in the background

2019-07-16 Thread Luca Ferrari
On Tue, Jul 16, 2019 at 7:32 AM Dirk Mika  wrote:
> It's not really important that the job runs once a second, but that it starts 
> immediately when I want it to.
>
> If I start a job with pg_cron, it will not be executed until the next full 
> minute at the earliest.
>
> The processing of the data via a job is deliberately chosen so as to separate 
> the insertion of the data from their processing.

So, as far as I understand, you want to asynchronously process data
with a process that can be started manually and/or periodically.
I'm probably unable to see what the goal is, but I would go for a
combined solution:
1) a trigger that notifies an external process

2) the process runs when notified (by the trigger), or when started
manually, or when started by pg_cron (once per minute).

Of course the process is "internal", so something like a stored
procedure (at least as an entry point).
The problem with such a solution is race conditions (what if you
manually start something that is already running?), but I guess you
had this problem on the Oracle side too.

Hope this helps.
Luca




Re: help understanding pgbench results

2019-07-16 Thread Luca Ferrari
I've done another set of tests, and effectively it seems that, with
unlogged tables, the checkpoint_completion_target does not influence
the final results.
I've increased the test duration in order to include several
checkpoints within each run.

First of all, initialization of the database:
% pgbench -i -s 300 -F 100 --foreign-keys --unlogged-tables -h
127.0.0.1 -U luca pgbench

Then the test I ran six times, with a restart between one batch and the other:
% pgbench  -T 720  -j 4 -c 4  -h 127.0.0.1 -U luca  pgbench


The average tps is always around 795, so I believe in the
previous batch of tests I was misled by a few wrong numbers that
made the average float up and down.
The following is a detail of the runs. Note that this has been run
with the same external database activity as in the previous bunch of
tests.

# First batch

 name | setting
--+-
 checkpoint_completion_target | 0.5
 checkpoint_timeout   | 300
 shared_buffers   | 131072


runs = 765, 807, 781, 799, 822, 796
avg  = 795 tps


Sample checkpoint log:

08:43:06 LOG:  checkpoint starting: time
08:43:07 LOG:  checkpoint complete: wrote 13 buffers (0.0%); 0 WAL
file(s) added, 0 removed, 0 recycled; write=1.304 s, sync=0.032 s,
total=1.368 s; sync files=17, longest=0.006 s, average=0.001 s;
distance=9426 kB, estimate=9907 kB


# Second batch

 name | setting
--+-
 checkpoint_completion_target | 0.1
 checkpoint_timeout   | 300
 shared_buffers   | 131072

runs = 810, 777, 808, 774, 806, 798
avg  = 795 tps

Sample checkpoint log:

09:34:54 LOG:  checkpoint starting: time
09:34:55 LOG:  checkpoint complete: wrote 12 buffers (0.0%); 0 WAL
file(s) added, 1 removed, 0 recycled; write=1.204 s, sync=0.105 s,
total=1.363 s; sync files=13, longest=0.049 s, average=0.008 s;
distance=9431 kB, estimate=9716 kB


# Third batch


 name | setting
--+-
 checkpoint_completion_target | 0.9
 checkpoint_timeout   | 300
 shared_buffers   | 131072

runs = 809, 801, 772, 797, 789, 795
avg  = 793 tps

Sample checkpoint log:

12:17:28 LOG:  checkpoint starting: time
12:17:32 LOG:  checkpoint complete: wrote 39 buffers (0.0%); 0 WAL
file(s) added, 0 removed, 0 recycled; write=3.914 s, sync=0.004 s,
total=3.927 s; sync files=11, longest=0.001 s, average=0.000 s;
distance=393 kB, estimate=7606 kB




Re: How to run a task continuously in the background

2019-07-17 Thread Luca Ferrari
On Wed, Jul 17, 2019 at 9:38 AM Dirk Mika  wrote:
> That is basically still my main question. How do I start a background job 
> (e.g. a function) which waits by polling or LISTEN / NOTIFY for records in a 
> table to be processed.


You will have a trigger that, once new tuples are created (or older
ones updated, and so on), issues a NOTIFY.
Somewhere (within PostgreSQL or outside it) there will be a process
that issued a LISTEN and is blocked until a notify comes in. Then it
processes whatever you need to process.
As an example, your trigger function will be something like the sketch
below, and your listening process will be something that LISTENs on the
same channel and reacts to each notification.
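A minimal sketch of the trigger side (table, channel and column names
are purely illustrative):

CREATE OR REPLACE FUNCTION f_notify_new_row()
RETURNS trigger
LANGUAGE plpgsql
AS $$
BEGIN
   PERFORM pg_notify( 'new_data', NEW.id::text );  -- channel name is arbitrary
   RETURN NEW;
END;
$$;

CREATE TRIGGER tr_notify_new_row
AFTER INSERT ON work_queue
FOR EACH ROW
EXECUTE PROCEDURE f_notify_new_row();

The listener simply issues LISTEN new_data; in its own session and
blocks until a notification arrives, then runs your processing routine.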

This makes your processing fully asynchronous, and with some tune
allows you to decide the start/stop/resume policy as you need/wish.

Besides, it is quite hard for me to get to the point where you need to
check for new data every second, and therefore why you are not
satisfied with pg_cron or stuff like that.

Luca




Re: Issue related with patitioned table:How can I quickly determine which child table my record is in,given a specific primary key value?

2019-07-17 Thread Luca Ferrari
On Wed, Jul 17, 2019 at 9:19 AM 王旭  wrote:
> I tried something like this:  e.g., for symbol_id 6365,
> SELECT (hashint2(6365::SMALLINT)% 10)

shouldn't this be modulus 3 instead of 10?
Is the problem that record 6365 is not where you expected it to be?

As far as I know, there is no easy user-level way to get the routing
of a row to a partition, but you can juggle with the expression that
defines each partition and make a good guess.
However, your query should give a good idea:

# SELECT 'my_table_' || (hashint2(6365::smallint)% 3);
  ?column?

 my_table_2




Re: Issue related with patitioned table:How can I quickly determine which child table my record is in,given a specific primary key value?

2019-07-17 Thread Luca Ferrari
On Wed, Jul 17, 2019 at 11:41 AM James(王旭)  wrote:
> From these results I can tell the route to a table is not even related with 
> the mod function, right?
> So It's hard for me to do any kind of guesses...

Because you are using the wrong function.
According to \d+ on a child table and partbounds.c, the function called
is satisfies_hash_partition:

testdb=# select satisfies_hash_partition('153221'::oid, 3, 0, 6521);
 satisfies_hash_partition
--
 t
(1 row)

testdb=# select satisfies_hash_partition('153221'::oid, 3, 1, 6521);
 satisfies_hash_partition
--
 f
(1 row)

The first argument is the table id (the partitioned one, the root), the
second is the modulus (the number of partitions), the third is the
remainder, and the last is your value.
Therefore I suspect you have to iterate over your partition remainders
from 0 to x to see if a value fits in that partition, and then extract
the table name from that.
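A minimal sketch of such an iteration, assuming a parent table named
my_table hash-partitioned with modulus 3 and a smallint key value of
6365 (all names and numbers are illustrative):

SELECT r AS remainder,
       satisfies_hash_partition( 'my_table'::regclass, 3, r, 6365::smallint ) AS matches
FROM generate_series( 0, 2 ) AS r;

The remainder that comes back true tells you which partition the value
routes to; \d+ on each child table shows the corresponding
"FOR VALUES WITH (modulus 3, remainder N)" bound, and from there the
table name.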

Hope it's clear.

Luca




Re: Change in db size

2019-07-17 Thread Luca Ferrari
On Wed, Jul 17, 2019 at 7:33 PM Sonam Sharma  wrote:
>
> I took the backup using pg_dump with gzip option and restored it with psql.

It does not depend on the way you backed up; chances are that when you
restored it the database dropped table and index bloat, as already
mentioned.
The other, remote chance is that you did not back up all the
objects, so in this case you should show us your backup command line
(and the restore one too), and also the output of commands like \l+, \dt+,
\di+ and the differences you have found.

Luca




Re: Postgers 9.3 - ubuntu 16.04 - Are clogs entires automatically deleted?

2019-07-18 Thread Luca Ferrari
On Thu, Jul 18, 2019 at 10:34 AM Cumer Cristiano
 wrote:
> Today I wanted to dump a database but pg_dump is complaining about missing 
> clog files.
> I’m quite sure that nobody has deleted the files and that my filesystem is 
> consistent.
>
> If I look in my clog directory I can see the files starting from 0013. The 
> first ones are missing. I have checked the backups of a month ago and there I 
> can see previous clog files, 0010,0011. I don’t have older backups.

Uhm.. 9.3 is at end of life, Ubuntu 16.04 dates back three years
and is approaching the extended-security-releases-only stage, so it
looks like a rather ancient combination.
I suspect pg_dump will provide you with a hint about which particular
object is causing it to fail; in that case I suggest backing up
everything except that object. At least you will have something backed
up.
Any chance your disk was full and something nasty happened?

Luca




Re: Rearchitecting for storage

2019-07-19 Thread Luca Ferrari
On Thu, Jul 18, 2019 at 10:09 PM Matthew Pounsett  wrote:
> That would likely keep the extra storage requirements small, but still 
> non-zero.  Presumably the upgrade would be unnecessary if it could be done 
> without rewriting files.  Is there any rule of thumb for making sure one has 
> enough space available for the upgrade?   I suppose that would come down to 
> what exactly needs to get rewritten, in what order, etc., but the pg_upgrade 
> docs don't seem to have that detail.  For example, since we've got an ~18TB 
> table (including its indices), if that needs to be rewritten then we're still 
> looking at requiring significant extra storage.  Recent experience suggests 
> postgres won't necessarily do things in the most storage-efficient way.. we 
> just had a reindex on that database fail (in --single-user) because 17TB was 
> insufficient free storage for the db to grow into.

This could be trivial, but any chance you can partition the table
and/or archive unused records (at least temporarily)? An 18 TB table
quite frankly sounds like a good candidate to contain records no one is
interested in for the near future.
In any case, if you can partition the table, chances are you can at
least do a per-table backup that could simplify maintenance of the
database.

In order of desperation, I would also check the log files (I mean textual
logs, not WALs) because occasionally I have found them requiring a few GBs
on my disk, and they can be easily archived to gain some extra
space.
Then I would go for some commodity NAS to attach as extra storage, at
least for the upgrade process.

If all of the above fails, I would probably drop all the indexes
to gain extra space, perform the upgrade, and then reindex (removing
the old cluster, in case it has not been upgraded with the link
option).

Luca




Re: Rearchitecting for storage

2019-07-22 Thread Luca Ferrari
On Fri, Jul 19, 2019 at 4:41 PM Matthew Pounsett  wrote:
> My current backup plan for this database is on-site replication, and a 
> monthly pg_dump from the standby to be copied off-site.  Doing per-table 
> backups sounds like a great way to end up with an inconsistent backup, but 
> perhaps I misunderstand what you mean.
>
Well, my idea was that, as I was supposing and you confirmed, the
database is also full of historical data that will not be updated in
the future. Therefore you could at least perform a partitioning,
back up the historical data and, after having verified the backup, drop
that data. This will lead you to a "partially" online system (I mean
partially because it will not have, let's say, the last 20 years, but
only the last 10 years) and you will save space for upgrading. After
the upgrade is completed, you can restore the oldest data and you will
come back online with the full dataset. In this scenario the backup is
not inconsistent, since old data is supposed to stay frozen. If this
is not true, my idea is completely wrong.
I know, this is the desperate-poor-man approach, but I quite frankly
have no other ideas if you are constrained on space, money and time
(because the longer you wait, the harder the upgrade becomes,
in my opinion).
I would also inspect _now_ a possible refactoring of the database in
order to gain, if possible, some extra space. I mean, sorry to be
harsh, but a database with one huge table has not been designed
efficiently, so chances are some columns can be shrunk (to the
correct data type, perhaps?) and this could give you some extra
space.
But without having a better understanding of the scenario and the
context, I think I cannot help very much.

Sorry,
Luca




Re: pg_upgrade : 9.X to 11.X issue CentoOS 7.6

2019-07-23 Thread Luca Ferrari
On Tue, Jul 23, 2019 at 3:56 PM Perumal Raj  wrote:
> could not connect to source postmaster started with the command:
> "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/data/db/data" 
> -o "-p 5432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c 
> unix_socket_directory='/var/lib/pgsql'" start
> Failure, exiting

Is the /var/lib/pgsql directory present on the system?
Can you start the instance with the above command?


Luca




Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread Luca Ferrari
On Fri, Jul 26, 2019 at 1:01 AM PegoraroF10  wrote:
>
> Nope, no one message near those statements.
> I haven´t changed anything on Postgres.conf related with autovacuum.
>

Please take a look and post the results of the following query:

select name, setting from pg_settings where name like 'autovacuum%';

Any chance autovacuum is stopped?




Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-26 Thread Luca Ferrari
On Fri, Jul 26, 2019 at 1:15 PM PegoraroF10  wrote:
>
> select count(*), count(*) filter (where last_autovacuum is not null) from
> pg_stat_all_tables
> count   count
> 36605   1178
>

What are the results of the same query against pg_stat_sys_tables and
pg_stat_user_tables? That would help in understanding which set of
tables is not being vacuumed.

Luca




Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-28 Thread Luca Ferrari
On Fri, Jul 26, 2019 at 9:21 PM PegoraroF10  wrote:
> So, is it better to configure autovacuum properly to these tables or should
> I run vacuum periodically ?
> Obviously I´ll check our script too.
>

My guess would be that either you have disabled autovacuum on such
tables (I don't know if that is possible for system tables, but for
regular tables it is) or your script is running too frequently to let
autovacuum proceed on the sys tables. I would bet on the latter.
It seems to me you are also creating and deleting a lot of stuff, bloating
the catalog in such a way. Could it be your script is keeping a
transaction open (something like a try in a loop)?




Re: Error

2019-07-29 Thread Luca Ferrari
On Mon, Jul 29, 2019 at 2:22 PM Sonam Sharma  wrote:
> Java.lang.illegalstateexception: 3 rows retrieved for single property server 
> is.Checksum.

This has nothing to do with PostgreSQL; rather, I suspect you have
got a query that is not strict and is returning more than one row.
You need to provide us (and yourself) with some more information. A good
starting point could be to enable logging of queries.
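For instance (requires superuser and a configuration reload; remember
to turn it back off afterwards, since it can be very verbose):

ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();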

Luca




Re: How do I create a Backup Operator account ?

2019-07-29 Thread Luca Ferrari
On Tue, Jul 30, 2019 at 2:50 AM Marcos Aurelio Nobre
 wrote:
> I was wondering if it was possible to configure the pgAdmin4 menus to be 
> available for a particular login, only Backup & Restore items. But I'm not 
> sure how to associate a bank login account with a specific menu setting of a 
> client tool.

I think that hiding features from a GUI is a poor way to protect your
data (from what?).
If you are talking about a single database (or a restricted set of
databases), you can create a role with the fewest privileges, like only
SELECT, and use such an account to do the backup. But you need all the
grants on another account to restore the backup.
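A minimal sketch of such a role (role, database and schema names are
illustrative):

CREATE ROLE backup_operator LOGIN PASSWORD 'secret';
GRANT CONNECT ON DATABASE mydb TO backup_operator;
GRANT USAGE  ON SCHEMA public TO backup_operator;
GRANT SELECT ON ALL TABLES    IN SCHEMA public TO backup_operator;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup_operator;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup_operator;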

Anyway, I think you should rethink your aim: what's the point of
having a restricted user who can back up ALL the data?

Luca




Re: Rearchitecting for storage

2019-07-30 Thread Luca Ferrari
On Thu, Jul 18, 2019 at 10:09 PM Matthew Pounsett  wrote:
> That would likely keep the extra storage requirements small, but still 
> non-zero.  Presumably the upgrade would be unnecessary if it could be done 
> without rewriting files.  Is there any rule of thumb for making sure one has 
> enough space available for the upgrade?   I suppose that would come down to 
> what exactly needs to get rewritten, in what order, etc., but the pg_upgrade 
> docs don't seem to have that detail.  For example, since we've got an ~18TB 
> table (including its indices), if that needs to be rewritten then we're still 
> looking at requiring significant extra storage.  Recent experience suggests 
> postgres won't necessarily do things in the most storage-efficient way.. we 
> just had a reindex on that database fail (in --single-user) because 17TB was 
> insufficient free storage for the db to grow into.
>

I've done a test on a virtual machine of mine, with the following
three databases: one of 4.9 GB, one of 500 MB, one of 50 MB. I know
this is not even close to your environment; however, upgrading with
pg_upgrade from 10.9 to 11.4 _without_ the link option asked for 85%
extra space.

On a machine with a single database of 8.9 GB and a space occupation,
as reported by df, of 64% (meaning 46% available) I was able to upgrade
from 10.9 to 11.4 without the link option. Space occupation increased
by 90%.
Using the link option on the same cluster required 1.1% of extra space
(around 100 MB).
Of course, these are poor-man results, but they give you an idea of the
space required by pg_upgrade (which seems to be less than 100%, or 2x).

Hope this helps.
Luca




Re: How do I create a Backup Operator account ?

2019-07-30 Thread Luca Ferrari
On Tue, Jul 30, 2019 at 2:22 PM Ron  wrote:
> Luca, it is common some large Enterprise environments to have Operations
> staff that can run backups without being able to do anything else.  For
> example, SQL Server has a per-database user mapping named db_backupoperator.


Yes, but I think here we have to solve it with external tools, e.g.,
sudo. I see, however, a hard time configuring an account to execute
only pg_dump without being able to "escape" into the database itself.

Luca




Re: How do I create a Backup Operator account ?

2019-07-30 Thread Luca Ferrari
On Wed, Jul 31, 2019 at 2:48 AM Marcos Aurelio Nobre
 wrote:
> But I don't know how to implement this on Linux, nor how to write this entry 
> in the pg_hba.conf file.

I would start with an entry in pg_hba.conf like the following:

host    all    pg_backup_username    localhost    md5


The problem then comes in how to prevent the operating system user from
running psql. If you are doing the backup from a backup machine, one solution
would be to remove the psql executable and leave only the backup ones.
Again, this is a poor practice to me. Even something like the
following (untested) in /etc/sudoers will NOT prevent the user from
accessing the database:


User_Alias PGBACKUPUSERS = pg_backup_username
Cmd_Alias PGBACKUP = /usr/local/bin/pg_dump,
/usr/local/bin/pg_restore, ! /usr/local/bin/psql
PGBACKUPUSERS backup_host = PGBACKUP


because the user could use another client to inspect the database.
And again, I don't see the point in not allowing a user to access the
database while still being able to take a full backup. Therefore, I would
revoke all write grants from such a user and see if he can still do a
backup.

Luca




pgaudit.log_parameter

2019-07-31 Thread Luca Ferrari
Hello,
I'm a little confused about the setting pgaudit.log_parameter of the
pgaudit extension
(https://github.com/pgaudit/pgaudit/blob/master/README.md).
What's its purpose? Is there an example of a query that will trigger such
parameter logging? Apparently I cannot get it to provide me more
information than ''.

Thanks,
Luca




Re: Which version to upgrade upto

2019-07-31 Thread Luca Ferrari
On Wed, Jul 31, 2019 at 4:55 PM Vikas Sharma  wrote:
> Should I go for 10.9 or 11.2? The architects are suggesting 11.2

Moving from 9.5 requires in any case a major version upgrade; therefore
I would go for the latest one, 11.4.
Are there any particular needs feeding your doubts about the version?

Luca




Re: Cursors for PGJDBC queries

2019-08-01 Thread Luca Ferrari
On Thu, Aug 1, 2019 at 9:10 AM Rashmi V Bharadwaj  wrote:
> I am trying to set the fetch size for my ResultSet to avoid Out of Memory 
> exception. I have created the Statement with ResultSet.TYPE_FORWARD_ONLY, 
> ResultSet.CONCUR_READ_ONLY and ResultSet.HOLD_CURSORS_OVER_COMMIT and I've 
> also disabled auto commit as mentioned in the link Getting results based on a 
> cursor. I am still getting Out of memory error. My SQL query is a simple 
> SELECT statement to retrieve all the rows from a table. According to 
> https://postgrespro.com/list/thread-id/2370772, the holdability must be 
> CLOSE_CURSORS_AT_COMMIT. Could you please confirm this is a requirement?

Hard to say without more information. Could it be something you need
to set on the JVM?
However, you should post on the JDBC mailing list
.




Re: Cursors for PGJDBC queries

2019-08-01 Thread Luca Ferrari
On Thu, Aug 1, 2019 at 9:30 AM Luca Ferrari  wrote:
> Hard to say without more information. Could it be something you need
> to set on the jvm like

sorry, I intended to write
-Xms256m
-Xmx1024m
to adjust the JVM memory limits.
Again, I believe that the JDBC mailing list is the right place to ask
about such a problem.

Luca




Re: Altering multiple column types

2019-08-01 Thread Luca Ferrari
On Fri, Aug 2, 2019 at 7:42 AM Bharanee Rathna  wrote:
>
> Hi,
>
> I'm encountering an issue altering multiple column types in a single ALTER 
> TABLE
>
> psql (12beta2, server 11.4)

It is working on my 11.4 with psql 11.4. Could it be a problem with psql
version 12? Can you try again with a "stable" psql?

testdb=> create table users(id serial primary key, name varchar(255),
age int, email varchar(255));
CREATE TABLE
testdb=> alter table users alter column name type text, alter column
email type text;
ALTER TABLE
testdb=> drop table users;
DROP TABLE
testdb=> create table users(id serial primary key, name varchar(255),
age int, email varchar(255));
CREATE TABLE
testdb=> create index users_email_idx on users(email);
CREATE INDEX
testdb=> alter table users alter column name type text, alter column
email type text;
ALTER TABLE
testdb=> \d users
Table "public.users"
 Column |  Type   | Collation | Nullable |  Default
+-+---+--+---
 id | integer |   | not null | nextval('users_id_seq'::regclass)
 name   | text|   |  |
 age| integer |   |  |
 email  | text|   |  |
Indexes:
"users_pkey" PRIMARY KEY, btree (id)
"users_email_idx" btree (email)

testdb=> select version();
version
---
 PostgreSQL 11.4 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
8.3.0-6ubuntu1~18.10.1) 8.3.0, 64-bit
(1 row)




Re: Altering multiple column types

2019-08-02 Thread Luca Ferrari
On Fri, Aug 2, 2019 at 9:39 AM Bharanee Rathna  wrote:
>
> Hi Luca,
>
> I've tried it with a different client and Postgres 10.9, no luck
>
> psql (10.3, server 10.9)

I've fired up a 12beta2 and it works, with either psql 12 or psql 11.4 on Linux.
What if you run the statements within another client (pgAdmin, a Java
client or something else)?

% psql -U postgres testdb
psql (12beta2)
Type "help" for help.

testdb=# create table users(id serial primary key, name varchar(255),
age int, email varchar(255));
CREATE TABLE
testdb=# create index users_email_idx on users(email);
CREATE INDEX
testdb=# alter table users alter column name type text, alter column
email type text;
ALTER TABLE
testdb=# SELECT version();
 version
--
 PostgreSQL 12beta2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
8.3.0-6ubuntu1~18.10.1) 8.3.0, 64-bit
(1 row)




% ~/git/misc/PostgreSQL/pgenv/pgsql-11.4/bin/psql -U postgres testdb

psql (11.4, server 12beta2)
WARNING: psql major version 11, server major version 12.
 Some psql features might not work.
Type "help" for help.

testdb=# drop table users;
DROP TABLE
testdb=# create table users(id serial primary key, name varchar(255),
age int, email varchar(255));
CREATE TABLE
testdb=# create index users_email_idx on users(email);
CREATE INDEX
testdb=# alter table users alter column name type text, alter column
email type text;
ALTER TABLE




Re: Altering multiple column types

2019-08-02 Thread Luca Ferrari
On Fri, Aug 2, 2019 at 10:06 AM Bharanee Rathna  wrote:
>
> Hi Luca,
>
> testing this using docker images. I can replicate it with 10.9-alpine
>
> bash-5.0# psql -h127.0.0.1 -Upostgres test
> psql (10.9)
> Type "help" for help.
>
> test=# \d users
> Table "public.users"
>  Column |  Type  | Collation | Nullable |  Default
> ++---+--+---
>  id | integer|   | not null | 
> nextval('users_id_seq'::regclass)
>  name   | character varying(255) |   |  |
>  age| integer|   |  |
>  email  | character varying(255) |   |  |
> Indexes:
> "users_pkey" PRIMARY KEY, btree (id)
> "users_email_idx" btree (email)
> "users_name_idx" btree (name)

My fault!
I missed one index, so it works with one index and a two-column ALTER
TABLE, but not with two indexes:

testdb=# alter table users alter column name type text, alter column
email type text;
alter table users alter column name type text, alter column email type text;
psql: ERROR:  relation "users_name_idx" already exists


Therefore I think it is a strange behavior that I cannot explain.
I confirm the problem shows up in 11.4 and 12beta2, so I guess there
must be a reason I don't understand.

Luca




Re: Does pgadmin4 work with postgresql 8.4?

2019-08-06 Thread Luca Ferrari
On Tue, Aug 6, 2019 at 7:02 AM Murtuza Zabuawala
 wrote:
> It is mentioned on the front page of https://www.pgadmin.org/
> (Check introduction texts on the elephant wallpaper)

"pgAdmin may be used on Linux, Unix, Mac OS X and Windows to manage
PostgreSQL 9.2 and above."

However, I would suspect it can work even on older versions.

Luca




vacuum & free space map

2019-08-06 Thread Luca Ferrari
Hi,
I'm not understanding why a vacuum full clears the FSM information while
a normal vacuum does not. On a table with fillfactor = 50 I've got:

testdb=> vacuum full respi.pull_status;
VACUUM
testdb=> SELECT * FROM pg_freespace( 'respi.pull_status', 0 );
 pg_freespace
--
0
(1 row)

testdb=> vacuum respi.pull_status;
VACUUM
testdb=> SELECT * FROM pg_freespace( 'respi.pull_status', 0 );
 pg_freespace
--
 4096
(1 row)

So why is vacuum full not considering the fillfactor and clearing the
FSM? Or am I misunderstanding the results from pg_freespace?
This is on postgresql 11.4.

Thanks,
Luca




Re: Preventing in-session 'set role' commands

2019-08-06 Thread Luca Ferrari
On Tue, Aug 6, 2019 at 10:26 AM VO Ipfix  wrote:
>
> Is this something that can be accomplished with PostgreSQL? Any suggestions 
> thoughts are welcome, however tangential

Perhaps SET SESSION AUTHORIZATION?


Luca




Re: vacuum & free space map

2019-08-06 Thread Luca Ferrari
On Tue, Aug 6, 2019 at 3:50 PM Tom Lane  wrote:
> VAC FULL builds a new physical table, which has no FSM to start with.

Thanks, that was what I was suspecting.

Luca




Re: Does pgadmin4 work with postgresql 8.4?

2019-08-06 Thread Luca Ferrari
On Tue, Aug 6, 2019 at 2:46 PM Benedict Holland
 wrote:
>
> To me, there is a huge difference between unsupported and won't work.

That's why the only truly working software you have is psql 8.4.
As others have stated, it is too far in the past to say what is working
and what is not; you have to walk through the release notes and find out
yourself which features are not going to work. Or stick with psql.

Good luck,
Luca




Re: Why must AUTOCOMMIT be ON to do txn control in plpgsql procedure?

2019-08-07 Thread Luca Ferrari
On Wed, Aug 7, 2019 at 12:19 AM Bryn Llewellyn  wrote:
> 1. my call p2() starts a txn.

In my opinion this is the point: a procedure must be the "owner" of the
transaction to issue transaction control statements. You can watch the
different behaviors by placing txid_current() calls here and there, before
and within p2, and re-running with autocommit off and on.
With autocommit = on the call opens a transaction and the procedure is
the only thing within the transaction, so it has full control. With
autocommit off you gain an implicit begin and the procedure is
possibly not the only thing you have within the transaction.
You can test it with:

testdb=# begin;
BEGIN
testdb=# call p2();
INFO:  TXID 134994
ERROR:  invalid transaction termination
CONTEXT:  PL/pgSQL function p2() line 9 at ROLLBACK
testdb=# rollback;
ROLLBACK


Having said that, I believe there is no reason ever to begin a
procedure with a rollback.

Here's a small example of the kind of procedure that performs a few
commits and rollbacks:
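(a minimal sketch; table and procedure names are illustrative, and it
requires PostgreSQL 11 or later)

CREATE TABLE IF NOT EXISTS demo_commit( id int, note text );

CREATE OR REPLACE PROCEDURE p_commit_demo()
LANGUAGE plpgsql
AS $$
BEGIN
   FOR i IN 1 .. 10 LOOP
      INSERT INTO demo_commit VALUES ( i, 'row ' || i );
      IF i % 2 = 0 THEN
         COMMIT;     -- even-numbered rows are kept
      ELSE
         ROLLBACK;   -- odd-numbered rows are discarded
      END IF;
   END LOOP;
END;
$$;

-- CALL p_commit_demo();  (must be invoked outside an explicit transaction block)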


Luca

P.S.:
I don't believe that asking on a public mailing list for a company to
answer is the right thing to do; you can always ask them directly on your
own.




Re: pg_wal fills up on big update query

2019-08-07 Thread Luca Ferrari
On Wed, Aug 7, 2019 at 3:34 PM Daniel Fink (PDF)  wrote:
> My current idea is to lock both tables completely from access (the queried 
> and the updated one) so that postgresql does not have to ensure isolation for 
> concurrent queries by keeping a copy of each row.

I'm not sure that locking will prevent the snapshotting and the WAL
machinery, but someone more expert on the area could clarify this.
Since the column is nullable, I would add it outside of the
transaction, and then do the update. If that still fails, I would try
to split the update into small chunks (after all, it's an update, so it
is something where you can line up the data).
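As an illustration of the chunked approach (table and column names are
purely illustrative): each statement is its own transaction, so the WAL
is produced in smaller bursts and checkpoints have a chance to catch up
in between.

UPDATE big_table SET new_col = 'x' WHERE id BETWEEN       1 AND 1000000;
UPDATE big_table SET new_col = 'x' WHERE id BETWEEN 1000001 AND 2000000;
UPDATE big_table SET new_col = 'x' WHERE id BETWEEN 2000001 AND 3000000;
-- ... and so on, up to max(id)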

Luca




Re: Why must AUTOCOMMIT be ON to do txn control in plpgsql procedure?

2019-08-07 Thread Luca Ferrari
On Wed, Aug 7, 2019 at 8:28 PM Bryn Llewellyn  wrote:
> About your “I believe there is no reason ever to begin a procedure with a 
> rollback”, I already explained why I did that. My higher goal is to take 
> advantage of the “serializable” isolation level to safely enforce a multi-row 
> data rule. And I want my pspgsql proc—following the time-honored philosophy 
> for stored procs—to own the complete implementation. I discovered that doing 
> “rollback” as the first executable statement in my proc allowed me to do “set 
> transaction isolation level serializable”. And I’ve found no other way to do 
> this. As I mentioned, the “real” version of my proc, written this way does 
> pass my functionality tests.

I'm sorry, I still don't get the point in issuing a rollback as the first
instruction, because it restricts, at least in my opinion, the use case
of your procedure, which in turn restricts the meaning of a procedure
(reusability). However, since you are dealing with it, I'm fine.

>
> B.t.w., I noticed that “set transaction isolation level serializable” must be 
> the very first statement after “rollback” (or “commit”). Even an invocation 
> of txid_current() after the rollback and before the ““set transaction” causes 
> this runtime error: “SET TRANSACTION ISOLATION LEVEL must be called before 
> any query”.
>

Well, SET TRANSACTION ISOLATION must be the very first instruction of
every transaction, not only within the case you describe.
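A minimal illustration:

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- fine: no query has run yet
SELECT count(*) FROM pg_class;
-- a second SET TRANSACTION ISOLATION LEVEL here would fail with
-- "SET TRANSACTION ISOLATION LEVEL must be called before any query"
COMMIT;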

> About your PS, I’m new to this list—so forgive me if I didn’t follow proper 
> etiquette. But as adrian.kla...@aklaver.com pointed out, the functionality 
> under discussion here is part of the core PostgreSQL implementation.

Sorry, but in your original post you placed the sentence: "I’m hoping
that someone from 2ndQuadrant can answer my questions"; that's why I
pointed out that, as people at 2ndQuadrant have already told you,
this has nothing to do with 2ndQuadrant specifically. And that's why I
replied that "hoping" for an answer is not as good as asking them
directly.
And please stop quote-posting and jumping to different parts of the
message, because it makes reading it very hard.

Luca




Re: Why must AUTOCOMMIT be ON to do txn control in plpgsql procedure?

2019-08-08 Thread Luca Ferrari
On Wed, Aug 7, 2019 at 11:36 PM Benedict Holland
 wrote:
>
> Also, I have never seen a rollback at the start of a proc. A pure 
> hypothetical is that it is doing nothing or definitely not what you think it 
> is.

That's my point, thanks.

Luca




Re: Having the issue in while selecting the data and feltering in order by.

2019-08-08 Thread Luca Ferrari
On Thu, Aug 8, 2019 at 11:20 AM nikhil raj  wrote:
> Same when i run this in linux machine i am getting this out in  different 
> sort order on the same query.

A collation problem?
What does this query do?
SELECT *
FROM (VALUES ('a'), ('___b1'), ('_a1'),('a2'),('a3'),('a5'), ('a2')) t
(val) order by val COLLATE "C";




Re: Understanding PostgreSQL installer debug log

2019-08-09 Thread Luca Ferrari
On Fri, Aug 9, 2019 at 8:46 AM Ramesh Maddi  wrote:
> ./postgresql-9.6.6-1-linux-x64.run --installer-language en --serviceaccount 
> postgres --servicename postgresqld  --datadir "/home/postgres/" --prefix  
> "/home/postgres" --superpassword 1234 --serverport 5432 --debuglevel 4 
> --debugtrace ./postgresql-debug.log --mode unattended
>

I suspect this is a dump produced by the Qt used by the EDB installer; maybe
you should ask them for support on this:


18.8.0
Linux 4.18.0-25-generic x86_64


Please note that, in my case, I was able to get a dump immediately because
the data directory did not exist. Why don't you at least use attended
mode to see if it is something as trivial as in my case?

Besides, is there a specific reason why you are not using
distro-specific packages? See the note here
.
Luca




Re: Understanding PostgreSQL installer debug log

2019-08-09 Thread Luca Ferrari
On Fri, Aug 9, 2019 at 11:01 AM Ramesh Maddi  wrote:
> Thanks in advance and appreciate your response. Please consider this as a 
> high priority for us.



I don't think re-sending the message will either raise your priority
or make someone else reply to you quicker.

Luca




Re: ...

2019-08-09 Thread Luca Ferrari
On Fri, Aug 9, 2019 at 12:10 PM Daniel Vos  wrote:
> I use pgadmin 4.11 on windows 10 ( firefox, opera),
> with a table more 3000 rows
> if I use the contextual menu  "view / edit data -> all rows", the grid
> result show 1000 rows.

Works fine for me on Linux/Firefox 68.
How can you tell that not all tuples are loaded?
I suggest you ask on the pgadmin mailing list, providing more
details (and a decent subject).

Luca




Re: Updating 3-table dataset

2019-08-09 Thread Luca Ferrari
On Fri, Aug 9, 2019 at 2:29 PM Moreno Andreo  wrote:
>
> Hi all,
>  I don't know if that's the heat burning my brain but I can't find a
> solution to what seemed a simple operation to me.
>
> I have 3 tables
> create table t_all
> {
> id uuid,
> ref_id uuid (FK to t_ana.id)
> };
> create table t_ana
> {
> id uuid,
> code text
> };
> create table t_app
> {
> id uuid,
> code text(subset of t_ana.code)
> }
> I need to update t_all set t_all.id = t_app.id having t_ana.code in
> t_app.code (I wrote it in some kind of meta-sql but I hope it's clear)
> I tried to create a view but I need an INSTEAD OF trigger, since it
> spreads among 3 tables so I hope there's some faster path to achieve the
> solution
>

Not sure I got what you need, and I've not tested, but something like
the following:

WITH must_update AS (
    SELECT app.id AS app_id, ana.id AS ana_id
    FROM t_app app, t_ana ana
    WHERE app.code = ana.code
)
UPDATE t_all
SET id = ( SELECT app_id FROM must_update WHERE t_all.ref_id = must_update.ana_id );

I've written it as a CTE because it is a little clearer in my mind, but
you can push it down as a subquery, of course.

Luca




Re: ORA-24345: A Truncation or null fetch error occurred -ora2pg

2019-08-13 Thread Luca Ferrari
On Tue, Aug 13, 2019 at 10:23 AM Daulat Ram  wrote:
> Initially did not have LongReadLen set, so I thought this was the cause. But, 
> I have set LongReadLen, on the db handle, equal to 9000.

Apparently this is an Oracle problem, because it accepted data longer
than its declared type, so my guess would be that your table has a
char(n) column that could be enlarged before the migration.

Hope this helps.
And please report the version of ora2pg when asking for help.

Luca




Re: Changing work_mem

2019-08-13 Thread Luca Ferrari
On Tue, Aug 13, 2019 at 5:59 PM rihad  wrote:
> [dbname] LOG:  temporary file: path
> "base/pgsql_tmp/pgsql_tmp93683.257381", size 594
>

The setting 'work_mem' has context 'user', which means it will
affect running sessions unless a session has already issued its own
SET work_mem to xxx.
So this could be a reason why you don't seem to see any change.

Also keep in mind that work_mem works on a per-connection basis, so you
may end up seeing 521MB x num_connections if all your clients are
doing the same kind of sort concurrently, which probably causes
PostgreSQL to spill to disk because the memory is not available.
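To double check from within a session (just a sketch, the value is only an example):

SHOW work_mem;
SET work_mem = '512MB';   -- affects only the current session
SELECT name, setting, unit, context, source
FROM pg_settings
WHERE name = 'work_mem';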

Hope this helps.
Luca




Re: Question on pgwatch

2019-08-16 Thread Luca Ferrari
On Wed, Aug 14, 2019 at 5:10 PM Bikram MAJUMDAR
 wrote:
> From where did you download docker for linux,  and , the docker with the 
> pgwatch container?
> And, any installation/configuration tips for pgwatch running on linux?

I'm not sure what you are effectively asking me; however, the
starting point for both projects are the related web sites:
- docker 
- pgwatch  and there you can find detailed
instructions, from downloading to running it


Luca




Re: Variable constants ?

2019-08-16 Thread Luca Ferrari
On Thu, Aug 15, 2019 at 11:27 PM Rich Shepard  wrote:
> create table labor_rate_mult (
>rate real primary_key,
>start_date   date not null,
>end_date date
> )

I think the rate should not be the primary key, since that would
prevent keeping the whole history when the value is reset to a
previous one. Probably a surrogate key will do the trick here.
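A minimal sketch of what I mean, keeping your column names and assuming a plain serial is acceptable as a surrogate key:

create table labor_rate_mult (
   id         serial primary key,
   rate       real not null,
   start_date date not null,
   end_date   date
);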

The range solution is probably a more elegant one.

I would also propose the over-complicated possibility of making an
extension that wraps functions returning each single constant value.
In this way changing a value means upgrading the extension, which is
another way to keep a history of changes; but that is probably because I
don't like one-row tables so much.

Luca




Re: Question on pgwatch

2019-08-17 Thread Luca Ferrari
On Fri, Aug 16, 2019 at 11:57 PM Bikram MAJUMDAR
 wrote:
> When I go to the URL link for pgwatch that you have given I see the following 
> :  How do I get to pgwatch docker download and install on my linux server?

I don't want to be harsh, but you should start doing your homework and
report back about specific problems.
Since I'm not a pgwatch user, as already stated, I believe that you
should get the image with the following:

docker pull cybertec/pgwatch2

as reported here .
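After that, something along these lines should bring it up (the port mappings are my recollection of the project README, so double check it):

docker run -d --name pw2 -p 3000:3000 -p 8080:8080 cybertec/pgwatch2

with Grafana then reachable on port 3000 and the admin interface on 8080.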

Luca




question about zeroes in the wal file names

2019-08-18 Thread Luca Ferrari
I'm just curious to better understand the naming convention behind wal
files, because I've seen on a system of mine that the wals created
were:

000000050000020E000000FF
000000050000020F00000000

while I was expecting the last part to continue to 0x100 (keeping 20E). So
I dug into the code and I've seen, from the XLogFileName macro, that the
last part is built as the remainder of the segment number divided by the
number of segments per xlog id:

#define XLogFileName(fname, tli, logSegNo, wal_segsz_bytes)\
snprintf(fname, MAXFNAMELEN, "%08X%08X%08X", tli,\
 (uint32) ((logSegNo) / XLogSegmentsPerXLogId(wal_segsz_bytes)), \
 (uint32) ((logSegNo) % XLogSegmentsPerXLogId(wal_segsz_bytes)))


and with the default WAL segment size of 16 MB that gives 256 segments
per xlog id, so the remainder never goes beyond FF. Assuming I haven't
missed anything, this means that there are 6 zeroes that will never change
in the last 8 chars of the WAL filename. Is this therefore only done to
handle PostgreSQL WAL sizes of 4 GB each?
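As a side note, the same mapping can be inspected from SQL (assuming PostgreSQL 10 or later for the function names):

select pg_current_wal_lsn(), pg_walfile_name( pg_current_wal_lsn() );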

Thanks,
Luca




Re: Rename a column if not already renamed.?

2019-08-20 Thread Luca Ferrari
On Tue, Aug 20, 2019 at 9:07 PM Day, David  wrote:
> EXECUTE format ('ALTER TABLE %s RENAME %s TO %s', schema_table_, 
> old_name_, new_name);

Hard to say without the error, but any chance there is a quoting problem?
EXECUTE format ('ALTER TABLE %s RENAME %I TO %I', schema_table_,
old_name_, new_name);
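Just to show what %I buys you over %s (names are made up):

select format( 'ALTER TABLE %s RENAME %I TO %I', 'public.my_table', 'old col', 'new_col' );
-- ALTER TABLE public.my_table RENAME "old col" TO new_col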

Luca




Re: Permission for not Django app to do Write and Read

2019-08-24 Thread Luca Ferrari
On Sat, Aug 24, 2019 at 11:53 AM Peter Wainaina  wrote:
>
> Thanks much for the response. This is what I mean am a database administrator 
> for a production company and the product owner doesn't want me to be able to 
> either read or write information that will come from Django application.

I personally hate this kind of setup, because it does not make any
sense to me that a developer must set up an application that has to
interact with a database the developer himself cannot interact
with.
However, keep it simple: define a django user, assign each object to
such user, and revoke any permission from public.
Then let the product owner set the password.
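A rough sketch of that setup, with hypothetical object names (the real password is then set by the owner alone):

create role django_app login password 'changeme';
revoke all on schema public from public;
grant usage on schema public to django_app;
alter table customers owner to django_app;   -- repeat for every application object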
Or look at pgcrypto, because the only reliable way to deal with a "don't
look at my data" requirement is cryptography.

Luca




Re: For SELECT statement (just a reading one, no 'FOR UPDATE'), is COMMIT or ROLLBACK preferred?

2019-08-26 Thread Luca Ferrari
On Sun, Aug 25, 2019 at 10:12 PM David Wall  wrote:
> The main issue is that if
> we do a SELECT and get a ResultSet that has no rows, if we do a commit
> or a rollback, it seems reasonable that these are identical as no
> changes were made.  My inclination is to do a Connection.commit() on the
> connection because it wasn't in error or anything even if no rows were
> found, but wondered if a Connection.rollback() has any difference
> (positive/negative) in such a scenario.

Quite frankly, I would redesign your application workflow. It sounds like
you are building a framework to issue queries, and I suggest you
clearly mark transactions only when needed, because, performance aside,
it does not make much sense to choose commit/rollback based on a
"data quantity" discrimination. At least, as far as you described it.

Moreover, as Tom pointed out, there could be a SELECT against a
function (that could return nothing at all) with side effects. How are
you going to discriminate that case?

Luca




Re: cannot CREATE INDEX because it has pending trigger events

2019-08-27 Thread Luca Ferrari
On Tue, Aug 27, 2019 at 9:33 AM Simon Kissane  wrote:
> If I swap the order of the CREATE UNIQUE INDEX and the INSERT, so the index 
> gets created first, the error doesn't happen.

It also works when removing INITIALLY DEFERRED from the foreign key,
since it seems you are creating tuples in the right order (at least in
this simplified model).
The index cannot be created because the deferred foreign key is validated
only at transaction commit.
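Another thing you could try, keeping the deferred foreign key, is to force the pending checks to run before creating the index (untested against your real schema):

SET CONSTRAINTS ALL IMMEDIATE;
-- now the queue of pending trigger events should be empty
CREATE UNIQUE INDEX ...;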

>
> I don't understand why having a deferred FK to check should stop me from 
> creating a unique index. I also don't understand why this worked in 9.6 but 
> not anymore.

I've tested against 12beta2, it would be nice to understand what
changed in (I suspect) SET TRANSACTION.

Luca




Re: psql \copy hanging

2019-08-27 Thread Luca Ferrari
On Tue, Aug 27, 2019 at 9:54 AM Arnaud L.  wrote:
> Any other idea ? I'll change the lines order for tonight's run, but that
> is not what I'd call a solution...

Does it hang against the same line content or the same line number?
Are you able to run the script automatically during working hours (to
rule out firewalling or upgrades running in parallel with your nightly
script execution)?
Any chance something is querying the same data and a lock blocks the
transaction (see pg_locks)?

Just throwing some desperate ideas on the table.

Luca




Re: psql \copy hanging

2019-08-27 Thread Luca Ferrari
On Tue, Aug 27, 2019 at 10:48 AM Arnaud L.  wrote:
> I can run the script just fine during working hours.

I meant through your scheduler (cron or something).

> It hangs against the same line in the sql script, all lines being "\copy
> (select ) to 'file on unc share'".

This is a new detail to me: what if you output to a local file and
move it to the share afterwards?


> This line is simply the longest running query because the view inside
> the select outputs almost 1M rows and does some subqueries.
>

I still don't get it: is it the line with the longest content or the
highest-numbered line in the output?

> Also when the script is hung, output has not started (file size is 0).

This makes me think there is some problem with the share, not the
script. Again: test against a local file that you then move (rsync?) to the
share afterwards.
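Something as simple as this would do for a test (paths and names are made up):

psql -c "\copy ( select * from my_big_view ) to '/tmp/my_big_view.csv'"
mv /tmp/my_big_view.csv /mnt/share/my_big_view.csv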

Luca




Re: psql \copy hanging

2019-08-27 Thread Luca Ferrari
On Tue, Aug 27, 2019 at 12:34 PM Arnaud L.  wrote:
> I could do this but it would make the script a lot more complicated.
> There are a lot of views that I \copy directly to this share, and this
> is the only one that poses any problem.

I would however give it a try. I would schedule a nightly execution
with a local path to see if that completes.

> Sorry, I don't undertand either. Your question was "Does it hangs
> against the same line content or the same line number?".
> I run an sql script that contains a bunch of \copy commands. It is
> always the same \copy that is hanging.

Now I get it: your script is hanging on a specific \copy command,
while I was thinking it was hanging while dumping a specific
table row within one copy.

> I could write the file locally, but for robustness I'm trying not to
> rely on the local FS so that this script could be run from any machine
> on the network without modifications.

My personal experience with shares and networks that are not very
stable is that they can block, and that is why I'm suggesting to try
the local filesystem, to see whether this is a share-related problem or a
resource problem. Scheduling the script at a different time could also
help, because at that time the machine (possibly the machine serving
the share) may be so busy that it does not respond.

Luca




Re: Pointers towards identifying bulk import bottleneck (walwriter tuning?)

2019-08-27 Thread Luca Ferrari
On Tue, Aug 27, 2019 at 12:06 PM Holtgrewe, Manuel
 wrote:
> iotop tells me that walwriter does not go beyond ~35MB/s so maybe this is the 
> culprit? Is there a way to tune walwriter I/O performance?

As far as I know, the walwriter is there to help the backends, so
in case it cannot keep up with the WAL the backends will write it on
their own. If my understanding is correct, I don't think that could be
the bottleneck.
I've seen you have checkpoints at 4h, which is quite huge to me. Do you
have any hint that checkpoints are happening too frequently?
Any chance you can turn fsync off (only if this is a testbed)?
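A quick way to check, assuming the statistics have not been reset recently:

select checkpoints_timed, checkpoints_req from pg_stat_bgwriter;

A checkpoints_req value much higher than checkpoints_timed usually means max_wal_size is too small for the load.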

Also this  could
be a better place to ask for help.

Luca




Re: psql \copy hanging

2019-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2019 at 9:09 AM Arnaud L.  wrote:
> OK, so this was enough for last night's schedule to run without problem.
> I still don't get it so I'm not satisfied with this solution, but at
> least it works.
> I'll keep the list informed if something new arises.

I don't want to be pedantic, but I would have tried a single
change at a time.
And my bet is that the local file alone would have done the trick (i.e., it is a weird
share problem).

If you are going to invest some time, you could also try to write a
small file on the share just before the copy starts, so that you are
guaranteed the share is working. Something like:

echo $(date) >> $SHARE/log.txt
psql -c '\copy ...'
echo 'done' >> $SHARE/log.txt

Luca




Re: Work hours?

2019-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2019 at 12:27 AM stan  wrote:
> Any thoughts as to the best way to approach this?

I've written a couple of functions to compute working hours depending
on a possible per-day hour template.
Another possible implementation besides the other proposed solutions.


Luca




Re: Question about password character in ECPG's connection string

2019-08-28 Thread Luca Ferrari
On Wed, Aug 28, 2019 at 10:47 PM Alban Hertroys  wrote:
> Perhaps it helps to URL-encode the & in the password as %26?

The OP already tried that, without success.
Could it be necessary to escape the & with a backslash or single quotes?

Luca




Re: Question about password character in ECPG's connection string

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 1:08 PM Egashira, Yusuke
 wrote:
> According to my tests, I think that the ECPG's connection_option seems not to 
> accept '&' character as password anyhow...
> ECPG CONNECT's connection_option seems to have some restricted characters.


As far as I understand from

the unwanted characters are '&' and '='. Even if the connection string
looks like a URL, it is managed as plain text without any sort of
escaping.



> I hope to this limitation will be documented because it causes confusion.

If this is confirmed, I agree this should be documented.

Luca




Re: Security patch older releases

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 2:05 PM Erika Knihti-Van Driessche
 wrote:
> So, I have postgres 9.6.9 and customer wants it to be updated to 9.6.11, 
> because that is their current testing "sandbox" version, which was not 
> installed by me and is also another linux distro.

Binary packages are always at the latest minor version, so you have to
either force your client to test against the latest version or install it
on your own, by downloading the tarball or by using tools like pgenv and
the like.
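With pgenv, for instance, it should boil down to something like (a sketch, check the project README):

pgenv build 9.6.11
pgenv use 9.6.11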

Luca




Re: implicit transaction changes trigger behaviour

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 2:16 PM Willy-Bas Loos  wrote:
> delete from b;
> --DELETE 3

Here the trigger is fired 3 times (once per row), and on every single
check it finds a row in 'a', that is, your variable n_b_type1 is always
1, which causes the trigger (fired on each row) not to abort. If you
first delete the row that makes the trigger fail, you will not be able
to make the deletion happen, even outside an explicit transaction:

testdb=# delete from b where type = 1;
DELETE
testdb=# delete from b;
ERROR:  Each record of a must have exactly 1 corresponding records in
b of type 1. But after this delete the a-record with id 5 would have 0
b-records of type 1, so the operation has been cancelled.


So it seems to me the problem is within the trigger: when executing outside
the explicit transaction your row is deleted last, and this lets the
deletion "iterate" and remove all the rows. Within the transaction,
when the trigger fires, no rows are left, so it fails. Either this is
the behavior you want or the query within the trigger is wrong.

Luca




literal vs dynamic partition constraint in plan execution

2019-08-29 Thread Luca Ferrari
Ok, the title is a little buzzy; however, I've got a partitioned table
where one "leaf" has a set of checks against a timestamp field to ensure
that tuples within such table belong to a given year and month:

testdb=# \d respi.y2019m08
...
Partition of: respi.y2019 FOR VALUES IN ('8')
Check constraints:
"y2019_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2019::double precision)
"y2019_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2019, 1,
1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2019, 12,
31, 23, 59, 59::double precision))
"y2019m08_mis_ora_check" CHECK (date_part('month'::text, mis_ora)
= 8::double precision)
"y2019m08_mis_ora_check1" CHECK (date_part('year'::text, mis_ora)
= 2019::double precision)
"y2019m08_mis_ora_check2" CHECK (mis_ora >= make_timestamp(2019,
8, 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2019,
8, 31, 23, 59, 59::double precision))
"y2019m08_mis_ora_check3" CHECK (mis_ora >= make_timestamp(2019,
8, 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2019,
8, 31, 23, 59, 59::double precision))

So y2019m08 accepts only tuples where 'mis_ora' has a timestamp that
falls within the eighth month of the year.
Now, if I look at the plan for this query, everything works as expected
(I disabled parallel scans to better see the plan):

testdb=# explain select * from respi.root where ts >= '2019-08-28
23:35:00.007245' and mis_ora >= '2019-08-29 16:28:48.711482'   order
by ts;
--
 Sort  (cost=353986.27..353991.59 rows=2129 width=40)
   Sort Key: y2019m08.ts
   ->  Append  (cost=0.00..353868.58 rows=2129 width=40)
 ->  Seq Scan on y2019m08  (cost=0.00..353409.93 rows=1 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m09  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m10  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m11  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))
 ->  Seq Scan on y2019m12  (cost=0.00..28.00 rows=133 width=40)
   Filter: ((ts >= '2019-08-28 23:35:00.007245'::timestamp
without time zone) AND (mis_ora >= '2019-08-29
16:28:48.711482'::timestamp without time zone))


The "as I expected" means that the system starts scanning from
y2019m08 and following (in time) tables, and does not scan previous
time tables.
This works if the mis_ora is compared against a literal timestamp, but
if I simply change it with a dynamic timestamp:

testdb=# explain select * from respi.root where ts >= '2019-08-28
23:35:00.007245' and   mis_ora >= current_timestamp   order by ts;
   QUERY PLAN

 Sort  (cost=4654860.37..4654865.25 rows=1952 width=36)
   Sort Key: r.ts
   ->  Nested Loop  (cost=0.00..4654753.69 rows=1952 width=36)
 Join Filter: (r.sen_id = s.sen_id)
 ->  Append  (cost=0.00..4638927.56 rows=3204 width=32)
   ->  Seq Scan on y2018m01 r  (cost=0.00..31.00 rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))
   ->  Seq Scan on y2018m02 r_1  (cost=0.00..31.00
rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))
   ->  Seq Scan on y2018m03 r_2  (cost=0.00..31.00
rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))
   ->  Seq Scan on y2018m04 r_3  (cost=0.00..31.00
rows=133 width=32)
 Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND (mis_ora >=
CURRENT_TIMESTAMP))

the tables for the past year are scanned as well. Moreover, the planner
thinks I will get 133 rows out of, for instance, y2018m01, which is
impossible.
So, have I defined the constraints on each table in the wrong manner?


testdb=# select version();
 version
--

Re: literal vs dynamic partition constraint in plan execution

2019-08-29 Thread Luca Ferrari
On Thu, Aug 29, 2019 at 4:45 PM Luca Ferrari  wrote:
>
> Ok, the title is a little buzz, however I've got a partitioned table
> and one "leaf" has a set of checks against a timestamp field to ensure
> that tuples within such table belongs to the year and month:

Of course, all the siblings have similar constraints. My partitioning
starts at a table named "root", then it has a level for the year,
and each year has subpartitions for the months:
- root
   - y2018
 - y2018m01, y2018m02, ...
  - y2019
   - y2019m01, y2019m02 

All partitions have been created equally, and constraints seem fine to me:

testdb=# \d respi.y2018m01
...
Partition of: respi.y2018 FOR VALUES IN ('1')
Check constraints:
"y2018_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2018::double precision)
"y2018_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018, 1,
1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018, 12,
31, 23, 59, 59::double precision))
"y2018m01_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2018::double precision)
"y2018m01_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018,
1, 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018,
1, 31, 23, 59, 59::double precision))



testdb=># \d+ respi.y2018
...
Partition of: respi.root FOR VALUES IN ('2018')
Partition constraint: ((date_part('year'::text, mis_ora) IS NOT NULL)
AND (date_part('year'::text, mis_ora) = '2018'::double precision))
Partition key: LIST (date_part('month'::text, mis_ora))
Check constraints:
"y2018_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
2018::double precision)
"y2018_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018, 1,
1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018, 12,
31, 23, 59, 59::double precision))
Partitions: respi.y2018m01 FOR VALUES IN ('1'),
respi.y2018m02 FOR VALUES IN ('2'),
respi.y2018m03 FOR VALUES IN ('3'),
respi.y2018m04 FOR VALUES IN ('4'),
respi.y2018m05 FOR VALUES IN ('5'),
respi.y2018m06 FOR VALUES IN ('6'),
respi.y2018m07 FOR VALUES IN ('7'),
respi.y2018m08 FOR VALUES IN ('8'),
respi.y2018m09 FOR VALUES IN ('9'),
...

With the above constraint, the whole branch starting at y2018 should be
excluded when selecting with
mis_ora >= CURRENT_TIMESTAMP
(the date on the server is right, of course).
Why is it scanned instead (as reported by the execution plan in the
previous email)?

Thanks,
Luca




Re: literal vs dynamic partition constraint in plan execution

2019-08-30 Thread Luca Ferrari
On Fri, Aug 30, 2019 at 8:29 AM Luca Ferrari  wrote:
> testdb=># \d+ respi.y2018
> ...
> Partition of: respi.root FOR VALUES IN ('2018')
> Partition constraint: ((date_part('year'::text, mis_ora) IS NOT NULL)
> AND (date_part('year'::text, mis_ora) = '2018'::double precision))
> Partition key: LIST (date_part('month'::text, mis_ora))
> Check constraints:
> "y2018_mis_ora_check" CHECK (date_part('year'::text, mis_ora) =
> 2018::double precision)
> "y2018_mis_ora_check1" CHECK (mis_ora >= make_timestamp(2018, 1,
> 1, 0, 0, 0::double precision) AND mis_ora <= make_timestamp(2018, 12,
> 31, 23, 59, 59::double precision))
> Partitions: respi.y2018m01 FOR VALUES IN ('1'),
> respi.y2018m02 FOR VALUES IN ('2'),
> respi.y2018m03 FOR VALUES IN ('3'),
> respi.y2018m04 FOR VALUES IN ('4'),
> respi.y2018m05 FOR VALUES IN ('5'),
> respi.y2018m06 FOR VALUES IN ('6'),
> respi.y2018m07 FOR VALUES IN ('7'),
> respi.y2018m08 FOR VALUES IN ('8'),
> respi.y2018m09 FOR VALUES IN ('9'),
> ...
>


While the condition
mis_ora >= current_timestamp
does not cut off the 2018 branch, the following does

=# explain select * from respi.root where ts >= '2019-08-28 23:35:00.007245'
and  extract( year from mis_ora ) = extract( year from current_timestamp )
and extract( month from mis_ora ) >= extract( month from
current_timestamp )order by ts;

 Sort  (cost=7246692.21..7246692.28 rows=26 width=36)
   Sort Key: r.ts
   ->  Nested Loop  (cost=0.00..7246691.60 rows=26 width=36)
 Join Filter: (r.sen_id = s.sen_id)
 ->  Seq Scan on sensori s  (cost=0.00..13.57 rows=329 width=16)
   Filter: interesting
 ->  Materialize  (cost=0.00..7246465.93 rows=43 width=32)
   ->  Append  (cost=0.00..7246465.72 rows=43 width=32)
 Subplans Removed: 31
 ->  Seq Scan on y2019m08 r  (cost=0.00..623008.30
rows=2 width=32)
   Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND
(date_part('month'::text, mis_ora) >= date_part('month'::text,
CURRENT_TIMESTAMP)) AND (date_part('year'::text, mis_ora) =
date_part('year'::text, CURRENT_TIMESTAMP)))
 ->  Seq Scan on y2019m09 r_1  (cost=0.00..49.00
rows=1 width=32)
   Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND
(date_part('month'::text, mis_ora) >= date_part('month'::text,
CURRENT_TIMESTAMP)) AND (date_part('year'::text, mis_ora) =
date_part('year'::text, CURRENT_TIMESTAMP)))
 ->  Seq Scan on y2019m10 r_2  (cost=0.00..49.00
rows=1 width=32)
   Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND
(date_part('month'::text, mis_ora) >= date_part('month'::text,
CURRENT_TIMESTAMP)) AND (date_part('year'::text, mis_ora) =
date_part('year'::text, CURRENT_TIMESTAMP)))
 ->  Seq Scan on y2019m11 r_3  (cost=0.00..49.00
rows=1 width=32)
   Filter: ((ts >= '2019-08-28
23:35:00.007245'::timestamp without time zone) AND
(date_part('month'::text, mis_ora) >= date_part('month'::text,
CURRENT_TIMESTAMP)) AND (date_part('year'::text, mis_ora) =
date_part('year'::text, CURRENT_TIMESTAMP)))

The fact that the branches are pruned only when the condition on the year
and the month, which are the top-level partition keys, is made explicit
makes me think that the executor will go down all the branches to the
leaves if the condition cannot be filtered at the top level, even if I
don't understand why.

Luca




Re: Regarding db dump with Fc taking very long time to completion

2019-08-30 Thread Luca Ferrari
On Fri, Aug 30, 2019 at 11:51 AM Durgamahesh Manne
 wrote:
>  Logical dump of that table is taking more than 7 hours to be completed
>
>  I need to reduce to dump time of that table that has 88GB in size

Good luck!
I see two possible solutions to the problem:
1) use a physical backup and switch to incremental backups (e.g., pgbackrest);
2) partition the table and back up single pieces, if possible
(constraints?), keeping in mind it will become harder to maintain (added
partitions, and so on).

Are all of the 88 GB written during a bulk process? I guess not, so
maybe with partitioning you can avoid locking the whole dataset and reduce
contention (and thus time).
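As a side note, if you are not already doing it, a directory-format dump can at least parallelize across tables, even if it will not help much for a single huge table (paths and job count are only examples):

pg_dump -Fd -j 4 -f /backup/mydb.dump mydb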

Luca




Re: "storing" a calculated value in plsql function ?

2019-08-30 Thread Luca Ferrari
On Fri, Aug 30, 2019 at 12:48 PM stan  wrote:
> In the resultant table, I have raw data, and adjusted data. The adjusted data 
> i
> all adjusted by a common factor, which is calculated in the select. 
> Presently, I
> calculate this same adjustment factor several times in the select.

Is it possible to add the computed value as an output column of your query?
Or even define a rowtype that includes such a column?


> Is there a way to reference this value, multiple times, once it is 
> calculated? Or
> would I have to create a 2nd select that calculates this adjustment factor, 
> and
> stores it in a PLSQL variable< and if I do that, can I reference this stored 
> value
> in the select?

Yes, you can reference variables as values in queries.
As an example
.
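A minimal sketch of the idea (all table and column names are hypothetical):

create or replace function adjusted_values()
returns table( item text, adjusted numeric )
as $body$
declare
   v_factor numeric;
begin
   -- compute the adjustment factor only once
   select avg( rate ) into v_factor from rates;
   -- and reuse the stored value as many times as needed
   return query
      select d.item, d.raw_value * v_factor
      from raw_data d;
end
$body$ language plpgsql;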

Luca




Re: How to get RAISE messges displayed?

2019-09-01 Thread Luca Ferrari
On Sat, Aug 31, 2019 at 12:35 AM stan  wrote:
> Got it working.
>
> Not 100% sure what I had wrong.

You can also do, in your session:
set client_min_messages to notice;
without having to change it in the configuration (for all sessions).

Luca




partition by range or by list constraint check (was Re: literal vs dynamic partition constraint in plan execution)

2019-09-02 Thread Luca Ferrari
I've done a simple test case, and found out that the problem I got was
probably due to the partitioning scheme I'm using.
I want a table to be partitioned by a timestamp field, with a first
level partitioned by year and a second level by month. Therefore, I did
a BY LIST partitioning, but that prevents the planner from excluding
partitions when executing a query.
This is a reproducible example.

BEGIN;

CREATE TABLE root( pk int generated always as identity, v int, ts
timestamp default current_timestamp )
PARTITION BY LIST( extract( year from ts ) );

CREATE TABLE y2018
PARTITION OF root
FOR VALUES IN ( 2018 );

CREATE TABLE y2019
PARTITION OF root
FOR VALUES IN ( 2019 );

ALTER TABLE y2018 ADD CHECK( ts >= make_timestamp( 2018, 1, 1, 0, 0, 0 )
                         AND ts <= make_timestamp( 2018, 12, 31, 23, 59, 59 ) );

ALTER TABLE y2019 ADD CHECK( ts >= make_timestamp( 2019, 1, 1, 0, 0, 0 )
                         AND ts <= make_timestamp( 2019, 12, 31, 23, 59, 59 ) );

INSERT INTO root( v )
SELECT generate_series( 1, 100 ); -- same ts here

COMMIT;

Now if I try to explain a query with the current timestamp (which is
of course in 2019):

testdb=# explain select * from root where ts = current_timestamp;
QUERY PLAN
-
Append  (cost=0.00..75.59 rows=18 width=16)
->  Seq Scan on y2018  (cost=0.00..37.75 rows=9 width=16)
Filter: (ts = CURRENT_TIMESTAMP)
->  Seq Scan on y2019  (cost=0.00..37.75 rows=9 width=16)
Filter: (ts = CURRENT_TIMESTAMP)
(5 rows)

I got y2018 scanned too, which of course could not be the case since
y2018 cannot contain values that are equal to current_timestamp.
However, if I use a literal the query works fine:

testdb=# explain select * from root where ts = '2019-09-01 09:00:00.00';
QUERY PLAN
---
Append  (cost=0.00..33.17 rows=9 width=16)
->  Seq Scan on y2019  (cost=0.00..33.12 rows=9 width=16)
Filter: (ts = '2019-09-01 09:00:00'::timestamp without time zone)
(3 rows)


Now, if I change the partition schema using a range, the query works
fine with current_timestamp too:

CREATE TABLE root( pk int generated always as identity, v int, ts
timestamp default current_timestamp )
PARTITION BY RANGE( ts );

CREATE TABLE y2018
PARTITION OF root
FOR VALUES FROM ('2018-01-01 00:00:00.00')
TO ('2018-12-31 23:59:59.00');

CREATE TABLE y2019
PARTITION OF root
FOR VALUES FROM ('2019-01-01 00:00:00.00')
TO ('2019-12-31 23:59:59.00');


testdb=# explain select * from root where ts = current_timestamp;
 QUERY PLAN
-
 Append  (cost=0.00..75.59 rows=18 width=16)
   Subplans Removed: 1
   ->  Seq Scan on y2019  (cost=0.00..37.75 rows=9 width=16)
 Filter: (ts = CURRENT_TIMESTAMP)
(4 rows)



So my conclusion is that:
- list partitioning prevents current_timestamp from being compared
against the list of possible values (extract year from
current_timestamp), and therefore the planner has no choice but to go
into all the tables, even if the constraints on the ts field
explicitly state that some tables can be excluded;
- with range partitioning, since the partitions are built on the very
range of values, the planner takes the correct path.

I still don't get why using a literal in the first case leads to a
"more correct" plan.
And I'm curious to know if there's a way to write the constraints in the
list partitioning to make the planner really aware of the tables that can
be excluded.

Luca




Re: restore and recovery using WAL: unkown messages in serverlog

2019-09-03 Thread Luca Ferrari
On Tue, Sep 3, 2019 at 9:57 AM Matthias Apitz  wrote:
>  2019-09-03 09:18:46.024 CEST [25388] LOG:  Logdatei 
> »00010001008F« aus Archiv wiederhergestellt
> ***> cp: der Aufruf von stat für 
> '/data/postgresql11/wal_archive/000100010090' ist nicht möglich: 
> Datei oder Verzeichnis nicht gefunden

It's not an error: it is searching for the next WAL, which after 0x8F is
0x90, but the file is not there, so the recovery has finished. The
cluster does not know where to stop because your recovery.conf did not
specify a target, so it processes as many WALs as it can and
stops when it finds no more.

>  2019-09-03 09:18:47.838 CEST [25388] LOG:  Redo fertig bei 1/8F000140

Redo is done.


> ***> cp: der Aufruf von stat für 
> '/data/postgresql11/wal_archive/0002.history' ist nicht möglich: Datei 
> oder Verzeichnis nicht gefunden

Having finished the WALs, it is searching for a branch on timeline 2.

>  2019-09-03 09:18:47.900 CEST [25388] LOG:  gewählte neue Zeitleisten-ID: 
> 2

and it switches to timeline 2 autonomously.

Pretty much that should be what you see in the logs.

> Btw: Is there away to run the server for German UTF-8, but with English
> messages in the log?

Set lc_messages in postgresql.conf to something in English; I
think 'en_US.UTF-8' could do the trick, but I haven't tried it.
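Something like the following, assuming the locale is installed on the operating system (untested on my side):

# postgresql.conf
lc_messages = 'en_US.UTF-8'   # messages in English, data locales untouched

followed by a configuration reload (e.g., select pg_reload_conf();).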

Luca




Re: Postgres HA issue - Standby server does not start after Master compute host is shut down

2019-09-04 Thread Luca Ferrari
On Tue, Sep 3, 2019 at 6:08 PM Nagendra Bandi  wrote:
> Standby is built from the primary using pg_basebackup. i.e. Initial copy of 
> the primary database is taken with pg_basebackup command and then restarted 
> the server.

This puzzles me:

< 2019-09-01 15:43:56.440 UTC >LOG:  trigger file found:
/var/opt/rh/rh-postgresql94/lib/pgsql/trigger_switch
< 2019-09-01 15:43:56.440 UTC >FATAL:  terminating walreceiver process
due to administrator command

< 2019-09-01 15:43:56.443 UTC >DEBUG:  switched WAL source from stream
to archive after failure
< 2019-09-01 15:43:56.443 UTC >LOG:  record with zero length at C/95193A70
< 2019-09-01 15:43:56.443 UTC >LOG:  redo done at C/95193A38
< 2019-09-01 15:43:56.443 UTC >LOG:  last completed transaction was at
log time 2019-09-01 15:39:18.804265+00
< 2019-09-01 15:43:56.443 UTC >DEBUG:  resetting unlogged relations:
cleanup 0 init 1
< 2019-09-01 15:43:56.448 UTC >FATAL:  WAL ends before end of online backup
< 2019-09-01 15:43:56.448 UTC >HINT:  All WAL generated while online
backup was taken must be available at recovery.

so the standby found the trigger file and terminated the WAL receiver,
but then tried to get WALs from the local archive: why?
It seems to me the local WALs are incomplete or have been removed, so the
standby cannot get the last part; I suspect this is because there's a
restore_command configured alongside streaming (and that should be fine, since
it is optional when streaming).
Also, the version is quite old.

Luca




Re: PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-04 Thread Luca Ferrari
On Wed, Sep 4, 2019 at 5:15 PM Jason Ralph  wrote:
> The question I have is, for a production database not doing replication, can 
> I safely set the following parameters, I understand that minimal will also 
> disable wal_archiving so I am concerned about that as well.
>
> wal_level = minimal
>
> max_wal_senders = 0
>
>
>
> I have been looking at the documentation and I guess I am looking for a yes 
> this is ok, I figured it’s on by default for a reason so I was hesitant to 
> change it.

I don't see why you should not set them as you described.
The idea is that, nowadays, the overhead of wal_level = replica over
minimal is "low" enough when weighed against the need for replication (e.g., backup
tools like pgbackrest).

Luca




Re: PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-04 Thread Luca Ferrari
On Wed, Sep 4, 2019 at 10:44 PM Jason Ralph
 wrote:
>
> Thank you Luca,
> Can I ask one more related question, I have no need for replication as noted 
> below, can I safely disable the worker process via the setting below? In my 
> sandbox it does turn off the logical replication launcher, I just wanted to 
> be sure I am not affecting anything other than replication with this setting.
>
>
> max_logical_replication_workers = 0 # taken from max_worker_processes
> # (change requires restart)
>

Quite frankly I've never done it, but I don't see why you should not turn it off.

Luca




Re: floating point output

2019-09-05 Thread Luca Ferrari
On Thu, Sep 5, 2019 at 6:14 AM Adrian Klaver  wrote:
> Some examples would help explain your concerns.

I guess the problem is with extra_float_digits. If greater than zero,
the string representation of a real is at its minimum, that is, only
significant digits are there:

restoredb=# set extra_float_digits to 1;
SET
restoredb=# select 2.001230::real;
 float4
-
 2.00123
(1 row)


If lower than zero, the output switches to "precise" mode, that is,
extra_float_digits is subtracted from the number of available digits
for a number:

restoredb=# set extra_float_digits to -2;
SET
restoredb=# select 2.001230::real;
 float4

  2.001
(1 row)

restoredb=# set extra_float_digits to -3;
SET
restoredb=# select 2.001230::real;
 float4

  2
(1 row)


However, this has some behaviors that are more obscure to me when the
value is greater than 1:

restoredb=# set extra_float_digits to 1;
SET
restoredb=# select 2.1::real;
 float4

2.1
(1 row)

restoredb=# set extra_float_digits to 2;
SET
restoredb=# select 2.1::real;
  float4
---
 2.099
(1 row)


>
> Bottom line, if you want precision use numeric.

Yeah, totally agree. I would also add that numeric is a little better documented.

Luca




Re: PG11.2 - wal_level =minimal max_wal_senders = 0

2019-09-09 Thread Luca Ferrari
On Sat, Sep 7, 2019 at 2:15 AM Jason Ralph  wrote:
> I currently have the systems running a parallel pg_dump each night to a 
> separate partition mounted on the VM. Then I perform a full backup of the VM 
> and all mounted drives each night. Would this be affected by disabling wal 
> archiving? I noted that I understood wal archiving was affected in my initial 
> question. But I was not able to see how it could affect my setup. Please 
> school me if I am being naive.

No, if you are using pg_dump you are not affected by archiving.
Archiving is used for backups taken with tools like pgbackrest, barman, wal-e
and the like. If you are running your database isolated (i.e., no slaves,
no advanced backups), running it without the replica wal_level and
using only pg_dump is fine.

Luca




Re: Recover data from aborted transactions

2019-09-11 Thread Luca Ferrari
On Tue, Sep 10, 2019 at 7:18 PM Aaron Spike  wrote:
> I'm pretty sure that the records I'm looking for are part of one of these 
> aborted transactions. From what I read online, it seems that data from 
> uncommitted transactions exists in the Write-Ahead Logs. Is there anyway to 
> access this data?

Interesting, what did you read? I don't think the information for an
aborted transaction is as complete in the WALs as that of a committed one.

Luca




Re: update returning order by syntax error question

2019-09-11 Thread Luca Ferrari
On Thu, Sep 12, 2019 at 5:45 AM raf  wrote:
>   ERROR:  syntax error at or near "t"
>   LINE 2:  tblname t

This works on 9.6.9 and 11.4:

luca=> with u as ( update t_all set id = 5 where id <= 5 returning *)
select * from u;
 id | ref_id
+
  5 |  1
(1 row)

luca=> select version();
 version
--
 PostgreSQL 9.6.9 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu
7.2.0-8ubuntu3.2) 7.2.0, 64-bit
(1 row)


However, I know for sure that UPDATE has some restrictions on
table aliasing (at least, aliases are not handled as in a SELECT), so the
problem could be in the real query you are executing.
It works with or without the ORDER BY.

Luca




Re: pgbackrest restore to new location?

2019-09-17 Thread Luca Ferrari
On Tue, Sep 17, 2019 at 3:09 AM Ron  wrote:
>
> Hi,
>
> In order to do this, do I create a new stanza in config file which has
> pg1-path point to the new/empty directory structure while still pointing to
> the existing backup directory, and restore that stanza?

No, I would do this:
1) execute stop for that stanza on the backup machine
2) change the pg1-path for that stanza on the target machine
3) execute the restore command on the target machine
4) adjust backup machine stanza path and 'start' it again.
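Roughly, assuming a stanza named "main" and the usual configuration locations (adapt to your setup):

pgbackrest --stanza=main stop        # on the backup host
# edit pg1-path in pgbackrest.conf for the target, then on the target host
pgbackrest --stanza=main restore
pgbackrest --stanza=main start       # on the backup host, once the stanza path is adjusted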

If you create a new stanza, pgbackrest will not find it in the backup
repository.

Luca




Re: pgbackrest restore to new location?

2019-09-17 Thread Luca Ferrari
On Tue, Sep 17, 2019 at 12:00 PM Ron  wrote:
> The real problem is that after doing that, "pg_ctl start -D
> /path/to/new/data" fails with "PANIC: could not locate a valid checkpoint
> record".

Hard to say what's going wrong without logs/configs. Do you have any
other backup to try to restore from? As a side note, in this kind of
restore you should not be using --delta, but I guess pgbackrest will
prevent you from doing so.

Luca




Re: install pgcrypto module to existing postgreSQL

2019-09-18 Thread Luca Ferrari
On Tue, Sep 17, 2019 at 11:19 PM Pavan Kumar  wrote:
> once configure is done, used make and make install to install postgres.

cd contrib && make && make install
that should work.
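Then, in every database where you need it:

CREATE EXTENSION pgcrypto;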

Luca




unable to drop index because it does not exists

2019-09-23 Thread Luca Ferrari
Hi,
running 11.5, I've got a partitioned table where I want to drop an
index (on a column that has nothing to do with the partitioning):


respidb=# \d+ respi.root

Column|Type |
--+-+-
 sen_id   | integer |
 mis_flusso   | integer |
 mis_tasso| integer |
 mis_velocita | integer |
 mis_ora  | timestamp without time zone |
 pk   | bigint  |
 ts   | timestamp without time zone |
Partition key: LIST (date_part('year'::text, mis_ora))
Indexes:
"idx_root_sensore" btree (sen_id)
"idx_ts" btree (ts)
Partitions: respi.y2018 FOR VALUES IN ('2018'), PARTITIONED,
respi.y2019 FOR VALUES IN ('2019'), PARTITIONED,
respi.y2020 FOR VALUES IN ('2020'), PARTITIONED

respidb=# drop index idx_root_sensore;
ERROR:  index "idx_root_sensore" does not exist

Of course the index exists:

SELECT oid, relname, relkind FROM pg_class WHERE relname = 'idx_root_sensore';
-[ RECORD 1 ]-
oid | 40950
relname | idx_root_sensore
relkind | I


I already did a manual vacuum on the table.
Any idea?

Luca




Re: pg_receivexlog or archive_command

2019-09-23 Thread Luca Ferrari
On Mon, Sep 23, 2019 at 10:55 AM Andreas Kretschmer
 wrote:
> you can use both of them, and you should consider "Barman".

If I remember correctly, Barman uses pg_receivexlog when streaming, and
archive_command when doing a "normal" backup.
pgbackrest is another good tool for backups.
The idea here should be not to reinvent the wheel.

Luca



