Re: Drop Default Privileges?

2018-06-19 Thread Victor Yegorov
Tue, 19 Jun 2018 at 21:32, Pavan Teja :

> In order to remove the default privileges for any particular user/role, we
> should know the list of default privileges.
>

`psql` allows you to check default privileges via the `\ddp` command (per
database). You can start `psql` with the `-E` switch to see the internal
queries used to display this information, or you can `\set ECHO_HIDDEN on`
for the same effect.

Also, you can run `pg_dumpall -s | grep -E 'DEFAULT PRIVILEGE|\\connect'`
and it'll produce a list of all entries for all databases, along with the
database name.
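
Illustrative output (databases, roles and privileges here are made up):

\connect db1
ALTER DEFAULT PRIVILEGES FOR ROLE owner1 IN SCHEMA public GRANT SELECT ON TABLES TO readers;
\connect db2
ALTER DEFAULT PRIVILEGES FOR ROLE owner2 GRANT USAGE ON SEQUENCES TO app_rw;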


-- 
Victor Yegorov


Re: Drop Default Privileges?

2018-06-19 Thread Victor Yegorov
Tue, 19 Jun 2018 at 18:20, Louis Battuello :

> Is it possible to drop default privileges?
>
> I’m attempting to run a pg_restore into an RDS instance, which doesn’t
> have a “postgres” user.
>
> I encounter many messages like so:
>
> ALTER DEFAULT PRIVILEGES...
>
> pg_restore: [archiver (db)] Error from TOC entry 10182; 826 253752252 DEFAULT
> ACL DEFAULT PRIVILEGES FOR TABLES postgres
>
> pg_restore: [archiver (db)] could not execute query: ERROR:  role
> "postgres" does not exist
>
> Command was: ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA
> abc_schema REVOKE ALL ON TABLES  FROM PUBLIC;
>
> I’d like to remove these default privileges on the source database to
> avoid this error message, but I can’t find the syntax in the documentation
> (or if it’s possible). I only see GRANT/REVOKE options.
>

If you see `ALTER DEFAULT PRIVILEGES … REVOKE …` and want to undo it, you
will have to GRANT the corresponding privilege.
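
For the REVOKE quoted above, the corresponding undo would be (a sketch
reusing the role and schema names from the error message):

ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA abc_schema
    GRANT ALL ON TABLES TO PUBLIC;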


-- 
Victor Yegorov


Re: POLL: Adding transaction status to default psql prompt

2020-02-06 Thread Victor Yegorov
Thu, 6 Feb 2020 at 04:55, Vik Fearing :

> Please answer +1 if you want or don't mind seeing transaction status by
> default in psql or -1 if you would prefer to keep the current default.
>


+1

-- 
Victor Yegorov


Re: Performance penalty during logical postgres replication

2020-12-09 Thread Victor Yegorov
Wed, 9 Dec 2020 at 10:21, Lars Vonk :

> We are doing a logical postgres replication from Postgres 11 to 12. Our
> database is around 700GB (8 CPUs, 32 GB).
> During the replication process, at some point, we see a huge performance
> penalty on a particular table. This table acts as a queue with lots of
> inserts and deletes happening throughout the day. For most of the time this
> table is empty, but during this performance penalty the number of rows in
> this table grows to 10,000 rows, and processing is not fast enough to empty
> this table. Main reason for this (as far as we see) is that the performance
> of the query for selecting the next row to process drops from < 10 ms to
> 400 ms. This eventually causes too much CPU load on the Primary and we have
> to cancel the replication process.
>
> We already tried the initial load three times, and it consistently fails
> with the same "error". Last try was a per table approach and excluding this
> "queue" table.
> After cancelling the replication the query is fast again and the load on
> the Primary goes back to normal. We see that this happens when replicating
> large tables (> millions of rows). During this performance penalty the
> explain of the query selecting the next row from this table tells us it is
> doing a sequential scan (there is an index but it is not used).
>
> - What could cause this performance penalty?
> - Is this something other people experienced as well during the initial
> load of a logical replication with large tables?
> - We are now thinking of temporarily increasing the number of CPU's and
> RAM for the migration. Would this help in this case?
>

I've seen similar symptoms in cases with (a) home-made queues in tables
and (b) long transactions.
Unfortunately, a queue table requires frequent vacuuming to preserve a
more or less constant size of the queue and its indexes.
And long transactions prevent vacuum from cleaning up the queue. The
initial synchronization phase of logical replication is in fact such a
transaction.

I would recommend doing the following (see the sketch after this list):
- avoid adding ALL tables to the publication;
- instead, split the tables into batches in such a way that processing
each initial batch takes limited time (say, 15-30 minutes at most);
- of course, this leaves the biggest tables alone; add those one by one to
the publication, preferably in a time slot with minimal load on the queue;
- make sure to catch up on the queue processing and vacuum it between
batches;
- on the receiving side, avoid creating indexes on the tables: create just
the necessary PK or UK, wait for the initial load to complete and then add
all the rest.
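
A sketch of the batched approach, with hypothetical publication,
subscription and table names:

-- start with a small publication instead of FOR ALL TABLES
CREATE PUBLICATION my_pub FOR TABLE small_tab1, small_tab2;
-- ...wait for the initial copy to finish, drain and vacuum the queue...
-- then add the big tables one by one
ALTER PUBLICATION my_pub ADD TABLE big_tab1;
-- on the subscriber, pick up the newly published table
ALTER SUBSCRIPTION my_sub REFRESH PUBLICATION;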

As for the queue, PGQ from skytools uses a different approach to maintain
queue tables:
- once in a while (every 2 hours by default) processing is switched to a
new table; tab_1, tab_2, tab_3 are used in a round-robin fashion;
- after the switch, any remaining entries can be moved from the previous
table to the live one (this shouldn't be necessary if the switch is done
properly, although it might be tricky in the presence of long
transactions);
- the previous table is TRUNCATEd.
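
A hand-rolled sketch of the same rotation idea (not actual PGQ code; all
table names here are hypothetical):

BEGIN;
  -- switch writers over to the spare table; the renames take an
  -- exclusive lock, so other sessions see the swap atomically
  ALTER TABLE queue_live RENAME TO queue_old;
  ALTER TABLE queue_spare RENAME TO queue_live;
COMMIT;
-- drain whatever is left, then reclaim the space instantly
INSERT INTO queue_live SELECT * FROM queue_old;
TRUNCATE queue_old;
ALTER TABLE queue_old RENAME TO queue_spare;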

In your case, you can run `VACUUM FULL` on the queue table between
replicating batches of tables.

-- 
Victor Yegorov


Re: Code of Conduct: Russian Translation for Review

2021-02-27 Thread Victor Yegorov
Sat, 27 Feb 2021 at 01:51, Stacey Haysler :

> If you have any comments or suggestions for the translation, please bring
> them to our attention no later than 5:00 PM PST on  Friday, March 5, 2021.
>

Greetings.

I looked through the text and made some comments.


-- 
Victor Yegorov


PostgreSQL Code of Conduct - Russian Translation Feb 26 2021 - review.docx
Description: MS-Word 2007 document


Unexpected results from CALL and AUTOCOMMIT=off

2024-06-03 Thread Victor Yegorov
Greetings.

I am observing the following results on PostgreSQL 15.7.
First, the setup:

create table t_test(x bigint);
insert into t_test values(0);

create or replace function f_get_x()
returns bigint
language plpgsql
stable
as $function$
declare
    l_result bigint;
begin
    select x into l_result from t_test;
    --raise notice 'f_get_x() >> x=%', l_result;
    --raise notice 'f_get_x() >> xact=%', txid_current_if_assigned();
    return l_result;
end;
$function$;

create or replace procedure f_print_x(x bigint)
language plpgsql
as $procedure$
begin
    raise notice 'f_print_x() >> x=%', x;
    --raise notice 'f_print_x() >> xact=%', txid_current_if_assigned();
end;
$procedure$;


Now, the case:
\set AUTOCOMMIT off
do
$$ begin
    --raise notice 'do >> xact=%', txid_current_if_assigned();
    update t_test set x = 1;
    --raise notice 'do >> xact=%', txid_current_if_assigned();
    raise notice 'do >> x=%', f_get_x();
    --raise notice 'do >> xact=%', txid_current_if_assigned();
    call f_print_x(f_get_x());
end; $$;
NOTICE:  do >> x=1
NOTICE:  f_print_x() >> x=0
DO

I don't understand why the CALL statement does not see the updated record.
With AUTOCOMMIT=on, everything works as expected.

I tried to examine snapshots and xids (the commented-out lines), but
they're always the same.

Can you explain this behavior, please? Is it expected?

-- 
Victor Yegorov


Re: Unexpected results from CALL and AUTOCOMMIT=off

2024-06-03 Thread Victor Yegorov
Mon, 3 Jun 2024 at 20:40, Pierre Forstmann :

> You declared function f_get_x as stable which means:
>
> …
>
> If you remove stable from function declaration, it works as expected:
>

Well, I checked
https://www.postgresql.org/docs/current/xfunc-volatility.html
There's a paragraph describing why STABLE (and IMMUTABLE) functions use a
different snapshot:

> For functions written in SQL or in any of the standard procedural
> languages, there is a second important property determined by the
> volatility category, namely the visibility of any data changes that have
> been made by the SQL command that is calling the function. A VOLATILE
> function will see such changes, a STABLE or IMMUTABLE function will not.
> This behavior is implemented using the snapshotting behavior of MVCC (see
> Chapter 13): STABLE and IMMUTABLE functions use a snapshot established as
> of the start of the calling query, whereas VOLATILE functions obtain a
> fresh snapshot at the start of each query they execute.

But later, the docs state that

> Because of this snapshotting behavior, a function containing only SELECT
> commands can safely be marked STABLE, even if it selects from tables that
> might be undergoing modifications by concurrent queries. PostgreSQL will
> execute all commands of a STABLE function using the snapshot established
> for the calling query, and so it will see a fixed view of the database
> throughout that query.

Therefore I assumed STABLE should work in this case. Well, it seems it
does not.

I assume this has something to do with the implicit BEGIN issued in
non-AUTOCOMMIT mode and the non-atomic behaviour of DO blocks.
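
For reference, a minimal form of Pierre's workaround, using the setup from
my first message (a PL/pgSQL function is VOLATILE when no volatility is
specified):

create or replace function f_get_x()
returns bigint
language plpgsql
-- no STABLE here: the default VOLATILE takes a fresh snapshot for each
-- query, so the CALL sees the UPDATE made earlier in the transaction
as $function$
declare
    l_result bigint;
begin
    select x into l_result from t_test;
    return l_result;
end;
$function$;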


-- 
Victor Yegorov


Re: CSV From Oracle with timestamp column getting errors

2021-03-22 Thread Victor Yegorov
Mon, 22 Mar 2021 at 21:38, Saha, Sushanta K <
sushanta.s...@verizonwireless.com>:

> \COPY table1 FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER;
> ERROR:  invalid input syntax for type timestamp: "01-JUN-20
> 06.04.20.634000 AM"
> CONTEXT:  COPY table1, line 2, column last_update_timestamp: "01-JUN-20
> 06.04.20.634000 AM"
>
> Appreciate any help with this psql command.
>

I would recommend issuing one of these on the Oracle side *before* taking
the CSV snapshot:

export NLS_DATE_FORMAT="YYYY-MM-DD HH24:MI:SS"
ALTER SESSION SET nls_date_format='YYYY-MM-DD HH24:MI:SS';

Otherwise, you will have to load this CSV file into a table that has
`text` type for the column and do some post-processing, something like:

INSERT INTO permanent_tab
SELECT *, to_timestamp(col, 'DD-MON-YY HH12.MI.SS.US AM') FROM temp_table;
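
A fuller sketch with hypothetical column names: stage the timestamp column
as `text`, then convert while moving the rows into the real table.

CREATE TEMP TABLE temp_table (id bigint, last_update_timestamp text);
\copy temp_table FROM '/tmp/Oracle_2020_06.csv' DELIMITER ',' CSV HEADER

INSERT INTO permanent_tab
SELECT id,
       to_timestamp(last_update_timestamp, 'DD-MON-YY HH12.MI.SS.US AM')
  FROM temp_table;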

Hope this helps.

-- 
Victor Yegorov


Intersection or zero-column queries

2017-12-21 Thread Victor Yegorov
Greetings.

One can issue an empty `SELECT` statement, and one row without columns
will be returned:

postgres=# select;
--
(1 row)

However, if I do `EXCEPT` or `INTERSECT` on such queries, I get 2
rows:

postgres=# select except select;
--
(2 rows)
postgres=# select intersect all select;
--
(2 rows)

Why is it so?
Should this be reported as a bug?.. ;)


-- 
Victor Yegorov


Re: Intersection or zero-column queries

2017-12-21 Thread Victor Yegorov
2017-12-22 2:03 GMT+02:00 David G. Johnston :

> On Thu, Dec 21, 2017 at 4:53 PM, Victor Yegorov 
> wrote:
>
>> postgres=# select except select;
>> --
>> (2 rows)
>> postgres=# select intersect all select;
>> --
>> (2 rows)
>>
>> Why is it so?
>> Should this be reported as a bug?.. ;)
>>
>
> The intersection case seems correct - one row from each sub-relation is
> returned since ALL is specified and both results are the same.
>

Actually, the result will not change with or without `ALL`, for both
EXCEPT and INTERSECT.

Also, an intersection should not return more rows than there are in the
sub-relations.
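
For comparison, the same queries over one-column sub-relations behave as
expected:

postgres=# select 1 intersect all select 1;
 ?column?
----------
        1
(1 row)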


-- 
Victor Yegorov


Re: change JSON serialization for BIGINT?

2024-11-26 Thread Victor Yegorov
Tue, 26 Nov 2024 at 14:34, Tim McLaughlin :

> Is there a way to have Postgres serialize BIGINT as a string rather than
> number in JSON?  By default it does this:
>
>
> select row_to_json(row(500::bigint));
>  row_to_json
> -------------
>  {"f1":500}
>
> But I want it to do this (note that "500" is quoted):
>
> select row_to_json(row(500::bigint));
>  row_to_json
> --------------
>  {"f1":"500"}
>

Will this work?

select row_to_json(row(500::text));
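
Casting to `text` makes `row_to_json` emit the value as a JSON string, so
this should produce:

 row_to_json
--------------
 {"f1":"500"}
(1 row)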

-- 
Victor Yegorov


Re: alter system appending to a value

2025-04-30 Thread Victor Yegorov
Wed, 30 Apr 2025 at 14:15, Luca Ferrari :

> as trivial as it sounds, is there a smart way to use ALTER SYSTEM to
> append to a value?
> Something like: ALTER SYSTEM shared_preloaded_libraries =
> current_setting( 'shared_preloaded_libraries' ) || ',foo';
>

I would do something like:
SELECT format( 'ALTER SYSTEM SET shared_preload_libraries = %L;', setting )
  FROM pg_settings WHERE name = 'shared_preload_libraries' \gexec

Of course, you should append the new value to the existing setting, making
sure there are no duplicates and the format is correct.
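
A sketch of the actual append (assuming `foo` is the library to add, the
current setting is non-empty, and `foo` is not already listed in it):

SELECT format( 'ALTER SYSTEM SET shared_preload_libraries = %L;',
               setting || ',foo' )
  FROM pg_settings WHERE name = 'shared_preload_libraries' \gexec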

-- 
Victor Yegorov