Hi,
I've got a customer with a really huge amount of RAM; right now it's:
             total       used       free     shared    buffers     cached
Mem:       3102111    3052596      49515     208801        992    2961185
-/+ buffers/cache:      90418    3011693
Swap:         8191          1       8190
(free -m)
and
Andres Freund wrote
> With a halfway modern PG I'd suggest to rather tune postgres settings
> that control flushing. That leaves files like temp sorting in memory for
> longer, while flushing things controlledly for other sources of
> writes. See *_flush_after settings.
>
> Greetings,
>
> Andres
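For reference, these are the settings Andres refers to; the ALTER SYSTEM value below is purely illustrative, not a recommendation for this machine:

-- Flushing-related knobs (PostgreSQL 9.6+):
SELECT name, setting, unit
FROM pg_settings
WHERE name LIKE '%flush_after';

-- Illustrative only (run as superuser): make backends flush their dirty pages earlier.
ALTER SYSTEM SET backend_flush_after = '2MB';
SELECT pg_reload_conf();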
Laurenz Albe wrote
> Yes, you should set vm.dirty_background_bytes and vm.dirty_bytes
> and not use the *_ratio settings.
>
> 2 GB for vm.dirty_background_bytes and 1 GB for vm.dirty_bytes sounds
> fine.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
Thank you Laurenz.
Bugzilla from scher...@proteus-tech.com wrote
> Oh - and lots of memory is always good no matter what as others have said.
I'm probably "the others" here. I have already seen some really large installations, with up to 6TB of RAM. Dealing with them is a completely different universe of problems, because of N
You probably need to change pg_hba.conf: set the authentication method to trust for your user, reload the server with pg_ctl, go to psql and change the passwords. You will find more details here:
https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html
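A minimal sketch of those steps; the role name and data directory are just placeholders:

-- 1. In pg_hba.conf, temporarily switch the relevant line to trust, e.g.:
--        local   all   myuser   trust
-- 2. Reload the server:  pg_ctl reload -D /path/to/data
-- 3. From psql, set a new password (and revert pg_hba.conf afterwards):
ALTER ROLE myuser WITH PASSWORD 'new-password';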
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Check it out here: https://wiki.postgresql.org/wiki/Lock_Monitoring
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi All!
I've been experimenting with the track_functions options, and what I've seen really puzzles me.
The documentation says:
"SQL-language functions that are simple enough to be "inlined" into the calling query will not be tracked, regardless of this setting."
But it turns out it depends on much
Adrian Klaver-4 wrote
> https://www.postgresql.org/docs/10/static/monitoring-stats.html#PG-STAT-USER-FUNCTIONS-VIEW
>
> "...But if you want to see new results with each query, be sure to do
> the queries outside any transaction block. Alternatively, you can invoke
> pg_stat_clear_snapshot(), whi
I mean this part describing track_functions:
https://www.postgresql.org/docs/10/static/runtime-config-statistics.html
Enables tracking of function call counts and time used. Specify pl to track
only procedural-language functions, all to also track SQL and C language
functions. The default is none,
And to be clear, I was experimenting with pg_stat_clear_snapshot() after your answer, but it doesn't change the behaviour of track_functions.
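For reference, a minimal way to watch this (track_functions needs superuser to change; whether a row shows up depends on whether the function actually gets inlined):

SET track_functions = 'all';

CREATE FUNCTION add_one(i int) RETURNS int
    LANGUAGE sql AS 'SELECT $1 + 1';   -- simple enough to be inlined into the caller

SELECT add_one(41);

-- An inlined SQL function usually shows no row here, while e.g. a plpgsql
-- function with the same body would be counted.
SELECT funcname, calls FROM pg_stat_user_functions;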
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Edson Carlos Ericksson Richter wrote
> I don't know if there are best practices (each scenario requires its own
> solution), but for plain complex SELECT queries, I do use "WITH"
> queries... They work really well.
Be cautious with CTEs. They weren't meant to be an alternative to subqueries and
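Presumably this is about the optimization fence. A small illustration with a made-up table; up to PostgreSQL 11 every CTE behaves like the MATERIALIZED form, from v12 you can choose:

WITH c AS MATERIALIZED (SELECT * FROM big_table)
SELECT * FROM c WHERE id = 42;        -- materializes the whole CTE first

WITH c AS NOT MATERIALIZED (SELECT * FROM big_table)
SELECT * FROM c WHERE id = 42;        -- behaves like a plain subquery, can use an index on id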
Hi All!
This is the first time I have seen the sum of written buffers:
3765929 + 878326121 = 882092050, i.e. select pg_size_pretty(882092050::numeric * 8192) = 6730 GB,
be bigger than the buffers_alloc value in the pg_stat_bgwriter view:
buffers_alloc      | 20426161   = 156 GB
buffers_checkpoint | 878599316
buffers_cle
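For reference, the same comparison can be done in one query (assuming the two written counters above are buffers_checkpoint plus one of the other written-buffers columns; 8192 is the block size in bytes):

SELECT pg_size_pretty((buffers_checkpoint + buffers_clean + buffers_backend) * 8192::bigint) AS written,
       pg_size_pretty(buffers_alloc * 8192::bigint) AS allocated
FROM pg_stat_bgwriter;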
Hi!
I would like to know how Postgres behaves with a big number of cached plans from prepared statements on one connection. Let's say there is application-level connection pooling and one connection can last for many weeks. Many plans are cached from many prepared statements; is there any post
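In the meantime, what a connection has cached can at least be inspected and cleared; a minimal sketch:

-- One row per prepared statement on the current connection:
SELECT name, statement, prepare_time FROM pg_prepared_statements;

-- Drop everything if a long-lived pooled connection accumulates too much:
DEALLOCATE ALL;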
As far as I know, PostgreSQL only does outer join elimination; it doesn't work with inner joins.
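A toy illustration with made-up tables:

CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY, parent_id int);

-- The LEFT JOIN can be removed: parent.id is unique and no parent column is used.
EXPLAIN SELECT c.* FROM child c LEFT JOIN parent p ON p.id = c.parent_id;

-- The INNER JOIN stays, because it might filter out rows with no matching parent.
EXPLAIN SELECT c.* FROM child c JOIN parent p ON p.id = c.parent_id;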
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi!
Something strange just happened to me. I'm trying to compare the results of one query with a rewritten version, and everything is OK in this order:
WITH abc AS (SELECT 1) SELECT 1
except all
SELECT 1
but when I try it the other way around it throws an error:
SELECT 1
except all
WITH abc AS (S
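For the record, parenthesizing the second branch makes the reversed order work, since a WITH clause may only start a statement or a parenthesized sub-select:

SELECT 1
except all
(WITH abc AS (SELECT 1) SELECT 1);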
Thank you for the answer, I had no idea about the "syntactic precedence" thing.
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
David Rowley-3 wrote
> I don't think there were any actual roadblocks, it was more of just
> not enough time in the cycle to make it work due to a few technical
> details that required extra effort to make work.
>
> Alvaro managed to simplify the problem and allow foreign keys to be
> defined on p
I think this one will give you the report you need:
select schema_name,
roleid::regrole,
string_agg(member::regrole::text,',' order by member::regrole::text) users
from information_schema.schemata s, pg_user u
JOIN pg_auth_members a ON u.usename::text=a.roleid::regrole::text
WHERE s.schema_name not
You might find this comparison useful:
https://en.wikipedia.org/wiki/Comparison_of_relational_database_management_systems
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Which version are you running?
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi!
This is the second time I've seen somebody use the pg_sleep function inside a plpgsql block. This case is quite similar to the last one - it's some kind of wait for data to be loaded. After pg_sleep there is a check whether some condition is true; if not, the procedure goes to sleep again. As a result an averag
Francisco Olarte wrote
> I do some similar things, but I sleep outside of the
> database, is there a reason this can not be done?
>
> Francisco Olarte.
Yes, I do try to convince them to do it outside the db; that's the reason I'm looking for some support here :) I'm not sure those 2 reasons are e
Hi, do you maybe have an idea how to make the loading process faster?
I have 500 million JSON files (1 JSON per file) that I need to load into the db.
My test set is "only" 1 million files.
What I came up with so far is:
time for i in datafiles/*; do
  psql -c "\copy json_parts(json_data) FROM $i" &
done
wh
Christopher Browne-3 wrote
> Well, you're paying for a lot of overhead in that, as you're
> establishing a psql command, connecting to a database, spawning a backend
> process, starting a transactions, committing a transaction, closing the
> backend
> process, disconnecting from the database, and c
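That per-file overhead is exactly what I would like to cut. A rough sketch of one direction (client-side, run from psql), assuming every file keeps its JSON on a single line and contains no tabs or backslashes that COPY's text format would mangle:

-- One connection and a single COPY for all files, instead of one psql process per file:
\copy json_parts(json_data) FROM PROGRAM 'cat datafiles/*'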
Ertan Küçükoğlu wrote
> However, if possible, you may think of using a local physical computer to
> do all uploading and after do backup/restore on cloud system.
>
> Compressed backup will be far less internet traffic compared to direct
> data inserts.
I was thinking about that, but the data source is
It's in the cloud and no plpythonu extension is available, unfortunately.
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
There are no indexes, foreign keys, or any other constraints.
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi,
json_parts is just a single table with 2 columns:
                         Table "public.json_parts"
 Column | Type | Collation | Nullable | Default | Storage | Stats target | Description
--------+------+-----------+----------+---------+---------+--------------+-------------
It's in blob storage in Azure. I'm testing with the 1M files that I have locally.
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hmm, now I'm thinking maybe setting up pgbouncer in front of Postgres in statement pooling mode would help?
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi,
is this limit on maintenance_work_mem still there, or has it been patched?
https://www.postgresql-archive.org/Vacuum-allow-usage-of-more-than-1GB-of-work-mem-td5919221i180.html
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi,
I'm running standard pgbench, and what's kind of strange is that "copy pgbench_accounts from stdin" is blocking my other query, which is \dt+.
Does copy hold any exclusive lock, or is there something wrong with my system?
I'm using:
pgbench=> SELECT version();
-[ RECORD 1 ]
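For reference, a quick way to see who is blocking whom (assumes PostgreSQL 9.6+ for pg_blocking_pids):

SELECT pid, pg_blocking_pids(pid) AS blocked_by, wait_event_type, state, query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;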
Thank you David.
So it would need to run inside a single transaction to cause the lock, right? Do you know if pgbench opens a transaction?
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Is there any way to distinguish between updated and inserted rows in the RETURNING clause when ON CONFLICT ... DO UPDATE was used?
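The closest thing I know of is the xmax trick, though I'm not sure it is officially guaranteed; the table and columns below are made up:

INSERT INTO t (id, val)
VALUES (1, 'x')
ON CONFLICT (id) DO UPDATE SET val = excluded.val
RETURNING id, (xmax = 0) AS inserted;   -- xmax is 0 for a fresh insert, nonzero for an update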
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Thank you Adrian,
the background of it is that I have already written a Python script that translates Oracle's MERGE clause into Postgres INSERT ... ON CONFLICT, but in order to be able to add the DELETE part of MERGE I need to distinguish those operations.
Thank you for the idea with the trigger, I haven't
how about this solution?
Does it have any caveats?
WITH upsert AS (
    INSERT INTO GUCIO (ID, NAZWA)
    SELECT A.ID, A.NAZWA
    FROM ALA A
    ON CONFLICT (ID) DO UPDATE SET
        nazwa = excluded.nazwa
    RETURNING xmax, xmin, *)
select xmax as xmax_value
into txmaxu
from upsert;
delete
from guc
Of course inside a transaction block.
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Hi,
I need to emulate Oracle's savepoint behaviour inside a plpgsql function.
The function is able to insert all the rows that weren't caught by the exception, but I also need to roll back the insert that happens before the exception.
So let's say the exception is thrown when j=3, so I need a
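A minimal sketch of the usual workaround, with a made-up table name: each BEGIN ... EXCEPTION block in PL/pgSQL acts as an implicit savepoint, so everything done inside it since its BEGIN is rolled back when its handler fires.

CREATE OR REPLACE FUNCTION insert_rows() RETURNS void AS $$
DECLARE
    j int;
BEGIN
    FOR j IN 1..5 LOOP
        BEGIN
            INSERT INTO target_table (id) VALUES (j);
            -- anything else that may fail for this row goes in the same block
        EXCEPTION WHEN others THEN
            -- the INSERT above is rolled back for this iteration only
            RAISE NOTICE 'row % skipped: %', j, SQLERRM;
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;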
Thank you Luis, but this is not supported in plpgsql.
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Any chance this situation has something to do with the recently disclosed Meltdown vulnerability?
https://googleprojectzero.blogspot.com/2018/01/reading-privileged-memory-with-side.html
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
I once ran a test on my laptop because I was curious as well. From my results (laptop - 16GB RAM, 4 cores) the upper limit was 12k. Above that, planning time was unbearably high - much higher than execution time. It was tested on 9.5.
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
Yes, it doesn't look good, and it seems the statistics aren't accurate:
GroupAggregate (cost=271794.39..330553.67 rows=215630 width=152) (actual
time=30.641..37.303 rows=2792 loops=1)
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
It will not solve the problem, but maybe try the --checkpoint=fast option.
--
Sent from: http://www.postgresql-archive.org/PostgreSQL-general-f1843780.html
I would like to refresh the topic and add another report about the issue; it just happened to me. I'm sure it's the toast table that cannot be opened inside the function. I have added the following RAISE NOTICE clauses to it and run ANALYZE inside the function:
analyze verbose temp_table;