vm.dirty_background_bytes and 1 GB for vm.dirty_bytes sounds fine.
>
It should be the other way around: dirty_background_bytes < dirty_bytes
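Something like this in /etc/sysctl.conf would satisfy that (the values are purely illustrative, not taken from this thread - tune them to the amount of RAM and the storage):

    # illustrative values only, the point is simply background < foreground
    vm.dirty_background_bytes = 268435456    # 256 MB, async writeback starts here
    vm.dirty_bytes            = 1073741824   # 1 GB, writers block above this
    # apply with: sysctl -p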
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
circuit_layout cl ON cl.segment_id = sg.id AND cl.ordinal
> = 1
> JOIN circuit c ON c.id = cl.circuit_id
> JOIN vendor v ON v.id = c.pop_support_vendor_id
> ) seg ON seg.circuit_id = c.id
> JOIN vendor_gtt_pop on vendor_gtt_pop.g
re a
self-contained test case that we might use to reproduce the issue? For
example by dumping the relevant part of the tables?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
and share the report.txt with us (well, if it's very large you may need
to keep only the first 1000 lines or so).
That should tell us in which functions most of the time is spent. That
will give us some hints, hopefully.
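For example, something along these lines should be enough (the PID and duration are placeholders - attach to the backend actually running the query, e.g. found via pg_stat_activity):

    perf record -g -p <backend_pid> -- sleep 30   # profile while the query runs
    perf report --stdio > report.txt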
>
> For what concerns the self-contained test case -
On 03/22/2018 11:29 PM, Alessandro Aste wrote:
> Thanks Tomas. We're currently building postgres from source. In order to
> enable symbols, you want me to re-configure postgres with --enable-debug
> then run perf?
>
Yes.
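For the record, a rough sketch of the rebuild (paths and flags are illustrative; the key part is --enable-debug, while keeping -O2 so the profile stays realistic):

    ./configure --enable-debug CFLAGS="-O2 -fno-omit-frame-pointer" --prefix=/usr/local/pgsql
    make -j4 && make install
    # restart the cluster so the rebuilt binaries (with symbols) are running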
regards
--
Tomas Vondra http://ww
>
>
> Time: 2245.073 ms (00:02.245)
>
>
>
>
> On Fri, Mar 23, 2018 at 9:31 AM, Alessandro Aste
> <alessandro.a...@gmail.com> wrote:
>
> Tomas, I'm attaching a 4MB file with the perf report. Let me
> know
tuples to rather different estimates. That is fixed now
and should be in the next minor release.
Hard to say if this is a case of that, but judging by the number of dead
tuples chances are it is.
It's probably better to use n_live_tup instead, though. I'd say that's
closer to th
t the data pages it's nearly impossible to determine
what went wrong.
We'd also need more information about what happened to the hardware and
cluster before the issues started to appear - crashes, hardware issues.
And so on.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/06/2018 04:29 PM, Alexandre Arruda wrote:
> 2018-04-06 9:39 GMT-03:00 Tomas Vondra <tomas.von...@2ndquadrant.com>:
>
>
>
> On 04/06/2018 02:09 AM, Alexandre Arruda wrote:
> > Hi,
> >
> > Some time ago, I had this e
On 04/09/2018 01:49 PM, Alexandre Arruda wrote:
>
>
> 2018-04-06 13:11 GMT-03:00 Tomas Vondra <tomas.von...@2ndquadrant.com>:
>
>
>
> On 04/06/2018 04:29 PM, Alexandre Arruda wrote:
> > 2018-04-06 9:39 GMT-03:00 Tomas Vondra
be granted to others.)
>
It might be better to note current values of the counters somewhere, and
compute a delta later (and use that to compute the cache hit ratio). The
issue is that pg_stat_reset() throws away all sorts of interesting and
important stats, including those driving autovacuum/autoanalyze.
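As a rough sketch of that approach (the snapshot table name is made up; pg_stat_database is the real view):

    -- take a snapshot now, instead of calling pg_stat_reset()
    CREATE TABLE stats_snapshot AS
    SELECT now() AS taken_at, datname, blks_hit, blks_read
      FROM pg_stat_database;

    -- later: compute the cache hit ratio from the delta
    SELECT d.datname,
           (d.blks_hit - s.blks_hit)::numeric
           / nullif((d.blks_hit - s.blks_hit) + (d.blks_read - s.blks_read), 0)
             AS cache_hit_ratio
      FROM pg_stat_database d
      JOIN stats_snapshot s USING (datname);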
be due
to all the autovacuum workers being constantly busy, unable to cleanup
all tables in a timely manner. In that case lowering the threshold is
not going to help, on the contrary.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
/ more often, so it may
have adverse impact, of course. It's hard to say if it's going to be a
win overall, because we don't know how serious is the bloat.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e done per second. Increasing the
number of workers is akin to allowing more cars on a highway, but also
lowering the speed limit.
You need to increase the limit on amount of work, and lowering naptime
is one way to do that.
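For illustration, these are the knobs that raise the amount of work done per second (the values are examples, not recommendations for any particular system):

    autovacuum_vacuum_cost_limit = 2000   # more work per cost-accounting cycle
    autovacuum_vacuum_cost_delay = 2ms    # shorter sleeps when the limit is hit
    autovacuum_naptime = 15s              # workers look for new work more often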
regards
--
Tomas Vondra http://www.2ndQuadra
of the people involved in both groups I'm not
worried about this part.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
"Bitmap Index Scan" on "users_lower_idx", I
> see the plan rows are 50, but the actual rows it got is 1. In the second node
> type of "Bitmap Index Scan" on "lower_state_users_idx", I see the plan
> rows are 211, and actual rows 211. Both are the same. Based on what
> conditions pl
does not really solve the issues with having to pass the
password to the query, making it visible in pg_stat_activity, various
logs etc.
Which is why people generally use FDE for the whole disk, which is
transparent and provides the same level of protection.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
s discussion, it
does not seem very polished / resilient.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
It's not a project/page managed by the PostgreSQL
community, you need to report the issues to the authors (most likely
through github issues).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
eems like an interesting idea - I wonder if it could be done e.g. in
psycopg as an extension, or something like that.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
There's no easy way to distinguish them.
I'm no psycopg2 expert, but it does have some infrastructure for casting
PostgreSQL types to Python types, and I guess that could be used for the
encryption.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development,
size. That seems as if a single query was executed
repeatedly. Maybe try looking into what query that is.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ystem view. And it does conflict
with the ACCESS EXCLUSIVE mode, used by the second query.
Could you suggest me which part of pg_dump (which query) might cause that
behaviour.
It's this line:
https://github.com/postgres/postgres/blob/master/src/bin/pg_dump/pg_dump.c#L6676
regards
--
To
timescaleDB
extention on the master side?
AFAIK timescale is "just" a regular extension, in which case yes, all
you need to do is install it on the master side.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA
all.
I do mostly agree with this. The json[b]_set behavior seems rather
surprising, and I think I've seen a couple of cases running into exactly
this issue. I've solved that with a simple CASE, but maybe changing the
behavior would be better. That's unlikely to be back-patchable, though,
so maybe a better option is to create a non-strict wrapper. But that
does not work when the user is unaware of the behavior :-(
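For illustration, this is the surprise and one COALESCE-style variant of the workaround mentioned above (jsonb_set is STRICT, so a NULL argument makes the whole result NULL):

    SELECT jsonb_set('{"a": 1}', '{b}', NULL);
    -- returns NULL, the whole document is gone

    SELECT jsonb_set('{"a": 1}', '{b}',
                     coalesce(to_jsonb(NULL::int), 'null'::jsonb));
    -- returns {"a": 1, "b": null}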
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
topped, etc.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
rsions (~30 months) of fixes behind. You might want to consider
upgrading ...
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
I don't think that's entirely necessary, though - we could
use the table directly. Yes, that would be slower, but maybe it'd be
sufficient.
But I think the idea is ultimately that we'd implement a new dict type
in core, and people would just specify which table to load data
ot;...", NULL)
should do the same thing as
jsonb_set(..., "...", 'null':jsonb)
I'm not entirely surprised it's what MySQL does ;-) but I'd say treating
it as a deletion of the key (just like MSSQL) is somewhat more sensible.
But I admit it's quit
differently. And at the same time one can argue that people who are
not aware of this caveat with jsonb_set and NULL values will most likely
use it due to the slightly simpler syntax (more similar to some popular
programming languages).
This seems like an entirely independent thing ...
Right. Useful,
On Sat, Oct 19, 2019 at 12:47:39PM -0400, Andrew Dunstan wrote:
On 10/19/19 12:32 PM, David G. Johnston wrote:
On Sat, Oct 19, 2019 at 9:19 AM Tomas Vondra
<tomas.von...@2ndquadrant.com>
wrote:
>
>We invented jsonb_set() (credit to Dmitry Dolgov). And we've
or version upgrade.
FWIW Julie, please don't top post - it just makes it harder to follow
the discussion. Also, this seems like a completely separate question,
unrelated to the DROP DATABASE one. It might be better to start a new
thread instead of repurposing an existing one.
regards
jsonb_set and NULL in the archives), but I'm not sure that's enough to
justify any changes in backbranches. I'd say no, but I have no idea how
many people are affected by this but don't know about it ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
execution is using CPU, so yes - it can cause a spike in CPU
usage. The question is whether the query plan is efficient, or whether
it's using the CPU unnecessarily. You need to show us EXPLAIN ANALYZE
and perhaps explain why you think it shouldn't use this much CPU.
regards
--
To
On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:
On 10/20/19 4:18 PM, Tomas Vondra wrote:
On Sun, Oct 20, 2019 at 03:48:05PM -0400, Andrew Dunstan wrote:
On 10/20/19 1:14 PM, David G. Johnston wrote:
On Sun, Oct 20, 2019 at 5:31 AM Andrew Dunstan
mailto:andrew.duns
On Mon, Oct 21, 2019 at 08:06:46AM -0700, Adrian Klaver wrote:
On 10/20/19 11:07 PM, Tomas Vondra wrote:
On Sun, Oct 20, 2019 at 06:51:05PM -0400, Andrew Dunstan wrote:
True. And AFAIK catching exceptions is not really possible in some code,
e.g. in stored procedures (because we can'
io 1" in the barman docs, i.e.
pg_basebackup (backup_method = postgres) and WAL streaming.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ed and probably significantly less
efficient than COPY. So if you have a lot of these parquet files, I'd
probably use COPY. But maybe the ability to query the parquet files
directly (with FDW) is useful for you.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Devel
ly you mean when
you say "portable option". Can you explain?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
warnings
check-world passed without any problems
I'll mark this patch as ready for committer
Thank you for your work
Thanks for the review. I propose to commit this shortly.
Now that this was committed, I've updated the patch status accordingly.
Thanks!
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
rming the root cause - you'll see the query plan,
which should give you enough context.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
cases you're thinking about?
FWIW I think it's a pretty bad idea to post questions about three very
different topics into a single pgsql-hackers thread. That'll just lead
to a lot of confusion.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ed) and do
(gdb) p MemoryContextStats(TopMemoryContext)
(gdb) q
and show us the output printed into server log. If it's a backend
running a query, it'd help knowing the execution plan.
It would also help knowing the non-default configuration, i.e. stuff
tweaked in postgresql.conf.
rega
(b is not null and b = true) and (c is not null)
Or something like that.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
es. That should consider the column-level correlation even
when the combination of values is not in the MCV. It might make the
"good" estimate worse, but that's an inherent trade-off.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
s generated per day, or something like that.
Maybe there was some I/O issue in that time period and a couple writes
got lost, or something like that.
Are there any other corrupted indexes on the table?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
schema
versioning tools available - we have a list on the wiki:
https://wiki.postgresql.org/wiki/Change_management_tools_and_techniques
I'm sure it's incomplete, but it's helpful nevertheless.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
>
I think it'd be interesting to get memory context stats from the
processes consuming a lot of memory. If you know which processes are
suspect (and it seems you know, because if a reconnect helps it's the
backend handling the connection), you can attach a debugger and do
$ gdb -p $PID
call MemoryContextStats(TopMemoryContext)
which will log info about memory contexts, just like autovacuum.
Hopefully that tells us memory context is bloated, and that might point
us to particular part of the code.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
it could be build-dependent
> and a bit hard to track down if one doesn't control the build flags
>
Yeah, this seems like a clear bug - we should not fail queries like
this. It's a sign that statext_is_compatible_clause() and the MCV code
disagree about which clauses are compatible.
Can you share an example triggering this?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 8/7/22 19:28, Tom Lane wrote:
> Tomas Vondra writes:
>> Yeah, this seems like a clear bug - we should not fail queries like
>> this. It's a sign that statext_is_compatible_clause() and the MCV code
>> disagree about which clauses are compatible.
>
> Indeed. I at
ticular thing makes it less extensible?
That being said, we also have RELOPT_KIND_BRIN, and that seems to be
missing from RELOPT_KIND_INDEX too (and AFAICS the optimization works
for all index types).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
right thing is to
send it to pgsql-general. Likewise, it makes little sense to send
questions to the "owner". I've removed the other lists from CC.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Index Cond: (value % 'lorem'::text)
> Buffers: shared hit=109
> Planning:
> Buffers: shared hit=3
> Planning Time: 2.394 ms
> Execution Time: 2273.256 ms
> (13 rows)
>
>
> Thank you for any sort of insight into this.
e you into what to do instead.
>
>
> I know that page. Obviously, as I have to kill the request, I cannot
> provide an EXPLAIN ANALYZE...
>
It's a bit weird the "victor" table is joined seemingly without any join
conditions, leading to a cross join (which massively inflates the cost
for joins above it). Maybe the anonymized plan mangles it somehow.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
questions
about practical issues you ran into.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
jumble.
So "jumbling" is a process to generate a "normalized" query.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
the DATE_TRUNC.
> How can I optimize?
>
It's always going to be slow with the COUNT(DISTINCT), I'm afraid.
Not sure how much you can modify the query / database, and how accurate
results you need. If you're OK with estimates, you can try the postgres-hll
extension [1], which estimates count(distinct). For exact results, I
wrote the count_distinct extension [2], which uses a hashtable. Might be worth a
try, I guess.
Another option is to precalculate stuff, and do queries on that. That
is, you're not going to add data with old timestamps, so you can
pre-aggregate that.
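A sketch of the pre-aggregation idea (table and column names are made up - adjust to the actual schema):

    -- roll up closed days once; only the current day needs the expensive
    -- COUNT(DISTINCT) against the raw data
    CREATE TABLE daily_uniques AS
    SELECT date_trunc('day', created_at) AS day,
           count(DISTINCT user_id)       AS unique_users
      FROM events
     WHERE created_at < date_trunc('day', now())
     GROUP BY 1;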
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 7/17/23 13:20, Tomas Vondra wrote:
> ...
>
> It's always going to be slow with the COUNT(DISTINCT), I'm afraid.
>
> Not sure how much you can modify the query / database, and how accurate
> results you need. If you're OK with estimates, you can try postgres-hl
ege),
then the function can't do any writes either.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
n for the
replication slot peerflow_slot_wal_testing_2 (especially the restart_lsn
value). Also, maybe show the contents of pg_wal (especially for the
segment referenced in the error message).
Can you reproduce this outside Google cloud environment?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
debug
> this further?
>
Hard to say. The best thing would be to have a reproducer script, ofc.
If that's not possible, the information already requested seems like a
good start.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ter. The postgres 8kB
page? The filesytem page? The storage block/sector size?
FWIW I think for SSDs this matters way more than for HDD, because SSDs
have to erase the space before a rewrite, which makes it much more
expensive. But that's not just about the alignment, but about the page
size
rade?
3) Did you transfer the configuration, or did you just create a new
cluster with the default values?
4) What exactly is slower? Queries? Inserts?
5) Can you quantify the impact? Is it 2x slower? 100x slower?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t to "off".
>
> We need help to understand this unexpected behaviour.
>
Would be good to have some sort of reproducer - ideally a script that
sets up an instance + replication, and demonstrates the issue. Or at
least sufficiently detailed steps to reproduce it without having to
guess what exactly you did.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
igate issues
on a proprietary product.
regards
[1]
https://www.postgresql.org/message-id/CAH2-WznuNGSzF8v6OsgjaC5aYsb3cZ6HW6MLm30X0d65cmSH6A%40mail.gmail.com
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ion 9.2 are supported.)
I'd probably use pg_dump from the target version (PG16), but it should
not matter.
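For example (hosts and paths are placeholders, assuming the PG16 client binaries are installed somewhere):

    /usr/pgsql-16/bin/pg_dump    -h old-host -Fc -d mydb -f mydb.dump
    /usr/pgsql-16/bin/pg_restore -h new-host -d mydb mydb.dump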
regards
--
Tomas Vondra
g ends up
spilling the decoded transaction to disk.
If you want a better answer, I think you'll have to provide a lot more
details. For example, which PostgreSQL version are you using, and how is
it configured? What config parameters have non-default values?
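(FWIW the excerpt doesn't name it, but the setting that usually determines when decoded transactions spill to disk is logical_decoding_work_mem, available since PostgreSQL 13; the value below is just an example.)

    logical_decoding_work_mem = 256MB   # per-decoding-session memory before spilling to disk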
regards
--
Tomas Vondra
es it easier to
hit, but it can happen without it.
> Is there something I can do to avoid this? Is my understanding of how
> the catalog tables work wrong?
>
I believe you could actually lock the pg_class rows for update. Just add
FOR UPDATE at the end of the query.
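Roughly like this (the relation name is hypothetical, and note that locking catalog rows generally requires sufficient privileges, typically superuser):

    BEGIN;
    SELECT oid, relname
      FROM pg_class
     WHERE relname = 'my_table'   -- hypothetical relation name
       FOR UPDATE;
    -- ... do the work that relies on those rows not changing ...
    COMMIT;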
regards
--
Tomas Vondra
I have no idea what you mean - what would be filtered out, etc. But you
can give it a try - you have a reproducer, so testing if it fails should
not be that difficult.
Other than that, I think you can do two things:
1) Make the application simply retry if it sees this error.
2) Lock the objects using LOCK TABLE before running the query.
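Option 2 might look like this (table name and lock mode are illustrative - pick a mode strong enough to conflict with whatever modifies the object concurrently):

    BEGIN;
    LOCK TABLE my_table IN SHARE MODE;   -- hypothetical table, illustrative mode
    -- run the original query here
    COMMIT;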
regards
--
Tomas Vondra
random data, because the whole idea is
about eliminating large blocks of data (e.g. 1MB). But with random data
that's not going to happen, because each such range will match anything.
Which is why a seqscan is a bit faster than using the BRIN index.
regards
--
Tomas Vondra
EnterpriseDB: http:
of the columns, in
which case we don't know how many groups we actually matched, etc.
TBH I'm not sure how many of those implementation details we want to put
into the user docs - it may be a bit too much, and we may need to change
some of it.
regards
--
Tomas Vondra
EnterpriseDB
On 5/5/21 3:23 PM, Pavel Luzanov wrote:
Hello,
It is very likely that the date_trunc function in the following example
is executed for each row of the query, although it is marked as STABLE
and could be called only once.
It could, but that's just an option - the database may do that, bu
rfaces to read
chunks of data from large objects - see this:
1) lo_get (loid, offset, length)
https://www.postgresql.org/docs/13/lo-funcs.html
2) lo_seek() + lo_read()
https://www.postgresql.org/docs/13/lo-interfaces.html
Obviously, you can't do "loread(fd, size)" because that's going to
attempt building one large bytea, failing because of the alloc limit.
You have to stream the data into the hash.
Doing that in plpgsql is possible, although possibly somewhat slow.
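A rough sketch of the chunked read in plpgsql (the OID and chunk size are placeholders; the per-chunk hashing itself is not shown, because pgcrypto's digest() is one-shot, so the incremental part would have to live in a custom aggregate or on the client side):

    DO $$
    DECLARE
        v_loid   oid    := 12345;        -- hypothetical large object OID
        v_chunk  bytea;
        v_offset bigint := 0;
        v_size   int    := 1024 * 1024;  -- read 1MB at a time
    BEGIN
        LOOP
            v_chunk := lo_get(v_loid, v_offset, v_size);
            EXIT WHEN v_chunk IS NULL OR length(v_chunk) = 0;
            -- feed v_chunk into the (incremental) hash here
            v_offset := v_offset + length(v_chunk);
        END LOOP;
    END $$;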
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ion (file:line)
where the error is printed, and then set a breakpoint to that place.
Makes it easier to generate the backtrace.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ause, but we can only speculate why and the query
conditions look entirely reasonable.
I suggest you do this:
1) start a session, identify the PID of the backend
select pg_backend_pid();
2) attach a debugger (e.g. gdb) to the pid
gdb -p $PID
3) set breakpoint to the l
ID
(gdb) c
3) run the CREATE INDEX query
4) get backtrace from the debugger
(gdb) bt
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Are you
looking for a faster / more efficient way to transfer the data?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
directly to
the other person.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
There's an explicit 1GB limit per value, but having rows close
to the 1GB limit is going to cause all sorts of unpredictable and rather
painful issues :-(
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
probably
ask them.
4) Don't post backtraces as screenshots. Copy them as text, attach them
as a text file.
5) Continue in the thread you already started, otherwise it's impossible
to follow the discussion.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ny joins as possible until the issue
disappears. The simpler the query, the easier it is to investigate.
And yes, replacing parts of a query with a temporary table is a common
solution, because it's possible to collect statistics on it, build
indexes etc. That usually solves estimation i
//lists.osgeo.org/mailman/listinfo/postgis-users
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 1/5/22 14:17, Shaozhong SHI wrote:
Any examples in Postgres to create a dictionary type to store and access
key value pairs?
I'd say JSONB can be used as an associative array, and e.g. in Python
can map to dict data type.
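For instance (table and keys invented for the example):

    CREATE TABLE settings (id int PRIMARY KEY, kv jsonb);
    INSERT INTO settings VALUES (1, '{"color": "red", "size": "L"}');

    SELECT kv -> 'color'                  AS color,    -- look up a key
           kv || '{"size": "XL"}'::jsonb  AS updated   -- add / replace a key
      FROM settings
     WHERE id = 1;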
regards
--
Tomas Vondra
EnterpriseDB:
On 1/5/22 17:24, Ron wrote:
On 1/5/22 7:22 AM, Tomas Vondra wrote:
On 1/5/22 14:17, Shaozhong SHI wrote:
Any examples in Postgres to create a dictionary type to store and
access key value pairs?
I'd say JSONB can be used as an associative array, and e.g. in Python
can map to dict
r application to use what's available.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
data_pkey on data t (cost=0.57..21427806.53 rows=58785023
width=131) (actual time=0.024..0.482 rows=854 loops=1)
Index Cond: (id > 205284974)
It'd be interesting to see the plans without the LIMIT, as that makes
the "actual" values low simply by terminating early.
plan look like? Is it using the merge sort
or nested loop? I wonder if this might be formulated as a costing issue,
pushing the planner to use the nested loop.
[1]
https://www.postgresql.org/message-id/flat/CAFQUnFhqkWuPCwQ1NmHYrisHJhYx4DoJak-dV%2BFcjyY6scooYA%40mail.gmail.com
regards
--
Toma
0.48 rows=1000 width=183) ...
So it seems *more* expensive than the first plan. Taken to the extreme
the planner could theoretically have chosen to use the first plan (and
delay the condition until after the join).
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
install debuginfo first, so that the backtrace makes sense.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
de FROM pg_class WHERE relname = 'earlyprinttuples';
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
> migration utility to generate multiple files each with a limited
> number of INSERTs to get around this issue but maybe there's
> another/better way?
>
The question is what exactly runs out of memory, and how did you modify
the configuration (particularly related to memory)
On 11/28/2017 06:54 PM, Ted Toth wrote:
> On Tue, Nov 28, 2017 at 11:22 AM, Tomas Vondra
> wrote:
>> Hi,
>>
>> On 11/28/2017 06:17 PM, Ted Toth wrote:
>>> I'm writing a migration utility to move data from non-rdbms data
>>> source to a p
On 11/28/2017 07:26 PM, Ted Toth wrote:
> On Tue, Nov 28, 2017 at 12:01 PM, Tomas Vondra
> wrote:
>>
>> ...
>>
>> That is, most of the memory is allocated for SPI (2.4GB) and PL/pgSQL
>> procedure (500MB). How do you do the load? What libraries/drivers?
>
e
policy? That typically affects short CPU-bound queries.
Other than that, I recommend performing basic system benchmarks (CPU,
memory, ...) and only if those machines perform equally should you look
for issues in PostgreSQL. Chances are the root cause is in hw or OS, in
which case yo
supposed to know? Not only does that depend on your internal
deployment policies, but it's also much more a CentOS/RedHat question
than a PostgreSQL one.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
it's
unclear why the directory got so large.
You could try setting autovacuum_multixact_freeze_max_age=0 and
vacuuming the database, but it's hard to say if that will help.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
If it's a single process, it might be a poor plan choice (e.g. hash
aggregate may easily cause that).
If all processes combined, then perhaps it's due to work_mem being too high.
> what should I do to deal with this problem?
>
Hard to say, until you provide enough information.
regards
--
Tom
You can either find out what your actual needs are (by looking at
the current system and extrapolating it in some way) and size the
hardware accordingly. Or you can squeeze as much money from the
management as possible and buy the shiniest stuff possible.
> Oh yeah, apparently we're ma