> So, creating the index itself took ~2hrs+ and the index size now shows as
> ~116GB.
>
> Create index idx1 on TAB1(ID)
Are your duplicates exact duplicates? Or is there an order of preference among
them?
And if so, what really makes those rows unique?
That matters for solutions.
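If they are exact duplicates, one common sketch keeps a single physical row per group via ctid (TAB1 and ID are taken from the quoted index definition; extend the join condition to every column that should be identical):

```sql
-- Keep one row per duplicate group; ctid identifies the physical row.
DELETE FROM TAB1 a
USING TAB1 b
WHERE a.ID = b.ID        -- extend with the remaining columns for exact duplicates
  AND a.ctid < b.ctid;   -- the row with the highest ctid survives
```

If there is an order of preference among the duplicates, a ROW_NUMBER() window ordered by that preference is the usual alternative.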
On Mon, 4 Mar 2024 at 13:46, Francisco Olarte
wrote:
> On Mon, 4 Mar 2024 at 13:10, wrote:
> > According to the documentation, Table 9.31, IMHO both comparisons should
> > produce the same results, as
>
> > timestamp - timestamp → interval
> > timestamp + interval → timestamp
> Your problem may
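A minimal illustration of the two Table 9.31 entries quoted above (plain timestamps, no time zone involved):

```sql
SELECT timestamp '2024-03-04 13:00' - timestamp '2024-03-01 01:00';
-- interval '3 days 12:00:00'
SELECT timestamp '2024-03-01 01:00' + interval '3 days 12:00:00';
-- timestamp '2024-03-04 13:00:00'
```

With timestamptz and a DST transition in between, the two directions are no longer guaranteed to round-trip like this.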
On Fri, 22 Mar 2024 at 15:01, Nick Renders wrote:
>
> We now have a second machine with this issue: it is an Intel Mac mini
> running macOS Sonoma (14.4) and PostgreSQL 16.2.
> This one only has a single Data directory, so there are no multiple
> instances running.
>
I don't think that having a
the next attempt.
In PostgreSQL this also works for almost all DDL statements (CREATE TABLE, DROP
TABLE, TRUNCATE TABLE, etc.), which is one of the features about this database
that I really appreciate - some big names don’t have that.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
OR (coalesce(upper(i.dates), 'infinity') =
coalesce(upper(e.dates), 'infinity')
AND coalesce(lower(i.dates), '-infinity') >
coalesce(lower(e.dates), '-infinity'))
)
);
 id | value | dates
----+-------+-------------------------
  1 | b     | [2010-01-01,)
  2 | d     | [2010-01-01,2021-01-01)
  3 | g     | [2013-01-01,)
  4 | j     | [2010-01-01,2015-01-01)
(4 rows)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
I don't think there are any open-source initiatives (unfortunately); they’re all
commercial products AFAIK, and not cheap. With a suitable use-case they can be
rather valuable tools too, though.
Regards,
Alban Hertroys
--
Als je de draak wilt steken met iemand,
dan helpt het,
als die een punthoofd heeft.
entire database from that backup
> and then insert all new table rows since I have saved all the scripts.
If you end up in the same situation again after doing that, then you know at
least it’s repeatable and can analyse how you got there.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
Unfortunately, there’s no mode in psql that allows you to import
an SQL file and step through the statements one by one. That would be helpful
in your case, I think. But maybe someone on the list has ideas about that?
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
>> Craig
>
Is this a single INSERT statement with multiple tuples after VALUES? Then
perhaps an earlier line (my bet would be on line 487) accidentally ends with a
semi-colon instead of a comma?
Something like this:
INSERT INTO table (col1, col2, ..., coln) VALUES
(..., ..., ),
(..., ..., ),
(..., ..., ); -- <-- This terminates the INSERT
(..., ..., ); -- <-- Now this line makes no sense
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
The transactions get aborted and rolled back,
putting us back at the 1/4th of space in use situation.
Have you been able to catch your shared memory shortage in the act? I suspect
that the stats you showed in your message were those after rollback.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
constraints (which you
should probably have on this table anyway), which would allow you to handle such
cases in the application. Such constraints can raise exceptions in your code
that need handling.
So I say, at least put an exclusion constraint on that table if you didn’t
already, and then decide what appro
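An exclusion constraint along those lines could look like this (hypothetical table and column names; the btree_gist extension is required to combine plain equality with range overlap in one constraint):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gist;

-- Reject any two rows for the same room whose periods overlap (&&).
ALTER TABLE bookings
  ADD CONSTRAINT bookings_no_overlap
  EXCLUDE USING gist (room_id WITH =, period WITH &&);
```

A violating INSERT then raises an exclusion-violation error that the application can catch and handle.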
nd), or something in
your data changed. My bet is on the latter.
For example, in FY25 the value of NCD_EXCL."Order Quantity" is 'NCD Valid
FY25', which doesn’t match your LIKE expression. Even something like a trailing
space in the value could be enough.
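One way to spot such values is to compare them against their trimmed form (table and column names taken from the message above; a sketch, not tested against the real schema):

```sql
-- Rows whose value carries leading or trailing whitespace.
SELECT "Order Quantity", length("Order Quantity") AS len
FROM NCD_EXCL
WHERE "Order Quantity" <> btrim("Order Quantity");
```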
Alban Hertroys
--
There is always an exception to always.
w format, while
you can add any new partitions in the new format.
I suspect it’s not allowed, but perhaps worth a try.
Alban Hertroys
--
There is always an exception to always.
The documentation for PostgreSQL says this: "You should ensure that the
join produces at most one candidate change row for each target row.”, which
also seems to imply that you shouldn’t have duplicates.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
‘default’ partition that
gets detached at step 7, after which you can insert+select those from the
default into the appropriate partitions?
But you were going to test that first anyway, obviously.
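The detach-and-redistribute step could be sketched as (hypothetical table names):

```sql
-- Detach the default partition, then let the re-insert route its rows
-- into the now-appropriate partitions.
ALTER TABLE measurements DETACH PARTITION measurements_default;
INSERT INTO measurements SELECT * FROM measurements_default;
DROP TABLE measurements_default;
```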
Alban Hertroys
--
There is always an exception to always.
"2023-06-12T19:54:39Z"
}
]$$::text) replacement
)
select *
from dollar6
cross join lateral jsonb_array_elements(replacement) r
where (r->>'start')::timestamptz <= current_timestamp;
There are probably other ways to attack this problem, this is the one I came up
with.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
------
[{"a": 1, "b": -2, "c": 1}, {"a": 2, "b": -2, "c": 2}, {"a": 2, "b": -1, "c":
3}, {"a": 2, "b": -2, "c": 4}]
(1 row)
For understanding both queries better, it probably helps to take out the
jsonb_agg calls to see the separate objects from the array. Add the original
obj back in for comparison, if you like.
I typically use the documentation pages for the JSON functions and the one on
aggregate functions, where the JSONB aggregates are located:
https://www.postgresql.org/docs/16/functions-json.html
https://www.postgresql.org/docs/16/functions-aggregate.html
And if you’re not familiar with dollar quoting:
https://www.postgresql.org/docs/16/sql-syntax-lexical.html#SQL-SYNTAX-DOLLAR-QUOTING
Alban Hertroys
--
There is always an exception to always.
ary, I'm looking for a generic approach here.
We're on PG 11(.9), so the enhancements made to JSONB operations in PG12
are not (yet) available to us. This is a 3TB database w/o replicas, so
upgrading it is a little scary and requires a maintenance window that
would be sufficient t
foo;
>
> This test is equivalent to "col1 = null" which will always fail.
> You could try something like
>
> CASE WHEN col1 IS NULL THEN ... ELSE ... END
>
> Although I think the particular thing you're doing here would
> be better solved with COALESCE(col1, 'z').
>
> regards, tom lane
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
---
PostgreSQL 11.11 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-44), 64-bit
(1 row)
Regards,
Alban Hertroys
P.S. Sorry about below company disclaimer, there is nothing I can do
about that.
primary key column candidate.
Now, of course, the OP could have a case where their geometries are guaranteed
to be unique regardless, but they’d better make sure before adding them to the
PK.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
select qw.*, coalesce(substring(qw.mm, 1, 1), '') as mm
Or even:
select qw.*, coalesce(left(qw.mm, 1), '') as mm
Regards,
Alban Hertroys
--
There is always an exception to always.
","2021-06-20 16:00:00+01")
You could probably achieve this by using window function lag() over (order by
booking_id), in combination with a case statement when the range from the
previous row overlaps the current range.
That would only solve the case for immediately subsequent ro
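A sketch of that lag()-based detection (hypothetical bookings table with a range column named period):

```sql
-- Flag rows whose range overlaps (&&) the range of the previous booking.
SELECT booking_id,
       period,
       CASE WHEN lag(period) OVER (ORDER BY booking_id) && period
            THEN 'overlaps previous row'
       END AS note
FROM bookings;
```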
On 2021-07-08 13:30, Ron wrote:
> Thus, the bigTable PK must be on id, columnX. (No, I don't like it
> either.)
That's not entirely true. You can keep the PK on id if you additionally create a
unique constraint on (id, columnX). That way, you can at least be certain that
the uniqueness of the PK remains
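In DDL terms, that combination would be (names taken from the discussion, otherwise a sketch):

```sql
-- id stays the primary key; (id, columnX) gets its own unique constraint,
-- e.g. so it can serve as a foreign-key target that includes columnX.
ALTER TABLE bigTable
  ADD CONSTRAINT bigtable_id_columnx_key UNIQUE (id, columnX);
```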
e, but that’s the gist of it.
If that project_csv column gets populated by some external application, you
could keep the link-table updated with insert/update/delete triggers.
Alternatively, a writable view replacing public.projects may be a possibility.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
t.id, t.parent
, f.id || case f.children_ids when '' then '' else ',' end ||
f.children_ids
from foo f
join tree t on f.parent = t.id
where f.parent <> 0
;
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> On 26 Jul 2021, at 17:52, Alban Hertroys wrote:
> Something like this:
>
> with recursive foo (id, parent, children_ids) as (
> select id, parent, null::text
> from tree t
>where not exists (
> select 1 from tree
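Pieced together with the recursive-term fragment that appears earlier in this digest, the full query would look roughly like this (a sketch; note the base case produces null::text while the CASE tests for '', so a coalesce may be needed in practice):

```sql
with recursive foo (id, parent, children_ids) as (
    select id, parent, null::text
    from tree t
    where not exists (
        select 1 from tree c where c.parent = t.id  -- start from the leaves
    )
    union all
    select t.id, t.parent
         , f.id || case f.children_ids when '' then '' else ',' end || f.children_ids
    from foo f
    join tree t on f.parent = t.id   -- walk up towards the root
    where f.parent <> 0
)
select * from foo;
```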
hierarchy on a reference timestamp. That
performed adequately on a production data warehouse, as long as you
sufficiently constrained the inputs. You can join such a function (laterally)
to some other data set too.
Regards,
Alban Hertroys
--
There is always an exception to always.
single-row example, but if you had a table full of data, you
would now have locked all rows with the value t1c2 = 100 for update. If that
update never happens (or the locking doesn’t get rolled back), well…
Regards,
Alban Hertroys
--
There is always an exception to always.
on downloading this data to the
R script, would it help to rewrite it in PL/R and do (part of) the ML
calculations at the DB side?
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
  …    |                    instant                    | empty range result | instant range result
-------+-----------------------------------------------+--------------------+----------------------
 empty | ["2000-01-10 00:00:00","2000-01-10 00:00:00"] | false              | true
(1 row)
As I re
Don’t use this approach with JSON (as opposed to JSONB) type fields though; a
single extra space in the JSON structure would already lead to a difference, as
would other formatting differences.
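The difference is easy to demonstrate: json preserves the input text verbatim, while jsonb stores a normalized form:

```sql
SELECT '{ "x": 1 }'::json::text  = '{"x":1}'::json::text;   -- false: whitespace preserved
SELECT '{ "x": 1 }'::jsonb       = '{"x":1}'::jsonb;        -- true: normalized representation
```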
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
s:
=> with x as (
select '{ "x": 1, "y": 2 }'::jsonb
union all
select '{ "y": 2, "x": 1 }'::jsonb
)
select row(x.jsonb)::text, md5(row(x.jsonb)::text) from x;
           row            |               md5
--------------------------+----------------------------------
("{""x"": 1, ""y"": 2}") | d5a6dbdec7a5bfe0dc99e090db30322e
("{""x"": 1, ""y"": 2}") | d5a6dbdec7a5bfe0dc99e090db30322e
(2 rows)
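The same idea can be mimicked outside the database. A minimal Python sketch, using json.dumps with sorted keys as a stand-in for jsonb's canonical form (which is not byte-identical to what PostgreSQL produces, but normalizes key order the same way):

```python
import hashlib
import json

def normalized_md5(doc: str) -> str:
    # Parse and re-serialize with a canonical key order before hashing,
    # so equivalent documents hash identically regardless of input order.
    canonical = json.dumps(json.loads(doc), sort_keys=True)
    return hashlib.md5(canonical.encode()).hexdigest()

print(normalized_md5('{ "x": 1, "y": 2 }') == normalized_md5('{ "y": 2, "x": 1 }'))  # → True
```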
Alban Hertroys
--
There is always an exception to always.
be a good idea
to store that in a way optimised for that. TimescaleDB comes to mind, or arrays
as per Pavel’s suggestion at
https://stackoverflow.com/questions/68440130/time-series-data-on-postgresql.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
unsubscribe properly - would
make that person use the correct method for unsubscribing rather sooner than
later.
Please, as tempting as that sounds, be gentle to the list server and do not do
that.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
J.
Just what I was thinking. Are you looking in the correct database?
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
the trick.
Possibly you're waiting on an uncommitted transaction from a process in state
"Idle in transaction" from pg_stat_activity.
If that doesn't get you anywhere, an explain plan of your query would make it
easier to help you.
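Checking for such sessions is a one-liner against pg_stat_activity:

```sql
-- Sessions holding an open transaction without running anything.
SELECT pid, state, xact_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
ORDER BY xact_start;
```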
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
Did you run ANALYZE on your tables before the test?
On 4 December 2017 at 16:01, Tomas Vondra wrote:
>
> On 12/04/2017 02:19 PM, Nicola Contu wrote:
> ...>
>> centos 7 :
>>
>> dbname=# \timing
>> Timing is on.
>> cmdv3=# SELECT id FROM client_billing_account WHERE name = 'name';
>>  id
>> ----
>> * (1
Bad statistics result in non-optimal query plans and therefore could very well
cause your timing differences.
An easy way to verify, since you still have access to both versions of the
database, is to compare the statistics of the relevant tables between the two.
They should be similar.
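The planner's per-column statistics are exposed in the pg_stats view; run this on both servers and compare (table name taken from the quoted query):

```sql
SELECT attname, n_distinct, null_frac, correlation
FROM pg_stats
WHERE tablename = 'client_billing_account';
```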
Alban
id);","See server log for query details.""CREATE INDEX CONCURRENTLY
> index_foo_on_created_at ON foo USING btree (created_at);",,,""
If I read these logs correctly, you are creating the exact same index on foo
(created_at) in both processes, which is just what
PRIMARY KEY (stat_login),
> UNIQUE KEY user (stat_login)
> );
Just a minor observation, but a PRIMARY KEY is UNIQUE by definition. There's no
need for that UNIQUE key "user" on the same field.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
nna need
> same flags if you want to use your existing data directory.
Does that mean that at step 3 one could issue this?:
./configure `pg_config --configure`
If I had the sources at hand, I'd try that myself, but I don't and getting
those is frankly a bit of a hassle to just tes
the slave to if the file is larger on the master.
If CRC's can be relied on to detect corruption (which they were designed to
do), then that answers Edson's question.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
I'm trying to update an existing table that was created in PG9.6 using
the old approach with manual inheritance and check constraints to make
use of the new approach in PG 10 using 'partitioned by', 'attach
partition', etc.
I ran into some how-to's on the internet, but they all assume you
start pa
Indices on lower(g.name) and lower(t.city) from your query would be
useful, but in that case make sure you take the concatenation of 'city' out of
the lower()-call in your query.
Just reading your latest addition: using lower() on constants is just a waste
of cycles. It won't hurt your query much, though.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
and 4TB of memory (if that's even
for sale) would probably do the trick, but even I think that might be going a
little overboard ;)
Oh yeah, apparently we're married to HP or something… At least, IT management
told me to look at their offerings.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
literals as the same
thing. Just to say that the "big names" aren't without flaws - they're
kind of hard to fix when users probably depend on their behaviour
though.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> id | name | other_data
> ----+------+------------
>   3 | XXx  | YY
>   7 | XXx  | YY
>  11 | XXx  | YY
>  15 | XXx  | YY
>  19 | XXx  | YY
>  23 | XXx  | YY
> (6 rows)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
ing fine.
Seems to me you are forgetting to execute the query, like in your Oracle
example. IIRC, calling execute() is how to execute a statement that returns a
ResultSet in Java.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.