d
be useful in case specific rows need updating or deleting without also
modifying the other rows with that same data - normally, only insertions and
selections happen on such tables though, and updates or deletes are absolutely
forbidden - corrections happen by inserting rows with an opposite tran
On 1 March 2018 at 17:22, Steven Lembark wrote:
>
>> On 03/01/2018 02:20 AM, Alban Hertroys wrote:
>> [snip]
>> > Not to mention that not all types of tables necessarily have
>> > suitable candidates for a primary key. You could add a surrogate
>> > key
ontains a string
such as 'The building is now open to public.'. Regular expressions don't know
the difference between data and identifiers in a dump file - pg_restore does.
Whether psql needs the same treatment? I'd qualify this as "advanced" use and
limit it to p
I suppose MySQL has the concept of views. Perhaps you can create a
view over the table that translates those 0-datetimes to NULL and have
the FDW connect to the view instead?
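As a sketch of that idea on the MySQL side — the table and column names here are invented, only the NULLIF trick is the point — a view can map the zero-datetime to NULL before the FDW ever sees it:

```sql
-- MySQL side (names are assumptions for illustration):
CREATE VIEW events_clean AS
SELECT id,
       NULLIF(created_at, '0000-00-00 00:00:00') AS created_at
FROM events;
```

The foreign table would then be defined against events_clean instead of events.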
On 23 March 2018 at 14:27, Patricia DECHANDOL
wrote:
> Thanks a lot Adrian.
> We can't update the mysql columns values, so
'2010' BETWEEN il.beginyear AND
> il.endyear)
> AND il.type = '536')
> )
> SELECT * from the_tree;
>
>
> Can anyone help me out here?
Your JOIN in the recursive part of the CTE should probably be ON g.parent =
t.treelevel AND g.indexlistid = t.indexlistid. Since both the initial and
recursive part use the same value for indexlistid, the subquery in the
recursive part is superfluous.
You should also take the order in seq into account for items on the same node,
if that is possible in your data, otherwise there is no guarantee that you'll
get your string components in the correct order.
I think you need something closer to this:
WITH RECURSIVE the_tree AS (
    SELECT g.seq, g.app::TEXT, g.price, g.ic, g.treelevel::INTEGER, g.indexlistid
    FROM indexlistapp g
    WHERE g.indexlistid IN (
        SELECT il.indexlistid
        FROM indexlist il
        WHERE il.model = 'CTS'
          AND '2010' BETWEEN il.beginyear AND il.endyear
          AND il.type = '536'
    )
    AND g.parent = 0
    UNION ALL
    SELECT t.seq, t.app || ', ' || g.app AS app, t.price, t.ic,
           g.treelevel, g.indexlistid
    FROM the_tree t
    INNER JOIN indexlistapp g ON g.parent = t.treelevel
                             AND g.indexlistid = t.indexlistid
    ORDER BY g.seq
)
SELECT * FROM the_tree;
Another observation is that you only seem interested in the leaf nodes of your
tree. Recognising leaf nodes in your hierarchical query can be achieved by
adding a column that tests the existence of a node with the current node as its
parent - if it does not exist, this node is a leaf node.
The result of SELECT * FROM the_tree can then be filtered to return leaf nodes
only.
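As a sketch, that leaf-node test could be added to the final select like this (column names taken from the query earlier in this thread):

```sql
SELECT t.*,
       NOT EXISTS (
           SELECT 1
           FROM indexlistapp c
           WHERE c.parent = t.treelevel
             AND c.indexlistid = t.indexlistid
       ) AS is_leaf
FROM the_tree t;
```

Wrapping that in a subquery and filtering on is_leaf then returns only the leaf nodes.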
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
(AVG(c.diff)),
ROUND(AVG(m.score), 1)
FROM words_moves m
JOIN cte c using(mid)
JOIN words_in_social s USING(uid)
WHERE m.action = 'play'
GROUP BY c.day
ORDER BY c.day;
That may look cleaner, but your original query probably performs better, since
CTE's also act as an op
stem_id, student_id, campus_name)
DO
UPDATE SET modified = EXCLUDED.modified,
balance = EXCLUDED.balance,
balance_as_of = EXCLUDED.balance_as_of
Instead, you were re-assigning the keys (school_system_id, student_id,
campus_name) to the same values again.
Al
d apparently clean up shared memory
belonging to a user when it detects the user logs out. ISTR that we had to
disable that on our CentOS 7 server to stop crashes from happening.
More details here: https://wiki.postgresql.org/wiki/Systemd
Alban Hertroys
--
If you can't see the forest for t
'.
Is that possible?
I've seen a couple of "solutions" on the internet that just summed up the
results of the CTE, but that won't do as it would not put the correct weights
onto intermediate levels of the tree as far as I can see (in above, the weight
of 'dough')
is tends to be kept
> as a temporary materialized result set with no indices, that's not performing
> great and it adds a fair amount of complexity to the query too.
>
> Then I realised that if we somehow could track the sum() of 'weight'
> throughout exploding the
that as separate insert
statements with pg_dump --inserts.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
1.00 |   1.00 | pcs |        | 313.00
>> 1.1 | tomato | 100.00 | 100.00 | g   | 100.00 | 313.00
>> 1.2 | basil  |  10.00 |  10.00 | g   |  10.00 | 313.00
>> 1.3 | salt   |   3.00 |   3.00 | g   |   3.00 | 313.00
>
ovacuum parameters per
table (see Storage parameters). That probably won't put you at 4GB, more around
double that size, but it should stay a reasonable size that way.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
018-02-01 | [2018-01-07,2018-01-08) | [2018-01-08,2018-02-01)
2018-02-01 | 2018-02-01 | [2018-01-09,2018-02-01) | empty
(3 rows)
It can probably be optimized a bit, I haven't played with ranges much yet.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
Cut the trees and you'll see there is no forest.
equest_sla;
Is all_days a table? Or is it perhaps another view, possibly materialized even?
> The relation is there, in fact if I go there when I get in to the office, the
> same command works.
This sounds to me like you may be using a different version of pg_restore in
the office. Are bo
12-Sep-2018, at 9:13 PM, David G. Johnston
>> wrote:
>>
>> On Wednesday, September 12, 2018, Arup Rakshit wrote:
>> IN is OR, I want the AND logic. Select posts which has tag 1, 2 and 3 ( tag
>> ids )
>>
>> Build arrays and then use the “contains” ope
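A sketch of that array-based approach — the posts/post_tags names are assumptions, the technique is aggregating each post's tags into an array and testing it with the "contains" operator @>:

```sql
-- Posts that carry all of tags 1, 2 AND 3 (table names invented):
SELECT p.id
FROM posts p
JOIN post_tags pt ON pt.post_id = p.id
GROUP BY p.id
HAVING array_agg(pt.tag_id) @> ARRAY[1, 2, 3];
```

Unlike IN, the @> test only passes when every requested tag is present.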
The results of explain analyze would shed light on the core problem.
My guess is that your conditions are not very selective - ie. most
records in both tables have bFetch = false - and therefore you are
retrieving most of your data and that is what's taking 7 minutes. No
index is going to fix that
e boolean fields, even though they are
performed in seq-scans, hardly take any time at all. The hash join to
combine them takes a bit over 6s.
> so i am unable to reduce the query execution time as it is taken around 7
> minutes to execute with indexes & without indexes
>
>
Your plan is not readable to me (perhaps because of gmail). Does
https://explain.depesz.com/ give you any useful insights?
On Fri, 21 Sep 2018 at 16:15, Durgamahesh Manne
wrote:
>
>
> On Fri, Sep 21, 2018 at 7:38 PM Durgamahesh Manne
> wrote:
>>
>> Hi
>>
>> Complex query taken around 30 minutes
re both are
justified in the same (sub-)query.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On Wed, 26 Sep 2018 at 14:08, Wanglin wrote:
>
> Hi, all:
> PostgreSQL version : 10.3. I use "nextval" in the sql , but I think the
> result is not right, maybe it is a bug.
> The test case as below:
> create sequence seq1;
> select nextval('seq1');
> create table tx1(id1 int, id2 int
On Thu, 11 Oct 2018 at 16:38, Abhishek Tripathi
wrote:
> Actually I have acquired a "Select for Update" on a table whose id is
> referenced as a foreign key on another table. So I want those tables won't update
> until there is a lock. Is it possible? Because postgres is acquiring lock but
> AccessSh
You forgot to CC the list. Also, top-posting is generally not appreciated here.
> On Thu, Oct 11, 2018 at 8:26 PM Alban Hertroys wrote:
> On Thu, 11 Oct 2018 at 16:38, Abhishek Tripathi
> wrote:
> > Actually I have acquired a "Select for Update" on a table whose id is
cant amount of rows. That
probably costs a significant amount of time to do.
It looks like you don't have any indices on the underlying table(s) at all. I'd
start there and then look at the ILIKE problem again. By that time, Pavel's
suggestion for a trigram index on that text fiel
www.postgresql.org/docs/11/queries-table-expressions.html#QUERIES-GROUPING-SETS
> It runs well, took 1s and returns 4000 rows.
I won’t go into the performance issue at this point, other more knowledgeable
people already did.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
your transaction cannot be allowed to commit
without a response from the other peer. In such cases it is appropriate to use
plpython, plperl, etc
All that said, I am talking about PostgreSQL here. If you’re instead using
EnterpriseDB, which does have an Oracle compatibility layer that could pe
ly removed a database from both a and c,
you still have replicas to recover it from. And the backups, of course, but
that will not contain the data that came in after replication was paused.
I do hope the remaining 3% disk space is enough to cover all that, though...
Regards,
Alban Hertroys
--
If
ons
I hope it all makes sense...
Thank you
It does to me. Now would be a good time for people to chime in if they
don't agree ;)
From: Alban Hertroys
Sent: Saturday, August 3, 2019 3:15 AM
To: Julie Nishimura
Cc: Adrian Klaver ;
pgsql-general@lists.postgresql.org ;
pgsql-general
Subject:
on its lag. You probably have at least that much down-time to
change the connections anyway.
Regards,
Alban.
> From: Alban Hertroys
> Sent: Monday, August 5, 2019 5:01 AM
> To: Julie Nishimura
> Cc: Adrian Klaver ;
> pgsql-general@lists.postgresql.org ;
> pgsql-genera
From experience, that either results in people reporting the wrong
financial results (several orders too high) or blaming your query.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> On 13 Aug 2019, at 15:19, David G. Johnston
> wrote:
>
> On Tuesday, August 13, 2019, Alban Hertroys wrote:
>
> > On 13 Aug 2019, at 13:10, stan wrote:
> >
> > select
> > project.proj_no ,
>
> Removed columns that get in the way o
> 2) "tcp:postgresql://localhost?user=myuser&password=password" looks like
>
> "tcp:postgresql://localhost?user=myuser&password=my&pwd"
>
> and password is parsed on the & and you also end up with an extra parameter
> pwd
Perhaps it he
. Since the left-hand array has only 1
item and the right-hand one has two, there’s not much equality between them.
You probably meant:
select name from table_name_ds_tmp where categoryid = ANY ( select
string_to_array( '200,400', ',')::bigint[] );
Alban Hertroys
--
There is always an exception to always.
> On 25 Sep 2019, at 22:50, Alban Hertroys wrote:
>
>
>> On 25 Sep 2019, at 22:25, David Salisbury wrote:
>> db=# select name from table_name_ds_tmp where ARRAY[categoryid] = ANY (
>> select string_to_array( '200,400', ',')::bigint[] );
>&g
ing an attribute of that
structure. Personally, I would invert the test like so (and then move it before
the idxused test:
if (idxcd == NULL) {
    elog( INFO, "idxcd IS NULL" );
    continue; /* Or is that fatal enough to break instead? */
}
if (!idxcd->idxused)
    continue;
Alban Hertroys
--
There is always an exception to always.
recreate table files, such as CLUSTER.
Then again, if you’re just looking for the table you created last Wednesday,
that’s probably not a major concern.
Another option is to add a DDL Event trigger on create table statements and log
that to some table[3].
Regards,
Alban Hertroys
[1]: ht
ent_timestamp <= t from (values
('-infinity'::timestamp), ('infinity'::timestamp)) x(t);
     t     | ?column? | ?column?
-----------+----------+----------
 -infinity | t        | f
 infinity  | f        | t
(2 rows)
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On Mon, 2 Dec 2019 at 12:11, Laura Smith
wrote:
>
> My initial idea was something along the lines of :
> select (select sum(statcount) from stats_residence) as
> aggstat,statcount,short_name_en from stats_residence where
> aggstat>some_number;
>
One option is to move the aggregate to the where-
> FROM land
LEFT JOIN devisen ON land.wkz = devisen.wkz AND land.brgroup =
devisen.brgroup
> WHERE land.brgroup = :brgroupHost_for_helpland_cursor
> ORDER BY stammprio, landbez;
> #endif
Alban Hertroys
--
If you can't
hat to do about that; if it
fails it will probably fail fast, and if not, then you’re looking at a one-off
situation that won’t require more than a few workarounds - after which you can
just run the update again.
Ad 1). No harm has been done, it’s a single transaction that rolled back.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
uery in a more readable
and consistent way, they would spot their error pretty quickly. It’s a simple
typo.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
y to commit, and the central machine returns an ‘acknowledged’ once the
last server sent its ‘done’ event.
The challenge there is that the ‘ack’ needs to be caught and processed within
the same waiting transaction… Not sure how to do that right now - maybe through
web services, MQTT or similar.
A
idea
how to write that in Django):
select snapshot->'pay_definition'->k.value->'name'
from MyModel
join lateral jsonb_object_keys(snapshot->'pay_definition') k(value) on true
I don’t know how that compares performance-wise to using jsonb_each, but
perhaps worth a try. Obviously, the way it’s written above it doesn’t return
distinct values of ’name’ yet, but that’s fairly easy to remedy.
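One way to apply that remedy to the query above — a sketch, reusing the same assumed MyModel table and JSON structure, with ->> so the names come out as text rather than jsonb:

```sql
SELECT DISTINCT snapshot->'pay_definition'->k.value->>'name' AS name
FROM MyModel
JOIN LATERAL jsonb_object_keys(snapshot->'pay_definition') k(value) ON true;
```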
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
;table function". Can you kindly clarify?
Correct. Thomas already explained the return type, but the plural form of the
function name is also an indication that it returns multiple results.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
it also makes
the identifier case-sensitive. That’s probably why Paul suggested to rename the
table to no longer require identifier quoting - many people consider it a PITA,
but it can be used to get out of trouble like yours - some people insist on it,
for example because it makes using camel-caps in identifiers meaningful.
Regards,
Alban Hertroys
--
There is always an exception to always.
ls_price_and_date, 7, 13 )
from articulos art
cross join lateral f_art_get_price_str( art.id ) p(ls_price_and_date);
Alban Hertroys
--
There is always an exception to always.
mestamps.
> I've looked into this topic and found statements about that one
> shouldn't put too many rows into the index[1] and stuff like that or
> it will be ignored at all. But that doesn't seem to be the case for me
> according to the plan. OTOH, my index really simply is about the
> column containing the timestamp, no function reducing things to dates
> or stuff like that to reduce the number of rows.
>
>> CREATE INDEX idx_clt_rec_captured_at ON clt_rec USING btree ( captured_at );
Try this:
CREATE INDEX idx_clt_rec_captured_at ON clt_rec USING btree ( captured_at, id );
Alban Hertroys
--
There is always an exception to always.
9 | coffee | cappucino | 15:09 | 3 | 0.45
2020-09-03 15:10 | coffee | cappucino | 15:09 | 3 | 0.43
etc.
Is there a way to achieve this through window functions, or do we need to
wrap the thing in a subquery to achieve this?
Regards,
Alban Hertroys
--
If you can't see the forest for the trees
On Thu, 3 Sep 2020 at 16:01, Tom Lane wrote:
> Alban Hertroys writes:
> > As stated above, I want to enumerate the runs, starting at 1 and
> > incrementing by 1 every time a partition from the 'run' window closes,
> > Is there a way to achieve this through win
On Thu, 3 Sep 2020 at 20:59, Michael Lewis wrote:
> It seems like you are maybe wanting this- If the previous row is the same,
> then get the previous row's run_nr. If it is different, then increment.
>
> case when lag( property_A ) over() = property_A and lag( property_B )
> over() = property_B
r 1530646 postmaster
6482 postmaster 1530645 postmaster
6482 postmaster 1530659 postmaster
6482 postmaster 1530658 postmaster
6482 postmaster 1530657 postmaster
6482 postmaster 1530656 postmaster
6482 postmaster 1530655 postmas
"Srinivasa T N" wrote on 01/10/2020 11:47:33:
> On Thu, Oct 1, 2020 at 2:47 PM Alban Hertroys <
> alban.hertr...@apollovredestein.com> wrote:
> Hi all,
>
> We're seeing the FATAL error message from the subject pop up in our
> logs at regular intervals, b
uld also do this using subqueries instead of CTE’s, that may perform
better as CTE’s act as optimisation fences.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
On 11 Nov 2020, at 11:15, Jitendra Loyal wrote:
>
>
> Thanks Nikolay
>
> I read that but is there a way to meet the above requirement. And I will like
> to add that IS NULL and IS NOT NULL should evaluate to true/false. These
> operators are made for this and should not be returning NULL.
a function or in a do-block.
We do this in plpgsql, but that’s mostly because this code is part of our ETL
process and it has to perform some other logic on the same data anyway. I could
look up our code for you tomorrow, but that looks to be a busy day, so I can’t
promise.
Alban Hertroys
--
If
pg_hab_file_rules:
>
> postgres=# table pg_hba_file_rules;
Also:
postgres=# show listen_addresses;
postgres=# show port;
Those will tell you whether the server is listening on the network and on the
expected port.
Alban Hertroys
--
There is always an exception to always.
lap and assign a group number but i'm not sure how to accomplish
> that.
You could date_trunc those timestamps to the minute and group on that.
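That suggestion could look something like this; the table and column names are assumptions, the point is grouping on the truncated timestamp:

```sql
SELECT date_trunc('minute', sampled_at) AS minute_bucket,
       count(*)
FROM samples
GROUP BY date_trunc('minute', sampled_at)
ORDER BY minute_bucket;
```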
> there's also no guarantee an entire rack is full of samples so some "cells"
> of display might be null. i think that makes the use of tablefunc crosstab a
> little harder. if i remember correctly, it does not handle missing values
> well. i'm open to any pivoting strategy.
Many reporting tools have features to support just that. We use WebFOCUS, which
calls those ACROSS columns. It’s a common requirement in reporting.
Alban Hertroys
--
There is always an exception to always.
the common approach
> to this problem?
>
> BR
> Dirk
Can't you use column defaults to handle these cases?
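A minimal sketch of that approach (all names invented for illustration):

```sql
CREATE TABLE measurements (
    id          bigint      GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    recorded_at timestamptz NOT NULL DEFAULT now(),
    source      text        NOT NULL DEFAULT 'unknown'
);

-- An INSERT that omits columns picks up their defaults:
INSERT INTO measurements (source) VALUES ('sensor-1');
```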
Alban Hertroys
--
There is always an exception to always.
know
about the existence of said column; overwriting an existing value that some
other application put there looks like a problem to me. But of course, that
depends a lot on what you’re trying to achieve.
What is your use-case that that is not the desired behaviour? Or are we talking
about a mixe
ly trivial to
> create 50 or 1000 partitions to break up the range of values allowed by your
> field definition.
Alban Hertroys
--
There is always an exception to always.
ain, this is what I
>> get:
>>
>>
>>
>> CREATE OR REPLACE VIEW myview
>>
>> AS SELECT product_id,
>>
>> product_acronym
>>
>>FROM products
>>
>> WHERE product_acronym = 'ABC'::text;
>>
>>
>>
>> So, the formatting changed, keywords are capitalized, the comment I added in
>> the from-part has gone and the text constant 'ABC' changed to 'ABC'::text.
Alban Hertroys
--
There is always an exception to always.
> On 18 Apr 2022, at 11:56, Pól Ua Laoínecháin wrote:
(…)
> All of the code below is available on the fiddle here:
>
> https://dbfiddle.uk/?rdbms=postgres_13&fiddle=0cc20c9081867131260e6e3550bd08ab
(…)
> OK, grand, now I wish to perform a RECURSIVE CTE on it. So, I start by
> trying someth
select *
from cte
order by hierarchy;
Function parent() is made-up. It would return the parent node of a node, so
that there is some way to connect the different parts in the hierarchy. I guess
xpath() could fulfil that purpose, but I have no way of testing that hypothesis.
I hope that’s a good enough starting point for you?
Alban Hertroys
--
There is always an exception to always.
ee section 9.16.2 on:
https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-SQLJSON-OP-TABLE
A recursive query is another possible solution. It would probably perform far
worse, but I find them more rewarding to write. Some people prefer Sudoku.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
u could either pass them on or they (probably) don’t make sense in
the context of the generic function.
Alban Hertroys
--
There is always an exception to always.
er1 to
> Server2.
Bit of a wild idea, I haven’t used SSH tunnels in years, so a bit fuzzy on the
details:
Can you create an SSH tunnel to the new machine from the VM, then pipe that to
an SSH connection from a machine that does have enough space to dump?
And then vice versa to the new mac
ert that to a character string yourself.
See for example:
https://stackoverflow.com/questions/341384/how-to-convert-an-interval-like-1-day-013000-into-253000
That seems unnecessarily complicated, perhaps there is/could be a more
convenient method? I’m sort of thinking of a "relative timestamp offset" type,
that tracks an exact difference relative to a given timestamp?
Alban Hertroys
--
There is always an exception to always.
after of the output of pg_get_functiondef, applied to
the stored diff?).
I’m not so sure that would work for auditing, but that seems to have been
tackled down-thread.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
enised version after the upgrade.
It is a bit of a hassle, as you need to remember to do that before an upgrade,
but at least you’d have something…
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
u could just calculate them and even omit storing
them in a physical table.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
t a 'message' was sent or none received
it.
Algorithms like Tarry, Lai-Yang or the Echo algorithm solve this by adding
communication between those processes about messages in transit.
Alban Hertroys
--
There is always an exception to always.
ibpq, but a pure SQL or PL/pgSQL demo would
> still help (I think).
This works:
=> select (1, 'one'::text) in ((1, 'two'::text), (2, 'one'::text), (1,
'one'::text), (2, 'two'::text));
 ?column?
----------
 t
(1 row)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> On 9 Feb 2023, at 18:35, Dominique Devienne wrote:
>
> On Thu, Feb 9, 2023 at 5:37 PM David G. Johnston
> wrote:
> On Thu, Feb 9, 2023 at 9:28 AM Alban Hertroys wrote:
> > On 9 Feb 2023, at 16:41, Dominique Devienne wrote:
> > Now we'd like to do the same
;from_hour',
> jsonb_build_object(
> 'weekday', cot.weekday,
> 'from_hour', cot.from_hour,
> 'to_hour', cot.to_hour)->>'to_hour')
> )
>
> Isn’t that more like it?
Perhaps you can use a lateral cross join to get the result of
jsonb_build_object as a jsonb value to pass around?
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
> On 3 Mar 2023, at 0:02, Thorsten Glaser wrote:
>
> On Tue, 28 Feb 2023, Alban Hertroys wrote:
>
>> Perhaps you can use a lateral cross join to get the result of
>> jsonb_build_object as a jsonb value to pass around?
>
> I don’t see how. (But then I’ve not
> On 3 Mar 2023, at 20:32, Thorsten Glaser wrote:
>
> On Fri, 3 Mar 2023, Alban Hertroys wrote:
>
>> You can rewrite that into something like this:
>>
>> select jsonb_build_object('opening_times',
>> obj
>> ORDER BY
>>
the same minimum number (namely 0).
Should the result be just the first of the maximums (or minimums) through some
selection criterium (such as their alphabetical order), should that give each
of the tied results, or should there be a means to define that behaviour?
I suppose a combination with FIRST and LAST could solve that issue?
Regards,
Alban Hertroys
--
There is always an exception to always.
e: Fix your design.
Regards,
Alban Hertroys
--
There is always an exception to always.
r statement violating a not null
constraint.
It doesn’t matter whether you insert first or update first, either operation is
going to violate that constraint. You’re specifying a NULL value for a column
that doesn’t accept that because it has a NOT NULL constraint. That is your
problem.
Alban
current/2023-April/003446.html
Regards,
Alban Hertroys
--
There is always an exception to always.
ee how this would be possible to do with the existing type - too
> much potential breakage of existing data. Your example itself shows why
> using dash as a separator is a bad idea.
David,
I don't think he's talking about separators ;)
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
tion.
Alphabetically, '1' comes before '5', so '100' comes before '51'.
Numerically, you would get what you intended, but you're not partitioning on
numbers but on strings instead and those sort alphabetically: '1', '10', '100',
'11', '2', '3', '51', etc.
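A quick illustration of the difference, using the same values:

```sql
-- Alphabetical (string) order:
SELECT v
FROM (VALUES ('1'), ('10'), ('100'), ('11'), ('2'), ('3'), ('51')) t(v)
ORDER BY v;        -- '1', '10', '100', '11', '2', '3', '51'

-- Numeric order, after casting:
SELECT v
FROM (VALUES ('1'), ('10'), ('100'), ('11'), ('2'), ('3'), ('51')) t(v)
ORDER BY v::int;   -- 1, 2, 3, 10, 11, 51, 100
```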
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
2)/$3) *$3';
If the difference of (wac - rate) = -0.5, and you multiply that by 4 (divide by
.25), ceil will round that correctly to -2. If you then divide by 4 again, you
get -0.5 back (which is twice your bin size).
The result matches your function definition with the given parameters.
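The arithmetic described above, written out with a bin size of 0.25:

```sql
SELECT ceil(-0.5 / 0.25) * 0.25;
-- -0.5 / 0.25 = -2.0; ceil(-2.0) = -2; -2 * 0.25 = -0.5
```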
urn values) and the
language the function is implemented in. The documentation will show you that
there are several options you can provide too.
And lastly, a function requires an implementation.
Regards,
Alban Hertroys
--
There is always an exception to always.
./10/data/pg_replslot
0       ./10/data/pg_tblspc
3.5M    ./10/data/pg_stat
0       ./10/data/pg_stat_tmp
4.0K    ./10/data/pg_logical
96K     ./10/data/log
688K    ./10/data/pg_xact
376G    ./10/data/
How do I reclaim that space?
Regards,
Alban Hertroys
D: +31 (0)53 4 888 888
"Tom Lane" wrote on 2019-01-23 16:02:01:
> Alban Hertroys writes:
> > Our current development database server is running a bit low on
diskspace,
> > so I dropped an old but rather large database with the intention of
> > claiming back some space. However, the
Is there a reason not to use a relational model instead of json(b) here? I
think that is in fact considered best practice.
On Fri, 8 Mar 2019 at 15:40, Alexandru Lazarev
wrote:
> I am working on product managing and monitoring Network (NMS-like
> products).
>
> Product manages configuration of n
That seems to be a misconfigured client application that explicitly tries
to connect to a non-existent database 'admin' (via db=admin).
Instead of adding that database, it seems more logical to fix the client
configuration.
On Tue, 2 Apr 2019 at 09:53, Foo Bar wrote:
> Hello All,
>
> Ok, so mayb
s.
You could also get two random values in the lower range and only get a match
against ‘A’.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
he data changes seldom.
Not in practice, AFAIR, but I was thinking of a solution like this for small
matrices (Sudoku’s, I hate the things, but I need some solutions to a few to
help me with a number of test scripts).
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
’re simply trying to connect to the database named postgres as database
user postgres, you can instead use the command 'psql -U postgres postgres'.
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
could
get a higher cost than using a less optimal (costlier) index that’s already
cached.
Regarding lowering random_page_cost; If your index files are on SSD storage,
lowering that sufficiently (to a realistic value) could then sufficiently lower
the cost of loading that uncached index into memory, evicting the index it was
using in above plan to make room (unless other active sessions are using it).
Alban Hertroys
--
There is always an exception to always.
> On 26 Aug 2023, at 11:31, pan snowave wrote:
(…)
> pg_indent.conf
>
> test rootcce
If that is indeed the name of the file, that would explain your problem. No
doubt that it should be named pg_ident.conf instead, without the 'n'.
Alban Hertroys
--
If you can
_administrative takes up a large part of those 164G, then you probably
don’t have enough space for a 10x multiplication in size from the original
table to the new table. And that happening looks entirely possible from the
information you provided.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
o i get that all these columns that are joined are aligned, meaning if
> it starts with 1 in one column it must be 1 in the other columns. Or how
> would you assign unique keys in Postgres?
Are you perhaps asking how to define FOREIGN KEY CONSTRAINTs?
https://www.postgresql.org/docs/15/
alendar and an (quarter of an) hour per UTC day
table, but materialising that with some indexes may perform better (at the cost
of disk space). I do materialise that currently, but our database server
doesn’t have a lot of memory so I’m often not hitting the cache and performance
suffers a bit (infrastructure is about to change for the better though).
Regards,
Alban Hertroys
--
There is always an exception to always.
obviously.
Now of course there are only 2 hours a year where this happens. Our data
scientists chose to ignore the problem for simplicity’s sake and be slightly
off with their numbers on those dates.
Regards,
Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.
migrated onto a new ARM-based architecture? In that case the Homebrew uninstall
scripts won’t even work anymore - at least not w/o Rosetta 2 - as they’re
Intel-based too.
A migration assistant can also work too well, I found.
Alban Hertroys
--
There is always an exception to always.
t; "
I didn’t succeed in calling SET LOCAL TIMEZONE from within the function. Could
be I missed something, then Google (stackoverflow) pointed me to set_config().
I did manage to apply it to the second function header, which I think behaves
such that the time zone change stays within func
de to recreate the caches from
scratch from source data. Having custom code in there not under control of the
3rd party application breaks that guideline.
If they’re necessary, then so be it, but I can’t shake the feeling that we can
achieve this without custom code in the database.
Regards,
Alban Hertroys
--
There is always an exception to always.