id"
to be unsatisfied.
QUESTION
Is there a way to improve this attempt and close the gap? Or is there a
completely different strategy? I was brainstorming how to lock all rows
where a column has the same value, or how to use an ARRAY, but I'm
struggling to put together a reliable solution.
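One approach that might close the gap (a sketch; the table and column names here are hypothetical) is to serialize all writers on the shared value with a transaction-level advisory lock, so every transaction touching rows in that group must take the same lock first:

```sql
BEGIN;
-- All transactions touching rows that share this value take the same
-- advisory lock first; hashtext() maps the value to a lock key.
SELECT pg_advisory_xact_lock(hashtext('shared-value'));

-- Safe to read and modify every row in the group now, including rows
-- that did not exist when the transaction started (no gap problem).
UPDATE grouped_table SET status = 'done' WHERE group_col = 'shared-value';
COMMIT;  -- the advisory lock is released automatically at commit
```

Unlike SELECT ... FOR UPDATE, this also guards against concurrent inserts into the group, as long as every writer cooperates by taking the lock.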
Thank you,
Alex
Hello,
we run multiple versions of PostgreSQL in production. Some time ago we
added new physical servers and decided to go with the latest GA release
from the pgdg APT repository, that is, PostgreSQL 16.
We encounter slow `GRANT ROLES`, taking up to 42 seconds in production,
only on the PostgreSQL 16 instances, the cl
Great, thanks a lot!
I will test it on my system.
Myself, I tried to do it in C with libpq, but got stuck at reading a LO...
On Sat, 29 Jul 2023 at 19:57, Erik Wienhold wrote:
> > “SELECT md5(lo_get(loid));” doesn't work — “large object is too large”.
> >
> > Is there any other way to do it?
Hello,
In my DB I have a large object over 4 GB in size.
I need to get its MD5 or SHA256 from within a psql query, i.e. without
exporting it to the filesystem first.
“SELECT md5(lo_get(loid));” doesn't work — “large object is too large”.
Is there any other way to do it?
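One server-side workaround (a sketch, assuming the `plpython3u` extension is available and you can create functions; the function name `lo_md5` is mine) is to hash the large object in chunks via `lo_get(loid, offset, length)`, which never materializes the whole object as a single bytea:

```sql
CREATE EXTENSION IF NOT EXISTS plpython3u;

CREATE OR REPLACE FUNCTION lo_md5(loid oid) RETURNS text AS $$
import hashlib
h = hashlib.md5()
offset = 0
chunk = 8 * 1024 * 1024  # 8 MB per round trip, well under the 1 GB bytea limit
plan = plpy.prepare("SELECT lo_get($1, $2, $3) AS c",
                    ["oid", "bigint", "integer"])
while True:
    data = plpy.execute(plan, [loid, offset, chunk])[0]["c"]
    if not data:          # empty chunk: past the end of the object
        break
    h.update(data)
    offset += len(data)
return h.hexdigest()
$$ LANGUAGE plpython3u;

-- usage: SELECT lo_md5(12345);  -- pass the loid
```

The same shape works for SHA256 by swapping `hashlib.md5()` for `hashlib.sha256()`.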
Regards,
Al
I want to make a service that gives each of my users their own PG user and
database. I want to keep them isolated from each other. There are no
special extensions installed, it's a pretty vanilla PG cluster.
Are there any considerations beyond making each person their own user and
owner of their o
region that only
hold these materialized views. (trying to avoid Memcached, Redis)
Is there a way to refresh a materialized view across servers? Maybe using
dblink?
What would be the most efficient way to keep these in sync?
Any suggestions? Would appreciate your thoughts on this.
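A dblink-based sketch (the extension must be installed; the connection string and view name below are hypothetical) lets one server drive the refresh on the others:

```sql
CREATE EXTENSION IF NOT EXISTS dblink;

-- Run the refresh remotely; repeat per server, or loop over a table of
-- connection strings in a PL/pgSQL function.
SELECT dblink_exec(
  'host=cache-region-2 dbname=app user=mv_refresher',
  'REFRESH MATERIALIZED VIEW CONCURRENTLY my_matview'
);
```

CONCURRENTLY requires a unique index on the materialized view, but it avoids blocking readers while the refresh runs.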
Thanks
Alex
notice that
the order total doesn't include the new item until it hits production.
On Wed, 19 Apr 2023 at 11:46, Tom Lane :
> Alex Bolenok writes:
> > I get why it's not working (because the statement is not allowed to see
> the
> > tuples with its own cmin), but I
Hi list,
This popped up yesterday during a discussion at the Boston PostgreSQL group
meetup, and Jesper Pedersen had advised that I post it here.
Imagine this setup:
CREATE TABLE IF NOT EXISTS mytable (id BIGSERIAL PRIMARY KEY, value TEXT
NOT NULL);
WITH insert_cte AS
(
INSER
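The truncated statement presumably resembled this sketch (the inserted value is mine), which demonstrates the surprise under discussion: the outer query does not see the row its own CTE inserts, because every part of a single statement runs with the same snapshot:

```sql
WITH insert_cte AS (
    INSERT INTO mytable (value) VALUES ('new row') RETURNING id
)
SELECT count(*) FROM mytable;
-- counts only pre-existing rows; the row from insert_cte is not visible
-- to this SELECT, though it is visible to all subsequent statements
```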
and see both sets of objects in
information_schema.
--
Alex Theodossis
a...@dossi.info
347-514-5420
You mentioned testing, and that reminds me of another benefit: it is way
faster, more reliable, and cheaper to test on the DB side. Testing logic in
SPs or SQL is much easier, especially when a use case requires a sequence
of calls. It is easier because of the DB's support for transactions. With
tr
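A minimal sketch of that pattern (the procedure and table names are hypothetical): wrap the sequence of calls in a transaction, assert on the resulting state, and roll back so the test leaves no trace:

```sql
BEGIN;

CALL create_order(42);             -- hypothetical procedure under test
CALL add_order_item(42, 'sku-1');  -- second step of the use case

DO $$
BEGIN
    ASSERT (SELECT count(*) FROM order_items WHERE order_id = 42) = 1,
           'expected exactly one item on order 42';
END $$;

ROLLBACK;  -- the database is untouched afterwards
```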
:
> On Apr 20, 2022, at 13:43 , Alex Aquino wrote:
>
>
> Agree on the lock-in comment; however, can't we say that of anything one
> depends on in the tech stack, whether that be Java vs JavaScript vs
> Python, or now AWS vs Azure vs GCP?
>
> Have always wonder
Agree on the lock-in comment; however, can't we say that of anything one
depends on in the tech stack, whether that be Java vs JavaScript vs
Python, or now AWS vs Azure vs GCP?
Have always wondered why the lock-in concern seems to be mentioned only in
light of DBs, but not any other piece
. I also tried to give different names to the
fields returned in the view, e.g. checks2, uptime2, etc., so that there
wouldn't be a conflict, but SET checks = V.checks2 or checks = checks2 also
did not work.
All works now as intended. Thanks for the hint!
Alex
INSERT INTO http_ping_uptime_stats
S
Hi,
I am trying to do an upsert using a view, but for some reason I get errors.
Everything works fine without the ON CONFLICT clause:
INSERT INTO http_stats
SELECT * FROM view_http_stats AS V WHERE month = date_trunc('month', now())
ON CONFLICT (url,ip,month) DO UPDATE
SET last_update = now(),
checks
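For reference, the usual way to refer to the incoming row in the DO UPDATE branch is the EXCLUDED pseudo-table rather than renamed view columns; a sketch along the lines of the statement above:

```sql
INSERT INTO http_stats
SELECT * FROM view_http_stats AS v
WHERE month = date_trunc('month', now())
ON CONFLICT (url, ip, month) DO UPDATE
SET last_update = now(),
    checks      = EXCLUDED.checks;  -- EXCLUDED = the row proposed for insertion
```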
4:17 PM, Alban Hertroys wrote:
> > On 12 Jan 2021, at 20:54, Alex Williams valencesh...@protonmail.com wrote:
> > Hi Ingolf,
> > For comments in views, I create an unused CTE and put my comments there, e.g.
> > WITH v_comments AS (
> > SELECT 'this is my comment'
Hi Ingolf,
For comments in views, I create an unused CTE and put my comments there, e.g.
WITH v_comments AS (
SELECT 'this is my comment' AS comment
)
Alex
Sent with [ProtonMail](https://protonmail.com) Secure Email.
‐‐‐ Original Message ‐‐‐
On Thursday, January 7, 202
Thanks.
El 03/08/2020 a las 16:04, David Rowley escribió:
On Mon, 3 Aug 2020 at 21:26, alex m wrote:
I'm writing a function/extension in C for a trigger. Inside the trigger, in C, I
want to get the name of the current database. However, not via SPI_exec(),
SPI_prepare() and the like
I'm writing a function/extension in C for a trigger. Inside the trigger,
in C, I want to get the name of the current database. However, not via
SPI_exec(), SPI_prepare() and the like, but more directly, in a
faster way.
I'm aware of "current_database()" but it'll require calling it via
SP
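For what it's worth, a backend C fragment (not standalone code; it must be compiled as part of an extension against the server headers) that avoids SPI entirely: the backend already knows which database it is connected to, so no query is needed.

```c
#include "postgres.h"
#include "miscadmin.h"            /* MyDatabaseId */
#include "commands/dbcommands.h"  /* get_database_name() */

/* Callable from inside a trigger function; returns a palloc'd copy of
 * the current database's name via a syscache lookup, no SPI involved. */
static char *
current_db_name(void)
{
    return get_database_name(MyDatabaseId);
}
```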
;s defined as (-7) and in some
as (-8)? Which should I use?
El 28/07/2020 a las 03:20, David Rowley escribió:
Hi Alex,
On Tue, 28 Jul 2020 at 05:47, alex maslakov wrote:
I was suggested to use `get_primary_key_attnos` from
`src/include/catalog/pg_constraint.h`
extern Bitmapset *get_primary_key_
Now I try to remove the last field and comma ",Class"
To get Class V,Class VI,Class VII,Competitive Exam,Class VIII
Is there a function or easy way to do this?
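A regexp_replace sketch that strips the final comma-separated field by anchoring at the end of the string:

```sql
SELECT regexp_replace(
    'Class V,Class VI,Class VII,Competitive Exam,Class VIII,Class',
    ',[^,]*$',   -- the last comma and everything after it
    ''
);
-- → Class V,Class VI,Class VII,Competitive Exam,Class VIII
```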
Any help would be appreciated.
Thank you
Alex
Thanks for the suggestion. The tablefunc extension might be the easiest option.
On Thu, Apr 16, 2020 at 9:46 PM Edward Macnaghten
wrote:
> On 16/04/2020 14:36, Edward Macnaghten wrote:
> > On 16/04/2020 09:35, Alex Magnum wrote:
> >> Hi,
> >> I have a simple table with singu
Hi,
I have a simple table with signup timestamps.
What I would like to do is create a table, as shown below, that displays
the counts per hour for the past n dates.
I can do this with a function, but is there an easy way to use recursive
queries?
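Rather than recursion, a generate_series-based sketch may be enough (the table and column names are assumed); it yields one row per date and hour with zeros filled in, which can then be pivoted with tablefunc's crosstab if a matrix layout is wanted:

```sql
SELECT d::date             AS day,
       h                   AS hr,
       count(s.signup_ts)  AS signups   -- counts only matched rows, so 0 for empty hours
FROM generate_series(current_date - 6, current_date, interval '1 day') AS d
CROSS JOIN generate_series(0, 23) AS h
LEFT JOIN signups AS s
       ON s.signup_ts >= d + make_interval(hours => h)
      AND s.signup_ts <  d + make_interval(hours => h + 1)
GROUP BY 1, 2
ORDER BY 1, 2;
```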
* Counts per hour for given date*
*HR 2020-
Is there anything I can do to increase insert speeds for bytea? Currently
running Postgres 9.6.15.
I have a few tables without a bytea column and a few with one. There is a large
performance difference in inserts between the two. I'm inserting a byte[]
that's usually less than 1 MB of content. The content
For example, we have table t1 under schema s1. Can I rename it to s2.t2
with one command?
Currently I can do:
alter table s1.t1 set schema s2;
alter table s2.t1 rename to t2;
pg_default tablespace.
Thanks again to both of you!
Alex
(Just a note: The name of the actual DB / objects manually moved were renamed
for this public post)
‐‐‐ Original Message ‐‐‐
On Monday, July 15, 2019 8:33 PM, Adrian Klaver
wrote:
> On 7/15
the queries I've used from various
sources like Stack Overflow don't report the correct tablespace name.
Thanks,
Alex
‐‐‐ Original Message ‐‐‐
On Monday, July 15, 2019 3:22 PM, Adrian Klaver
wrote:
> On 7/15/19 11:35 AM, Alex Willia
s on each table (too many). I just
want to run a query that inserts into a table all the tables and their
tablespace names; then, once the above two commands (a 3rd will move the
indexes) have run, run the query again and verify everything has moved
from data2 to pg_default.
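A catalog query sketch for that inventory, relying on the fact that reltablespace = 0 means the database's default tablespace:

```sql
SELECT n.nspname AS schema,
       c.relname AS relation,
       c.relkind,                        -- r = table, i = index, m = matview
       coalesce(t.spcname, '(database default)') AS tablespace
FROM pg_class c
JOIN pg_namespace n  ON n.oid = c.relnamespace
LEFT JOIN pg_tablespace t ON t.oid = c.reltablespace
WHERE c.relkind IN ('r', 'i', 'm')
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY 1, 2;
```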
Thanks for your help in advan
ace = pg_default;" did it restore to the pg_default tablespace.
Thanks again for your help!
‐‐‐ Original Message ‐‐‐
On Wednesday, July 10, 2019 10:20 AM, Tom Lane wrote:
> Ian Barwick ian.barw...@2ndquadrant.com writes:
>
> > On 7/10/19 2:
default_tablespace = pg_default;' -f - mydatabase_test >
/tmp/mydatabase_test.log
What happens during the restore is that all tables are created on data2, not
pg_default.
Any help would be greatly appreciated.
Thanks,
Alex
Yes, they are.
On Tue, Jun 25, 2019 at 4:33 AM Rob Sargent wrote:
>
>
> On Jun 24, 2019, at 2:31 PM, Alex Magnum wrote:
>
> Hi,
> I have two arrays which I need to combine based on the individual values;
> i could do a coalesce for each field but was wondering if the
Hi,
I have two arrays which I need to combine based on the individual values;
I could do a coalesce for each field but was wondering if there is an
easier way.
array_a {a,    null, c,    d, null, f, null}  primary
array_b {null, 2,    null, 4, 5,    6, null}  secondary
result  {a,    2,    c,    d, 5,    f, null}
Any
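One way, sketched with the arrays from the example: unnest both arrays in parallel and re-aggregate the element-wise coalesce, keeping the original element order via WITH ORDINALITY:

```sql
SELECT array_agg(coalesce(a, b) ORDER BY ord) AS result
FROM unnest(
    ARRAY['a', NULL, 'c', 'd', NULL, 'f', NULL],  -- primary
    ARRAY[NULL, '2', NULL, '4', '5', '6', NULL]   -- secondary
) WITH ORDINALITY AS t(a, b, ord);
-- → {a,2,c,d,5,f,NULL}
```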
>> CREATE TABLE public.test1 (
>> x1 integer NOT NULL,
>> x2 integer NOT NULL,
>> CONSTRAINT test1_pkey PRIMARY KEY (x1) INCLUDE(x2)
>> ) PARTITION BY RANGE (x2);
>> This query works in 11.1 but fails in 11.3 with messages:
>> ERROR: insufficient columns in PRIMARY KEY constraint d
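For reference, on current 11.x releases every partition key column has to be an actual key column of the PRIMARY KEY, not merely INCLUDEd (11.1 accepting the original form appears to have been the anomaly); an adjusted version of the statement above that should work on both:

```sql
CREATE TABLE public.test1 (
    x1 integer NOT NULL,
    x2 integer NOT NULL,
    CONSTRAINT test1_pkey PRIMARY KEY (x1, x2)  -- x2 is part of the key proper
) PARTITION BY RANGE (x2);
```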
Jeremy Schneider - Thanks for that psqlrc file. Pretty informative. :-)
On Wed, May 8, 2019 at 11:55 AM Jeremy Schneider
wrote:
> On 5/6/19 23:27, Rashmi V Bharadwaj wrote:
> > Is there a SQL query or a database parameter setting that I can use from
> > an external application to determine if t
On 1/23/19 19:15, Stephen Frost wrote:
Greetings,
* Alex Morris (alex.mor...@twelvemountain.com) wrote:
This question may simply be my ignorance of what piece of the systemd /
systemctl puzzle needs attention. Any clues are appreciated.
The simplest approach is to just modify the
ss. Postgres will start, but with the
default install values and not my needed command line.
Seems like there ought to be a way to do what I need. I just haven't
found it yet. Suggestions on what systemctl magic fruits or other
system startup tool needs attention are most welcome.
Thanks in advance,
alex
rom a table check that suddenly
stopped accepting rows valid in the older version during the migration. Making
it select 'abcd ' ~ E'abcd\\s' doesn't modify the outcome, unsurprisingly.
Is it reproducible for others here as well? And given that it is, is there a
way to make both versions behave the same?
Cheers,
Alex
Thanks for the clarification
On Wed, Jun 13, 2018 at 9:32 AM, Adrian Klaver
wrote:
> On 06/13/2018 06:21 AM, Alex O'Ree wrote:
>
>> Desired behavior is to just log the error and continue the import using
>> pgdump based copy commands
>>
>
> Each COPY is at
Desired behavior is to just log the error and continue the import using
pgdump based copy commands
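An alternative to tolerating COPY errors (a sketch; the table names are mine): load each dump into a staging table first, then merge with ON CONFLICT DO NOTHING so duplicate primary keys are skipped instead of aborting the whole COPY:

```sql
-- 1. Restore the dump into a staging table with the same structure.
CREATE TABLE staging_records (LIKE records INCLUDING ALL);
-- \copy staging_records FROM 'records_from_server_a.copy'

-- 2. Merge, silently skipping rows whose primary key already exists.
INSERT INTO records
SELECT * FROM staging_records
ON CONFLICT (id) DO NOTHING;

DROP TABLE staging_records;
```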
The servers are not on the same network. Sneaker net is the only way
On Wed, Jun 13, 2018, 7:42 AM Andreas Kretschmer
wrote:
>
>
> Am 13.06.2018 um 13:17 schrieb Alex O'Ree
I have a situation with multiple postgres servers all running the same
databases and table structure. I need to periodically export the data from
each of them and then merge it all into a single server. On occasion, it's
feasible for the same record (primary key) to be stored on two or more
se
function, but I wonder if there is a simpler way.
Thanks for any help on this
Alex