Greetings Listees (first time poster!)
We are considering the difference between 64-bit and 32-bit numeric datatypes. We have
source databases that are running 32-bit. They send their data to a larger
cluster that is running 64-bit. Should something special be done in order
to accommodate the difference?
Thanks Bruce. I suppose you mean that n32 -> n64 is OK, but n64 -> n32 runs the risk of a
32-bit overflow...
pg
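As a rough illustration (not from the thread) of why one direction is safe and the other is not, a minimal Python bounds check against the 32-bit signed integer range:

    # Minimal sketch: values that fit 32 bits always fit 64 bits,
    # but a 64-bit value may fall outside the 32-bit (int4) range.
    INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

    def fits_int32(value: int) -> bool:
        """Return True if value can be stored in a 32-bit signed integer."""
        return INT32_MIN <= value <= INT32_MAX

    print(fits_int32(2_000_000_000))   # True  -- n32 -> n64 is always safe
    print(fits_int32(3_000_000_000))   # False -- n64 -> n32 would overflow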
From: Bruce Momjian
Sent: Monday, August 31, 2020 11:54 AM
To: Godfrin, Philippe E
Cc: pgsql-gene...@postgresql.org
Subject: [EXTERNAL] Re: Numeric data types
On Mon, Aug 31, 2020 at 04:38:05PM
Fabulous, thanks much.
From: Bruce Momjian
Sent: Monday, August 31, 2020 4:56 PM
To: Godfrin, Philippe E
Cc: pgsql-gene...@postgresql.org
Subject: Re: [EXTERNAL] Re: Numeric data types
On Mon, Aug 31, 2020 at 05:32:23PM +0000, Godfrin, Philippe E wrote:
> Thanks Bruce, I suppose you mean
Frankly, I’m not certain; I believe the developers are using a messaging
intermediary.
pg
From: Bruce Momjian
Sent: Monday, August 31, 2020 5:19 PM
To: Godfrin, Philippe E
Cc: pgsql-gene...@postgresql.org
Subject: Re: [EXTERNAL] Re: Numeric data types
On Mon, Aug 31, 2020 at 10:14:48PM +
Very well, thanks very much.
pg
From: Bruce Momjian
Sent: Monday, August 31, 2020 5:31 PM
To: Godfrin, Philippe E
Cc: pgsql-gene...@postgresql.org
Subject: Re: [EXTERNAL] Re: Numeric data types
On Mon, Aug 31, 2020 at 10:20:51PM +0000, Godfrin, Philippe E wrote:
> Frankly, I’m not certain
Greetings,
Consider the case where you have a 'local' server from which you are working with
foreign tables, and the foreign tables are partitioned. As each of the
partitions is a table in its own right, is it correct to assume that the
table (relation) size limit of 32 TB applies to each partition? For example, pr
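For illustration, here is a minimal sketch (hypothetical names, assuming postgres_fdw and an already-defined foreign server called remote_srv) of a locally partitioned table whose partitions are foreign tables; since each partition is a relation in its own right, the 32 TB relation size limit would apply to each partition separately rather than to the parent as a whole:

    # Minimal sketch, hypothetical names; assumes postgres_fdw is installed
    # and a foreign server named remote_srv (with matching remote tables) exists.
    import psycopg2

    conn = psycopg2.connect("dbname=local_db")   # assumed connection string
    with conn, conn.cursor() as cur:
        # Partitioned parent on the local server; it holds no data itself.
        cur.execute("""
            CREATE TABLE measurements (
                ts      timestamptz NOT NULL,
                reading numeric
            ) PARTITION BY RANGE (ts)
        """)
        # A partition that is a foreign table -- its rows live on the remote
        # server, and the per-relation size limit applies to it on its own.
        cur.execute("""
            CREATE FOREIGN TABLE measurements_2020
                PARTITION OF measurements
                FOR VALUES FROM ('2020-01-01') TO ('2021-01-01')
                SERVER remote_srv
        """)
    conn.close()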
Greetings
I am inserting a large number of rows, 5, 10, 15 million. The Python code
commits every 5000 inserts. The table has partitioned children.
At first, when only a low number of rows had been inserted, the inserts would run
at a good clip, 30-50K inserts per second. Now, after inserting oh
all
Sent: Wednesday, November 24, 2021 1:20 PM
To: Godfrin, Philippe E
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance
On Wed, Nov 24, 2021 at 07:15:31PM +0000, Godfrin, Philippe E wrote:
> Greetings
> I am inserting a large number of rows, 5,10,
a certain number of records, the speed just
dropped off.
pg
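For reference, a minimal sketch (hypothetical table and column names) of batching the inserts with psycopg2's execute_values, so each round trip carries a few thousand rows instead of one INSERT statement per row:

    # Minimal sketch, hypothetical names: multi-row INSERTs via psycopg2.
    import psycopg2
    from psycopg2.extras import execute_values

    conn = psycopg2.connect("dbname=mydb")       # assumed connection string
    rows = [(i, f"payload-{i}") for i in range(100_000)]   # stand-in data

    with conn, conn.cursor() as cur:
        for start in range(0, len(rows), 5000):
            batch = rows[start:start + 5000]
            # One multi-row statement per 5000 rows rather than 5000
            # single-row INSERTs; the commit happens once when the block exits.
            execute_values(
                cur,
                "INSERT INTO events (id, payload) VALUES %s",
                batch,
            )
    conn.close()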
From: Tom Lane
Sent: Wednesday, November 24, 2021 1:32 PM
To: Godfrin, Philippe E
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance
"Godfrin, Philippe E"
mailto:philippe.godf...@n
The notion of doing COPY in blocks, and asynchronously, is very interesting
From: Gavin Roy
Sent: Wednesday, November 24, 2021 1:50 PM
To: Godfrin, Philippe E
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance
On Wed, Nov 24, 2021 at 2:15 PM Godfrin, Philippe E
> explain (analyze, buffers, verbose) and then rollback?
Yes, I'm looking into that
pg
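For completeness, a minimal sketch (hypothetical names) of that suggestion: run the INSERT under EXPLAIN (ANALYZE, BUFFERS, VERBOSE) inside a transaction and roll it back, so the timings and buffer counts are captured without keeping the rows:

    # Minimal sketch, hypothetical names: measure an INSERT, then discard it.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")       # assumed connection string
    cur = conn.cursor()
    cur.execute("""
        EXPLAIN (ANALYZE, BUFFERS, VERBOSE)
        INSERT INTO events (id, payload)
        SELECT g, 'payload-' || g FROM generate_series(1, 5000) AS g
    """)
    for (line,) in cur.fetchall():    # the plan text, one line per row
        print(line)
    conn.rollback()                   # throw the inserted rows away
    conn.close()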
-----Original Message-----
From: David Rowley
Sent: Wednesday, November 24, 2021 7:13 PM
To: Godfrin, Philippe E
Cc: Tom Lane ; pgsql-general@lists.postgresql.org
Subject: Re: [EXTERNAL] Re: Inserts and bad performance
I would like to know what separates COPY from bulk inserts…
pf
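On that point, a minimal sketch (hypothetical names) of what COPY looks like from psycopg2: the rows are streamed to the server as one continuous COPY FROM STDIN, a single command with far less per-row overhead than many individual INSERTs:

    # Minimal sketch, hypothetical names: bulk load via COPY FROM STDIN.
    import io
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")       # assumed connection string

    # In-memory CSV buffer; in practice this could be a file or chunks of a
    # larger stream fed block by block.
    buf = io.StringIO()
    for i in range(100_000):
        buf.write(f"{i},payload-{i}\n")
    buf.seek(0)

    with conn, conn.cursor() as cur:
        cur.copy_expert(
            "COPY events (id, payload) FROM STDIN WITH (FORMAT csv)",
            buf,
        )
    conn.close()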
From: Gavin Roy
Sent: Wednesday, November 24, 2021 1:50 PM
To: Godfrin, Philippe E
Cc: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance
On Wed, Nov 24, 2021 at 2:15 PM Godfrin, Philippe E
Right you are sir! I figured that out a few hours ago!
pg
From: Ron
Sent: Wednesday, November 24, 2021 10:58 PM
To: pgsql-general@lists.postgresql.org
Subject: [EXTERNAL] Re: Inserts and bad performance
On 11/24/21 1:15 PM, Godfrin, Philippe E wrote: [snip] I dropped the unique
index, rebuilt
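That sequence, as a minimal sketch (hypothetical index and table names): drop the unique index before the bulk load and rebuild it once at the end, instead of maintaining it for every inserted row:

    # Minimal sketch, hypothetical names: rebuild the unique index after loading.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")       # assumed connection string
    with conn, conn.cursor() as cur:
        cur.execute("DROP INDEX IF EXISTS events_id_uq")
        # ... bulk load here (COPY or batched INSERTs) ...
        cur.execute("CREATE UNIQUE INDEX events_id_uq ON events (id)")
    conn.close()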
On 2021-12-08 14:44:47 -0500, David Gauthier wrote:
> So far, the tables I have in my DB have relatively low numbers of records
> (most are < 10K, all are < 10M). Things have been running great in terms of
> performance. But a project is being brainstormed which may require some
> tables with 2B recs
>
>On 2021-12-10 18:04:07 +0000, Godfrin, Philippe E wrote:
>> >But in my experience the biggest problem with large tables is unstable
>> >execution plans - for most of the parameters the optimizer will choose
>> >to use an index, but for some it will err
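One simple way to see that effect, sketched with hypothetical names: EXPLAIN the same query for a rare value and for a very common one, and compare which plan the optimizer picks:

    # Minimal sketch, hypothetical names: compare plans for two parameter values.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")       # assumed connection string
    cur = conn.cursor()
    for status in ("rare_status", "very_common_status"):
        cur.execute(
            "EXPLAIN SELECT * FROM big_table WHERE status = %s", (status,)
        )
        print(f"-- status = {status}")
        for (line,) in cur.fetchall():
            print(line)
    conn.close()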