> On Dec 14, 2020, at 10:37 AM, Muhammad Bilal Jamil wrote:
>
> I think you can also increase the query performance by creating indexes?

OP said there was a key on the target (large) table. I’m not sure there’s much
of a win in indexing 10K ids.
select count(*) is probably not using the index that your
insert/select would, so I would not use that as a test of performance.
If customer_backup has an index, the insert/select will be
performance-limited by updating that index.
If you can do a *create table customer_backup as select ...* instead, you
avoid that index maintenance.
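The create-table-as suggestion can be sketched as below; the `id` column and the `ids_to_keep` table are assumptions for illustration, since the thread never shows which rows are retained:

```sql
-- Hypothetical sketch of the CTAS approach: build the backup table
-- without any index, so the copy is not slowed by index maintenance.
CREATE TABLE customer_backup AS
SELECT c.*
FROM customer c
WHERE c.id IN (SELECT id FROM ids_to_keep);  -- predicate is an assumption

-- An index, if one is wanted at all on ~10k rows, can be added after the copy:
-- CREATE INDEX ON customer_backup (id);
```

Creating the index after the bulk copy (rather than before) is the usual ordering, because maintaining an index row-by-row during the load is the expensive part.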
I think you can also increase the query performance by creating indexes?
On Mon, 14 Dec 2020 at 11:36, Rob Sargent wrote:
Karthik Shivashankar schrieb am 14.12.2020 um 12:38:
> I have a postgres(v9.5) table named customer holding 1 billion rows.
> It is not partitioned but it has an index against the primary key
> (integer). I need to keep a very few records (say, about 10k rows)
> and remove everything else.
Hi,
I have a postgres(v9.5) table named customer holding 1 billion rows. It is not
partitioned but it has an index against the primary key (integer). I need to
keep a very few records (say, about 10k rows) and remove everything else.
insert into customer_backup select * from customer where cust
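The question's own `insert into customer_backup select * from customer where ...` points toward a copy-then-swap pattern: copy out the ~10k rows, drop the billion-row table rather than deleting from it, and rename the copy back. A hedged sketch, assuming the kept rows can be selected by primary key (the `customer_id` predicate and the `ids_to_keep` helper table are assumptions, not from the thread):

```sql
BEGIN;

-- 1. Copy the ~10k rows to keep (predicate is hypothetical).
CREATE TABLE customer_keep AS
SELECT *
FROM customer
WHERE customer_id IN (SELECT customer_id FROM ids_to_keep);

-- 2. Drop the billion-row table instead of deleting from it;
--    DROP/TRUNCATE is far cheaper than a billion-row DELETE.
DROP TABLE customer;

-- 3. Put the kept rows back under the original name and restore the PK.
ALTER TABLE customer_keep RENAME TO customer;
ALTER TABLE customer ADD PRIMARY KEY (customer_id);

COMMIT;
```

PostgreSQL runs DDL inside transactions, so the whole swap can be atomic; note that any views, foreign keys, grants, or triggers on the original table would need to be recreated by hand in this sketch.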