Advice on best way to store a large amount of data in postgresql

2023-01-09 Thread spiral
Hello, We have a table containing ~1.75 billion rows, using 170GB of storage. The table schema is the following: messages=# \d messages Table "public.messages" Column | Type | Collation | Nullable | Default --------+------+-----------+----------+--------- mid …
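A minimal sketch of the kind of table described above, assuming mid is a bigint primary key; the preview cuts off before the rest of the schema, so the second column is only a placeholder:

    -- Hypothetical reconstruction; only the mid primary key is confirmed by the post.
    CREATE TABLE messages (
        mid     bigint PRIMARY KEY,   -- the only key used for lookups
        payload text                  -- assumed stand-in for the remaining columns
    );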

Re: Advice on best way to store a large amount of data in postgresql

2023-01-09 Thread [email protected]
That’s crazy, having only 8GB of memory when you have tables over 100GB. One general rule of thumb is to have enough memory to hold the biggest index. Sent from my iPad > On Jan 9, 2023, at 3:23 AM, spiral wrote: > > Hello, > > We have a table containing ~1.75 billion rows, using 170GB storag…
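To apply that rule of thumb, one way to see how large each index on the table is (a sketch using standard catalog functions; the table name is taken from the original post):

    -- List per-index on-disk size for the messages table, largest first.
    SELECT indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_stat_user_indexes
    WHERE relname = 'messages'
    ORDER BY pg_relation_size(indexrelid) DESC;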

Re: Advice on best way to store a large amount of data in postgresql

2023-01-09 Thread Justin Pryzby
On Sun, Jan 08, 2023 at 07:02:01AM -0500, spiral wrote: > This table is used essentially as a key-value store; rows are accessed > only with the `mid` primary key. Additionally, inserted rows may only be > deleted, but never updated. > > We only run the following queries: > - INSERT INTO messages VALU…
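The access pattern spiral describes is purely by primary key. A hedged sketch of the query shapes mentioned (the preview is truncated, so the parameter placeholders and the non-key column are assumptions):

    -- Insert a new row keyed by mid.
    INSERT INTO messages (mid, payload) VALUES ($1, $2);
    -- Look up a row by its primary key.
    SELECT * FROM messages WHERE mid = $1;
    -- Rows are only ever deleted, never updated.
    DELETE FROM messages WHERE mid = $1;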

Re: Advice on best way to store a large amount of data in postgresql

2023-01-09 Thread Samed YILDIRIM
Hi Spiral, If I were you, I would absolutely consider using table partitioning. There are a couple of questions to be answered. 1. What is the rate/speed of the table's growth? 2. What is the range of values you use for the mid column to query the table? Are they generally close to each other? Or, ar…
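If the answers point toward lookups that cluster within ranges of mid, declarative range partitioning on that column is the usual approach. A minimal sketch of the partitioned variant of the table sketched earlier; the boundary values are illustrative only:

    -- Parent table partitioned by ranges of the primary key.
    CREATE TABLE messages (
        mid     bigint PRIMARY KEY,
        payload text
    ) PARTITION BY RANGE (mid);

    -- One partition per contiguous slice of mid values (boundaries are examples).
    CREATE TABLE messages_p0 PARTITION OF messages
        FOR VALUES FROM (0) TO (100000000);
    CREATE TABLE messages_p1 PARTITION OF messages
        FOR VALUES FROM (100000000) TO (200000000);

Because queries filter on mid alone, the planner can prune to a single partition per lookup, and old partitions can be detached or dropped far more cheaply than bulk DELETEs.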