Hi Ulrich,

Do you remove data in one big transaction? That would cause a big transaction log to be created. I also agree with Noel's suggestion to insert data in batches.

As far as compaction goes, I would keep WRITE_DELAY at its default (500 ms), or use an even smaller (but positive) value. Apart from controlling how often the auto-commit check runs, it also dictates how often the background thread tries to optimize file space by rewriting pages (evacuating pages from sparsely populated chunks) and compacting (moving chunks around). With a high rate of inserts/updates, auto-commit (not a db commit, but a chunk store operation) would happen anyway once a chunk reaches a certain size (around 1 MB). WRITE_DELAY is intended to handle cases with low update rates, where a save should be triggered (even if the new chunk is still small) to prevent data loss. So setting WRITE_DELAY to a smaller value should intensify the background file space housekeeping.

If that is still not good enough for your case and you would like to go with periodic shut-downs, use SHUTDOWN COMPACT. It performs a full database rewrite, and it's the best way to minimize disk space. A normal SHUTDOWN is limited not only by MAX_COMPACT_TIME, but also by a 16 MB cap on the size of rewrites (not very helpful in your case).

On Tuesday, March 12, 2024 at 4:44:49 AM UTC-4 Noel Grandin wrote:
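For reference, a minimal sketch of the two knobs discussed above; the WRITE_DELAY value is illustrative, not a recommendation:

```sql
-- WRITE_DELAY is in milliseconds; 500 is the default. A smaller positive
-- value runs the auto-commit check (and background chunk housekeeping)
-- more often.
SET WRITE_DELAY 200;

-- In a scheduled maintenance window: fully rewrite the database file.
-- Minimizes disk space, but takes as long as it takes (no time cap).
SHUTDOWN COMPACT;
```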
> On 3/12/2024 10:28 AM, Ulrich wrote:
>
> > My database has around 50,000 tables storing several hundred million rows. It's a kind of time series database. Data is added continuously (around 1500 rows per second) and data older than 30 days is removed once a day.
>
> I suggest (if you are not doing this already) that you move to an architecture where instead of running DELETE when removing old data, you can run DROP TABLE or TRUNCATE TABLE, which will be more efficient with H2.
>
> > As long as the automatic compaction did not show nice results, I decided to switch off the automatic compaction, set MAX_COMPACT_TIME to 30000, and shut down the database every 5 minutes using SHUTDOWN. I use SHUTDOWN instead of SHUTDOWN COMPACT to get control over the maximum time while the db is not available.
>
> Unfortunately SHUTDOWN just does not try very hard; if you want to reduce disk space you will need to use SHUTDOWN COMPACT.
>
> H2 is unfortunately not a great match for your specific use case, and I don't think there is really anything in the way of database parameters that will make a big difference.
>
> You could try batching your inserts (i.e. inserting a bunch of rows before doing a COMMIT), that sometimes helps reduce the disk usage.
>
> Regards, Noel.
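To make Noel's two suggestions concrete, here is a rough SQL sketch. All table and column names are made up for illustration; adapt them to your schema:

```sql
-- Hypothetical per-day table, so old data can be dropped instead of DELETEd.
CREATE TABLE IF NOT EXISTS samples_2024_03_12(ts TIMESTAMP, val DOUBLE);

-- Batch many rows into one transaction instead of committing per row.
SET AUTOCOMMIT OFF;
INSERT INTO samples_2024_03_12 VALUES (CURRENT_TIMESTAMP, 1.0);
INSERT INTO samples_2024_03_12 VALUES (CURRENT_TIMESTAMP, 2.0);
-- ... many more rows per batch ...
COMMIT;
SET AUTOCOMMIT ON;

-- Daily retention: drop the whole table that aged out (30 days old).
DROP TABLE IF EXISTS samples_2024_02_11;
```

Dropping or truncating a whole table frees its pages in one cheap operation, whereas a mass DELETE has to rewrite pages and leaves the file to be compacted afterwards.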
