Our application is a bit off the beaten path in that what matters to us is performance from a standstill to the user logging in and running a report. Caching isn't much help because a user won't run the same report twice in a row, and new data is entered between runs. If caching were an option, I would simply generate the reports overnight and they would be instantaneous.
I will try the Java 8 server VM. I tried increasing -Dderby.storage.pageCacheSize=10000, but it had no measurable effect, and tinkering with other settings didn't yield any results either. I also tried raising and lowering the JIT compilation threshold, without effect. I presume the JIT isn't much use in our case: since we aren't running stored procedures, Derby will presumably generate fresh bytecode every time, and compilation doesn't appear to take enough time, or improve performance enough, to make a difference one way or the other.

You point out something important, though: performance measurement is fraught with subtle traps. Lots of projects have unit tests, but have you seen one that has cracked the problem of performance regression testing?

I would rather get Derby fast enough that we don't have to switch to PostgreSQL. However, the performance of a database depends on how much effort one puts into it and on how good a match there happens to be between our access patterns and the database's default configuration. PostgreSQL might simply be 'lucky' with our data, but that doesn't make the performance difference any less real.

--
Øyvind Harboe - Can Zylin Consulting help on your project?
http://www.zylin.com/
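
P.S. For completeness, a minimal sketch of how I understand the page cache property can be set: as the -D flag quoted above, as a line in derby.properties in the Derby system directory, or programmatically before the engine boots. The class name here is only illustrative; the value is just the one I tried.

    // Derby reads derby.storage.* properties when the engine boots, so the
    // property must be set before the embedded driver is loaded.  This is
    // equivalent to -Dderby.storage.pageCacheSize=10000 on the command line
    // or the same line in derby.properties.
    public class ReportBoot {
        public static void main(String[] args) throws Exception {
            System.setProperty("derby.storage.pageCacheSize", "10000");
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver"); // boots the embedded engine
            // ... open the connection and run the report as usual
        }
    }

For scale: assuming the default 4 KB page size, 10000 pages works out to roughly 40 MB of cache.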
