On 15/10/2023 at 13:34, Javier Jimenez Shaw via gdal-dev wrote:
Hi Even. Thanks, it sounds good.
However, I see a potential problem: you use "SetCacheMax" once. We
should not forget to do that in future tests for the results to be
meaningful. GDAL's cache size defaults to a percentage of total
memory, so it can differ between environments and change over time.
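Something like this autouse fixture would pin it for every test (the
fixture name and the 256 MB figure are just illustrative):

    import pytest
    from osgeo import gdal

    @pytest.fixture(autouse=True)
    def fixed_gdal_cache():
        # GDAL sizes its block cache as a fraction of physical RAM by
        # default, so pin it to a fixed value to make timings
        # reproducible across machines.
        previous = gdal.GetCacheMax()
        gdal.SetCacheMax(256 * 1024 * 1024)  # 256 MB, arbitrary
        yield
        gdal.SetCacheMax(previous)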
Hi,
No experience with pytest-benchmark, but I maintain an unrelated project that
runs some benchmarks on CI, and here are some things worth mentioning:
- we store the results as a newline-delimited JSON file in a different
  GitHub repository (https://raw.githubusercontent.com/rust-analyzer/me…),
  along the lines sketched below
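For illustration, appending a record can be as simple as this (the
field names here are made up, not necessarily what we store):

    import json

    def append_result(path, record):
        # One JSON object per line: appends never rewrite history,
        # and consumers can stream-parse the file line by line.
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    append_result("benchmarks.ndjson", {
        "revision": "abc1234",    # hypothetical fields
        "name": "gtiff_read",
        "wall_time_s": 1.42,
    })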
Hi Even,
> With virtualization, it is hard to guarantee that other things
> happening on the host running the VM might not interfere. Even
> locally on my own machine, I initially saw strong variations in
> timings.

The advice I've come across for benchmarking is to use the minimum time
from the set of runs.
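That is, something along these lines (a rough sketch, not taken from
any particular library):

    import time

    def min_time(func, repeats=10):
        # Noise (scheduler preemption, cache pollution, neighbouring
        # VMs) only ever adds time, so the minimum over several runs
        # is the best estimate of the true cost.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            func()
            best = min(best, time.perf_counter() - start)
        return best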
Hi,
I'm experimenting with adding performance regression testing in our CI.
Currently our CI has quite extensive functional coverage, but totally
lacks performance testing. Given that we use pytest, I've spotted
pytest-benchmark (https://pytest-benchmark.readthedocs.io/en/latest/) as
a likely candidate.
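The basic usage looks simple enough; a test might be a sketch like
this (the dataset path is just a placeholder):

    from osgeo import gdal

    def test_read_band(benchmark):
        def read():
            ds = gdal.Open("data/test.tif")  # placeholder dataset
            ds.GetRasterBand(1).ReadRaster()

        # The "benchmark" fixture runs read() repeatedly and reports
        # min/mean/stddev timings for the test.
        benchmark(read)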