I've been thinking about the scenarios where surefire forks a new JVM per test case. Right now we basically start one JVM, load the surefire provider, run the test, and then shut the JVM down. Repeat until done.
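(Roughly this, as an untested sketch; forkTemplate and the one-line "run" message are just placeholders for however we actually build the fork command line and talk to the provider:)

import java.util.List;

// Rough sketch of today's flow: one fresh JVM per test, started on demand.
public class ForkPerTestSketch {
    public static void runAll(List<String> testClasses, ProcessBuilder forkTemplate) throws Exception {
        for (String testClass : testClasses) {
            Process fork = forkTemplate.start();   // JVM startup cost (~100ms) paid here, every time
            // tell the provider in the fork what to run, over its stdin
            fork.getOutputStream().write(("run " + testClass + "\n").getBytes());
            fork.getOutputStream().close();
            fork.waitFor();                        // fork runs the test, reports back, then exits
        }
    }
}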
Since we have a 2-way comms channel between the main plugin and the fork, I'm thinking we could start N extra JVMs up front, load the surefire providers in them, and have each one wait for a "go" signal from the main process; when a fork gets "go" it runs its test and then shuts down, and the plugin starts a replacement so that N fresh JVMs are always waiting.

Starting a JVM costs approximately 100ms, so this way we'd avoid paying that 100ms penalty on every test run. I'm thinking N could be the CPU core count. We'd have a number of "started" JVMs that have not yet loaded any user code but are ready to do so the moment they receive "go". That should avoid any resource conflicts, since no user code has run in them yet.

We'd certainly be using somewhat more resources, but it should be nothing out of the ordinary?

WDYT? Am I missing something? A rough sketch of what I mean is at the bottom of this mail.

Kristian
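(Untested sketch, plain JDK APIs only. WarmForkPool, WarmForkMain and the "go <testClass>" line are just placeholders to illustrate the idea, not the actual surefire fork/booter code, and the two classes would of course live in separate source files.)

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Plugin side: keep N JVMs started (provider loaded, no user code yet) and
// hand each test run to one of them, topping the pool back up as we go.
public class WarmForkPool {

    private final BlockingQueue<Process> idleForks;
    private final ProcessBuilder forkTemplate;

    public WarmForkPool(int poolSize, ProcessBuilder forkTemplate) throws Exception {
        this.forkTemplate = forkTemplate;
        this.idleForks = new ArrayBlockingQueue<>(poolSize);
        for (int i = 0; i < poolSize; i++) {
            idleForks.put(forkTemplate.start());    // pay the ~100ms startup cost up front
        }
    }

    // Run one test in a warm fork and immediately start a replacement fork.
    public int run(String testClass) throws Exception {
        Process fork = idleForks.take();            // a JVM that is already up and waiting
        idleForks.put(forkTemplate.start());        // keep N fresh JVMs waiting
        try (Writer toFork = new OutputStreamWriter(fork.getOutputStream())) {
            toFork.write("go " + testClass + "\n"); // the "go" signal over the existing channel
        }
        return fork.waitFor();                      // fork runs the test, reports, then exits
    }
}

// --- fork side, i.e. the main class the pre-started JVM runs ---

import java.io.BufferedReader;
import java.io.InputStreamReader;

public class WarmForkMain {
    public static void main(String[] args) throws Exception {
        // ... load the surefire provider / test framework here; no user code yet ...
        BufferedReader fromPlugin = new BufferedReader(new InputStreamReader(System.in));
        String command = fromPlugin.readLine();     // idle here until "go" arrives
        if (command != null && command.startsWith("go ")) {
            String testClass = command.substring("go ".length());
            // ... run testClass via the provider and stream results back to the plugin ...
        }
        // JVM exits after the single run, same lifecycle as today
    }
}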