[ https://issues.apache.org/jira/browse/MRESOLVER-372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17773776#comment-17773776 ]
Rich DiCroce commented on MRESOLVER-372:
----------------------------------------

I've been investigating why the most recent versions of Eclipse throw errors when trying to update snapshots, and ended up here. IMO this problem can only be properly fixed by changes to Maven Resolver, but the situation is complicated. Note that everything I write below is about the exception posted in the original description of this issue; the exception posted by Delany in the comment above is NOT the same issue.

The top-level problem is that Eclipse has an open handle to the JAR file. On Windows, this prevents the file from being replaced. So we start going down the rabbit hole: why does Eclipse have an open handle to the JAR file? I found at least three separate answers:
# m2e loads Maven plugins so they can be executed during builds
# Eclipse JDT loads the project's dependencies if annotation processing is enabled
# PMD Eclipse Plugin also loads the project's dependencies if "Enable using Java Project Build Path" is enabled

All three boil down to the same thing: they need to construct a URLClassLoader so they can load classes from the JARs, and the classloader is what actually holds the open handle.

Okay, so the problem could be solved by not holding on to the classloader. Except that's probably not feasible. An IDE needs to support incremental compilation, which means state must be retained between builds, so the classloader can't simply be discarded.

Well then, maybe the classloader itself could be changed to not hold open references to the JARs? That could cause all sorts of havoc if a JAR is replaced and some classes are loaded from the old JAR while others are loaded from the new one. I doubt the JDK maintainers would be willing to accept a change like that.

Okay, so we have to hold on to the classloader, and the classloader has to hold on to the JARs.
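To make the mechanism concrete, here is a minimal, self-contained sketch (class and file names are illustrative, not from any of the projects above) showing that a URLClassLoader keeps the JAR open for its whole lifetime, and that the handle is only released by close(). An IDE that must keep the loader alive between incremental builds therefore cannot release the handle this way; the demo just isolates what is holding the file.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class ClassLoaderHandleDemo {

    /**
     * Builds a throwaway JAR, reads from it through a URLClassLoader,
     * then closes the loader and deletes the JAR. Returns true if the
     * delete succeeded, i.e. the handle was released.
     */
    static boolean loadThenRelease() throws IOException {
        // Create a tiny JAR so the demo needs no external files.
        Path jar = Files.createTempFile("demo", ".jar");
        try (JarOutputStream out = new JarOutputStream(Files.newOutputStream(jar))) {
            out.putNextEntry(new JarEntry("marker.txt"));
            out.write("hello".getBytes());
            out.closeEntry();
        }

        URLClassLoader loader = new URLClassLoader(new URL[] { jar.toUri().toURL() });
        try {
            // Reading a resource forces the loader to open the JAR. That open
            // handle lives as long as the loader; on Windows it is exactly
            // what blocks replacing the -SNAPSHOT file.
            try (InputStream in = loader.getResourceAsStream("marker.txt")) {
                in.readAllBytes();
            }
        } finally {
            // URLClassLoader implements Closeable; closing releases the handle.
            loader.close();
        }
        Files.delete(jar); // would throw on Windows if the loader were still open
        return Files.notExists(jar);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("released: " + loadThenRelease());
    }
}
```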
What if we just copied the JARs to some temp directory? We could do that... but then every single part of the IDE that needs to load the project's plugins or dependencies would have to implement it. That could be a lot of changes in a lot of places, and if any part of the IDE didn't do it, the problem would come back and be very painful to locate.

Which brings us to the conclusion I arrived at: the right solution is to not replace the JARs at all. So I started digging through the Maven Resolver code and found the aether.artifactResolver.snapshotNormalization option. When this option is enabled (which is the default), Maven Resolver first downloads a new snapshot to a timestamped JAR file, then copies that JAR to the -SNAPSHOT file.

Okay, so we just need to disable that option and our problem is solved! Except it's not. For one thing, locally built JARs are always written to the -SNAPSHOT file. So if you have two Eclipse workspaces open, and you're building a JAR in one workspace that is needed by the other, you still have the same problem. Secondly, from reading the code, it looks like Maven Resolver always looks for the -SNAPSHOT file first, precisely because that's where a locally built JAR is always installed. So if we turn off normalization and we already have a -SNAPSHOT file, we'll never actually use any new snapshots we download from a server.

That brings us to my proposed solution: the -SNAPSHOT file must die. Instead, all snapshot JARs must have timestamped filenames so that the JAR files are always immutable. Achieving this requires some big changes though:
* maven-install-plugin would need an option to install timestamped snapshots. For backwards compatibility, this would have to be off by default.
* Maven Resolver would need to deal with timestamped, locally built snapshots. It would also have to deal with -SNAPSHOT files written by older versions, which probably means changing the way it decides which snapshot to use.
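A hedged sketch of one possible selection policy for that last point: among all eligible JARs for an artifact (timestamped snapshots plus a possible legacy -SNAPSHOT file), pick the most recently modified. This is NOT Maven Resolver's actual API; the class name, directory layout, and filename matching below are illustrative assumptions only.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class SnapshotPicker {

    /**
     * Among all eligible snapshot JARs for one artifact in a local-repository
     * version directory (timestamped files plus a possible legacy -SNAPSHOT
     * file), pick the one with the newest modification time. Returns empty
     * if no candidate exists.
     */
    static Optional<Path> newestSnapshot(Path versionDir, String artifactId, String baseVersion)
            throws IOException {
        // "1.0-SNAPSHOT" -> "1.0", so both "lib-1.0-SNAPSHOT.jar" and
        // "lib-1.0-20240101.000000-1.jar" match the prefix below.
        String version = baseVersion.replace("-SNAPSHOT", "");
        try (Stream<Path> files = Files.list(versionDir)) {
            return files
                    .filter(p -> p.getFileName().toString().startsWith(artifactId + "-" + version))
                    .filter(p -> p.getFileName().toString().endsWith(".jar"))
                    .max(Comparator.comparing((Path p) -> {
                        try {
                            return Files.getLastModifiedTime(p);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    }));
        }
    }
}
```

As noted below, this ties correctness to the sanity of the system clock, which is the main weakness of any mtime-based policy.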
Perhaps it could look at all the eligible snapshots and use the one with the newest file modification time. (Of course, then you're depending on the system clock to be sane, which isn't great either.) I'm not an expert on any of the internals here, so there may be other headaches I haven't thought of.

Even if we can get consensus on how to fix this, I don't have time to work on it right now. I've already spent more time investigating this than I should have. But I figured it was worth documenting my findings so someone else can take a crack at it.

> Download fails if file is currently in use under Windows
> --------------------------------------------------------
>
>                 Key: MRESOLVER-372
>                 URL: https://issues.apache.org/jira/browse/MRESOLVER-372
>             Project: Maven Resolver
>          Issue Type: Bug
>            Reporter: Christoph Läubrich
>            Priority: Major
>
> With the new file locking in maven-resolver there is a problem under Windows
> if the file is currently used by another process (this can for example happen
> in an IDE ...) and resolver tries to move the file:
>
> {code:java}
> Caused by: java.nio.file.AccessDeniedException: xxx-SNAPSHOT.jar.15463549870494779429.tmp -> xxxx-SNAPSHOT.jar
> 	at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89)
> 	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
> 	at java.base/sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:317)
> 	at java.base/sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:293)
> 	at java.base/java.nio.file.Files.move(Files.java:1432)
> 	at org.eclipse.aether.util.FileUtils$2.move(FileUtils.java:108)
> 	at org.eclipse.aether.internal.impl.DefaultFileProcessor.copy(DefaultFileProcessor.java:96)
> 	at org.eclipse.aether.internal.impl.DefaultFileProcessor.copy(DefaultFileProcessor.java:88)
> 	at org.eclipse.aether.internal.impl.DefaultArtifactResolver.getFile(DefaultArtifactResolver.java:490)
> 	... 30 more{code}
>
> My suggestion would be that resolver simply uses the temp file if it can't be
> moved to the final location and marks it as delete-on-exit. Even though this is
> not optimal, it at least ensures that the build does not fail, at the cost that
> the file needs to be downloaded again next time.

-- This message was sent by Atlassian Jira (v8.20.10#820010)