[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Jira


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762590#comment-17762590
 ] 

Jörg Hohwiller commented on MNG-7868:
-

[~cstamas] thanks for your quick response and your suggestions!

> And one more strange thing: your stack trace does not align properly with 
> Resolver 1.9.14 that is used in Maven 3.9.4, or do I miss something here?

You are right:

[https://github.com/apache/maven-resolver/blob/4605205db7d1a3f47f8e477cec08a699b7f5ac16/maven-resolver-impl/src/main/java/org/eclipse/aether/internal/impl/synccontext/named/NamedLockFactoryAdapter.java#L219]

Actually, I ran a lot of different tests and collected logs and 
errors/exceptions. In order to report the issue, I updated to the latest Maven 
version and reproduced the error again. However, the stack trace was from a 
previous test (most probably with Maven 3.8.6).

I will enable the diagnostic feature and follow your suggestions, trying to 
reproduce the error.

What I can already say is that I can also reproduce the error when the local 
repo is populated. But I will double-check.
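
For reference, a minimal sketch of how I intend to enable the lock diagnostics 
(property names as per the Resolver named-locks documentation; I still need to 
verify the exact flags against the Resolver version in use):

{code}
# enable the named-lock diagnostic dump on failure and keep
# human-readable (file-gav) lock names
mvn -T 1C clean install \
  -Daether.named.diagnostic.enabled=true \
  -Daether.syncContext.named.nameMapper=file-gav
{code}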

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-build-cache-extension] dependabot[bot] opened a new pull request, #101: Bump com.github.tomakehurst:wiremock-jre8 from 2.35.0 to 2.35.1

2023-09-07 Thread via GitHub


dependabot[bot] opened a new pull request, #101:
URL: https://github.com/apache/maven-build-cache-extension/pull/101

   Bumps 
[com.github.tomakehurst:wiremock-jre8](https://github.com/wiremock/wiremock) 
from 2.35.0 to 2.35.1.
   
   Release notes
   Sourced from com.github.tomakehurst:wiremock-jre8's releases
   (https://github.com/wiremock/wiremock/releases).
   
   2.35.1 - Security Release
   🔒 This is a security release that addresses the following issues:
   
   - CVE-2023-41327 - Controlled SSRF through URL in the WireMock Webhooks
     Extension and WireMock Studio
     (https://github.com/wiremock/wiremock/security/advisories/GHSA-hq8w-9w8w-pmx7)
     Overall CVSS Score: 4.6 (AV:A/AC:L/PR:N/UI:R/S:U/C:N/I:L/A:L/E:F/RL:O/RC:C)
   - CVE-2023-41329 - Domain restrictions bypass via DNS rebinding in WireMock
     and WireMock Studio webhooks, proxy and recorder modes
     (https://github.com/wiremock/wiremock/security/advisories/GHSA-pmxq-pj47-j8j4)
     Overall CVSS Score: 3.9 (AV:A/AC:H/PR:H/UI:N/S:U/C:L/I:L/A:L/E:F/RL:O/RC:C)
   
   NOTE: WireMock Studio, a proprietary distribution discontinued in 2022, is
   also affected by those issues, and additionally by CVE-2023-39967
   (https://github.com/wiremock/wiremock/security/advisories/GHSA-676j-xrv3-73vc)
   - Overall CVSS Score 8.6 - "Controlled and full-read SSRF through URL
   parameter when testing a request, webhooks and proxy mode". The fixes will
   not be provided. The vendor recommends migrating to WireMock Cloud
   (https://www.wiremock.io/product), which is available as SaaS and as a
   private beta for on-premises deployments.
   
   Credits: @W0rty, @numacanedo, @Mahoney, @tomakehurst, @oleg-nenashev
   
   Commits
   - 8706343 Bumped patch version
   - 20adc25 Stop NetworkAddressRules doing DNS lookups
   - aa29d9c Make NetworkAddressRulesAdheringDnsResolver testable
   - 90a37e1 Applied DNS resolver enforcement to webhooks extension
   - d9fd0b4 Moved enforcement of network address rules to Apache client DNS resolver to a...
   - eac439f Prevent webhook calling forbidden endpoints
   - 9ba86d6 Rename poorly named method
   - ef5b722 spotless apply
   - 5412ed1 Fixed some formatting in NetworkAddressRulesTest
   - 295ad5c Added some extra NetworkAddressRules test cases
   - Additional commits viewable in the compare view:
     https://github.com/wiremock/wiremock/compare/2.35.0...2.35.1
   
   [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.github.tomakehurst:wiremock-jre8&package-manager=maven&previous-version=2.35.0&new-version=2.35.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   
   Dependabot commands and options
   
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, overwriting any edits that 
have been made to it
   - `@dependabot merge` will merge this PR after your CI passes on it
   - `@dependabot squash and merge` will squash and merge this PR after your CI 
passes on it
   - `@dependabot cancel merge` will cancel a previously requested merge and 
block automerging
   - `@dependabot reopen` will reopen this PR if it is closed
   - `@dependabot close` will close this PR and stop Dependabot recreating it. 
You can achieve the same result by closing 

[jira] [Assigned] (MENFORCER-491) ENFORCER: plugin-info and mojo pages not found

2023-09-07 Thread Slawomir Jaranowski (Jira)


 [ 
https://issues.apache.org/jira/browse/MENFORCER-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slawomir Jaranowski reassigned MENFORCER-491:
-

Assignee: Slawomir Jaranowski

> ENFORCER: plugin-info and mojo pages not found
> --
>
> Key: MENFORCER-491
> URL: https://issues.apache.org/jira/browse/MENFORCER-491
> Project: Maven Enforcer Plugin
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Jörg Hohwiller
>Assignee: Slawomir Jaranowski
>Priority: Critical
> Fix For: next-release
>
>
> I get "page not found" for these pages that should actually be there:
> https://maven.apache.org/enforcer/maven-enforcer-plugin/plugin-info.html
> https://maven.apache.org/enforcer/maven-enforcer-plugin/help-mojo.html
> ... (all other mojos) ...
> The usage page is present:
> https://maven.apache.org/enforcer/maven-enforcer-plugin/usage.html
> From there you can click "Goals" from the menu to get to the first listed 
> missing page.
> Other plugins still seem to have the generated goals overview and their 
> details pages:
> https://maven.apache.org/plugins/maven-resources-plugin/plugin-info.html
> Is enforcer using a broken version of project-info-reports, or is it using a 
> buggy custom process to publish the maven site?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MNGSITE-523) Maven Enforcer Plugin goals page not found

2023-09-07 Thread Slawomir Jaranowski (Jira)


 [ 
https://issues.apache.org/jira/browse/MNGSITE-523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slawomir Jaranowski closed MNGSITE-523.
---
Resolution: Duplicate

Thanks for the report, we know about this issue ... I will fix it soon.

> Maven Enforcer Plugin goals page not found
> --
>
> Key: MNGSITE-523
> URL: https://issues.apache.org/jira/browse/MNGSITE-523
> Project: Maven Project Web Site
>  Issue Type: Bug
>Reporter: Robert Patrick
>Priority: Critical
> Attachments: Screenshot 2023-09-06 at 2.20.30 PM.png
>
>
> It is no longer possible to navigate to the goals page for the Maven 
> Enforcer Plugin... see the attached screenshot.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-resolver-ant-tasks] cstamas opened a new pull request, #30: Add depMgt support and tests

2023-09-07 Thread via GitHub


cstamas opened a new pull request, #30:
URL: https://github.com/apache/maven-resolver-ant-tasks/pull/30

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [maven-resolver-ant-tasks] cstamas commented on pull request #15: [MRESOLVER-98] Add support for dependency management

2023-09-07 Thread via GitHub


cstamas commented on PR #15:
URL: 
https://github.com/apache/maven-resolver-ant-tasks/pull/15#issuecomment-1709665302

   superseded by https://github.com/apache/maven-resolver-ant-tasks/pull/30


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [maven-resolver-ant-tasks] cstamas closed pull request #15: [MRESOLVER-98] Add support for dependency management

2023-09-07 Thread via GitHub


cstamas closed pull request #15: [MRESOLVER-98] Add support for dependency 
management
URL: https://github.com/apache/maven-resolver-ant-tasks/pull/15


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (MRESOLVER-401) Drop use of SL, up version to 1.5.0

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-401.
-
Resolution: Fixed

> Drop use of SL, up version to 1.5.0
> ---
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (MRESOLVER-401) Drop use of SL, up version to 1.5.0

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-401:
-

Assignee: Tamas Cservenak

> Drop use of SL, up version to 1.5.0
> ---
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (MRESOLVER-400) Update to parent POM 40, reformat

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-400:
-

Assignee: Tamas Cservenak

> Update to parent POM 40, reformat
> -
>
> Key: MRESOLVER-400
> URL: https://issues.apache.org/jira/browse/MRESOLVER-400
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Update parent to POM 40, reformat sources.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MRESOLVER-402) Properly expose resolver configuration

2023-09-07 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-402:
-

 Summary: Properly expose resolver configuration
 Key: MRESOLVER-402
 URL: https://issues.apache.org/jira/browse/MRESOLVER-402
 Project: Maven Resolver
  Issue Type: Improvement
  Components: Ant Tasks
Reporter: Tamas Cservenak
 Fix For: ant-tasks-next


Make resolver fully configurable via Ant user properties or Java System 
Properties (with precedence correctly applied).
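
For illustration, a hedged sketch of how such configuration might be supplied 
(the property key and the file-lock value are real Resolver config names; 
whether the Ant tasks pick them up exactly this way is what this issue covers):

{code:xml}
<!-- sketch: resolver configuration supplied as an Ant property;
     the same key could alternatively arrive as a Java System Property,
     e.g. via ANT_OPTS="-Daether.syncContext.named.factory=..." -->
<property name="aether.syncContext.named.factory" value="file-lock"/>
{code}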



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MRESOLVER-403) Support depMgt for transitive dependencies

2023-09-07 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-403:
-

 Summary: Support depMgt for transitive dependencies
 Key: MRESOLVER-403
 URL: https://issues.apache.org/jira/browse/MRESOLVER-403
 Project: Maven Resolver
  Issue Type: Improvement
  Components: Ant Tasks
Reporter: Tamas Cservenak
 Fix For: ant-tasks-next


The depMgt section was completely ignored; make it work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-resolver-ant-tasks] cstamas merged pull request #29: [MRESOLVER-402] Fix property handling

2023-09-07 Thread via GitHub


cstamas merged PR #29:
URL: https://github.com/apache/maven-resolver-ant-tasks/pull/29


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (MRESOLVER-403) Support depMgt for transitive dependencies

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-403:
-

Assignee: Tamas Cservenak

> Support depMgt for transitive dependencies
> --
>
> Key: MRESOLVER-403
> URL: https://issues.apache.org/jira/browse/MRESOLVER-403
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> The depMgt section was completely ignored; make it work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-resolver-ant-tasks] cstamas merged pull request #30: [MRESOLVER-402] Add depMgt support and tests

2023-09-07 Thread via GitHub


cstamas merged PR #30:
URL: https://github.com/apache/maven-resolver-ant-tasks/pull/30


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (MRESOLVER-402) Properly expose resolver configuration

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-402.
-
Resolution: Fixed

> Properly expose resolver configuration
> --
>
> Key: MRESOLVER-402
> URL: https://issues.apache.org/jira/browse/MRESOLVER-402
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Make resolver fully configurable via Ant user properties or Java System 
> Properties (with precedence correctly applied).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (MRESOLVER-402) Properly expose resolver configuration

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-402?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak reassigned MRESOLVER-402:
-

Assignee: Tamas Cservenak

> Properly expose resolver configuration
> --
>
> Key: MRESOLVER-402
> URL: https://issues.apache.org/jira/browse/MRESOLVER-402
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> Make resolver fully configurable via Ant user properties or Java System 
> Properties (with precedence correctly applied).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MRESOLVER-403) Support depMgt for transitive dependencies

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MRESOLVER-403.
-
Resolution: Fixed

> Support depMgt for transitive dependencies
> --
>
> Key: MRESOLVER-403
> URL: https://issues.apache.org/jira/browse/MRESOLVER-403
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-next
>
>
> The depMgt section was completely ignored; make it work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-resolver-ant-tasks] ppkarwasz commented on pull request #15: [MRESOLVER-98] Add support for dependency management

2023-09-07 Thread via GitHub


ppkarwasz commented on PR #15:
URL: 
https://github.com/apache/maven-resolver-ant-tasks/pull/15#issuecomment-1709683439

   @cstamas,
   
   I am not using Maven Resolver any more, so feel free to take over the PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Updated] (MRESOLVER-344) Upgrade Maven to 3.9.4, Resolver 1.9.15

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-344:
--
Summary: Upgrade Maven to 3.9.4, Resolver 1.9.15  (was: Upgrade Maven to 
3.9.4)

> Upgrade Maven to 3.9.4, Resolver 1.9.15
> ---
>
> Key: MRESOLVER-344
> URL: https://issues.apache.org/jira/browse/MRESOLVER-344
> Project: Maven Resolver
>  Issue Type: Dependency upgrade
>  Components: Ant Tasks
>Reporter: Sylwester Lachiewicz
>Assignee: Sylwester Lachiewicz
>Priority: Major
> Fix For: ant-tasks-1.5.0
>
>
> Upgrade to Maven 3.9.4 and Resolver 1.9.15



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MRESOLVER-401) Drop use of SL and deprecated stuff, up version to 1.5.0

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-401:
--
Summary: Drop use of SL and deprecated stuff, up version to 1.5.0  (was: 
Drop use of SL, up version to 1.5.0)

> Drop use of SL and deprecated stuff, up version to 1.5.0
> 
>
> Key: MRESOLVER-401
> URL: https://issues.apache.org/jira/browse/MRESOLVER-401
> Project: Maven Resolver
>  Issue Type: Task
>  Components: Ant Tasks
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: ant-tasks-1.5.0
>
>
> Drop use of deprecated SL, switch to supplier (and drop other deprecated 
> uses).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762618#comment-17762618
 ] 

Tamas Cservenak commented on MNG-7868:
--

As mentioned on the linked mvnd GH issue:
* my suspicion is about "hot artifacts" (libraries commonly used across MANY 
modules; typical examples are slf4j-api and the like)
* a lock dump (emitted if lock diagnostics are enabled AND an error like the 
one you reported happens) will allow us to either prove my "hot artifacts" 
theory OR dismiss it entirely (and then look somewhere else).

Interpreting the lock diag dump requires project knowledge (it is very low 
level and will emit lock names with refs and acquired lock steps): the lock 
names (hopefully with the file-gav mapper in use, not file-hgav, which 
obfuscates/sha1-hashes the lock names) will contain {{G~A~V}} strings, and it 
is the project owner (and not me) who is able to identify "in-reactor" and 
"external dep" artifacts from them...
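
To illustrate, with the file-gav mapper the dumped lock names look roughly 
like this (the artifacts shown are made-up examples, in the same format as the 
lock file path in the stack trace quoted below):

{code}
artifact~org.slf4j~slf4j-api~2.0.7.lock                       <- external dep, a "hot artifact" candidate
artifact~com.example~some-reactor-module~1.0.0-SNAPSHOT.lock  <- in-reactor artifact
{code}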

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (DOXIA-706) Sink.text(String, SinkEventAttributes) not properly supported by Xhtml5BaseSink

2023-09-07 Thread Konrad Windszus (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konrad Windszus updated DOXIA-706:
--
Summary: Sink.text(String, SinkEventAttributes) not properly supported by 
Xhtml5BaseSink  (was: Sink.text(String, SinkEventAttributes) not properly 
supported by Xhtml5BaseSync)

> Sink.text(String, SinkEventAttributes) not properly supported by 
> Xhtml5BaseSink
> ---
>
> Key: DOXIA-706
> URL: https://issues.apache.org/jira/browse/DOXIA-706
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.0.0-M7
>Reporter: Konrad Windszus
>Priority: Major
>
> All attributes passed as second argument to {{Sink.text(String, 
> SinkEventAttributes}} are just silently disregarded in 
> https://github.com/apache/maven-doxia/blob/336a9a4030980809814783ce30197acc5f42c9a2/doxia-core/src/main/java/org/apache/maven/doxia/sink/impl/Xhtml5BaseSink.java#L1802.
> Instead, attributes like Semantics.STRONG should also be supported here (i.e. 
> its semantics should be similar to {{inline(SinkEventAttributes)}}).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Michael Osipov (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762628#comment-17762628
 ] 

Michael Osipov commented on MNG-7868:
-

Please note: 
https://maven.apache.org/resolver/maven-resolver-named-locks/analyzing-lock-issues.html

With that, you can query how long a hot artifact has been locked. I bet this 
is simply not a bug, but a high request count or lock starvation.

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (DOXIA-706) Sink.text(String, SinkEventAttributes) not properly supported by Xhtml5BaseSink

2023-09-07 Thread Konrad Windszus (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konrad Windszus updated DOXIA-706:
--
Description: 
All attributes passed as second argument to {{Sink.text(String, 
SinkEventAttributes)}} are just silently disregarded in 
https://github.com/apache/maven-doxia/blob/336a9a4030980809814783ce30197acc5f42c9a2/doxia-core/src/main/java/org/apache/maven/doxia/sink/impl/Xhtml5BaseSink.java#L1802.
Instead, attributes like Semantics.STRONG should also be supported here (i.e. 
its semantics should be similar to {{inline(SinkEventAttributes)}}).

  was:
All attributes passed as second argument to {{Sink.text(String, 
SinkEventAttributes}} are just silently disregarded in 
https://github.com/apache/maven-doxia/blob/336a9a4030980809814783ce30197acc5f42c9a2/doxia-core/src/main/java/org/apache/maven/doxia/sink/impl/Xhtml5BaseSink.java#L1802.
Instead, attributes like Semantics.STRONG should also be supported here (i.e. 
its semantics should be similar to {{inline(SinkEventAttributes)}}).


> Sink.text(String, SinkEventAttributes) not properly supported by 
> Xhtml5BaseSink
> ---
>
> Key: DOXIA-706
> URL: https://issues.apache.org/jira/browse/DOXIA-706
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.0.0-M7
>Reporter: Konrad Windszus
>Priority: Major
>
> All attributes passed as second argument to {{Sink.text(String, 
> SinkEventAttributes)}} are just silently disregarded in 
> https://github.com/apache/maven-doxia/blob/336a9a4030980809814783ce30197acc5f42c9a2/doxia-core/src/main/java/org/apache/maven/doxia/sink/impl/Xhtml5BaseSink.java#L1802.
> Instead, attributes like Semantics.STRONG should also be supported here (i.e. 
> its semantics should be similar to {{inline(SinkEventAttributes)}}).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (DOXIA-706) Sink.text(String, SinkEventAttributes) not properly supported by Xhtml5BaseSync

2023-09-07 Thread Konrad Windszus (Jira)
Konrad Windszus created DOXIA-706:
-

 Summary: Sink.text(String, SinkEventAttributes) not properly 
supported by Xhtml5BaseSync
 Key: DOXIA-706
 URL: https://issues.apache.org/jira/browse/DOXIA-706
 Project: Maven Doxia
  Issue Type: Bug
  Components: Core
Affects Versions: 2.0.0-M7
Reporter: Konrad Windszus


All attributes passed as second argument to {{Sink.text(String, 
SinkEventAttributes}} are just silently disregarded in 
https://github.com/apache/maven-doxia/blob/336a9a4030980809814783ce30197acc5f42c9a2/doxia-core/src/main/java/org/apache/maven/doxia/sink/impl/Xhtml5BaseSink.java#L1802.
Instead, attributes like Semantics.STRONG should also be supported here (i.e. 
its semantics should be similar to {{inline(SinkEventAttributes)}}).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762630#comment-17762630
 ] 

Tamas Cservenak commented on MNG-7868:
--

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the 
modules enter the "happy path" -- see the POM sketch after this list): 
* introduce a new module in the reactor, e.g. "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs instead of slf4j-api directly. This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs 
is built
* common-libs will possibly take an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (a shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.
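
A minimal POM sketch of the common-libs idea (all coordinates are made-up 
placeholders):

{code:xml}
<!-- common-libs/pom.xml: the single module that pulls the "hot" artifacts -->
<project>
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>big-reactor</artifactId>
    <version>1.0.0-SNAPSHOT</version>
  </parent>
  <artifactId>common-libs</artifactId>
  <dependencies>
    <dependency>
      <!-- version assumed to come from depMgt in the parent POM -->
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
    </dependency>
  </dependencies>
</project>
{code}

and in every other module, replace the direct slf4j-api dependency with:

{code:xml}
<dependency>
  <groupId>com.example</groupId>
  <artifactId>common-libs</artifactId>
  <version>${project.version}</version>
</dependency>
{code}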
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-resolver-ant-tasks] cstamas commented on pull request #15: [MRESOLVER-98] Add support for dependency management

2023-09-07 Thread via GitHub


cstamas commented on PR #15:
URL: 
https://github.com/apache/maven-resolver-ant-tasks/pull/15#issuecomment-1709716961

   @ppkarwasz already happened, 1.5.0 is ready for release. Anyway, thanks for 
the ice breaker PR :+1: 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762653#comment-17762653
 ] 

Tamas Cservenak commented on MNG-7868:
--

And finally, one more remark: IF my "hot artifacts" theory is right, it should 
always manifest with a "properly big" project (or one having some attributes, 
like layout or something I am not aware of yet), regardless of the lock 
implementation used (so it should happen with JVM-local locking and file 
locking, but also with Hazelcast/Redisson). This is somewhat confirmed on the 
mvnd issue.

I have to add, as was also stated on the mvnd issue, that Windows FS locking 
most probably just adds another layer of "uncertainty", so IMHO use of the 
Windows FS is NOT a prerequisite for this bug, but it may just exacerbate it.

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext.named.NamedLockFactoryAdapter$AdaptedLockSyncContext.acquire
>  (NamedLockFactoryAdapter.java:202)
> at org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolve 
> (DefaultArtifactResolver.java:271)
> at 
> org.eclipse.aether.internal.impl.DefaultArtifactResolver.resolveArtifacts 
> (DefaultArtifactResolver.java:259)
> at 
> org.eclipse.aether.internal.impl.DefaultRepositorySystem.resolveDependencies 
> (DefaultRepositorySystem.java:352)
> {code}
> See also this related discussion:
> https://github.com/apache/maven-mvnd/issues/836#issuecomment-1702488377



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 8:53 AM:
--

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the 
modules enter the "happy path"): 
* introduce a new module in the reactor, e.g. "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs. This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs 
is built
* common-libs will possibly take an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (a shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.
 


was (Author: cstamas):
Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the 
modules enter the "happy path"): 
* introduce a new module in the reactor, e.g. "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs instead of slf4j-api directly. This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs 
is built
* common-libs will possibly take an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (a shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aeth

[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 8:53 AM:
--

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the 
modules enter the "happy path"): 
* introduce a new module in the reactor, e.g. "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs. This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER common-libs 
is built
* common-libs will possibly take an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (a shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.
 


was (Author: cstamas):
Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter: inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, just to get the 
idea across; the goal is really to "prime" the local repo and let all the 
modules enter the "happy path"): 
* introduce a new module in the reactor, e.g. "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs. This will cause the following:
* all existing modules will be scheduled by the smart build AFTER common-libs 
is built
* common-libs will possibly take an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (a shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.
 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * it seems to be Windows-related (at least it mainly happens on Windows)
> * it is non-deterministic, and it is not easy to create an isolated, simple 
> project with a reproducible scenario that always results in this error. 
> However, I get it very often in my current project with many modules (500+).
> * it is not specific to the maven-install-plugin and also happens from other 
> spots in maven:
> I also got this stacktrace:
> {code}
> Suppressed: java.lang.IllegalStateException: Attempt 1: Could not acquire 
> write lock for 
> 'C:\Users\hohwille\.m2\repository\.locks\artifact~com.caucho~com.springsource.com.caucho~3.2.1.lock'
>  in 30 SECONDS
> at 
> org.eclipse.aether.internal.impl.synccontext

[GitHub] [maven-doxia] kwin opened a new pull request, #175: [DOXIA-706] Support SinkEventAttributes in Sink.text(...)

2023-09-07 Thread via GitHub


kwin opened a new pull request, #175:
URL: https://github.com/apache/maven-doxia/pull/175

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Assigned] (DOXIA-706) Sink.text(String, SinkEventAttributes) not properly supported by Xhtml5BaseSink

2023-09-07 Thread Konrad Windszus (Jira)


 [ 
https://issues.apache.org/jira/browse/DOXIA-706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konrad Windszus reassigned DOXIA-706:
-

Assignee: Konrad Windszus

> Sink.text(String, SinkEventAttributes) not properly supported by 
> Xhtml5BaseSink
> ---
>
> Key: DOXIA-706
> URL: https://issues.apache.org/jira/browse/DOXIA-706
> Project: Maven Doxia
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.0.0-M7
>Reporter: Konrad Windszus
>Assignee: Konrad Windszus
>Priority: Major
>
> All attributes passed as second argument to {{Sink.text(String, 
> SinkEventAttributes)}} are just silently disregarded in 
> https://github.com/apache/maven-doxia/blob/336a9a4030980809814783ce30197acc5f42c9a2/doxia-core/src/main/java/org/apache/maven/doxia/sink/impl/Xhtml5BaseSink.java#L1802.
> Instead, attributes like Semantics.STRONG should also be supported here (i.e. 
> its semantics should be similar to {{inline(SinkEventAttributes)}}).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven] cstamas merged pull request #1224: [MNG-7870] Undeprecate G level metadata

2023-09-07 Thread via GitHub


cstamas merged PR #1224:
URL: https://github.com/apache/maven/pull/1224


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (MNG-7870) Undeprecate wrongly deprecated repository metadata

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MNG-7870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak closed MNG-7870.

Resolution: Fixed

maven-3.9.x: 
https://github.com/apache/maven/commit/84ee422e65e86fc866c94dec162311b46a27f187
master: 
https://github.com/apache/maven/commit/1c050eee7bc21b5b6ea3d774ace255eba85e20e2

> Undeprecate wrongly deprecated repository metadata
> --
>
> Key: MNG-7870
> URL: https://issues.apache.org/jira/browse/MNG-7870
> Project: Maven
>  Issue Type: Task
>  Components: Artifacts and Repositories
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 4.0.0-alpha-8, 3.9.5
>
>
> In commit 
> https://github.com/apache/maven/commit/1af8513fa7512cf25022b249cae0f84062c5085b
>  related to MNG-7385 the modello G level metadata was deprecated (by mistake 
> I assume).
> Undo this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MNGSITE-524) "Goals" link is broken for maven-enforcer-plugin

2023-09-07 Thread Dmitry Cherniachenko (Jira)
Dmitry Cherniachenko created MNGSITE-524:


 Summary: "Goals" link is broken for maven-enforcer-plugin
 Key: MNGSITE-524
 URL: https://issues.apache.org/jira/browse/MNGSITE-524
 Project: Maven Project Web Site
  Issue Type: Bug
Reporter: Dmitry Cherniachenko


The plugin home page https://maven.apache.org/enforcer/maven-enforcer-plugin/ 
has a "Goals" link in the sidebar on the left.

The link currently points to 
https://maven.apache.org/enforcer/maven-enforcer-plugin/plugin-info.html which 
gives a "Page Not Found".

The same happens when clicking on the "enforcer:enforce" link on the home page:
https://maven.apache.org/enforcer/maven-enforcer-plugin/enforce-mojo.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)
Tamas Cservenak created MRESOLVER-404:
-

 Summary: New strategy for Hazelcast named locks
 Key: MRESOLVER-404
 URL: https://issues.apache.org/jira/browse/MRESOLVER-404
 Project: Maven Resolver
  Issue Type: Improvement
  Components: Resolver
Reporter: Tamas Cservenak


Originally (and even today, but see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO is created (or just fetched, if it already 
exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.
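A minimal, hypothetical sketch of such an indirection (Hazelcast 4/5 API; the 
class, the map name and the fixed-size slot pool are illustrative assumptions 
-- a real implementation would grow and reuse slots dynamically, behind 
Resolver's named lock SPI):

{code:java}
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.ISemaphore;
import com.hazelcast.map.IMap;

// Hypothetical sketch, not Resolver code: lock names are mapped onto a
// bounded pool of ISemaphore DOs through a distributed IMap, so the
// cluster never holds more semaphores than the pool size.
class IndirectSemaphoreProvider {
    private final HazelcastInstance hz;
    private final IMap<String, String> nameToSlot;

    IndirectSemaphoreProvider(HazelcastInstance hz) {
        this.hz = hz;
        this.nameToSlot = hz.getMap("resolver-lock-slots"); // shared by all clients
    }

    ISemaphore semaphoreFor(String lockName, int poolSize) {
        String candidate = "slot-" + Math.floorMod(lockName.hashCode(), poolSize);
        // putIfAbsent keeps the name->slot mapping consistent cluster-wide
        String existing = nameToSlot.putIfAbsent(lockName, candidate);
        return hz.getCPSubsystem().getSemaphore(existing != null ? existing : candidate);
    }
}
{code}

With this shape the cluster holds at most poolSize ISemaphore DOs, at the price 
of one distributed map lookup per lock acquisition (and of unrelated lock names 
occasionally sharing a slot).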





[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (as opposed to today; see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just 
fetched, if it already exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.


[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (as opposed to today; see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just 
fetched, if it already exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.


[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (as opposed to today; see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just 
fetched, if it already exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
there were Artifacts met by all the builds using the given cluster since 
cluster boot. With the mapping, the number of DOs would be lowered to the 
"maximum at one moment" (so if you have a large build farm that juggles 1000 
artifacts at one moment, you'd have 1000).


[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (as opposed to today; see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just 
fetched, if it already exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
there were Artifacts met by all the builds using the given cluster since 
cluster boot. With the indirection, the number of DOs would be lowered to the 
"maximum concurrently used": if you have a large build farm that is able to 
juggle 1000 artifacts at any one moment, your cluster would have 1000 
ISemaphores.


[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (as opposed to today; see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just 
fetched, if it already exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
there were Artifacts met by all the builds using the given cluster since 
cluster boot. With the indirection, the number of DOs would be lowered to the 
"maximum concurrently used": if you have a large build farm that is able to 
juggle 1000 artifacts at any one moment, your cluster would have 1000 
ISemaphores.


[jira] [Closed] (MNGSITE-524) "Goals" link is broken for maven-enforcer-plugin

2023-09-07 Thread Michael Osipov (Jira)


 [ 
https://issues.apache.org/jira/browse/MNGSITE-524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Osipov closed MNGSITE-524.
--
Resolution: Duplicate

> "Goals" link is broken for maven-enforcer-plugin
> 
>
> Key: MNGSITE-524
> URL: https://issues.apache.org/jira/browse/MNGSITE-524
> Project: Maven Project Web Site
>  Issue Type: Bug
>Reporter: Dmitry Cherniachenko
>Priority: Minor
>
> The plugin home page https://maven.apache.org/enforcer/maven-enforcer-plugin/ 
> has a "Goals" link in the sidebar on the left.
> The link currently points to 
> https://maven.apache.org/enforcer/maven-enforcer-plugin/plugin-info.html 
> which gives a "Page Not Found".
> The same happens when clicking on the "enforcer:enforce" link on the home 
> page:
> https://maven.apache.org/enforcer/maven-enforcer-plugin/enforce-mojo.html





[jira] [Updated] (MRESOLVER-404) New strategy for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Description: 
Originally (as opposed to today; see below), the Hazelcast NamedLock 
implementation worked like this:
* on lock acquire, an ISemaphore DO with the lock name is created (or just 
fetched, if it already exists) and is refCounted
* on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
(releasing HZ cluster resources)
* if, after some time, a new lock acquire happened for the same name, the 
ISemaphore DO would get re-created.

Today, the HZ NamedLocks implementation works in the following way:
* there is only one Semaphore provider implementation, the 
{{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
ISemaphore Distributed Object (DO) name and never destroys the DO.

The reason for this is historical: originally, the named locks precursor code 
was written for Hazelcast 2/3, which used "unreliable" distributed operations, 
so recreating a previously destroyed DO was possible (at the cost of that 
"unreliability").

Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
reliable, this came at the cost that DOs, once created and then destroyed, 
cannot be recreated anymore. This change was applied to 
{{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
ISemaphores (the release-semaphore method is a no-op).

But this has an important consequence: a long-running Hazelcast cluster will 
accumulate more and more ISemaphore DOs (basically one per Artifact met by all 
the builds that use this cluster to coordinate). The number of Artifacts 
existing out there is not infinite, but it is large enough -- especially if 
the cluster is shared across many different/unrelated builds -- to grow past 
any sane limit.

So, the current recommendation is to have a "large enough" dedicated Hazelcast 
cluster and use {{semaphore-hazelcast-client}} (a "thin client" that connects 
to the cluster) instead of {{semaphore-hazelcast}} (a "thick client" that puts 
the burden of running a cluster node onto the JVM process, hence onto Maven as 
well). But even then, a regular reboot of the cluster may be needed.

A proper but somewhat complicated solution would be to introduce some sort of 
indirection: create only as many ISemaphores as are needed at the moment, and 
map them onto the lock names in use at the moment (reusing unused semaphores). 
The problem is that this mapping would need to be distributed as well (so all 
clients pick it up, or perform a new mapping), and this may cause a 
performance penalty. But this could be proved only by exhaustive perf testing.

The benefit would be obvious: today the cluster holds as many ISemaphores as 
there were Artifacts met by all the builds using the given cluster since 
cluster boot. With the indirection, the number of DOs would be lowered to the 
"maximum concurrently used": if you have a large build farm that is able to 
juggle 1000 artifacts at any one moment, your cluster would have 1000 
ISemaphores.

Still, with proper "segmenting" of the clusters -- for example, splitting them 
across "related" job groups so that the Artifacts coming through them stay 
within somewhat limited boundaries -- or with some automation for regular 
cluster reboots, or by simply creating "huge enough" clusters, users may never 
hit these issues (cluster OOM).

And the current code is most probably the fastest solution; hence I just 
created this issue to have it documented, but I plan no meritorious work on 
this topic.


[jira] [Updated] (MRESOLVER-404) New strategy may be needed for Hazelcast named locks

2023-09-07 Thread Tamas Cservenak (Jira)


 [ 
https://issues.apache.org/jira/browse/MRESOLVER-404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Cservenak updated MRESOLVER-404:
--
Summary: New strategy may be needed for Hazelcast named locks  (was: New 
strategy for Hazelcast named locks)

> New strategy may be needed for Hazelcast named locks
> 
>
> Key: MRESOLVER-404
> URL: https://issues.apache.org/jira/browse/MRESOLVER-404
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Priority: Major
>
> Originally (as opposed to today; see below), the Hazelcast NamedLock 
> implementation worked like this:
> * on lock acquire, an ISemaphore DO with the lock name is created (or just 
> fetched, if it already exists) and is refCounted
> * on lock release, if the refCount shows 0 uses, the ISemaphore is destroyed 
> (releasing HZ cluster resources)
> * if, after some time, a new lock acquire happened for the same name, the 
> ISemaphore DO would get re-created.
> Today, the HZ NamedLocks implementation works in the following way:
> * there is only one Semaphore provider implementation, the 
> {{DirectHazelcastSemaphoreProvider}}, which maps the lock name 1:1 onto the 
> ISemaphore Distributed Object (DO) name and never destroys the DO.
> The reason for this is historical: originally, the named locks precursor 
> code was written for Hazelcast 2/3, which used "unreliable" distributed 
> operations, so recreating a previously destroyed DO was possible (at the 
> cost of that "unreliability").
> Since Hazelcast 4.x switched to the RAFT consensus algorithm and made things 
> reliable, this came at the cost that DOs, once created and then destroyed, 
> cannot be recreated anymore. This change was applied to 
> {{DirectHazelcastSemaphoreProvider}} as well, by simply not dropping unused 
> ISemaphores (the release-semaphore method is a no-op).
> But this has an important consequence: a long-running Hazelcast cluster will 
> accumulate more and more ISemaphore DOs (basically one per Artifact met by 
> all the builds that use this cluster to coordinate). The number of Artifacts 
> existing out there is not infinite, but it is large enough -- especially if 
> the cluster is shared across many different/unrelated builds -- to grow past 
> any sane limit.
> So, the current recommendation is to have a "large enough" dedicated 
> Hazelcast cluster and use {{semaphore-hazelcast-client}} (a "thin client" 
> that connects to the cluster) instead of {{semaphore-hazelcast}} (a "thick 
> client" that puts the burden of running a cluster node onto the JVM process, 
> hence onto Maven as well). But even then, a regular reboot of the cluster 
> may be needed.
> A proper but somewhat complicated solution would be to introduce some sort 
> of indirection: create only as many ISemaphores as are needed at the moment, 
> and map them onto the lock names in use at the moment (reusing unused 
> semaphores). The problem is that this mapping would need to be distributed 
> as well (so all clients pick it up, or perform a new mapping), and this may 
> cause a performance penalty. But this could be proved only by exhaustive 
> perf testing.
> The benefit would be obvious: today the cluster holds as many ISemaphores as 
> there were Artifacts met by all the builds using the given cluster since 
> cluster boot. With the indirection, the number of DOs would be lowered to 
> the "maximum concurrently used": if you have a large build farm that is able 
> to juggle 1000 artifacts at any one moment, your cluster would have 1000 
> ISemaphores.
> Still, with proper "segmenting" of the clusters -- for example, splitting 
> them across "related" job groups so that the Artifacts coming through them 
> stay within somewhat limited boundaries -- or with some automation for 
> regular cluster reboots, or by simply creating "huge enough" clusters, users 
> may never hit these issues (cluster OOM).
> And the current code is most probably the fastest solution; hence I just 
> created this issue to have it documented, but I plan no meritorious work on 
> this topic.





[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 11:10 AM:
---

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter -- inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, but only to get 
the idea; the goal is really to "prime" the local repo and let all the modules 
enter the "happy path"):
* introduce a new module in the reactor, like "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs. This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER 
common-libs is built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.

EDIT: Rethinking the No 1 "build refactor": it will not help... so forget it. 
Due to the nature of SyncContext, if you had the original M1(slf4j-api, lib-a) 
and M2(slf4j-api, lib-b), and with the refactoring above slf4j-api became 
"primed", the sync contexts will still become mutually exclusive, since the 
modules then need to download lib-a and lib-b respectively, and the sync 
context applies to ALL artifacts...
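To make the EDIT concrete, a plain-Java illustration (not Resolver code; the 
artifact sets are hypothetical) of why one shared "hot" artifact keeps the two 
modules' sync contexts mutually exclusive:

{code:java}
import java.util.Collections;
import java.util.Set;

// Each module's resolution locks ALL the artifacts it resolves, so the
// two key sets below conflict even though only one artifact overlaps.
class CoarseLockDemo {
    public static void main(String[] args) {
        Set<String> m1Keys = Set.of("slf4j-api", "lib-a"); // module M1
        Set<String> m2Keys = Set.of("slf4j-api", "lib-b"); // module M2
        // a single shared key is enough to serialize both contexts
        System.out.println(!Collections.disjoint(m1Keys, m2Keys)); // true
    }
}
{code}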


 

> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happens with concurrent builds: {{mvn -T ...}}
> * is seems to be windows related (at least mainly happens on windows)
> * it is in-deterministic and is not so easy to create an isolated and simple 
> project and a reproducible scenario that always results in this error. 
> However, I get this very often in my current project with many modules (500+).
> * it is not specific to the 

[jira] [Comment Edited] (MNG-7868) "Could not acquire lock(s)" error in concurrent maven builds

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MNG-7868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762630#comment-17762630
 ] 

Tamas Cservenak edited comment on MNG-7868 at 9/7/23 11:10 AM:
---

Ideas to try IF the "hot artifacts" theory proves right:
For example, assume there is a big multi-module build where each module 
depends on slf4j-api (HOW it depends does not matter -- inherited from the 
parent POM, directly referenced + depMgt, etc.; all that matters is that the 
effective POM of each module contains a direct dependency on slf4j-api).

1. Refactor the build in the following way (very simplified, but only to get 
the idea; the goal is really to "prime" the local repo and let all the modules 
enter the "happy path"):
* introduce a new module in the reactor, like "common-libs", make that one 
module depend on slf4j-api, and make all the existing modules depend on 
common-libs. This will cause the following:
* all existing modules will be scheduled by the smart builder AFTER 
common-libs is built
* common-libs will possibly use an exclusive lock to get slf4j-api, while all 
the other modules will end up on the "happy path" (shared lock, as the local 
repo is primed)

2. OR a Resolver code change to reduce the "coarseness" of SyncContext 
(currently it causes exclusion for each resolution, i.e. two contexts may each 
resolve 10 artifacts with only one overlapping -- the "hot" one), but this is 
not going to happen in the Maven 3.x/Resolver 1.x timeframe; it will most 
probably land in Resolver 2.x, which is to be picked up by Maven 4.x.

EDIT: Rethinking the No 1 "build refactor": it will not help... so forget it. 
Due to the nature of SyncContext, if you had the original M1(slf4j-api, lib-a) 
and M2(slf4j-api, lib-b), and with the refactoring above slf4j-api became 
"primed", the sync contexts will still become mutually exclusive, since the 
modules then need to download lib-a and lib-b respectively, and the sync 
context applies to ALL artifacts...



> "Could not acquire lock(s)" error in concurrent maven builds
> 
>
> Key: MNG-7868
> URL: https://issues.apache.org/jira/browse/MNG-7868
> Project: Maven
>  Issue Type: Bug
> Environment: windows, maven 3.9.4
>Reporter: Jörg Hohwiller
>Priority: Major
>
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install (default-install) 
> on project foo.bar: Execution default-install of goal 
> org.apache.maven.plugins:maven-install-plugin:3.1.1:install failed: Could not 
> acquire lock(s) -> [Help 1]
> {code}
> I am using maven 3.9.4 on windows:
> {code}
> $ mvn -v
> Apache Maven 3.9.4 (dfbb324ad4a7c8fb0bf182e6d91b0ae20e3d2dd9)
> Maven home: D:\projects\test\software\mvn
> Java version: 17.0.5, vendor: Eclipse Adoptium, runtime: 
> D:\projects\test\software\java
> Default locale: en_US, platform encoding: UTF-8
> OS name: "windows 10", version: "10.0", arch: "amd64", family: "windows"
> {code}
> I searched for this bug and found issues like MRESOLVER-332 that first look 
> identical or similar but do not really seem to be related so I decided to 
> create this issue.
> For this bug I made the following observations:
> * it only happen

[GitHub] [maven-enforcer] slawekjaranowski opened a new pull request, #286: [MENFORCER-491] Fix plugin documentation generation

2023-09-07 Thread via GitHub


slawekjaranowski opened a new pull request, #286:
URL: https://github.com/apache/maven-enforcer/pull/286

   The new maven-plugin-report-plugin should be used.
   
   ---
   
   Following this checklist to help us incorporate your 
   contribution quickly and easily:
   
- [x] Make sure there is a [JIRA 
issue](https://issues.apache.org/jira/browse/MENFORCER) filed 
  for the change (usually before you start working on it).  Trivial 
changes like typos do not 
  require a JIRA issue.  Your pull request should address just this 
issue, without 
  pulling in other changes.
- [x] Each commit in the pull request should have a meaningful subject line 
and body.
- [x] Format the pull request title like `[MENFORCER-XXX] - Fixes bug in 
ApproximateQuantiles`,
  where you replace `MENFORCER-XXX` with the appropriate JIRA issue. 
Best practice
  is to use the JIRA issue title in the pull request title and in the 
first line of the 
  commit message.
- [x] Write a pull request description that is detailed enough to 
understand what the pull request does, how, and why.
- [x] Run `mvn clean verify` to make sure basic checks pass. A more 
thorough check will 
  be performed on your pull request automatically.
- [x] You have run the integration tests successfully (`mvn -Prun-its clean 
verify`).
   
   If your pull request is about ~20 lines of code you don't need to sign an
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf); if you are unsure,
   please ask on the developers list.
   
   To make clear that you license your contribution under 
   the [Apache License Version 2.0, January 
2004](http://www.apache.org/licenses/LICENSE-2.0)
   you have to acknowledge this by using the following check-box.
   
- [x] I hereby declare this contribution to be licenced under the [Apache 
License Version 2.0, January 2004](http://www.apache.org/licenses/LICENSE-2.0)
   
- [x] In any other case, please file an [Apache Individual Contributor 
License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   





[jira] [Created] (MSHARED-1298) Missing site report should be detected

2023-09-07 Thread Slawomir Jaranowski (Jira)
Slawomir Jaranowski created MSHARED-1298:


 Summary: Missing site report should be detected
 Key: MSHARED-1298
 URL: https://issues.apache.org/jira/browse/MSHARED-1298
 Project: Maven Shared Components
  Issue Type: Bug
  Components: maven-reporting-exec
Reporter: Slawomir Jaranowski


When we define a report plugin which does not contain any report Mojo, 
we should inform with a warning or even break the build.

e.g.:

{code:xml}
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-plugin-report-plugin</artifactId>
      </plugin>
    </plugins>
  </reporting>
{code}

is silently skipped and no report is generated.







[GitHub] [maven-build-cache-extension] kbuntrock commented on a diff in pull request #91: [MBUILDCACHE-64] Exclusion mechanism bugfix

2023-09-07 Thread via GitHub


kbuntrock commented on code in PR #91:
URL: 
https://github.com/apache/maven-build-cache-extension/pull/91#discussion_r1315087882


##
src/main/java/org/apache/maven/buildcache/checksum/MavenProjectInput.java:
##
@@ -633,13 +714,7 @@ private static boolean isReadable(Path entry) throws IOException {
     }
 
     private boolean isFilteredOutSubpath(Path path) {
-        Path normalized = path.normalize();
-        for (Path filteredOutDir : filteredOutPaths) {
-            if (normalized.startsWith(filteredOutDir)) {
-                return true;
-            }
-        }
-        return false;
+        return inputExcludeDirectoryPathMatchers.stream().anyMatch(matcher -> matcher.stopTreeWalking(path));

Review Comment:
   EDIT: My response was wrong; I removed its content to improve readability.
   
   I agree with the point, but I did not want to go too far with this MR. :P 
   
   







[jira] [Commented] (MRESOLVER-387) Provide "static" supplier for RepositorySystem

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762778#comment-17762778
 ] 

Tamas Cservenak commented on MRESOLVER-387:
---

Just FTR, here is an example of how maven-resolver-ant-tasks migrated from the 
deprecated ServiceLocator (SL) to the new Supplier (introduced with this JIRA):
https://github.com/apache/maven-resolver-ant-tasks/commit/95e85e6deac3217fa905f17f836c2716a10673e7
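For reference, a minimal sketch of the consuming side (assuming the 
maven-resolver-supplier module's class introduced here; session and repository 
setup are elided):

{code:java}
import org.eclipse.aether.RepositorySystem;
import org.eclipse.aether.supplier.RepositorySystemSupplier;

// Obtain a RepositorySystem without the deprecated ServiceLocator:
// the supplier wires the default Resolver components internally.
class ResolverBootstrap {
    static RepositorySystem newRepositorySystem() {
        return new RepositorySystemSupplier().get();
    }
}
{code}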

> Provide "static" supplier for RepositorySystem
> --
>
> Key: MRESOLVER-387
> URL: https://issues.apache.org/jira/browse/MRESOLVER-387
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.15
>
>
> To provide SL replacement.
> Something like this
> https://github.com/maveniverse/mima/blob/main/runtime/standalone-static/src/main/java/eu/maveniverse/maven/mima/runtime/standalonestatic/RepositorySystemSupplier.java





[jira] [Comment Edited] (MRESOLVER-387) Provide "static" supplier for RepositorySystem

2023-09-07 Thread Tamas Cservenak (Jira)


[ 
https://issues.apache.org/jira/browse/MRESOLVER-387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17762778#comment-17762778
 ] 

Tamas Cservenak edited comment on MRESOLVER-387 at 9/7/23 3:13 PM:
---

Just FTR, here is an example of how maven-resolver-ant-tasks migrated from the 
deprecated ServiceLocator (SL) to the new Supplier (introduced with this JIRA):
https://github.com/apache/maven-resolver-ant-tasks/pull/28



> Provide "static" supplier for RepositorySystem
> --
>
> Key: MRESOLVER-387
> URL: https://issues.apache.org/jira/browse/MRESOLVER-387
> Project: Maven Resolver
>  Issue Type: Improvement
>  Components: Resolver
>Reporter: Tamas Cservenak
>Assignee: Tamas Cservenak
>Priority: Major
> Fix For: 1.9.15
>
>
> To provide SL replacement.
> Something like this
> https://github.com/maveniverse/mima/blob/main/runtime/standalone-static/src/main/java/eu/maveniverse/maven/mima/runtime/standalonestatic/RepositorySystemSupplier.java





[jira] [Created] (SUREFIRE-2193) Improve test execution time by sending tests in batch to ForkBooter

2023-09-07 Thread GATIKRUSHNA SAHU (Jira)
GATIKRUSHNA SAHU created SUREFIRE-2193:
--

 Summary: Improve test execution time by sending tests in batch to 
ForkBooter
 Key: SUREFIRE-2193
 URL: https://issues.apache.org/jira/browse/SUREFIRE-2193
 Project: Maven Surefire
  Issue Type: Improvement
Reporter: GATIKRUSHNA SAHU


Surefire is slow because of IPC: for every test case it performs the process 
below:
 # SERIALIZE the config, then DESERIALIZE it, and then wait for the result. As 
an improvement, if we could send a number of test cases in one batch, it would 
save the cost involved in IPC.
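
A hypothetical sketch of the batching idea (not Surefire's actual IPC API; the 
names are illustrative): instead of one serialize/deserialize round trip per 
test class, the plugin side would hand the forked JVM fixed-size batches:

{code:java}
import java.util.ArrayList;
import java.util.List;

class TestBatcher {
    // groups test class names so one IPC message carries a whole batch
    static List<List<String>> toBatches(List<String> testClasses, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < testClasses.size(); i += batchSize) {
            batches.add(new ArrayList<>(
                    testClasses.subList(i, Math.min(i + batchSize, testClasses.size()))));
        }
        return batches;
    }
}
{code}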





[jira] [Updated] (SUREFIRE-2193) Improve test execution time by sending tests in batch to ForkBooter

2023-09-07 Thread GATIKRUSHNA SAHU (Jira)


 [ 
https://issues.apache.org/jira/browse/SUREFIRE-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

GATIKRUSHNA SAHU updated SUREFIRE-2193:
---
Description: 
Surefire is slow because of IPC: for every test case it performs the process 
below:
 # SERIALIZE the config, then DESERIALIZE it, and then wait for the result. As 
an improvement, if we could send a number of test cases in one batch, it would 
save the cost involved in IPC.



> Improve test execution time by sending tests in batch to ForkBooter
> ---
>
> Key: SUREFIRE-2193
> URL: https://issues.apache.org/jira/browse/SUREFIRE-2193
> Project: Maven Surefire
>  Issue Type: Improvement
>Reporter: GATIKRUSHNA SAHU
>Priority: Major
>
> Surefire is slow because of IPC: for every test case it performs the 
> following process:
>  # serialize the config, then deserialize it, and then wait for the result.
> If we could send test cases in batches, it would save the cost involved in 
> the per-test IPC.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-enforcer] dependabot[bot] opened a new pull request, #287: Bump org.codehaus.plexus:plexus-classworlds from 2.5.2 to 2.7.0

2023-09-07 Thread via GitHub


dependabot[bot] opened a new pull request, #287:
URL: https://github.com/apache/maven-enforcer/pull/287

   Bumps [org.codehaus.plexus:plexus-classworlds](https://github.com/codehaus-plexus/plexus-classworlds) from 2.5.2 to 2.7.0.
   
   Release notes
   Sourced from org.codehaus.plexus:plexus-classworlds's releases 
(https://github.com/codehaus-plexus/plexus-classworlds/releases).
   
   2.7.0
   What's Changed
   
   - Bump actions/cache from v2.1.1 to v2.1.3 by @dependabot in codehaus-plexus/plexus-classworlds#13
   - Bump release-drafter/release-drafter from v5.11.0 to v5.15.0 by @dependabot in codehaus-plexus/plexus-classworlds#18
   - Bump actions/cache from v2.1.3 to v2.1.4 by @dependabot in codehaus-plexus/plexus-classworlds#16
   - Bump maven-enforcer-plugin from 1.3.1 to 1.4.1 by @dependabot in codehaus-plexus/plexus-classworlds#8
   - Bump actions/cache from v2.1.4 to v2.1.5 by @dependabot in codehaus-plexus/plexus-classworlds#21
   - Adding support for PPC64LE by @ezeeyahoo in codehaus-plexus/plexus-classworlds#6
   - Bump actions/cache from 2.1.5 to 2.1.6 by @dependabot in codehaus-plexus/plexus-classworlds#26
   - Bump actions/setup-java from 1 to 2.2.0 by @dependabot in codehaus-plexus/plexus-classworlds#30
   - Bump actions/setup-java from 2.2.0 to 2.3.0 by @dependabot in codehaus-plexus/plexus-classworlds#31
   - Bump actions/setup-java from 2.3.0 to 2.3.1 by @dependabot in codehaus-plexus/plexus-classworlds#33
   - Bump maven-dependency-plugin from 2.0 to 3.2.0 by @dependabot in codehaus-plexus/plexus-classworlds#27
   - Bump maven-javadoc-plugin from 2.9.1 to 3.3.1 by @dependabot in codehaus-plexus/plexus-classworlds#32
   - Bump actions/cache from 2.1.6 to 2.1.7 by @dependabot in codehaus-plexus/plexus-classworlds#34
   - Bump actions/setup-java from 2.3.1 to 2.4.0 by @dependabot in codehaus-plexus/plexus-classworlds#35
   - Bump actions/setup-java from 2.4.0 to 2.5.0 by @dependabot in codehaus-plexus/plexus-classworlds#37
   - Bump release-drafter/release-drafter from 5.15.0 to 5.18.0 by @dependabot in codehaus-plexus/plexus-classworlds#42
   - Bump release-drafter/release-drafter from 5.18.0 to 5.18.1 by @dependabot in codehaus-plexus/plexus-classworlds#43
   - Bump maven-enforcer-plugin from 1.4.1 to 3.0.0 by @dependabot in codehaus-plexus/plexus-classworlds#29
   - Bump plexus from 6.5 to 8 by @dependabot in codehaus-plexus/plexus-classworlds#28
   - Bump maven-bundle-plugin from 3.5.1 to 5.1.4 by @dependabot in codehaus-plexus/plexus-classworlds#38
   - Bump actions/setup-java from 2.5.0 to 3.2.0 by @dependabot in codehaus-plexus/plexus-classworlds#56
   - Bump actions/cache from 2.1.7 to 3.0.2 by @dependabot in codehaus-plexus/plexus-classworlds#53

[jira] [Assigned] (MPLUGIN-442) Get rid of deprecated XDoc format

2023-09-07 Thread Konrad Windszus (Jira)


 [ 
https://issues.apache.org/jira/browse/MPLUGIN-442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konrad Windszus reassigned MPLUGIN-442:
---

Assignee: Konrad Windszus

> Get rid of deprecated XDoc format
> -
>
> Key: MPLUGIN-442
> URL: https://issues.apache.org/jira/browse/MPLUGIN-442
> Project: Maven Plugin Tools
>  Issue Type: Bug
>  Components: Plugin Plugin
>Reporter: Konrad Windszus
>Assignee: Konrad Windszus
>Priority: Major
>
> At some time in the future m-site-p/doxia will no longer support XDoc 
> (compare with 
> https://issues.apache.org/jira/browse/DOXIA-569?focusedCommentId=17634481&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17634481).
>  Therefore the "report" goal should be converted to create "markdown" for the 
> plugin goal documentation pages instead of "XDoc" in 
> https://github.com/apache/maven-plugin-tools/blob/master/maven-plugin-tools-generators/src/main/java/org/apache/maven/tools/plugin/generator/PluginXdocGenerator.java.
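For background, report generators can emit page content through the Doxia Sink 
API and let the Sink implementation decide the concrete output format; a 
minimal sketch of such usage (simplified illustration only, not the actual 
generator code, which is in PR #225 below):

{code:java}
import org.apache.maven.doxia.sink.Sink;

public class SinkSketch {
    /** Render a tiny goal page; the Sink implementation decides the output format. */
    static void render(Sink sink) {
        sink.body();
        sink.section1();
        sink.sectionTitle1();
        sink.text("plugin:goal");
        sink.sectionTitle1_();
        sink.paragraph();
        sink.text("Description of the goal.");
        sink.paragraph_();
        sink.section1_();
        sink.body_();
        sink.flush();
        sink.close();
    }
}
{code}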



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-plugin-tools] kwin opened a new pull request, #225: [MPLUGIN-442] Generate goal documentation leveraging Sink API

2023-09-07 Thread via GitHub


kwin opened a new pull request, #225:
URL: https://github.com/apache/maven-plugin-tools/pull/225

   Drop XDoc intermediate format


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [maven-plugin-tools] kwin commented on a diff in pull request #225: [MPLUGIN-442] Generate goal documentation leveraging Sink API

2023-09-07 Thread via GitHub


kwin commented on code in PR #225:
URL: 
https://github.com/apache/maven-plugin-tools/pull/225#discussion_r1318836226


##
maven-plugin-report-plugin/src/main/java/org/apache/maven/plugin/plugin/report/GoalRenderer.java:
##
@@ -0,0 +1,543 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.maven.plugin.plugin.report;
+
+import java.io.File;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.text.MessageFormat;
+import java.util.AbstractMap.SimpleEntry;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Optional;
+import java.util.regex.Matcher;
+import java.util.regex.Pattern;
+import java.util.stream.Collectors;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.maven.doxia.sink.Sink;
+import org.apache.maven.doxia.sink.SinkFactory;
+import org.apache.maven.doxia.sink.impl.SinkEventAttributeSet.Semantics;
+import org.apache.maven.plugin.descriptor.MojoDescriptor;
+import org.apache.maven.plugin.descriptor.Parameter;
+import org.apache.maven.plugin.logging.Log;
+import org.apache.maven.project.MavenProject;
+import org.apache.maven.tools.plugin.EnhancedParameterWrapper;
+import org.apache.maven.tools.plugin.ExtendedMojoDescriptor;
+import org.apache.maven.tools.plugin.javadoc.JavadocLinkGenerator;
+import org.apache.maven.tools.plugin.util.PluginUtils;
+import org.codehaus.plexus.i18n.I18N;
+
+public class GoalRenderer extends AbstractPluginReportRenderer {
+
+    /** Regular expression matching an XHTML link with group 1 = link target, group 2 = link label. */
+    private static final Pattern HTML_LINK_PATTERN = Pattern.compile("<a href=\"(.*?)\">(.*?)</a>");
+
+    public static GoalRenderer create(
+            SinkFactory sinkFactory,
+            File outputDirectory,
+            I18N i18n,
+            Locale locale,
+            MavenProject project,
+            MojoDescriptor descriptor,
+            boolean disableInternalJavadocLinkValidation,
+            Log log)
+            throws IOException {
+        String filename = descriptor.getGoal() + "-mojo.html";
+        Sink sink = sinkFactory.createSink(outputDirectory, filename);
+        return new GoalRenderer(
+                sink, i18n, locale, project, descriptor, outputDirectory,
+                disableInternalJavadocLinkValidation, log);
+    }
+
+    /** The directory where the generated site is written. Used for resolving relative links to javadoc. */
+    private final File reportOutputDirectory;
+
+    private final MojoDescriptor descriptor;
+    private final boolean disableInternalJavadocLinkValidation;
+
+    private final Log log;
+
+    // only used from tests directly
+    GoalRenderer(
+            Sink sink,
+            I18N i18n,
+            Locale locale,
+            MavenProject project,
+            MojoDescriptor descriptor,
+            File reportOutputDirectory,
+            boolean disableInternalJavadocLinkValidation,
+            Log log) {
+        super(sink, locale, i18n, project);
+        this.reportOutputDirectory = reportOutputDirectory;
+        this.descriptor = descriptor;
+        this.disableInternalJavadocLinkValidation = disableInternalJavadocLinkValidation;
+        this.log = log;
+    }
+
+    @Override
+    public String getTitle() {
+        return descriptor.getFullGoalName();
+    }
+
+    @Override
+    protected void renderBody() {
+        startSection(descriptor.getFullGoalName());
+        renderReportNotice();
+        renderDescription(
+                "goal.fullname", descriptor.getPluginDescriptor().getId() + ":" + descriptor.getGoal(), false);
+
+        String context = "goal " + descriptor.getGoal();
+        if (StringUtils.isNotEmpty(descriptor.getDeprecated())) {
+            renderDescription("goal.deprecated", getXhtmlWithValidatedLinks(descriptor.getDeprecated(), context), true);
+        }
+        if (StringUtils.isNotEmpty(descriptor.getDescription())) {
+            renderDescription(
+                    "goal.description", getXhtmlWithValidatedLinks(descriptor.getDescription(), context), true);
+  

[jira] [Assigned] (MSHARED-1298) Missing site report should be detected

2023-09-07 Thread Michael Osipov (Jira)


 [ 
https://issues.apache.org/jira/browse/MSHARED-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Osipov reassigned MSHARED-1298:
---

Assignee: Michael Osipov

> Missing site report should be detected
> --
>
> Key: MSHARED-1298
> URL: https://issues.apache.org/jira/browse/MSHARED-1298
> Project: Maven Shared Components
>  Issue Type: Bug
>  Components: maven-reporting-exec
>Reporter: Slawomir Jaranowski
>Assignee: Michael Osipov
>Priority: Major
>
> When we define a report plugin that does not contain any report Mojo, 
> we should warn or even break the build. 
> e.g.:
> {code:xml}
>   <reporting>
>     <plugins>
>       <plugin>
>         <groupId>org.apache.maven.plugins</groupId>
>         <artifactId>maven-plugin-report-plugin</artifactId>
>       </plugin>
>     </plugins>
>   </reporting>
> {code}
> is silently skipped and no report is generated.
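A rough sketch of the kind of guard this asks for (hypothetical types and 
names; maven-reporting-exec's actual classes differ):

{code:java}
import java.util.List;

/** Hypothetical sketch of the requested check; not maven-reporting-exec's API. */
public class ReportPluginValidator {

    static void validate(String pluginKey, List<String> reportGoals, boolean failOnEmpty) {
        if (reportGoals.isEmpty()) {
            String message = "Report plugin " + pluginKey + " does not contain any report goal";
            if (failOnEmpty) {
                // The stricter option: break the build.
                throw new IllegalStateException(message);
            }
            // Or at least surface a warning instead of skipping silently.
            System.out.println("[WARNING] " + message + "; it will be skipped");
        }
    }
}
{code}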



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-plugin-tools] michael-o commented on pull request #225: [MPLUGIN-442] Generate goal documentation leveraging Sink API

2023-09-07 Thread via GitHub


michael-o commented on PR #225:
URL: 
https://github.com/apache/maven-plugin-tools/pull/225#issuecomment-1710451152

   Will happily review, leave me a couple of days...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [maven-enforcer] dependabot[bot] closed pull request #287: Bump org.codehaus.plexus:plexus-classworlds from 2.5.2 to 2.7.0

2023-09-07 Thread via GitHub


dependabot[bot] closed pull request #287: Bump 
org.codehaus.plexus:plexus-classworlds from 2.5.2 to 2.7.0
URL: https://github.com/apache/maven-enforcer/pull/287


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[GitHub] [maven-enforcer] dependabot[bot] commented on pull request #287: Bump org.codehaus.plexus:plexus-classworlds from 2.5.2 to 2.7.0

2023-09-07 Thread via GitHub


dependabot[bot] commented on PR #287:
URL: https://github.com/apache/maven-enforcer/pull/287#issuecomment-1710490874

   OK, I won't notify you about org.codehaus.plexus:plexus-classworlds again, 
unless you re-open this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Closed] (MENFORCER-490) Properly declare dependencies

2023-09-07 Thread Slawomir Jaranowski (Jira)


 [ 
https://issues.apache.org/jira/browse/MENFORCER-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slawomir Jaranowski closed MENFORCER-490.
-
Resolution: Fixed

> Properly declare dependencies
> -
>
> Key: MENFORCER-490
> URL: https://issues.apache.org/jira/browse/MENFORCER-490
> Project: Maven Enforcer Plugin
>  Issue Type: Bug
>Reporter: Elliotte Rusty Harold
>Assignee: Elliotte Rusty Harold
>Priority: Minor
> Fix For: next-release
>
>
> mvn dependency:analyze reveals a number of undeclared dependencies in various 
> submodules. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Closed] (MENFORCER-491) ENFORCER: plugin-info and mojo pages not found

2023-09-07 Thread Slawomir Jaranowski (Jira)


 [ 
https://issues.apache.org/jira/browse/MENFORCER-491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slawomir Jaranowski closed MENFORCER-491.
-
Resolution: Fixed

> ENFORCER: plugin-info and mojo pages not found
> --
>
> Key: MENFORCER-491
> URL: https://issues.apache.org/jira/browse/MENFORCER-491
> Project: Maven Enforcer Plugin
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Jörg Hohwiller
>Assignee: Slawomir Jaranowski
>Priority: Critical
> Fix For: next-release
>
>
> I get "page not found" for these pages that should actually be there:
> https://maven.apache.org/enforcer/maven-enforcer-plugin/plugin-info.html
> https://maven.apache.org/enforcer/maven-enforcer-plugin/help-mojo.html
> ... (all other mojos) ...
> The usage page is present:
> https://maven.apache.org/enforcer/maven-enforcer-plugin/usage.html
> From there you can click "Goals" from the menu to get to the first listed 
> missing page.
> Other plugins still seem to have the generated goals overview and their 
> details pages:
> https://maven.apache.org/plugins/maven-resources-plugin/plugin-info.html
> Is enforcer using a broken version of project-info-reports or is it using a 
> custom process to publish the maven site that is buggy?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MENFORCER-490) Properly declare dependencies

2023-09-07 Thread Slawomir Jaranowski (Jira)


 [ 
https://issues.apache.org/jira/browse/MENFORCER-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slawomir Jaranowski updated MENFORCER-490:
--
Issue Type: Improvement  (was: Bug)

> Properly declare dependencies
> -
>
> Key: MENFORCER-490
> URL: https://issues.apache.org/jira/browse/MENFORCER-490
> Project: Maven Enforcer Plugin
>  Issue Type: Improvement
>Reporter: Elliotte Rusty Harold
>Assignee: Elliotte Rusty Harold
>Priority: Minor
> Fix For: 3.4.1
>
>
> mvn dependency:analyze reveals a number of undeclared dependencies in various 
> submodules. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (MSHARED-1298) Missing site report should be detected

2023-09-07 Thread Slawomir Jaranowski (Jira)


 [ 
https://issues.apache.org/jira/browse/MSHARED-1298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Slawomir Jaranowski updated MSHARED-1298:
-
Description: 
When we define a report plugin that does not contain any report Mojo, 
we should warn or even break the build.

e.g.:
{code:xml}
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-plugin-plugin</artifactId>
      </plugin>
    </plugins>
  </reporting>
{code}
is silently skipped and no report is generated.

  was:
When we define a report plugin that does not contain any report Mojo, 
we should warn or even break the build. 

e.g.:

{code:xml}
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-plugin-report-plugin</artifactId>
      </plugin>
    </plugins>
  </reporting>
{code}

is silently skipped and no report is generated.




> Missing site report should be detected
> --
>
> Key: MSHARED-1298
> URL: https://issues.apache.org/jira/browse/MSHARED-1298
> Project: Maven Shared Components
>  Issue Type: Bug
>  Components: maven-reporting-exec
>Reporter: Slawomir Jaranowski
>Assignee: Michael Osipov
>Priority: Major
>
> When we define a report plugin that does not contain any report Mojo, 
> we should warn or even break the build.
> e.g.:
> {code:xml}
>   <reporting>
>     <plugins>
>       <plugin>
>         <groupId>org.apache.maven.plugins</groupId>
>         <artifactId>maven-plugin-plugin</artifactId>
>       </plugin>
>     </plugins>
>   </reporting>
> {code}
> is silently skipped and no report is generated.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (MENFORCER-490) Properly declare dependencies

2023-09-07 Thread Elliotte Rusty Harold (Jira)


 [ 
https://issues.apache.org/jira/browse/MENFORCER-490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliotte Rusty Harold reopened MENFORCER-490:
-

> Properly declare dependencies
> -
>
> Key: MENFORCER-490
> URL: https://issues.apache.org/jira/browse/MENFORCER-490
> Project: Maven Enforcer Plugin
>  Issue Type: Improvement
>Reporter: Elliotte Rusty Harold
>Assignee: Elliotte Rusty Harold
>Priority: Minor
> Fix For: 3.4.1
>
>
> mvn dependency:analyze reveals a number of undeclared dependencies in various 
> submodules. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[GitHub] [maven-javadoc-plugin] dependabot[bot] opened a new pull request, #229: Bump com.google.guava:guava from 31.1-jre to 32.0.0-jre in /src/it/projects/MJAVADOC-769

2023-09-07 Thread via GitHub


dependabot[bot] opened a new pull request, #229:
URL: https://github.com/apache/maven-javadoc-plugin/pull/229

   Bumps [com.google.guava:guava](https://github.com/google/guava) from 
31.1-jre to 32.0.0-jre.
   
   Release notes
   Sourced from com.google.guava:guava's releases 
(https://github.com/google/guava/releases).
   
   32.0.0
   Maven
   
   <dependency>
     <groupId>com.google.guava</groupId>
     <artifactId>guava</artifactId>
     <version>32.0.0-jre</version>
     <!-- or, for Android: -->
     <version>32.0.0-android</version>
   </dependency>
   
   Jar files
   
   - 32.0.0-jre.jar: https://repo1.maven.org/maven2/com/google/guava/guava/32.0.0-jre/guava-32.0.0-jre.jar
   - 32.0.0-android.jar: https://repo1.maven.org/maven2/com/google/guava/guava/32.0.0-android/guava-32.0.0-android.jar
   
   Guava requires one runtime dependency 
(https://github.com/google/guava/wiki/UseGuavaInYourBuild#what-about-guavas-own-dependencies),
 which you can download here:
   
   - failureaccess-1.0.1.jar: https://repo1.maven.org/maven2/com/google/guava/failureaccess/1.0.1/failureaccess-1.0.1.jar
   
   Javadoc
   
   - 32.0.0-jre: http://guava.dev/releases/32.0.0-jre/api/docs/
   - 32.0.0-android: http://guava.dev/releases/32.0.0-android/api/docs/
   
   JDiff
   
   - 32.0.0-jre vs. 31.1-jre: http://guava.dev/releases/32.0.0-jre/api/diffs/
   - 32.0.0-android vs. 31.1-android: http://guava.dev/releases/32.0.0-android/api/diffs/
   - 32.0.0-android vs. 32.0.0-jre: http://guava.dev/releases/32.0.0-android/api/androiddiffs/
   
   Changelog
   Security fixes
   
   - Reimplemented Files.createTempDir and FileBackedOutputStream to further 
address CVE-2020-8908 (google/guava#4011) and CVE-2023-2976 (google/guava#2575). 
(feb83a1c8f)
   
   While CVE-2020-8908 was officially closed when we deprecated 
Files.createTempDir in Guava 30.0 
(https://github.com/google/guava/releases/tag/v30.0), we've heard from users 
that even recent versions of Guava have been listed as vulnerable in other 
databases of security vulnerabilities. In response, we've reimplemented the 
method (and the very rarely used FileBackedOutputStream class, which had a 
similar issue) to eliminate the insecure behavior entirely. This change could 
technically affect users in a number of different ways (discussed under 
"Incompatible changes" below), but in practice, the only problem users are 
likely to encounter is with Windows. If you are using those APIs under 
Windows, you should skip 32.0.0 and go straight to 32.0.1 
(https://github.com/google/guava/releases/tag/v32.0.1), which fixes the 
problem. (Unfortunately, we didn't think of the Windows problem until after 
the release. And while we warn that common.io in particular may not work under 
Windows (https://github.com/google/guava#important-warnings), we didn't intend 
to regress support.) Sorry for the trouble.
   
   Incompatible changes
   Although this release bumps Guava's major version number, it makes no 
binary-incompatible changes to the guava artifact.
   One change could cause issues for Windows users, and a few other changes 
could cause issues for users in more unusual situations:
   
   - The new implementations of Files.createTempDir and FileBackedOutputStream 
throw an exception under Windows (google/guava#6535). This is fixed in 32.0.1 
(https://github.com/google/guava/releases/tag/v32.0.1). Sorry for the trouble.
   - guava-gwt now requires GWT 2.10.0 (google/guava#6627, 
https://github.com/gwtproject/gwt/releases/tag/2.10.0).
   - This release makes a binary-incompatible change to a @Beta API in the 
separate artifact guava-testlib. Specifically, we changed the return type of 
TestingExecutors.sameThreadScheduledExecutor to 
ListeningScheduledExecutorService. The old return type was a package-private 
class, which caused the Kotlin compiler to produce warnings. (dafaa3e435)
   
   ... (truncated)
   
   Commits
   
   - See full diff in compare view (https://github.com/google/guava/commits)
   
   [![Dependabot compatibility 
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=com.google.guava:guava&package-manager=maven&previous-version=31.1-jre&new-version=32.0.0-jre)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
   
   Dependabot will resolve any conflicts with this PR as long as you don't 
alter it yourself. You can also trigger a rebase manually by commenting 
`@dependabot rebase`.
   
   [//]: # (dependabot-automerge-start)
   [//]: # (dependabot-automerge-end)
   
   ---
   
   Dependabot commands and options
   
   You can trigger Dependabot actions by commenting on this PR:
   - `@dependabot rebase` will rebase this PR
   - `@dependabot recreate` will recreate this PR, ove

[GitHub] [maven] w6et opened a new pull request, #1228: resolve Circular dependencies for project org.apache.maven:maven-xml-impl and rename name

2023-09-07 Thread via GitHub


w6et opened a new pull request, #1228:
URL: https://github.com/apache/maven/pull/1228

   … resolve circular dependencies for project org.apache.maven:maven-xml-impl
   
   1) Reason: there is a circular dependency:
   maven-xml-impl -> plexus-xml -> maven-xml-impl
   so plexus-xml needs to exclude maven-xml-impl.
   2) Rename 'Implementation of Maven API XML' to 'maven-xml-impl 
(Implementation of Maven API XML)'.
   
   Following this checklist to help us incorporate your
   contribution quickly and easily:
   
- [ ] Make sure there is a [JIRA 
issue](https://issues.apache.org/jira/browse/MNG) filed
  for the change (usually before you start working on it).  Trivial 
changes like typos do not
  require a JIRA issue. Your pull request should address just this 
issue, without
  pulling in other changes.
- [ ] Each commit in the pull request should have a meaningful subject line 
and body.
- [ ] Format the pull request title like `[MNG-XXX] SUMMARY`,
  where you replace `MNG-XXX` and `SUMMARY` with the appropriate JIRA 
issue.
- [ ] Also format the first line of the commit message like `[MNG-XXX] 
SUMMARY`.
  Best practice is to use the JIRA issue title in both the pull request 
title and in the first line of the commit message.
- [ ] Write a pull request description that is detailed enough to 
understand what the pull request does, how, and why.
- [ ] Run `mvn clean verify` to make sure basic checks pass. A more 
thorough check will
  be performed on your pull request automatically.
- [ ] You have run the [Core IT][core-its] successfully.
   
   If your pull request is about ~20 lines of code you don't need to sign an
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.pdf) if you are unsure
   please ask on the developers list.
   
   To make clear that you license your contribution under
   the [Apache License Version 2.0, January 
2004](http://www.apache.org/licenses/LICENSE-2.0)
   you have to acknowledge this by using the following check-box.
   
- [ ] I hereby declare this contribution to be licenced under the [Apache 
License Version 2.0, January 2004](http://www.apache.org/licenses/LICENSE-2.0)
   
- [ ] In any other case, please file an [Apache Individual Contributor 
License Agreement](https://www.apache.org/licenses/icla.pdf).
   
   [core-its]: https://maven.apache.org/core-its/core-it-suite/
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@maven.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org