[jira] [Commented] (SOLR-14336) Warn users running old version of Solr
[ https://issues.apache.org/jira/browse/SOLR-14336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063167#comment-17063167 ] Jan Høydahl commented on SOLR-14336: The message won't need to say that there *is* a new version. More something like "You are running a version of Solr that is more than a year old. You may be missing out on important improvements and security updates.". > Warn users running old version of Solr > -- > > Key: SOLR-14336 > URL: https://issues.apache.org/jira/browse/SOLR-14336 > Project: Solr > Issue Type: New Feature > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Jan Høydahl >Priority: Major > > There are obviously many very old Solr installs out there. People are still > reporting issues for Solr 4.x and 5.x. This is a proposal that Solr will > print a warning in the logs and display a warning in the Admin UI Dashboard > when running a (probably) outdated version. > I do not aim to "call home" to Apache to check versions, but instead parse > release date from 'solr-impl-version' string and warn if more than 1 year > old. That should be very conservative as there will almost certainly be a > handful new releases, with potential security fixes. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
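The date-based check described above (parse the release date out of the impl-version string, warn if more than a year old, no call home) can be sketched as follows. The version-string layout shown is an assumption — the real `solr-impl-version` format may differ — and `version_age_warning` is a hypothetical helper, not Solr code.

```python
from datetime import datetime, timedelta

def version_age_warning(impl_version, now, max_age_days=365):
    """Extract a build date from an impl-version-like string and return a
    warning message if the build is older than max_age_days, else None."""
    # Assumed format: "8.4.1 abcdef123 - jenkins - 2020-01-10 11:00:00"
    # (the real solr-impl-version layout may differ)
    for token in impl_version.split():
        try:
            built = datetime.strptime(token, "%Y-%m-%d")
            break
        except ValueError:
            continue
    else:
        return None  # no parsable date: stay silent rather than guess
    if now - built > timedelta(days=max_age_days):
        return ("You are running a version of Solr that is more than a year old. "
                "You may be missing out on important improvements and security updates.")
    return None
```

A one-year threshold is deliberately conservative: by then there will almost certainly be several newer releases, possibly with security fixes.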
[GitHub] [lucene-solr] janhoy commented on issue #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
janhoy commented on issue #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#issuecomment-601604990 Not sure I like the magic switch between returning actual data and returning a list of metadata based simply on whether a znode happens to have children or not. This is not deterministic. I'd rather have an explicit action, e.g. `GET /api/cluster/zookeeper/collections/_children` to list children for a node, and let plain `GET` calls always return UTF-8 data. Further, it must be possible to retrieve the metadata of any znode. The existing `/admin/zookeeper` API returns metadata (ctime etc.), UTF-8 data and children in the same response. So consider either a `GET /api/cluster/zookeeper/collections/_metadata` call, or putting the metadata in HTTP headers, e.g. `X-solr-zkmeta-ctime: foo`? An example that shows the problem is the `/collections` znode, which happens to have no children on a clean node after install. Try this:

docker run --rm -ti solr:8.4.1 bash
solr start -c
curl "http://localhost:8983/solr/admin/zookeeper?path=/collections&detail=true"

It will return `children_count:0`. Your current impl would just return the data of that node, i.e. an empty string, but after creating the first collection it would return a list, and you'd need to add `leaf=true` to get to the data, which is very inconsistent. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
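The explicit-action scheme proposed in this comment can dispatch deterministically on the path suffix alone, never on whether a znode happens to have children. A minimal sketch — the `_children`/`_metadata` suffixes come from the proposal above, while the action names and the dispatch function itself are hypothetical:

```python
def route_zk_request(path):
    """Map a request path to (action, znode_path) using explicit suffixes only.
    A plain GET always returns the node's UTF-8 data, regardless of children."""
    if path.endswith("/_children"):
        return ("list_children", path[: -len("/_children")])
    if path.endswith("/_metadata"):
        return ("get_metadata", path[: -len("/_metadata")])
    return ("get_data", path)
```

With this routing, `/collections` behaves identically on a clean install and after the first collection is created, which is exactly the inconsistency the comment objects to.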
[jira] [Commented] (SOLR-13942) /api/cluster/zk/* to fetch raw ZK data
[ https://issues.apache.org/jira/browse/SOLR-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063244#comment-17063244 ] Jan Høydahl commented on SOLR-13942: Just a quick note that if you need to script access to ZK you can probably do lots of stuff based on the verbose output of /admin/zookeeper, with some help of {{jq}}, examples:
{code}
# get list of live nodes:
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/live_nodes" | jq '.tree[0].children[].data.title'
# list files in _default configset
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/configs/_default&detail=true" | jq '.tree[0].children[].data.title'
# output content of /configs/_default/synonyms.txt
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/configs/_default/synonyms.txt&detail=true" | jq -r '.znode.data'
{code}
> /api/cluster/zk/* to fetch raw ZK data > -- > > Key: SOLR-13942 > URL: https://issues.apache.org/jira/browse/SOLR-13942 > Project: Solr > Issue Type: New Feature > Components: v2 API >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Minor > Time Spent: 3h 10m > Remaining Estimate: 0h > > example > download the {{state.json}} of > {code} > GET http://localhost:8983/api/cluster/zk/collections/gettingstarted/state.json > {code} > get a list of all children under {{/live_nodes}} > {code} > GET http://localhost:8983/api/cluster/zk/live_nodes > {code} > If the requested path is a node with children show the list of child nodes > and their meta data
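For scripting without jq, the same extraction (`.tree[0].children[].data.title`) takes a few lines of Python. The payload below is a hand-written stand-in for a real /admin/zookeeper response, so treat the exact shape as an assumption:

```python
import json

# Hand-written stand-in for a /admin/zookeeper?path=/live_nodes response.
sample = json.loads("""
{"tree": [{"data": {"title": "/live_nodes"},
           "children": [{"data": {"title": "node1:8983_solr"}},
                        {"data": {"title": "node2:8983_solr"}}]}]}
""")

def child_titles(resp):
    # Leaf nodes carry no "children" key, so default to an empty list.
    return [c["data"]["title"] for c in resp["tree"][0].get("children", [])]

print(child_titles(sample))
```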
[jira] [Comment Edited] (SOLR-13942) /api/cluster/zk/* to fetch raw ZK data
[ https://issues.apache.org/jira/browse/SOLR-13942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063244#comment-17063244 ] Jan Høydahl edited comment on SOLR-13942 at 3/20/20, 9:48 AM: -- Just a quick note that if you need to script access to ZK you can probably do lots of stuff based on the verbose output of /admin/zookeeper, with some help of {{jq}}, examples:
{code}
# get list of live nodes:
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/live_nodes" | jq -r '.tree[0].children[].data.title'
# list files in _default configset
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/configs/_default&detail=true" | jq -r '.tree[0].children[].data.title'
# output content of /configs/_default/synonyms.txt
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/configs/_default/synonyms.txt&detail=true" | jq -r '.znode.data'
{code}
was (Author: janhoy): Just a quick note that if you need to script access to ZK you can probably do lots of stuff based on the verbose output of /admin/zookeeper, with some help of {{jq}}, examples:
{code}
# get list of live nodes:
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/live_nodes" | jq '.tree[0].children[].data.title'
# list files in _default configset
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/configs/_default&detail=true" | jq '.tree[0].children[].data.title'
# output content of /configs/_default/synonyms.txt
curl -s "http://localhost:8983/solr/admin/zookeeper?path=/configs/_default/synonyms.txt&detail=true" | jq -r '.znode.data'
{code}
> /api/cluster/zk/* to fetch raw ZK data > -- > > Key: SOLR-13942 > URL: https://issues.apache.org/jira/browse/SOLR-13942 > Project: Solr > Issue Type: New Feature > Components: v2 API >Reporter: Noble Paul >Assignee: Noble Paul >Priority: Minor > Time Spent: 3h 10m > Remaining Estimate: 0h > > example > download the {{state.json}} of > {code} > GET http://localhost:8983/api/cluster/zk/collections/gettingstarted/state.json > {code} > get a list of all children under {{/live_nodes}} > {code} > GET http://localhost:8983/api/cluster/zk/live_nodes > {code} > If the requested path is a node with children show the list of child nodes > and their meta data
[GitHub] [lucene-solr] mocobeta commented on issue #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path
mocobeta commented on issue #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path URL: https://github.com/apache/lucene-solr/pull/1352#issuecomment-601614547 I mean to close this PR because we will eventually replace all relative paths here with absolute URLs, sooner or later. I'll try to make some changes to renderJavadoc task to drop all relative paths (and move all generated docs under _project_/build again).
[GitHub] [lucene-solr] mocobeta closed pull request #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path
mocobeta closed pull request #1352: LUCENE-9278: Make javadoc folder structure follow Gradle project path URL: https://github.com/apache/lucene-solr/pull/1352
[GitHub] [lucene-solr] mocobeta commented on a change in pull request #1360: LUCENE-9281: First mockup of SPIClassIterator retirement
mocobeta commented on a change in pull request #1360: LUCENE-9281: First mockup of SPIClassIterator retirement URL: https://github.com/apache/lucene-solr/pull/1360#discussion_r395549766

## File path: lucene/analysis/common/src/java/org/apache/lucene/analysis/util/AnalysisSPILoader.java ##

@@ -143,6 +144,24 @@ public S newInstance(String name, Map args) {
   public Set availableServices() {
     return originalNames;
   }
+
+  /**
+   * Looks up SPI name (static "NAME" field) with appropriate modifiers.
+   * Also it must be a String class and declared in the concrete class.
+   * @return the SPI name
+   * @throws NoSuchFieldException - if the "NAME" field is not defined.
+   * @throws IllegalAccessException - if the "NAME" field is inaccessible.
+   * @throws IllegalStateException - if the "NAME" field does not have appropriate modifiers or isn't a String field.
+   */
+  public static String lookupSPIName(Class service) throws NoSuchFieldException, IllegalAccessException, IllegalStateException {

Review comment: This could be package private (since the original method was not public and still this is called only from classes in the same package)?
[jira] [Created] (LUCENE-9285) Transient test failure in TestAddIndexes.testAddIndexesWithRollback
Michael McCandless created LUCENE-9285: -- Summary: Transient test failure in TestAddIndexes.testAddIndexesWithRollback Key: LUCENE-9285 URL: https://issues.apache.org/jira/browse/LUCENE-9285 Project: Lucene - Core Issue Type: Task Reporter: Michael McCandless Alas does not seem to reproduce for me:
{noformat}
org.apache.lucene.index.TestAddIndexes > test suite's output saved to /home/mike/src/lfd/lucene/core/build/test-results/test/outputs/OUTPUT-org.apache.lucene.index.TestAddIndexes.txt, copied below:
> java.nio.file.NoSuchFileException: _5j_Lucene85FieldsIndexfile_pointers_15.tmp
> at __randomizedtesting.SeedInfo.seed([8511E1AF630BC270:63360E40C7A4A90E]:0)
> at org.apache.lucene.store.ByteBuffersDirectory.deleteFile(ByteBuffersDirectory.java:148)
> at org.apache.lucene.store.MockDirectoryWrapper.deleteFile(MockDirectoryWrapper.java:602)
> at org.apache.lucene.store.LockValidatingDirectoryWrapper.deleteFile(LockValidatingDirectoryWrapper.java:38)
> at org.apache.lucene.index.IndexFileDeleter.deleteFile(IndexFileDeleter.java:696)
> at org.apache.lucene.index.IndexFileDeleter.deleteFiles(IndexFileDeleter.java:690)
> at org.apache.lucene.index.IndexFileDeleter.refresh(IndexFileDeleter.java:449)
> at org.apache.lucene.index.IndexWriter.rollbackInternalNoCommit(IndexWriter.java:2334)
> at org.apache.lucene.index.IndexWriter.rollbackInternal(IndexWriter.java:2275)
> at org.apache.lucene.index.IndexWriter.rollback(IndexWriter.java:2268)
> at org.apache.lucene.index.TestAddIndexes.testAddIndexesWithRollback(TestAddIndexes.java:974)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:566)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1754)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:942)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:978)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:370)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:819)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:470)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:951)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:836)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:887)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:898)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAsserti
[GitHub] [lucene-solr] janhoy commented on issue #1364: SOLR-14335: Lock Solr's memory to prevent swapping
janhoy commented on issue #1364: SOLR-14335: Lock Solr's memory to prevent swapping URL: https://github.com/apache/lucene-solr/pull/1364#issuecomment-601670345 I updated to have a separate `bootstrap` module, only generating a `solr.jar` with a single class in it. Also the MANIFEST now specifies main class and classpath, so you can run `java -jar solr.jar --module=http` and it will start Solr. This is still gradle only, so `ant server && bin/solr start` will not work...
[jira] [Updated] (SOLR-11709) JSON "Stats" Facets should support directly specifying a domain change (for filters/blockjoin/etc...)
[ https://issues.apache.org/jira/browse/SOLR-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-11709: Parent: SOLR-14006 Issue Type: Sub-task (was: Improvement) > JSON "Stats" Facets should support directly specifying a domain change (for > filters/blockjoin/etc...) > - > > Key: SOLR-11709 > URL: https://issues.apache.org/jira/browse/SOLR-11709 > Project: Solr > Issue Type: Sub-task > Components: Facet Module >Reporter: Chris M. Hostetter >Priority: Major > > AFAICT, the simple string syntax of JSON Facet Modules "statistic facets" > (ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic > with a domain change applied -- stats are always computed relative to it's > immediate parent (ie: the baseset matching the {{q}} for a top level stat, or > the constrained set if a stat is a subfacet of something else) > This means that things like the simple "fq exclusion" in StatsComponent have > no straight forward equivalent in JSON faceting. > The work around appears to be to use a {{type:"query", q:"*:*, domain:...}} > parent and specify the stats you are interested in as sub-facets... > {code} > $ curl 'http://localhost:8983/solr/techproducts/query' -d > 'q=*:*&omitHeader=true&fq={!tag=boo}id:hoss&stats=true&stats.field={!max=true > ex=boo}popularity&rows=0&json.facet={ > bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { > foo:"max(popularity)" } } }' > { > "response":{"numFound":0,"start":0,"docs":[] > }, > "facets":{ > "count":0, > "bar":{ > "count":32, > "foo":10}}, > "stats":{ > "stats_fields":{ > "popularity":{ > "max":10.0 > {code}
[jira] [Updated] (SOLR-11709) JSON "Stats" Facets should support directly specifying a domain change (for filters/blockjoin/etc...)
[ https://issues.apache.org/jira/browse/SOLR-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-11709: Description: AFAICT, the simple string syntax of JSON Facet Modules "statistic facets" (ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic with a domain change applied -- stats are always computed relative to it's immediate parent (ie: the baseset matching the {{q}} for a top level stat, or the constrained set if a stat is a subfacet of something else) This means that things like the simple "fq exclusion" in StatsComponent have no straight forward equivalent in JSON faceting. The work around appears to be to use a {{type:"query", q:"\*:\*, domain:...}} parent and specify the stats you are interested in as sub-facets... {code} $ curl 'http://localhost:8983/solr/techproducts/query' -d 'q=*:*&omitHeader=true&fq={!tag=boo}id:hoss&stats=true&stats.field={!max=true ex=boo}popularity&rows=0&json.facet={ bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { foo:"max(popularity)" } } }' { "response":{"numFound":0,"start":0,"docs":[] }, "facets":{ "count":0, "bar":{ "count":32, "foo":10}}, "stats":{ "stats_fields":{ "popularity":{ "max":10.0 {code} was: AFAICT, the simple string syntax of JSON Facet Modules "statistic facets" (ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic with a domain change applied -- stats are always computed relative to it's immediate parent (ie: the baseset matching the {{q}} for a top level stat, or the constrained set if a stat is a subfacet of something else) This means that things like the simple "fq exclusion" in StatsComponent have no straight forward equivalent in JSON faceting. The work around appears to be to use a {{type:"query", q:"*:*, domain:...}} parent and specify the stats you are interested in as sub-facets... 
{code} $ curl 'http://localhost:8983/solr/techproducts/query' -d 'q=*:*&omitHeader=true&fq={!tag=boo}id:hoss&stats=true&stats.field={!max=true ex=boo}popularity&rows=0&json.facet={ bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { foo:"max(popularity)" } } }' { "response":{"numFound":0,"start":0,"docs":[] }, "facets":{ "count":0, "bar":{ "count":32, "foo":10}}, "stats":{ "stats_fields":{ "popularity":{ "max":10.0 {code} > JSON "Stats" Facets should support directly specifying a domain change (for > filters/blockjoin/etc...) > - > > Key: SOLR-11709 > URL: https://issues.apache.org/jira/browse/SOLR-11709 > Project: Solr > Issue Type: Sub-task > Components: Facet Module >Reporter: Chris M. Hostetter >Priority: Major > > AFAICT, the simple string syntax of JSON Facet Modules "statistic facets" > (ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic > with a domain change applied -- stats are always computed relative to it's > immediate parent (ie: the baseset matching the {{q}} for a top level stat, or > the constrained set if a stat is a subfacet of something else) > This means that things like the simple "fq exclusion" in StatsComponent have > no straight forward equivalent in JSON faceting. > The work around appears to be to use a {{type:"query", q:"\*:\*, domain:...}} > parent and specify the stats you are interested in as sub-facets... 
> {code} > $ curl 'http://localhost:8983/solr/techproducts/query' -d > 'q=*:*&omitHeader=true&fq={!tag=boo}id:hoss&stats=true&stats.field={!max=true > ex=boo}popularity&rows=0&json.facet={ > bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { > foo:"max(popularity)" } } }' > { > "response":{"numFound":0,"start":0,"docs":[] > }, > "facets":{ > "count":0, > "bar":{ > "count":32, > "foo":10}}, > "stats":{ > "stats_fields":{ > "popularity":{ > "max":10.0 > {code}
[jira] [Resolved] (SOLR-14350) Reproducing failure in JsonRequestApiTest
[ https://issues.apache.org/jira/browse/SOLR-14350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N resolved SOLR-14350. - Fix Version/s: master (9.0) Resolution: Fixed > Reproducing failure in JsonRequestApiTest > - > > Key: SOLR-14350 > URL: https://issues.apache.org/jira/browse/SOLR-14350 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Munendra S N >Priority: Major > Fix For: master (9.0) > > > ant test -Dtestcase=JsonRequestApiTest -Dtests.seed=2E1298D05407991 > -Dtests.file.encoding=ISO-8859-1 > > On current master. Fails with gradle too: > ./gradlew :solr:solrj:test --tests > "org.apache.solr.client.ref_guide_examples.JsonRequestApiTest" > -Ptests.seed=2E1298D05407991 -Ptests.file.encoding=ISO-8859-1 > > 2> 5655 INFO (TEST-JsonRequestApiTest.testStatFacet1-seed#[2E1298D05407991]) > [ ] o.a.s.SolrTestCaseJ4 ###Ending testStatFacet1 > > java.lang.AssertionError: expected: java.lang.Integer<3> but was: > java.lang.Long<3> > > at __randomizedtesting.SeedInfo.seed([2E1298D05407991:185168F18FADF3BB]:0) > > at org.junit.Assert.fail(Assert.java:88) > > at org.junit.Assert.failNotEquals(Assert.java:834) > > at org.junit.Assert.assertEquals(Assert.java:118) > > at org.junit.Assert.assertEquals(Assert.java:144) > > at > org.apache.solr.client.ref_guide_examples.JsonRequestApiTest.testStatFacet1(JsonRequestApiTest.java:470) > > at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > > at java.base/java.lang.reflect.Method.invoke(Method.java:566) > > at > com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1754) > > -- This message was sent by Atlassian Jira (v8.3.4#803005) - 
[GitHub] [lucene-solr] dsmiley opened a new pull request #1367: ivy settings: local maven repo pattern needs classifier
dsmiley opened a new pull request #1367: ivy settings: local maven repo pattern needs classifier URL: https://github.com/apache/lucene-solr/pull/1367 The ivy settings config has a local maven repo resolver (added by @dweiss 8 years ago). It's commented out so it's not essential that we get it right. It has a bug in the pattern that forgot the classifier. https://stackoverflow.com/questions/8617963/ivysettings-xml-add-local-maven-path Locally I configure the build to use the local maven repo and this has been a problem because spatial4j's main JAR file is resolved when trying to get the test JAR! (I don't think I need a JIRA for this but I can create one if someone really wants that ceremony)
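For reference, a sketch of the usual ivysettings.xml maven-local resolver with the optional `(-[classifier])` token included; this is the shape of the fix the PR describes (and what the linked Stack Overflow answer shows), but the resolver name and repo path here are illustrative, not the project's actual settings:

```xml
<filesystem name="local-maven-2" m2compatible="true">
  <!-- "(-[classifier])" is optional, so plain JARs still resolve, while a
       classified artifact such as spatial4j's "-tests" JAR maps to its own
       file instead of falling back to the main JAR. -->
  <artifact pattern="${user.home}/.m2/repository/[organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]"/>
  <ivy pattern="${user.home}/.m2/repository/[organisation]/[module]/[revision]/[module]-[revision].pom"/>
</filesystem>
```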
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395784853

## File path: solr/core/src/java/org/apache/solr/handler/admin/ZkRead.java ##

@@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.solr.api.Command;
+import org.apache.solr.api.EndPoint;
+import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.client.solrj.impl.BinaryResponseParser;
+import org.apache.solr.common.MapWriter;
+import org.apache.solr.common.params.CommonParams;
+import org.apache.solr.common.params.MapSolrParams;
+import org.apache.solr.common.params.SolrParams;
+import org.apache.solr.common.util.ContentStreamBase;
+import org.apache.solr.common.util.Utils;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.zookeeper.data.Stat;
+
+import static org.apache.solr.common.params.CommonParams.OMIT_HEADER;
+import static org.apache.solr.common.params.CommonParams.WT;
+import static org.apache.solr.response.RawResponseWriter.CONTENT;
+import static org.apache.solr.security.PermissionNameProvider.Name.COLL_READ_PERM;
+
+/** Exposes the content of the Zookeeper.
+ * This is an expert feature that exposes the data inside the back end zookeeper. This API may change or
+ * be removed in future versions.
+ * This is not a public API. The data that is returned is not guaranteed to remain same
+ * across releases, as the data stored in Zookeeper may change from time to time.
+ */
+@EndPoint(path = "/cluster/zk/*",
+    method = SolrRequest.METHOD.GET,
+    permission = COLL_READ_PERM)
+public class ZkRead {
+  private final CoreContainer coreContainer;
+
+  public ZkRead(CoreContainer coreContainer) {
+    this.coreContainer = coreContainer;
+  }
+
+  @Command
+  public void get(SolrQueryRequest req, SolrQueryResponse rsp) {
+    String path = req.getPathTemplateValues().get("*");
+    if (path == null || path.isEmpty()) path = "/";
+    byte[] d = null;
+    try {
+      List l = coreContainer.getZkController().getZkClient().getChildren(path, null, false);
+      if (l != null && !l.isEmpty()) {
+        String prefix = path.endsWith("/") ? path : path + "/";
+
+        rsp.add(path, (MapWriter) ew -> {
+          for (String s : l) {
+            try {
+              Stat stat = coreContainer.getZkController().getZkClient().exists(prefix + s, null, false);
+              ew.put(s, (MapWriter) ew1 -> {
+                ew1.put("version", stat.getVersion());
+                ew1.put("aversion", stat.getAversion());
+                ew1.put("children", stat.getNumChildren());
+                ew1.put("ctime", stat.getCtime());
+                ew1.put("cversion", stat.getCversion());
+                ew1.put("czxid", stat.getCzxid());
+                ew1.put("ephemeralOwner", stat.getEphemeralOwner());
+                ew1.put("mtime", stat.getMtime());
+                ew1.put("mzxid", stat.getMzxid());
+                ew1.put("pzxid", stat.getPzxid());
+                ew1.put("dataLength", stat.getDataLength());
+              });
+            } catch (Exception e) {

Review comment: This try/catch is inside a loop that makes requests to ZooKeeper. So you are saying that if the thread is interrupted (shutdown, or in tests for example) the plan is to swallow that interruption and continue processing (doing requests to ZooKeeper for every child this zkpath has, with sleeps/retries), and the intention of that is to add an "exception" to one (just one) of the elements in the tree in the response? Regarding other exceptions, let's say you are getting back a SessionExpiredException, the plan is to continue iterating over all the child docs and continue adding `node -> KeeperException.SessionExpiredException`? I think the proper handling of most of these exceptions is to stop processing and return an error. If you get a NoNodeExc
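The fail-fast policy the reviewer argues for can be sketched generically: per-child lookups tolerate only "node vanished between list and stat", while anything else (session expiry, interruption) aborts the whole response. The exception names below mirror ZooKeeper's, and `fetch_stat` stands in for the real client call; both are assumptions for illustration.

```python
class NoNodeError(Exception):
    """Stand-in for KeeperException.NoNodeException."""

class SessionExpiredError(Exception):
    """Stand-in for KeeperException.SessionExpiredException."""

def stat_children(children, fetch_stat):
    """Collect per-child stat info, skipping only nodes deleted mid-iteration.
    Fatal errors (e.g. session expiry) propagate and abort the response."""
    out = {}
    for child in children:
        try:
            out[child] = fetch_stat(child)
        except NoNodeError:
            continue  # raced with a delete: skip just this child
        # SessionExpiredError and everything else propagate: stop, return an error
    return out
```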
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395784967 ## File path: solr/core/src/java/org/apache/solr/handler/admin/ZkRead.java ## @@ -0,0 +1,117 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.solr.handler.admin; + +import java.util.Collections; +import java.util.HashMap; +import java.util.List; +import java.util.Map; + +import org.apache.solr.api.Command; +import org.apache.solr.api.EndPoint; +import org.apache.solr.client.solrj.SolrRequest; +import org.apache.solr.client.solrj.impl.BinaryResponseParser; +import org.apache.solr.common.MapWriter; +import org.apache.solr.common.params.CommonParams; +import org.apache.solr.common.params.MapSolrParams; +import org.apache.solr.common.params.SolrParams; +import org.apache.solr.common.util.ContentStreamBase; +import org.apache.solr.common.util.Utils; +import org.apache.solr.core.CoreContainer; +import org.apache.solr.request.SolrQueryRequest; +import org.apache.solr.response.SolrQueryResponse; +import org.apache.zookeeper.data.Stat; + +import static org.apache.solr.common.params.CommonParams.OMIT_HEADER; +import static org.apache.solr.common.params.CommonParams.WT; +import static org.apache.solr.response.RawResponseWriter.CONTENT; +import static org.apache.solr.security.PermissionNameProvider.Name.COLL_READ_PERM; + +/**Exposes the content of the Zookeeper + * This is an expert feature that exposes the data inside the back end zookeeper.This API may change or + * be removed in future versions. + * This is not a public API. The data that is returned is not guaranteed to remain same + * across releases, as the data stored in Zookeeper may change from time to time. 
+ */ +@EndPoint(path = "/cluster/zk/*", +method = SolrRequest.METHOD.GET, +permission = COLL_READ_PERM) +public class ZkRead { + private final CoreContainer coreContainer; + + public ZkRead(CoreContainer coreContainer) { +this.coreContainer = coreContainer; + } + + @Command + public void get(SolrQueryRequest req, SolrQueryResponse rsp) { +String path = req.getPathTemplateValues().get("*"); +if (path == null || path.isEmpty()) path = "/"; +byte[] d = null; +try { + List l = coreContainer.getZkController().getZkClient().getChildren(path, null, false); + if (l != null && !l.isEmpty()) { +String prefix = path.endsWith("/") ? path : path + "/"; + +rsp.add(path, (MapWriter) ew -> { + for (String s : l) { +try { + Stat stat = coreContainer.getZkController().getZkClient().exists(prefix + s, null, false); + ew.put(s, (MapWriter) ew1 -> { +ew1.put("version", stat.getVersion()); +ew1.put("aversion", stat.getAversion()); +ew1.put("children", stat.getNumChildren()); +ew1.put("ctime", stat.getCtime()); +ew1.put("cversion", stat.getCversion()); +ew1.put("czxid", stat.getCzxid()); +ew1.put("ephemeralOwner", stat.getEphemeralOwner()); +ew1.put("mtime", stat.getMtime()); +ew1.put("mzxid", stat.getMzxid()); +ew1.put("pzxid", stat.getPzxid()); +ew1.put("dataLength", stat.getDataLength()); + }); +} catch (Exception e) { + ew.put("s", Collections.singletonMap("error", e.getMessage())); +} + } +}); + + } else { +d = coreContainer.getZkController().getZkClient().getData(path, null, null, false); +if (d == null || d.length == 0) { + rsp.add(path, null); + return; +} + +Map map = new HashMap<>(1); +map.put(WT, "raw"); +map.put(OMIT_HEADER, "true"); +req.setParams(SolrParams.wrapDefaults(new MapSolrParams(map), req.getParams())); + + +rsp.add(CONTENT, new ContentStreamBase.ByteArrayStream(d, null, +d[0] == '{' ? CommonParams.JSON_MIME : BinaryResponseParser.BINARY_CONTENT_TYPE)); + + } + +} catch (Exception e) { Review comment: It does. Se
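The handler quoted above copies a fixed set of fields from ZooKeeper's `Stat` into the response map. A minimal sketch of that per-znode metadata map, using plain parameters in place of a live `org.apache.zookeeper.data.Stat` so it runs without a ZooKeeper ensemble (the helper name is ours, not from the patch):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ZkStatSketch {
    // Mirrors a subset of the fields ZkRead copies from Stat into the
    // response (version, children, ctime, mtime, dataLength, ...).
    static Map<String, Object> statToMap(long ctime, long mtime, int version,
                                         int numChildren, int dataLength) {
        Map<String, Object> m = new LinkedHashMap<>();
        m.put("version", version);
        m.put("children", numChildren);
        m.put("ctime", ctime);
        m.put("mtime", mtime);
        m.put("dataLength", dataLength);
        return m;
    }

    public static void main(String[] args) {
        System.out.println(statToMap(1L, 2L, 0, 3, 42));
    }
}
```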
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395786539 ## File path: solr/core/src/java/org/apache/solr/handler/admin/ZkRead.java ## Review comment: Also, would
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395787450 ## File path: solr/core/src/java/org/apache/solr/handler/admin/ZookeeperRead.java ## +public class ZookeeperRead { Review comment: Handlers of requests are typically called `Handler` This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395789886 ## File path: solr/core/src/test/org/apache/solr/handler/admin/ZookeeperStatusHandlerTest.java ## @@ -74,6 +78,39 @@ public void tearDown() throws Exception { super.tearDown(); } + @Test + public void testZkread() throws Exception { +URL baseUrl = cluster.getJettySolrRunner(0).getBaseUrl(); +String basezk = baseUrl.toString().replace("/solr", "/api") + "/cluster/zk"; + +try( HttpSolrClient client = new HttpSolrClient.Builder(baseUrl.toString()).build()) { + Object o = Utils.executeGET(client.getHttpClient(), + basezk + "/security.json", + Utils.JSONCONSUMER ); + assertNotNull(o); + o = Utils.executeGET(client.getHttpClient(), + basezk + "/configs", + Utils.JSONCONSUMER ); + assertEquals("0", String.valueOf(getObjectByPath(o,true, split(":/configs:_default:dataLength",':')))); + assertEquals("0", String.valueOf(getObjectByPath(o,true, split(":/configs:conf:dataLength",':')))); + byte[] bytes = new byte[1024*5]; + for (int i = 0; i < bytes.length; i++) { +bytes[i] = (byte) random().nextInt(128); Review comment: Wait, can't you do `"this is some test content".getBytes(StandardCharsets.UTF_8);`?
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395797147 ## File path: solr/core/src/test/org/apache/solr/handler/admin/ZookeeperReadTest.java ## @@ -0,0 +1,100 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one or more + * contributor license agreements. See the NOTICE file distributed with + * this work for additional information regarding copyright ownership. + * The ASF licenses this file to You under the Apache License, Version 2.0 + * (the "License"); you may not use this file except in compliance with + * the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.solr.handler.admin; + +import java.lang.invoke.MethodHandles; +import java.net.URL; +import java.util.Map; + +import org.apache.solr.client.solrj.impl.HttpSolrClient; +import org.apache.solr.cloud.SolrCloudTestCase; +import org.apache.solr.common.util.Utils; +import org.apache.zookeeper.CreateMode; +import org.junit.After; +import org.junit.Before; +import org.junit.BeforeClass; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import static org.apache.solr.common.util.StrUtils.split; +import static org.apache.solr.common.util.Utils.getObjectByPath; + +public class ZookeeperReadTest extends SolrCloudTestCase { + private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); + + @BeforeClass + public static void setupCluster() throws Exception { +configureCluster(1) +.addConfig("conf", configset("cloud-minimal")) +.configure(); + } + + @Before + @Override + public void setUp() throws Exception { +super.setUp(); + } + + @After + @Override + public void tearDown() throws Exception { +super.tearDown(); + } + + @Test + public void testZkread() throws Exception { +URL baseUrl = cluster.getJettySolrRunner(0).getBaseUrl(); +String basezk = baseUrl.toString().replace("/solr", "/api") + "/cluster/zk"; + +try (HttpSolrClient client = new HttpSolrClient.Builder(baseUrl.toString()).build()) { + Object o = Utils.executeGET(client.getHttpClient(), + basezk + "/security.json", + Utils.JSONCONSUMER); + assertNotNull(o); + o = Utils.executeGET(client.getHttpClient(), + basezk + "/configs", + Utils.JSONCONSUMER); + assertEquals("0", String.valueOf(getObjectByPath(o, true, split(":/configs:_default:dataLength", ':')))); + assertEquals("0", String.valueOf(getObjectByPath(o, true, split(":/configs:conf:dataLength", ':')))); + + o = Utils.executeGET(client.getHttpClient(), + basezk + "/configs?leaf=true", + Utils.JSONCONSUMER); + assertTrue(((Map)o).containsKey("/configs")); Review comment: can you 
fix the compiler warnings?
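The warning the reviewer asks about comes from the raw-type cast `((Map)o)` in the quoted assertion. One way to silence it is to narrow the `Object` through a small checked helper instead of a bare cast; the helper name here is ours, not part of the patch:

```java
import java.util.HashMap;
import java.util.Map;

public class RawCastSketch {
    // Validates the runtime type before casting, so the unchecked
    // suppression is confined to one well-guarded place.
    @SuppressWarnings("unchecked")
    static Map<String, Object> asMap(Object o) {
        if (!(o instanceof Map)) {
            throw new IllegalArgumentException("expected a Map, got " + o);
        }
        return (Map<String, Object>) o;
    }

    public static void main(String[] args) {
        Map<String, Object> m = new HashMap<>();
        m.put("/configs", "");
        Object o = m; // the untyped value a generic GET helper would hand back
        System.out.println(asMap(o).containsKey("/configs")); // prints "true"
    }
}
```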
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395798913 ## File path: solr/core/src/java/org/apache/solr/handler/admin/ZkRead.java ## + private final CoreContainer coreContainer; Review comment: Ah, true. The problem is that, by requiring the CoreContainer you are making this much more difficult to test (which is why I guess you are starting a Solr cluster just to test this handler). Maybe there is a way to improve the design to improve tests 
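One shape the design improvement hinted at could take: have the handler depend on a narrow interface rather than the whole `CoreContainer`, so a unit test can supply a stub instead of starting a cluster. All names below are hypothetical, not from the PR:

```java
public class InjectionSketch {
    // Minimal surface the handler actually needs from ZooKeeper.
    interface ZkReader {
        byte[] data(String path);
    }

    // Test double: no CoreContainer, no ZooKeeper, no Jetty.
    static final class StubZkReader implements ZkReader {
        public byte[] data(String path) {
            return ("stub:" + path).getBytes();
        }
    }

    // Stand-in for the handler logic, written against the interface.
    static String read(ZkReader zk, String path) {
        return new String(zk.data(path));
    }

    public static void main(String[] args) {
        // The stub alone is enough to exercise read().
        System.out.println(read(new StubZkReader(), "/security.json"));
    }
}
```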
[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data
tflobbe commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r395800023 ## File path: solr/core/src/java/org/apache/solr/handler/admin/ZkRead.java ## +Map map = new HashMap<>(1); +map.put(WT, "raw"); +map.put(OMIT_HEADER, "true"); +req.setParams(SolrParams.wrapDefaults(new MapSolrParams(map), req.getParams())); Review comment: Is this only the case for the situation where we write the data? what about the case of the child nodes? can those requests be written in other response formats? If this is the only format that's allowed, should we fail if an
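Alongside the `wt=raw` defaulting under discussion, the handler picks the content type by peeking at the first byte of the znode data (`d[0] == '{'` means JSON, otherwise Solr's javabin type). A sketch of that heuristic in isolation; the non-JSON MIME string here is a stand-in, the real code uses `BinaryResponseParser.BINARY_CONTENT_TYPE`:

```java
public class ContentTypeSniff {
    // First-byte sniffing, as in ZkRead: '{' is taken as JSON, anything
    // else is treated as opaque binary data.
    static String contentType(byte[] d) {
        return d.length > 0 && d[0] == '{'
                ? "application/json"
                : "application/octet-stream"; // stand-in for the javabin type
    }

    public static void main(String[] args) {
        System.out.println(contentType("{\"a\":1}".getBytes())); // application/json
        System.out.println(contentType(new byte[]{2, 0, 0}));
    }
}
```

The heuristic is cheap but fallible: znode data that merely starts with `{` without being JSON would still be served as JSON.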
[GitHub] [lucene-solr] dsmiley opened a new pull request #1368: SOLR-14351: Fix/improve MDCLoggingContext usage
dsmiley opened a new pull request #1368: SOLR-14351: Fix/improve MDCLoggingContext usage URL: https://github.com/apache/lucene-solr/pull/1368 https://issues.apache.org/jira/browse/SOLR-14351
[GitHub] [lucene-solr] andyvuong opened a new pull request #1369: SOLR-14213: Allow enabling shared store to be scriptable
andyvuong opened a new pull request #1369: SOLR-14213: Allow enabling shared store to be scriptable URL: https://github.com/apache/lucene-solr/pull/1369 Enhancing the configuration section to allow the shared storage feature to be enabled via system properties instead of just checking for the presence of the section in solr.xml. ``` ${sharedStoreEnabled:false} ``` Shared storage is now only enabled if cloud mode is on, the section is present, and the field sharedStoreEnabled is present and set to true. If the section is present but the field isn't, startup fails. The field supports system properties overriding whatever default is specified. This can be passed in via the solr binary, such as `bin/solr start -e cloud -noprompt -DsharedStoreEnabled=true`, or set by specifying the environment variable SHARE_STORE_ENABLED.
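The `${sharedStoreEnabled:false}` pattern above resolves a system property with a literal fallback. A sketch of how that resolution could look at runtime, assuming the property name from the PR description (the helper method is ours):

```java
public class SysPropDefault {
    // -DsharedStoreEnabled=true on the command line overrides the
    // "false" fallback; an unset property yields the fallback.
    static boolean sharedStoreEnabled() {
        return Boolean.parseBoolean(System.getProperty("sharedStoreEnabled", "false"));
    }

    public static void main(String[] args) {
        System.out.println(sharedStoreEnabled());          // false unless -DsharedStoreEnabled=true
        System.setProperty("sharedStoreEnabled", "true");  // simulate the -D flag
        System.out.println(sharedStoreEnabled());
    }
}
```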
[jira] [Commented] (SOLR-13492) Disallow explicit GC by default during Solr startup
[ https://issues.apache.org/jira/browse/SOLR-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063752#comment-17063752 ] Guna Sekhar Dora commented on SOLR-13492: - [~elyograg] I'm taking this up. > Disallow explicit GC by default during Solr startup > --- > > Key: SOLR-13492 > URL: https://issues.apache.org/jira/browse/SOLR-13492 > Project: Solr > Issue Type: Improvement > Components: scripts and tools >Reporter: Shawn Heisey >Assignee: Shawn Heisey >Priority: Major > > Solr should use the -XX:+DisableExplicitGC option as part of its default GC > tuning. > None of Solr's stock code uses explicit GCs, so that option will have no > effect on most installs. The effective result of this is that if somebody > adds custom code to Solr and THAT code does an explicit GC, it won't be > allowed to function. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] gunasekhardora opened a new pull request #1370: SOLR-13492: disable explicit GCs by default
gunasekhardora opened a new pull request #1370: SOLR-13492: disable explicit GCs by default URL: https://github.com/apache/lucene-solr/pull/1370 # Description -XX:+DisableExplicitGC disables any explicit calls to System.gc(). # Solution In many cases, if an explicit GC is invoked, a potentially premature garbage collection might degrade the application's performance. # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [x] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only).
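As the Jira issue notes, the flag changes nothing for code that never calls `System.gc()`; for code that does, the call simply becomes a no-op. A minimal illustration (the method name is ours):

```java
public class ExplicitGcDemo {
    // System.gc() is only a hint to the JVM; under -XX:+DisableExplicitGC
    // the hint is ignored entirely, but the call remains legal either way,
    // so custom code keeps running unchanged.
    static boolean hintGc() {
        byte[] garbage = new byte[1 << 20]; // allocate something collectable
        garbage = null;
        System.gc(); // no-op when explicit GC is disabled
        return true; // the call itself never fails
    }

    public static void main(String[] args) {
        System.out.println(hintGc() ? "ok" : "unexpected"); // prints "ok"
    }
}
```

Run with `java -XX:+DisableExplicitGC ExplicitGcDemo` to observe the suppressed behavior; without the flag the same call may trigger a full collection.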
[jira] [Commented] (SOLR-13492) Disallow explicit GC by default during Solr startup
[ https://issues.apache.org/jira/browse/SOLR-13492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17063758#comment-17063758 ] Guna Sekhar Dora commented on SOLR-13492: - On a side note, comments in solr.in.sh and solr.in.cmd related to GC settings are outdated.
[GitHub] [lucene-solr] munendrasn commented on a change in pull request #1370: SOLR-13492: disable explicit GCs by default
munendrasn commented on a change in pull request #1370: SOLR-13492: disable explicit GCs by default URL: https://github.com/apache/lucene-solr/pull/1370#discussion_r395961202 ## File path: solr/CHANGES.txt ## @@ -47,6 +47,8 @@ Other Changes * SOLR-14012: Return long value for unique and hll aggregations irrespective of shard count (Munendra S N, hossman) +* SOLR-13492: Disallow explicit GC by default during Solr startup. Review comment: I think this can be moved to 8.6 others section. Also, please include reporter and your name
[GitHub] [lucene-solr] munendrasn commented on a change in pull request #1370: SOLR-13492: disable explicit GCs by default
munendrasn commented on a change in pull request #1370: SOLR-13492: disable explicit GCs by default URL: https://github.com/apache/lucene-solr/pull/1370#discussion_r395961225 ## File path: solr/solr-ref-guide/src/major-changes-in-solr-9.adoc ## @@ -96,7 +96,9 @@ _(raw; not yet edited)_ * SOLR-14012: unique and hll aggregations always returns long value irrespective of standalone or solcloud (Munendra S N, hossman) - + +* SOLR-13492: Explicit GCs are not to be allowed by default. Review comment: This is not needed
[GitHub] [lucene-solr] atris commented on issue #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches
atris commented on issue #1294: LUCENE-9074: Slice Allocation Control Plane For Concurrent Searches URL: https://github.com/apache/lucene-solr/pull/1294#issuecomment-601995664 @jpountz Let me know if this looks fine.