[jira] [Updated] (LUCENE-9014) Design a new solution for publishing JavaDoc
[ https://issues.apache.org/jira/browse/LUCENE-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jan Høydahl updated LUCENE-9014: Description: Currently we check in to svn folders with hundreds of html files for JavaDoc. Need a better way. (was: Find and test a way to copy folders with hundreds of html files into the website without a git checkin. Success when a process is tested and documented. Would be nice if it can be demonstrated on staging site, but acceptable if the process covers prod site directly.) Summary: Design a new solution for publishing JavaDoc (was: Design a solution for static JavaDoc and RefGuide files) > Design a new solution for publishing JavaDoc > > > Key: LUCENE-9014 > URL: https://issues.apache.org/jira/browse/LUCENE-9014 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in to svn folders with hundreds of html files for JavaDoc. > Need a better way. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git
Jan Høydahl created LUCENE-9020: --- Summary: Find a way to publish Solr RefGuide without checking into git Key: LUCENE-9020 URL: https://issues.apache.org/jira/browse/LUCENE-9020 Project: Lucene - Core Issue Type: Sub-task Reporter: Jan Høydahl Currently we check in all versions of RefGuide (hundreds of small html files) into svn to publish as part of the site. With new site we should find a smoother way to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9014) Design a new solution for publishing JavaDoc
[ https://issues.apache.org/jira/browse/LUCENE-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956800#comment-16956800 ] Jan Høydahl commented on LUCENE-9014: - I have an idea for a better way to publish our JavaDocs. Today we have a set of html links pointing to each module's JavaDoc; see [http://lucene.apache.org/solr/8_2_0/]. Instead, we can make these links point to the freely hosted javadocs on [https://www.javadoc.io|https://www.javadoc.io/] (e.g. [https://www.javadoc.io/doc/org.apache.solr/solr-core/8.2.0] for solr-core). There is nothing to upload; they pull everything from Maven. > Design a new solution for publishing JavaDoc > > > Key: LUCENE-9014 > URL: https://issues.apache.org/jira/browse/LUCENE-9014 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in to svn folders with hundreds of html files for JavaDoc. > Need a better way. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
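As a rough illustration of the linking scheme suggested in that comment (relying only on the javadoc.io URL pattern shown above), the release pages could generate their per-module links instead of hosting any HTML. The helper below is hypothetical and not part of the Lucene/Solr build:
{code:java}
// Hypothetical sketch: javadoc.io serves javadoc jars already published to Maven
// Central, addressed purely by groupId/artifactId/version.
public class JavadocIoLinks {
  static String javadocIoUrl(String groupId, String artifactId, String version) {
    return "https://www.javadoc.io/doc/" + groupId + "/" + artifactId + "/" + version;
  }

  public static void main(String[] args) {
    // Prints https://www.javadoc.io/doc/org.apache.solr/solr-core/8.2.0
    System.out.println(javadocIoUrl("org.apache.solr", "solr-core", "8.2.0"));
  }
}
{code}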
[jira] [Commented] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git
[ https://issues.apache.org/jira/browse/LUCENE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956803#comment-16956803 ] Jan Høydahl commented on LUCENE-9020: - Why don't we try to find an external (free) hosted Asciidoc service somewhere to host our refGuide? Perhaps they even provide search for us? Since we'll not treat the RefGuide as a signed release anymore, like we did with the PDF, there should be no requirement to keep the docs on ASF infrastructure? Something like [readthedocs|https://readthedocs.org/] might work, but I think they don't support asciidoc that well. Any suggestions? > Find a way to publish Solr RefGuide without checking into git > - > > Key: LUCENE-9020 > URL: https://issues.apache.org/jira/browse/LUCENE-9020 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in all versions of RefGuide (hundreds of small html files) > into svn to publish as part of the site. With new site we should find a > smoother way to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Assigned] (SOLR-13824) JSON Request API ignores prematurely closing curly brace.
[ https://issues.apache.org/jira/browse/SOLR-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev reassigned SOLR-13824: --- Assignee: Mikhail Khludnev > JSON Request API ignores prematurely closing curly brace. > -- > > Key: SOLR-13824 > URL: https://issues.apache.org/jira/browse/SOLR-13824 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: JSON Request API >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13824.patch, SOLR-13824.patch, SOLR-13824.patch, > SOLR-13824.patch > > > {code:java} > json={query:"content:foo", facet:{zz:{field:id}}} > {code} > this works fine, but if we mistype {{}}} instead of {{,}} > {code:java} > json={query:"content:foo"} facet:{zz:{field:id}}} > {code} > It's captured only partially, here's we have under debug > {code:java} > "json":{"query":"content:foo"}, > {code} > I suppose it should throw an error with 400 code. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13824) JSON Request API ignores prematurely closing curly brace.
[ https://issues.apache.org/jira/browse/SOLR-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13824: Fix Version/s: 8.4 Resolution: Fixed Status: Resolved (was: Patch Available) > JSON Request API ignores prematurely closing curly brace. > -- > > Key: SOLR-13824 > URL: https://issues.apache.org/jira/browse/SOLR-13824 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: JSON Request API >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13824.patch, SOLR-13824.patch, SOLR-13824.patch, > SOLR-13824.patch > > > {code:java} > json={query:"content:foo", facet:{zz:{field:id}}} > {code} > this works fine, but if we mistype {{}}} instead of {{,}} > {code:java} > json={query:"content:foo"} facet:{zz:{field:id}}} > {code} > It's captured only partially, here's we have under debug > {code:java} > "json":{"query":"content:foo"}, > {code} > I suppose it should throw an error with 400 code. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13824) JSON Request API to reject anything after JSON object is over
[ https://issues.apache.org/jira/browse/SOLR-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13824: Summary: JSON Request API to reject anything after JSON object is over (was: JSON Request API ignores prematurely closing curly brace. ) > JSON Request API to reject anything after JSON object is over > - > > Key: SOLR-13824 > URL: https://issues.apache.org/jira/browse/SOLR-13824 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: JSON Request API >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13824.patch, SOLR-13824.patch, SOLR-13824.patch, > SOLR-13824.patch > > > {code:java} > json={query:"content:foo", facet:{zz:{field:id}}} > {code} > this works fine, but if we mistype {{}}} instead of {{,}} > {code:java} > json={query:"content:foo"} facet:{zz:{field:id}}} > {code} > It's captured only partially, here's we have under debug > {code:java} > "json":{"query":"content:foo"}, > {code} > I suppose it should throw an error with 400 code. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
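The renamed summary describes the intended behaviour well: once the first top-level JSON object has been consumed, anything other than whitespace should be rejected. The standalone sketch below illustrates that idea only; it is an assumption about the approach, not the attached SOLR-13824.patch, which works inside Solr's JSON request parsing and reports the error as an HTTP 400:
{code:java}
public class TrailingJsonCheck {

  /** Returns the index just past the first balanced top-level {...} object. */
  static int endOfFirstObject(String s) {
    int depth = 0;
    boolean inString = false;
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      if (inString) {
        if (c == '\\') i++;            // skip the escaped character
        else if (c == '"') inString = false;
      } else if (c == '"') {
        inString = true;
      } else if (c == '{') {
        depth++;
      } else if (c == '}') {
        depth--;
        if (depth == 0) return i + 1;  // first top-level object closed here
      }
    }
    throw new IllegalArgumentException("Unbalanced JSON object");
  }

  static void rejectTrailingContent(String rawJson) {
    int end = endOfFirstObject(rawJson);
    for (int i = end; i < rawJson.length(); i++) {
      if (!Character.isWhitespace(rawJson.charAt(i))) {
        // In Solr this would surface as a SolrException with ErrorCode.BAD_REQUEST (HTTP 400).
        throw new IllegalArgumentException("Unexpected text after JSON object: " + rawJson.substring(i));
      }
    }
  }

  public static void main(String[] args) {
    rejectTrailingContent("{query:\"content:foo\", facet:{zz:{field:id}}}");  // accepted
    rejectTrailingContent("{query:\"content:foo\"} facet:{zz:{field:id}}}");  // throws
  }
}
{code}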
[jira] [Updated] (LUCENE-9021) QueryParser should avoid creating an LookaheadSuccess(Error) object with every instance
[ https://issues.apache.org/jira/browse/LUCENE-9021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Przemek Bruski updated LUCENE-9021: --- Attachment: LUCENE-9021.patch > QueryParser should avoid creating an LookaheadSuccess(Error) object with > every instance > --- > > Key: LUCENE-9021 > URL: https://issues.apache.org/jira/browse/LUCENE-9021 > Project: Lucene - Core > Issue Type: Bug >Reporter: Przemek Bruski >Priority: Major > Attachments: LUCENE-9021.patch > > > This is basically the same as > https://issues.apache.org/jira/browse/SOLR-11242 , but for Lucene QueryParser -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9021) QueryParser should avoid creating an LookaheadSuccess(Error) object with every instance
Przemek Bruski created LUCENE-9021: -- Summary: QueryParser should avoid creating an LookaheadSuccess(Error) object with every instance Key: LUCENE-9021 URL: https://issues.apache.org/jira/browse/LUCENE-9021 Project: Lucene - Core Issue Type: Bug Reporter: Przemek Bruski Attachments: LUCENE-9021.patch This is basically the same as https://issues.apache.org/jira/browse/SOLR-11242 , but for Lucene QueryParser -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] pbruski opened a new pull request #962: LUCENE-9021 QueryParser: re-use the LookaheadSuccess exception
pbruski opened a new pull request #962: LUCENE-9021 QueryParser: re-use the LookaheadSuccess exception URL: https://github.com/apache/lucene-solr/pull/962 This is basically the same as https://issues.apache.org/jira/browse/SOLR-11242 , but for Lucene QueryParser # Checklist Please review the following and check all that apply: - [x] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [x] I have created a Jira issue and added the issue ID to my pull request title. - [x] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute. - [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [x] I have developed this patch against the `master` branch. - [x] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
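For readers unfamiliar with SOLR-11242, the pattern being ported is roughly the following. This is a from-memory sketch, not the generated QueryParser.java (whose field and method names differ): the lookahead signal becomes one shared, stackless Error instead of a per-parser-instance allocation.
{code:java}
// Sketch only: reuse a single stackless Error as the lookahead control-flow signal.
final class LookaheadSuccess extends Error {
  LookaheadSuccess() {
    // writableStackTrace=false skips fillInStackTrace(), the expensive part of
    // constructing a Throwable, which is useless for a pure control-flow signal.
    super(null, null, false, false);
  }
}

class ParserSketch {
  // Shared by all parser instances instead of being allocated in every constructor.
  private static final LookaheadSuccess JJ_LS = new LookaheadSuccess();

  boolean lookaheadSucceeds() {
    try {
      // ...generated lookahead code would throw JJ_LS when the lookahead matches...
      throw JJ_LS;
    } catch (LookaheadSuccess ls) {
      return true;
    }
  }
}
{code}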
[jira] [Created] (LUCENE-9022) Never cache GlobalOrdinalsWithScoreQuery
Jim Ferenczi created LUCENE-9022: Summary: Never cache GlobalOrdinalsWithScoreQuery Key: LUCENE-9022 URL: https://issues.apache.org/jira/browse/LUCENE-9022 Project: Lucene - Core Issue Type: Improvement Reporter: Jim Ferenczi Today we disable caching for GlobalOrdinalsQuery (https://issues.apache.org/jira/browse/LUCENE-8062) but not GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is sometimes executed in a filter context (despite the name, e.g. when min/max are provided) so we should protect this query for the same reasons invoked in LUCENE-8062. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-13858) Clean up SolrInfoBean / SolrMetricProducer API
Andrzej Bialecki created SOLR-13858: --- Summary: Clean up SolrInfoBean / SolrMetricProducer API Key: SOLR-13858 URL: https://issues.apache.org/jira/browse/SOLR-13858 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: metrics Reporter: Andrzej Bialecki Assignee: Andrzej Bialecki Fix For: master (9.0) For historical reasons both {{SolrInfoBean}} and {{SolrMetricProducer}} contain methods and constants / enums that deal with handling and reporting of metrics. In almost all cases implementations of {{SolrInfoBean}} also implement {{SolrMetricProducer}}. I propose to refactor this API so that {{SolrInfoBean}} simply extends {{SolrMetricProducer}}. This will reduce the API surface and eliminate multiple rote methods that subclasses must now implement. This is an incompatible API change so it's applicable only to version 9.0. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
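In outline, the proposal is for the info-bean contract to inherit the metrics contract rather than duplicate it. The shape below is purely illustrative; the names and signatures are placeholders, not the real Solr interfaces:
{code:java}
// Placeholder interfaces sketching "SolrInfoBean extends SolrMetricProducer":
// implementors satisfy one contract instead of two overlapping ones.
interface MetricProducerSketch {
  void initializeMetrics(Object metricManager, String scope); // hypothetical signature
}

interface InfoBeanSketch extends MetricProducerSketch {
  String getName();
  String getDescription();
  // metric-related members formerly duplicated on the info bean are inherited instead
}
{code}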
[GitHub] [lucene-solr] jimczi opened a new pull request #963: LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery
jimczi opened a new pull request #963: LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery URL: https://github.com/apache/lucene-solr/pull/963 Today we disable caching for GlobalOrdinalsQuery (https://issues.apache.org/jira/browse/LUCENE-8062) but not GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is sometimes executed in a filter context (despite the name, e.g. when min/max are provided) so we should protect this query for the same reasons invoked in LUCENE-8062. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
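The actual change keeps the query out of the cache from inside the join module. As a user-level approximation only (an assumption, built on Lucene's public caching hooks and matching the query by class name because the join queries are not public), the same effect looks roughly like this:
{code:java}
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.QueryCachingPolicy;
import org.apache.lucene.search.UsageTrackingQueryCachingPolicy;

public class NoGlobalOrdinalsCachePolicy implements QueryCachingPolicy {
  private final QueryCachingPolicy delegate = new UsageTrackingQueryCachingPolicy();

  @Override
  public void onUse(Query query) {
    delegate.onUse(query);
  }

  @Override
  public boolean shouldCache(Query query) throws IOException {
    // Global-ordinal join queries hold large per-top-reader bitsets, so keep them out
    // of the segment-level cache regardless of how often they are reused.
    if (query.getClass().getSimpleName().startsWith("GlobalOrdinals")) {
      return false;
    }
    return delegate.shouldCache(query);
  }

  static void install(IndexSearcher searcher) {
    searcher.setQueryCachingPolicy(new NoGlobalOrdinalsCachePolicy());
  }
}
{code}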
[jira] [Updated] (SOLR-13202) Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode()
[ https://issues.apache.org/jira/browse/SOLR-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pranay Kumar Chaudhary updated SOLR-13202: -- Attachment: SOLR-13202.patch > Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode() > -- > > Key: SOLR-13202 > URL: https://issues.apache.org/jira/browse/SOLR-13202 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Cesar Rodriguez >Priority: Major > Labels: diffblue, newdev > Attachments: SOLR-13202.patch, home.zip > > > Requesting any of the following URLs causes Solr to return an HTTP 500 error > response: > {noformat} > http://localhost:8983/solr/films/select?fq={!join%20from=b%20to=a} > http://localhost:8983/solr/films/select?fq={!join%20to=a} > http://localhost:8983/solr/films/select?fq={!join} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > java.lang.NullPointerException > at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:578) > at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:52) > at > org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328) > at > org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567) > at > org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434) > at > org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > [...] 
> {noformat} > The problem seems to be related with method {{hasCode}} in the class > {{org.apache.solr.search.JoinQuery}}: > {code:java} > @Override > public int hashCode() { > int h = classHash(); > h = h * 31 + fromField.hashCode(); > h = h * 31 + toField.hashCode(); > h = h * 31 + q.hashCode(); > h = h * 31 + Objects.hashCode(fromIndex); > h = h * 31 + (int) fromCoreOpenTime; > return h; > } > {code} > The URLs provided above selectively leave uninitialized the fields > {{fromField}}, {{fromIndex}}, {{q}}, and {{toField}}, but all of these fields > are accessed by this method. > > We found this issue and ~70 more like this using [Diffblue Microservices > Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more > information on this [fuzz testing > campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br]. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13202) Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode()
[ https://issues.apache.org/jira/browse/SOLR-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956942#comment-16956942 ] Pranay Kumar Chaudhary commented on SOLR-13202: --- I have added a null check for the mandatory params in the joinQuery. The issue is not coming up now. > Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode() > -- > > Key: SOLR-13202 > URL: https://issues.apache.org/jira/browse/SOLR-13202 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Cesar Rodriguez >Priority: Major > Labels: diffblue, newdev > Attachments: SOLR-13202.patch, home.zip > > > Requesting any of the following URLs causes Solr to return an HTTP 500 error > response: > {noformat} > http://localhost:8983/solr/films/select?fq={!join%20from=b%20to=a} > http://localhost:8983/solr/films/select?fq={!join%20to=a} > http://localhost:8983/solr/films/select?fq={!join} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > java.lang.NullPointerException > at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:578) > at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:52) > at > org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328) > at > org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567) > at > org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434) > at > org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > [...] 
> {noformat} > The problem seems to be related with method {{hasCode}} in the class > {{org.apache.solr.search.JoinQuery}}: > {code:java} > @Override > public int hashCode() { > int h = classHash(); > h = h * 31 + fromField.hashCode(); > h = h * 31 + toField.hashCode(); > h = h * 31 + q.hashCode(); > h = h * 31 + Objects.hashCode(fromIndex); > h = h * 31 + (int) fromCoreOpenTime; > return h; > } > {code} > The URLs provided above selectively leave uninitialized the fields > {{fromField}}, {{fromIndex}}, {{q}}, and {{toField}}, but all of these fields > are accessed by this method. > > We found this issue and ~70 more like this using [Diffblue Microservices > Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more > information on this [fuzz testing > campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br]. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13202) Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode()
[ https://issues.apache.org/jira/browse/SOLR-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956942#comment-16956942 ] Pranay Kumar Chaudhary edited comment on SOLR-13202 at 10/22/19 11:24 AM: -- [~cesar.rodriguez] I have added a null check for the mandatory params in the joinQuery. The issue is not coming up now. Please review the patch. was (Author: pranay3): I have added a null check for the mandatory params in the joinQuery. The issue is not coming up now. Please review the patch. > Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode() > -- > > Key: SOLR-13202 > URL: https://issues.apache.org/jira/browse/SOLR-13202 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Cesar Rodriguez >Priority: Major > Labels: diffblue, newdev > Attachments: SOLR-13202.patch, home.zip > > > Requesting any of the following URLs causes Solr to return an HTTP 500 error > response: > {noformat} > http://localhost:8983/solr/films/select?fq={!join%20from=b%20to=a} > http://localhost:8983/solr/films/select?fq={!join%20to=a} > http://localhost:8983/solr/films/select?fq={!join} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > java.lang.NullPointerException > at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:578) > at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:52) > at > org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328) > at > org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567) > at > org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434) > at > org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > [...] > {noformat} > The problem seems to be related with method {{hasCode}} in the class > {{org.apache.solr.search.JoinQuery}}: > {code:java} > @Override > public int hashCode() { > int h = classHash(); > h = h * 31 + fromField.hashCode(); > h = h * 31 + toField.hashCode(); > h = h * 31 + q.hashCode(); > h = h * 31 + Objects.hashCode(fromIndex); > h = h * 31 + (int) fromCoreOpenTime; > return h; > } > {code} > The URLs provided above selectively leave uninitialized the fields > {{fromField}}, {{fromIndex}}, {{q}}, and {{toField}}, but all of these fields > are accessed by this method. > > We found this issue and ~70 more like this using [Diffblue Microservices > Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more > information on this [fuzz testing > campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br]. -- This message was sent by Atl
[jira] [Comment Edited] (SOLR-13202) Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode()
[ https://issues.apache.org/jira/browse/SOLR-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16956942#comment-16956942 ] Pranay Kumar Chaudhary edited comment on SOLR-13202 at 10/22/19 11:24 AM: -- I have added a null check for the mandatory params in the joinQuery. The issue is not coming up now. Please review the patch. was (Author: pranay3): I have added a null check for the mandatory params in the joinQuery. The issue is not coming up now. > Three NullPointerExceptions in org.apache.solr.search.JoinQuery.hashCode() > -- > > Key: SOLR-13202 > URL: https://issues.apache.org/jira/browse/SOLR-13202 > Project: Solr > Issue Type: Bug >Affects Versions: master (9.0) > Environment: h1. Steps to reproduce > * Use a Linux machine. > * Build commit {{ea2c8ba}} of Solr as described in the section below. > * Build the films collection as described below. > * Start the server using the command {{./bin/solr start -f -p 8983 -s > /tmp/home}} > * Request the URL given in the bug description. > h1. Compiling the server > {noformat} > git clone https://github.com/apache/lucene-solr > cd lucene-solr > git checkout ea2c8ba > ant compile > cd solr > ant server > {noformat} > h1. Building the collection > We followed [Exercise > 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from > the [Solr > Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The > attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that > you will obtain by following the steps below: > {noformat} > mkdir -p /tmp/home > echo '' > > /tmp/home/solr.xml > {noformat} > In one terminal start a Solr instance in foreground: > {noformat} > ./bin/solr start -f -p 8983 -s /tmp/home > {noformat} > In another terminal, create a collection of movies, with no shards and no > replication, and initialize it: > {noformat} > bin/solr create -c films > curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": > {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' > http://localhost:8983/solr/films/schema > curl -X POST -H 'Content-type:application/json' --data-binary > '{"add-copy-field" : {"source":"*","dest":"_text_"}}' > http://localhost:8983/solr/films/schema > ./bin/post -c films example/films/films.json > {noformat} >Reporter: Cesar Rodriguez >Priority: Major > Labels: diffblue, newdev > Attachments: SOLR-13202.patch, home.zip > > > Requesting any of the following URLs causes Solr to return an HTTP 500 error > response: > {noformat} > http://localhost:8983/solr/films/select?fq={!join%20from=b%20to=a} > http://localhost:8983/solr/films/select?fq={!join%20to=a} > http://localhost:8983/solr/films/select?fq={!join} > {noformat} > The error response seems to be caused by the following uncaught exception: > {noformat} > java.lang.NullPointerException > at org.apache.solr.search.JoinQuery.hashCode(JoinQParserPlugin.java:578) > at org.apache.solr.search.QueryResultKey.(QueryResultKey.java:52) > at > org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1328) > at > org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:567) > at > org.apache.solr.handler.component.QueryComponent.doProcessUngroupedSearch(QueryComponent.java:1434) > at > org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:373) > at > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298) > at > 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) > at org.apache.solr.core.SolrCore.execute(SolrCore.java:2559) > [...] > {noformat} > The problem seems to be related with method {{hasCode}} in the class > {{org.apache.solr.search.JoinQuery}}: > {code:java} > @Override > public int hashCode() { > int h = classHash(); > h = h * 31 + fromField.hashCode(); > h = h * 31 + toField.hashCode(); > h = h * 31 + q.hashCode(); > h = h * 31 + Objects.hashCode(fromIndex); > h = h * 31 + (int) fromCoreOpenTime; > return h; > } > {code} > The URLs provided above selectively leave uninitialized the fields > {{fromField}}, {{fromIndex}}, {{q}}, and {{toField}}, but all of these fields > are accessed by this method. > > We found this issue and ~70 more like this using [Diffblue Microservices > Testing|https://www.diffblue.com/labs/?utm_source=solr-br]. Find more > information on this [fuzz testing > campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results?utm_source=solr-br]. -- This message was sent by Atlassian Jira (v8.3.4#803005)
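For context on what such a null check buys: the quoted hashCode() dereferences every field, so a {!join} missing a mandatory local param fails with an NPE (HTTP 500) instead of a clean error. The sketch below is an assumption about the approach, not the attached SOLR-13202.patch; it shows both options implied by the discussion, failing fast on missing params and making hashCode() null-tolerant:
{code:java}
import java.util.Objects;

class JoinQuerySketch {
  String fromField, toField, fromIndex;
  Object q;
  long fromCoreOpenTime;

  /** Option 1: validate mandatory local params up front and fail with a clear client error. */
  static void requireJoinParams(String from, String to) {
    if (from == null || to == null) {
      // In Solr this would be a SolrException with ErrorCode.BAD_REQUEST rather than a 500.
      throw new IllegalArgumentException("'from' and 'to' are required local params for {!join}");
    }
  }

  /** Option 2: make hashCode() tolerate nulls so it never throws. */
  @Override
  public int hashCode() {
    int h = getClass().hashCode();             // stands in for classHash() in the real query
    h = h * 31 + Objects.hashCode(fromField);  // null-safe, unlike fromField.hashCode()
    h = h * 31 + Objects.hashCode(toField);
    h = h * 31 + Objects.hashCode(q);
    h = h * 31 + Objects.hashCode(fromIndex);
    h = h * 31 + (int) fromCoreOpenTime;
    return h;
  }
}
{code}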
[GitHub] [lucene-solr] jgq2008303393 removed a comment on issue #940: LUCENE-9002: Query caching leads to absurdly slow queries
jgq2008303393 removed a comment on issue #940: LUCENE-9002: Query caching leads to absurdly slow queries URL: https://github.com/apache/lucene-solr/pull/940#issuecomment-543103567 @jpountz Please help to merge : ) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] jgq2008303393 commented on issue #940: LUCENE-9002: Query caching leads to absurdly slow queries
jgq2008303393 commented on issue #940: LUCENE-9002: Query caching leads to absurdly slow queries URL: https://github.com/apache/lucene-solr/pull/940#issuecomment-544947806 @jpountz Please help to merge : ) This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] jimczi merged pull request #963: LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery
jimczi merged pull request #963: LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery URL: https://github.com/apache/lucene-solr/pull/963 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9022) Never cache GlobalOrdinalsWithScoreQuery
[ https://issues.apache.org/jira/browse/LUCENE-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957042#comment-16957042 ] ASF subversion and git services commented on LUCENE-9022: - Commit c68470e57764f9c954008887d302a07a8bf6ae14 in lucene-solr's branch refs/heads/master from Jim Ferenczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c68470e ] LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery (#963) > Never cache GlobalOrdinalsWithScoreQuery > > > Key: LUCENE-9022 > URL: https://issues.apache.org/jira/browse/LUCENE-9022 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Today we disable caching for GlobalOrdinalsQuery > (https://issues.apache.org/jira/browse/LUCENE-8062) but not > GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is > sometimes executed in a filter context (despite the name, e.g. when min/max > are provided) so we should protect this query for the same reasons invoked in > LUCENE-8062. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9022) Never cache GlobalOrdinalsWithScoreQuery
[ https://issues.apache.org/jira/browse/LUCENE-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957045#comment-16957045 ] ASF subversion and git services commented on LUCENE-9022: - Commit c89ec4b07457f942b69187a08021af09ceb87fee in lucene-solr's branch refs/heads/branch_8x from Jim Ferenczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c89ec4b ] LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery (#963) > Never cache GlobalOrdinalsWithScoreQuery > > > Key: LUCENE-9022 > URL: https://issues.apache.org/jira/browse/LUCENE-9022 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Today we disable caching for GlobalOrdinalsQuery > (https://issues.apache.org/jira/browse/LUCENE-8062) but not > GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is > sometimes executed in a filter context (despite the name, e.g. when min/max > are provided) so we should protect this query for the same reasons invoked in > LUCENE-8062. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9022) Never cache GlobalOrdinalsWithScoreQuery
[ https://issues.apache.org/jira/browse/LUCENE-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957048#comment-16957048 ] ASF subversion and git services commented on LUCENE-9022: - Commit 5245f1cb053d107d6fdf0276f96e678a658c2e90 in lucene-solr's branch refs/heads/branch_8_3 from Jim Ferenczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5245f1c ] LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery (#963) > Never cache GlobalOrdinalsWithScoreQuery > > > Key: LUCENE-9022 > URL: https://issues.apache.org/jira/browse/LUCENE-9022 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Time Spent: 20m > Remaining Estimate: 0h > > Today we disable caching for GlobalOrdinalsQuery > (https://issues.apache.org/jira/browse/LUCENE-8062) but not > GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is > sometimes executed in a filter context (despite the name, e.g. when min/max > are provided) so we should protect this query for the same reasons invoked in > LUCENE-8062. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-9022) Never cache GlobalOrdinalsWithScoreQuery
[ https://issues.apache.org/jira/browse/LUCENE-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Ferenczi resolved LUCENE-9022. -- Fix Version/s: 8.3 master (9.0) Resolution: Fixed Thanks [~jpountz]! > Never cache GlobalOrdinalsWithScoreQuery > > > Key: LUCENE-9022 > URL: https://issues.apache.org/jira/browse/LUCENE-9022 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Fix For: master (9.0), 8.3 > > Time Spent: 20m > Remaining Estimate: 0h > > Today we disable caching for GlobalOrdinalsQuery > (https://issues.apache.org/jira/browse/LUCENE-8062) but not > GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is > sometimes executed in a filter context (despite the name, e.g. when min/max > are provided) so we should protect this query for the same reasons invoked in > LUCENE-8062. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9014) Design a new solution for publishing JavaDoc
[ https://issues.apache.org/jira/browse/LUCENE-9014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957063#comment-16957063 ] Alexandre Rafalovitch commented on LUCENE-9014: --- It does look good. When they build the Javadoc for a package they seem to cross-reference all the packages they host (so Solr Javadoc links to Lucene, other Apache packages, and core Java packages). The only question is whether our build process does anything special that they would not, and how to compensate for that. > Design a new solution for publishing JavaDoc > > > Key: LUCENE-9014 > URL: https://issues.apache.org/jira/browse/LUCENE-9014 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in to svn folders with hundreds of html files for JavaDoc. > Need a better way. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git
[ https://issues.apache.org/jira/browse/LUCENE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957072#comment-16957072 ] Alexandre Rafalovitch commented on LUCENE-9020: --- Something like [Netlify|https://www.netlify.com/] could be an option: they build in containers and host the static results. The free option is quite generous, though it only allows one user. I think we do some unusual things in our Asciidoc build, so it may not work with purely standard, non-customizable build services. > Find a way to publish Solr RefGuide without checking into git > - > > Key: LUCENE-9020 > URL: https://issues.apache.org/jira/browse/LUCENE-9020 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in all versions of RefGuide (hundreds of small html files) > into svn to publish as part of the site. With new site we should find a > smoother way to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git
[ https://issues.apache.org/jira/browse/LUCENE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957070#comment-16957070 ] Cassandra Targett commented on LUCENE-9020: --- There is not an external free hosted Asciidoc service that I'm aware of, but at any rate we're not hosting Asciidoc files, we're hosting HTML files. bq. With new site we should find a smoother way to do this. I'd like to question this premise as I think I'm missing something with this and with LUCENE-9014. The process to publish today is as simple as {{svn import }}, which is rather smooth from that point of view, so I feel like you may be trying to articulate a different problem? > Find a way to publish Solr RefGuide without checking into git > - > > Key: LUCENE-9020 > URL: https://issues.apache.org/jira/browse/LUCENE-9020 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in all versions of RefGuide (hundreds of small html files) > into svn to publish as part of the site. With new site we should find a > smoother way to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9023) GlobalOrdinalsWithScore should not compute occurrences when the provided min is 1
Jim Ferenczi created LUCENE-9023: Summary: GlobalOrdinalsWithScore should not compute occurrences when the provided min is 1 Key: LUCENE-9023 URL: https://issues.apache.org/jira/browse/LUCENE-9023 Project: Lucene - Core Issue Type: Improvement Reporter: Jim Ferenczi This is a continuation of https://issues.apache.org/jira/browse/LUCENE-9022 Today the GlobalOrdinalsWithScore collector and query checks the number of matching docs per parent if the provided min is greater than 0. However we should also not compute the occurrences of children when min is equals to 1 since this is the minimum requirement for a document to match. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] jimczi opened a new pull request #964: LUCENE-9023: GlobalOrdinalsWithScore should not compute occurrences when the provided min is 1
jimczi opened a new pull request #964: LUCENE-9023: GlobalOrdinalsWithScore should not compute occurrences when the provided min is 1 URL: https://github.com/apache/lucene-solr/pull/964 This is a continuation of https://issues.apache.org/jira/browse/LUCENE-9022 Today the GlobalOrdinalsWithScore collector and query checks the number of matching docs per parent if the provided min is greater than 0. However we should also not compute the occurrences of children when min is equals to 1 since this is the minimum requirement for a document to match. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
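The condition being changed is small but easy to get wrong, so here it is spelled out as a standalone predicate. This is illustrative only; in the real code the decision is made when the join picks its global-ordinals collector:
{code:java}
// min == 1 only requires that at least one child matched, which membership in the
// global-ordinals result already proves, so occurrence counting is pure overhead there.
class OccurrenceCountCheck {
  // Before the change: occurrences are counted whenever min > 0.
  static boolean countedBefore(int min) {
    return min > 0;
  }

  // After the change: only min > 1 genuinely needs per-parent occurrence counts
  // (an upper bound, if one is set, would presumably still force counting).
  static boolean countedAfter(int min) {
    return min > 1;
  }
}
{code}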
[jira] [Commented] (LUCENE-8062) Never cache GlobalOrdinalQuery
[ https://issues.apache.org/jira/browse/LUCENE-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957086#comment-16957086 ] ASF subversion and git services commented on LUCENE-8062: - Commit 421718607ebcf448f9931b462cce87f1e86670ce in lucene-solr's branch refs/heads/master from jimczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4217186 ] LUCENE-8062: Update CHANGES entry after backport to 8.3 > Never cache GlobalOrdinalQuery > -- > > Key: LUCENE-8062 > URL: https://issues.apache.org/jira/browse/LUCENE-8062 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Fix For: 7.2, 8.0 > > Attachments: LUCENE-8062.patch, LUCENE-8062.patch > > > GlobalOrdinalsQuery holds a possibly large bitset of global ordinals that can > pollute the query cache because the size of the query is not accounted in the > memory usage of the cache. > Moreover two instances of this query must share the same top reader context > to be considered equal so they are not the ideal candidate for segment level > caching. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8996) maxScore is sometimes missing from distributed grouped responses
[ https://issues.apache.org/jira/browse/LUCENE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957141#comment-16957141 ] Christine Poerschke commented on LUCENE-8996: - {quote}... About the LUCENE-9010 I would keep fix and tests together - because the tests prove that you actually fixed the issue ... {quote} The LUCENE-9010 patch includes two {{@Ignore}}-ed tests ({{testAllGroupsEmptyInSecondPass}} and {{testSomeGroupsEmptyInSecondPass}}) which demonstrate the issue and the fix here then would remove the {{@Ignore}} annotation and thus prove fixing of the issue. {quote}... I'm not sure about the narrative tests, but I agree on too many numbers and I can fix the tests by: ... {quote} I wonder if the separate LUCENE-9010 could actually help with this: * step 1: LUCENE-9010 three initial tests for existing behaviour including two {{@Ignore}} annotated tests to demonstrate the issue * step 2: LUCENE-8996 fix for the issue including two {{@Ignore}} annotation removals as proof-of-fix * step 3: LUCENE-9010 further test coverage and refining, including possible removal and replacement of the initial tests > maxScore is sometimes missing from distributed grouped responses > > > Key: LUCENE-8996 > URL: https://issues.apache.org/jira/browse/LUCENE-8996 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 5.3 >Reporter: Julien Massenet >Priority: Minor > Attachments: LUCENE-8996.patch, lucene_6_5-GroupingMaxScore.patch, > lucene_solr_5_3-GroupingMaxScore.patch, master-GroupingMaxScore.patch > > Time Spent: 10m > Remaining Estimate: 0h > > This issue occurs when using the grouping feature in distributed mode and > sorting by score. > Each group's {{docList}} in the response is supposed to contain a > {{maxScore}} entry that hold the maximum score for that group. Using the > current releases, it sometimes happens that this piece of information is not > included: > {code} > { > "responseHeader": { > "status": 0, > "QTime": 42, > "params": { > "sort": "score desc", > "fl": "id,score", > "q": "_text_:\"72\"", > "group.limit": "2", > "group.field": "group2", > "group.sort": "score desc", > "group": "true", > "wt": "json", > "fq": "group2:72 OR group2:45" > } > }, > "grouped": { > "group2": { > "matches": 567, > "groups": [ > { > "groupValue": 72, > "doclist": { > "numFound": 562, > "start": 0, > "maxScore": 2.0378063, > "docs": [ > { > "id": "29!26551", > "score": 2.0378063 > }, > { > "id": "78!11462", > "score": 2.0298104 > } > ] > } > }, > { > "groupValue": 45, > "doclist": { > "numFound": 5, > "start": 0, > "docs": [ > { > "id": "72!8569", > "score": 1.8988966 > }, > { > "id": "72!14075", > "score": 1.5191172 > } > ] > } > } > ] > } > } > } > {code} > Looking into the issue, it comes from the fact that if a shard does not > contain a document from that group, trying to merge its {{maxScore}} with > real {{maxScore}} entries from other shards is invalid (it results in NaN). > I'm attaching a patch containing a fix. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13857) QueryParser.jj produces code that will not compile
[ https://issues.apache.org/jira/browse/SOLR-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957143#comment-16957143 ] Erick Erickson commented on SOLR-13857: --- We need to be able to regenerate this code at will. That said, I don't know how gnarly changing QueryParser.jj would be to remove the deprecated methods > QueryParser.jj produces code that will not compile > -- > > Key: SOLR-13857 > URL: https://issues.apache.org/jira/browse/SOLR-13857 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Reporter: Gus Heck >Assignee: Gus Heck >Priority: Major > > There are 2 problems that have crept into the parser generation system. > # SOLR-8764 removed deprecated methods that are part of a generated > interface (and the implementation thereof). It's kind of stinky that Javacc > is generating an interface that includes deprecated methods, but deleting > them from the generated class means that re-generation causes compiler > errors, so this should probably be reverted. > # SOLR-11662 changed the signature of > org.apache.solr.parser.QueryParser#newFieldQuery to add a parameter, but did > not update the corresponding portion of the QueryParser.jj file, and so the > method signature reverts upon regeneration, causing compile errors. > # There are a few places where string concatenation was turned to .append() > The pull request to be attached soon fixes these two issues such that running > ant javacc-QueryParser will once again produce code that compiles. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8996) maxScore is sometimes missing from distributed grouped responses
[ https://issues.apache.org/jira/browse/LUCENE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated LUCENE-8996: Attachment: LUCENE-8996.patch > maxScore is sometimes missing from distributed grouped responses > > > Key: LUCENE-8996 > URL: https://issues.apache.org/jira/browse/LUCENE-8996 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 5.3 >Reporter: Julien Massenet >Priority: Minor > Attachments: LUCENE-8996.patch, LUCENE-8996.patch, > lucene_6_5-GroupingMaxScore.patch, lucene_solr_5_3-GroupingMaxScore.patch, > master-GroupingMaxScore.patch > > Time Spent: 10m > Remaining Estimate: 0h > > This issue occurs when using the grouping feature in distributed mode and > sorting by score. > Each group's {{docList}} in the response is supposed to contain a > {{maxScore}} entry that hold the maximum score for that group. Using the > current releases, it sometimes happens that this piece of information is not > included: > {code} > { > "responseHeader": { > "status": 0, > "QTime": 42, > "params": { > "sort": "score desc", > "fl": "id,score", > "q": "_text_:\"72\"", > "group.limit": "2", > "group.field": "group2", > "group.sort": "score desc", > "group": "true", > "wt": "json", > "fq": "group2:72 OR group2:45" > } > }, > "grouped": { > "group2": { > "matches": 567, > "groups": [ > { > "groupValue": 72, > "doclist": { > "numFound": 562, > "start": 0, > "maxScore": 2.0378063, > "docs": [ > { > "id": "29!26551", > "score": 2.0378063 > }, > { > "id": "78!11462", > "score": 2.0298104 > } > ] > } > }, > { > "groupValue": 45, > "doclist": { > "numFound": 5, > "start": 0, > "docs": [ > { > "id": "72!8569", > "score": 1.8988966 > }, > { > "id": "72!14075", > "score": 1.5191172 > } > ] > } > } > ] > } > } > } > {code} > Looking into the issue, it comes from the fact that if a shard does not > contain a document from that group, trying to merge its {{maxScore}} with > real {{maxScore}} entries from other shards is invalid (it results in NaN). > I'm attaching a patch containing a fix. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8996) maxScore is sometimes missing from distributed grouped responses
[ https://issues.apache.org/jira/browse/LUCENE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957147#comment-16957147 ] Christine Poerschke commented on LUCENE-8996: - Just attached LUCENE-8996.patch includes the {{@Ignore}} annotation removal stuff. And I'm going to go out on a limb here and: * commit the LUCENE-9010 test addition (totally happy for it to be reverted or replaced later) -- step 1 above * propose the fix for the issue here for the 8.3.0 RC2 respin -- step 2 above > maxScore is sometimes missing from distributed grouped responses > > > Key: LUCENE-8996 > URL: https://issues.apache.org/jira/browse/LUCENE-8996 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 5.3 >Reporter: Julien Massenet >Priority: Minor > Attachments: LUCENE-8996.patch, LUCENE-8996.patch, > lucene_6_5-GroupingMaxScore.patch, lucene_solr_5_3-GroupingMaxScore.patch, > master-GroupingMaxScore.patch > > Time Spent: 10m > Remaining Estimate: 0h > > This issue occurs when using the grouping feature in distributed mode and > sorting by score. > Each group's {{docList}} in the response is supposed to contain a > {{maxScore}} entry that hold the maximum score for that group. Using the > current releases, it sometimes happens that this piece of information is not > included: > {code} > { > "responseHeader": { > "status": 0, > "QTime": 42, > "params": { > "sort": "score desc", > "fl": "id,score", > "q": "_text_:\"72\"", > "group.limit": "2", > "group.field": "group2", > "group.sort": "score desc", > "group": "true", > "wt": "json", > "fq": "group2:72 OR group2:45" > } > }, > "grouped": { > "group2": { > "matches": 567, > "groups": [ > { > "groupValue": 72, > "doclist": { > "numFound": 562, > "start": 0, > "maxScore": 2.0378063, > "docs": [ > { > "id": "29!26551", > "score": 2.0378063 > }, > { > "id": "78!11462", > "score": 2.0298104 > } > ] > } > }, > { > "groupValue": 45, > "doclist": { > "numFound": 5, > "start": 0, > "docs": [ > { > "id": "72!8569", > "score": 1.8988966 > }, > { > "id": "72!14075", > "score": 1.5191172 > } > ] > } > } > ] > } > } > } > {code} > Looking into the issue, it comes from the fact that if a shard does not > contain a document from that group, trying to merge its {{maxScore}} with > real {{maxScore}} entries from other shards is invalid (it results in NaN). > I'm attaching a patch containing a fix. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
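The fix being proposed for the respin boils down to a NaN-aware merge of per-shard group maxScore values; a naive Math.max() propagates NaN from any shard that had no documents in the group. A minimal sketch of such a merge, assuming (as the issue description says) that an empty shard contributes Float.NaN:
{code:java}
class MaxScoreMergeSketch {
  static float mergeMaxScore(float current, float shardMax) {
    if (Float.isNaN(shardMax)) {
      return current;        // shard had no docs in this group; ignore its maxScore
    }
    if (Float.isNaN(current)) {
      return shardMax;       // first shard that actually contributed a score
    }
    return Math.max(current, shardMax);  // Math.max alone would return NaN if either is NaN
  }
}
{code}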
[jira] [Commented] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging
[ https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957149#comment-16957149 ] Erick Erickson commented on SOLR-13268: --- No clue really, but thanks for pointing this out. It's a place to look at least and I'm baffled. > Clean up any test failures resulting from defaulting to async logging > - > > Key: SOLR-13268 > URL: https://issues.apache.org/jira/browse/SOLR-13268 > Project: Solr > Issue Type: Bug >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Major > Attachments: SOLR-13268-flushing.patch, SOLR-13268.patch, > SOLR-13268.patch, SOLR-13268.patch > > Time Spent: 1h > Remaining Estimate: 0h > > This is a catch-all for test failures due to the async logging changes. So > far, the I see a couple failures on JDK13 only. I'll collect a "starter set" > here, these are likely systemic, once the root cause is found/fixed, then > others are likely fixed as well. > JDK13: > ant test -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E > -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV > -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8 > ant test -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E > -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk > -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true > -Dtests.file.encoding=UTF-8 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (LUCENE-8996) maxScore is sometimes missing from distributed grouped responses
[ https://issues.apache.org/jira/browse/LUCENE-8996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Christine Poerschke updated LUCENE-8996: Status: Patch Available (was: Open) > maxScore is sometimes missing from distributed grouped responses > > > Key: LUCENE-8996 > URL: https://issues.apache.org/jira/browse/LUCENE-8996 > Project: Lucene - Core > Issue Type: Bug >Affects Versions: 5.3 >Reporter: Julien Massenet >Priority: Minor > Attachments: LUCENE-8996.patch, LUCENE-8996.patch, > lucene_6_5-GroupingMaxScore.patch, lucene_solr_5_3-GroupingMaxScore.patch, > master-GroupingMaxScore.patch > > Time Spent: 10m > Remaining Estimate: 0h > > This issue occurs when using the grouping feature in distributed mode and > sorting by score. > Each group's {{docList}} in the response is supposed to contain a > {{maxScore}} entry that hold the maximum score for that group. Using the > current releases, it sometimes happens that this piece of information is not > included: > {code} > { > "responseHeader": { > "status": 0, > "QTime": 42, > "params": { > "sort": "score desc", > "fl": "id,score", > "q": "_text_:\"72\"", > "group.limit": "2", > "group.field": "group2", > "group.sort": "score desc", > "group": "true", > "wt": "json", > "fq": "group2:72 OR group2:45" > } > }, > "grouped": { > "group2": { > "matches": 567, > "groups": [ > { > "groupValue": 72, > "doclist": { > "numFound": 562, > "start": 0, > "maxScore": 2.0378063, > "docs": [ > { > "id": "29!26551", > "score": 2.0378063 > }, > { > "id": "78!11462", > "score": 2.0298104 > } > ] > } > }, > { > "groupValue": 45, > "doclist": { > "numFound": 5, > "start": 0, > "docs": [ > { > "id": "72!8569", > "score": 1.8988966 > }, > { > "id": "72!14075", > "score": 1.5191172 > } > ] > } > } > ] > } > } > } > {code} > Looking into the issue, it comes from the fact that if a shard does not > contain a document from that group, trying to merge its {{maxScore}} with > real {{maxScore}} entries from other shards is invalid (it results in NaN). > I'm attaching a patch containing a fix. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9010) extend TopGroups.merge test coverage
[ https://issues.apache.org/jira/browse/LUCENE-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957173#comment-16957173 ] ASF subversion and git services commented on LUCENE-9010: - Commit f8292f5372502598dc8cabc50642c4f783e1c811 in lucene-solr's branch refs/heads/master from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f8292f5 ] LUCENE-9010: extend TopGroups.merge test coverage > extend TopGroups.merge test coverage > > > Key: LUCENE-9010 > URL: https://issues.apache.org/jira/browse/LUCENE-9010 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Christine Poerschke >Priority: Minor > Attachments: LUCENE-9010.patch, LUCENE-9010.patch > > > This sub-task proposes to add test coverage for the {{TopGroups.merge}} > method, separately from but as preparation for LUCENE-8996 fixing the > 'maxScore is sometimes missing' bug. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-6376) Edismax field alias bug
[ https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957233#comment-16957233 ] Chris Schneider commented on SOLR-6376: --- Want to add more details, as I ran into this and it was pretty annoying to identify initially as the query looked correct. Not sure what exactly triggers it, but it does seem to require multiple fields. The below can reproduce exactly. I did this on 7.7.2 1. Define a couple fields. These can be whatever. I made them text_general with the names `field_1` and `field_2` 2. Index a couple documents: { "field_1":"some field", "field_2":"some other field", }, { "field_1":"2.0", "field_2":"~", } 3. Run query with edismax defType:edismax q=field_1:query^2.0 OR field_2:query~2 OR alias:query f.alias.qf=invalid_field qf=field_1 field_2 stopwords=true Returns: "response":{ "numFound":1," start":0," docs":[{ "field_1":["2.0"], "field_2":["~"], "id":"2c5f0f77-5c19-42b0-ad20-367187ed13ac", "_version_":1648113233049944064 }] Debug response: "parsedquery":"+(field_1:query field_1:2.0 DisjunctionMaxQuery((field_1:or | field_2:or)) field_2:query field_2:2 DisjunctionMaxQuery((field_1:or | field_2:or)))", "parsedquery_toString":"+(field_1:query field_1:2.0 (field_1:or | field_2:or) field_2:query field_2:2 (field_1:or | field_2:or))", > Edismax field alias bug > --- > > Key: SOLR-6376 > URL: https://issues.apache.org/jira/browse/SOLR-6376 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9, 4.10.1 >Reporter: Thomas Egense >Priority: Minor > Labels: difficulty-easy, edismax, impact-low > Attachments: SOLR-6376.patch, SOLR-6376.patch > > > If you create a field alias that maps to a nonexistent field, the query will > be parsed to utter garbage. > The bug can reproduced very easily. Add the following line to the /browse > request handler in the tutorial example solrconfig.xml > name features XXX > (XXX is a nonexistent field) > This simple query will actually work correctly: > name_features:video > and it will be parsed to (features:video | name:video) and return 3 results. > It has simply discarded the nonexistent field and the result set is correct. > However if you change the query to: > name_features:video AND name_features:video > you will now get 0 result and the query is parsed to > +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | > title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 > | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) > (features:video | name:video))~3) > Notice the AND operator is now used a term! The parsed query can turn out > even worse and produce query parts such as: > title:2~2 > title:and^2.0^10.0 > Prefered solution: During start up, shut down Solr if there is a nonexistant > field alias. Just as is the case if the cycle-detection detects a cycle: > Acceptable solution: Ignore the nonexistant field totally. > Thomas Egense -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-6376) Edismax field alias bug
[ https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957233#comment-16957233 ] Chris Schneider edited comment on SOLR-6376 at 10/22/19 5:35 PM: - Want to add more details, as I ran into this and it was pretty annoying to identify initially as the query looked correct. Not sure what exactly triggers it, but it does seem to require multiple fields. The below steps can reproduce. I did this on 7.7.2 1. Define a couple fields. These can be whatever. I made them text_general with the names *field_1* and *field_2* 2. Index a couple documents: {noformat} { "field_1":"some field", "field_2":"some other field" }, { "field_1":"2.0", "field_2":"~" }{noformat} 3. Run query with edismax {code:java} defType=edismax q=field_1:query^2.0 OR field_2:query~2 OR alias:query f.alias.qf=invalid_field qf=field_1 field_2 stopwords=true{code} Returns: {noformat} "response":{ "numFound":1," start":0," docs":[ { "field_1":["2.0"], "field_2":["~"], "id":"2c5f0f77-5c19-42b0-ad20-367187ed13ac", "_version_":1648113233049944064 } ] {noformat} Debug response: {noformat} "parsedquery":"+(field_1:query field_1:2.0 DisjunctionMaxQuery((field_1:or | field_2:or)) field_2:query field_2:2 DisjunctionMaxQuery((field_1:or | field_2:or)))", "parsedquery_toString":"+(field_1:query field_1:2.0 (field_1:or | field_2:or) field_2:query field_2:2 (field_1:or | field_2:or))", {noformat} was (Author: cschneider86): Want to add more details, as I ran into this and it was pretty annoying to identify initially as the query looked correct. Not sure what exactly triggers it, but it does seem to require multiple fields. The below steps can reproduce. I did this on 7.7.2 1. Define a couple fields. These can be whatever. I made them text_general with the names *field_1* and *field_2* 2. Index a couple documents: {noformat} { "field_1":"some field", "field_2":"some other field" }, { "field_1":"2.0", "field_2":"~" }{noformat} 3. Run query with edismax {code:java} defType:edismax q=field_1:query^2.0 OR field_2:query~2 OR alias:query f.alias.qf=invalid_field qf=field_1 field_2 stopwords=true{code} Returns: {noformat} "response":{ "numFound":1," start":0," docs":[ { "field_1":["2.0"], "field_2":["~"], "id":"2c5f0f77-5c19-42b0-ad20-367187ed13ac", "_version_":1648113233049944064 } ] {noformat} Debug response: {noformat} "parsedquery":"+(field_1:query field_1:2.0 DisjunctionMaxQuery((field_1:or | field_2:or)) field_2:query field_2:2 DisjunctionMaxQuery((field_1:or | field_2:or)))", "parsedquery_toString":"+(field_1:query field_1:2.0 (field_1:or | field_2:or) field_2:query field_2:2 (field_1:or | field_2:or))", {noformat} > Edismax field alias bug > --- > > Key: SOLR-6376 > URL: https://issues.apache.org/jira/browse/SOLR-6376 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9, 4.10.1 >Reporter: Thomas Egense >Priority: Minor > Labels: difficulty-easy, edismax, impact-low > Attachments: SOLR-6376.patch, SOLR-6376.patch > > > If you create a field alias that maps to a nonexistent field, the query will > be parsed to utter garbage. > The bug can reproduced very easily. Add the following line to the /browse > request handler in the tutorial example solrconfig.xml > name features XXX > (XXX is a nonexistent field) > This simple query will actually work correctly: > name_features:video > and it will be parsed to (features:video | name:video) and return 3 results. 
> It has simply discarded the nonexistent field and the result set is correct. > However if you change the query to: > name_features:video AND name_features:video > you will now get 0 result and the query is parsed to > +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | > title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 > | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) > (features:video | name:video))~3) > Notice the AND operator is now used a term! The parsed query can turn out > even worse and produce query parts such as: > title:2~2 > title:and^2.0^10.0 > Prefered solution: During start up, shut down Solr if there is a nonexistant > field alias. Just as is the case if the cycle-detection detects a cycle: > Acceptable solution: Ignore the nonexistant field totally. > Thomas Egense -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucen
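For reference, the reproduction above translates roughly into the following SolrJ sketch. The Solr URL, the collection name "test" and the use of HttpSolrClient are assumptions for illustration; the behaviour is the one reported on 7.7.2 above, not something verified here:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class EdismaxAliasRepro {
  public static void main(String[] args) throws Exception {
    // Assumes a local Solr with a collection named "test" containing the two
    // documents from the steps above (field_1 / field_2 as text_general).
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/test").build()) {
      SolrQuery q = new SolrQuery("field_1:query^2.0 OR field_2:query~2 OR alias:query");
      q.set("defType", "edismax");
      q.set("qf", "field_1 field_2");
      q.set("f.alias.qf", "invalid_field"); // alias pointing at a nonexistent field
      q.set("stopwords", "true");
      q.set("debugQuery", "true"); // expose parsedquery in the response

      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getResults());
      System.out.println(rsp.getDebugMap().get("parsedquery"));
    }
  }
}
{code}

Printing the parsedquery debug entry is the quickest way to see the OR operators being searched as terms against the qf fields, as in the debug output quoted above.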
[jira] [Comment Edited] (SOLR-6376) Edismax field alias bug
[ https://issues.apache.org/jira/browse/SOLR-6376?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957233#comment-16957233 ] Chris Schneider edited comment on SOLR-6376 at 10/22/19 5:35 PM: - Want to add more details, as I ran into this and it was pretty annoying to identify initially as the query looked correct. Not sure what exactly triggers it, but it does seem to require multiple fields. The below steps can reproduce. I did this on 7.7.2 1. Define a couple fields. These can be whatever. I made them text_general with the names *field_1* and *field_2* 2. Index a couple documents: {noformat} { "field_1":"some field", "field_2":"some other field" }, { "field_1":"2.0", "field_2":"~" }{noformat} 3. Run query with edismax {code:java} defType:edismax q=field_1:query^2.0 OR field_2:query~2 OR alias:query f.alias.qf=invalid_field qf=field_1 field_2 stopwords=true{code} Returns: {noformat} "response":{ "numFound":1," start":0," docs":[ { "field_1":["2.0"], "field_2":["~"], "id":"2c5f0f77-5c19-42b0-ad20-367187ed13ac", "_version_":1648113233049944064 } ] {noformat} Debug response: {noformat} "parsedquery":"+(field_1:query field_1:2.0 DisjunctionMaxQuery((field_1:or | field_2:or)) field_2:query field_2:2 DisjunctionMaxQuery((field_1:or | field_2:or)))", "parsedquery_toString":"+(field_1:query field_1:2.0 (field_1:or | field_2:or) field_2:query field_2:2 (field_1:or | field_2:or))", {noformat} was (Author: cschneider86): Want to add more details, as I ran into this and it was pretty annoying to identify initially as the query looked correct. Not sure what exactly triggers it, but it does seem to require multiple fields. The below can reproduce exactly. I did this on 7.7.2 1. Define a couple fields. These can be whatever. I made them text_general with the names `field_1` and `field_2` 2. Index a couple documents: { "field_1":"some field", "field_2":"some other field", }, { "field_1":"2.0", "field_2":"~", } 3. Run query with edismax defType:edismax q=field_1:query^2.0 OR field_2:query~2 OR alias:query f.alias.qf=invalid_field qf=field_1 field_2 stopwords=true Returns: "response":{ "numFound":1," start":0," docs":[{ "field_1":["2.0"], "field_2":["~"], "id":"2c5f0f77-5c19-42b0-ad20-367187ed13ac", "_version_":1648113233049944064 }] Debug response: "parsedquery":"+(field_1:query field_1:2.0 DisjunctionMaxQuery((field_1:or | field_2:or)) field_2:query field_2:2 DisjunctionMaxQuery((field_1:or | field_2:or)))", "parsedquery_toString":"+(field_1:query field_1:2.0 (field_1:or | field_2:or) field_2:query field_2:2 (field_1:or | field_2:or))", > Edismax field alias bug > --- > > Key: SOLR-6376 > URL: https://issues.apache.org/jira/browse/SOLR-6376 > Project: Solr > Issue Type: Bug > Components: query parsers >Affects Versions: 4.6.1, 4.7, 4.7.2, 4.8, 4.9, 4.10.1 >Reporter: Thomas Egense >Priority: Minor > Labels: difficulty-easy, edismax, impact-low > Attachments: SOLR-6376.patch, SOLR-6376.patch > > > If you create a field alias that maps to a nonexistent field, the query will > be parsed to utter garbage. > The bug can reproduced very easily. Add the following line to the /browse > request handler in the tutorial example solrconfig.xml > name features XXX > (XXX is a nonexistent field) > This simple query will actually work correctly: > name_features:video > and it will be parsed to (features:video | name:video) and return 3 results. > It has simply discarded the nonexistent field and the result set is correct. 
> However if you change the query to: > name_features:video AND name_features:video > you will now get 0 result and the query is parsed to > +(((features:video | name:video) (id:AND^10.0 | author:and^2.0 | > title:and^10.0 | cat:AND^1.4 | text:and^0.5 | keywords:and^5.0 | manu:and^1.1 > | description:and^5.0 | resourcename:and | name:and^1.2 | features:and) > (features:video | name:video))~3) > Notice the AND operator is now used a term! The parsed query can turn out > even worse and produce query parts such as: > title:2~2 > title:and^2.0^10.0 > Prefered solution: During start up, shut down Solr if there is a nonexistant > field alias. Just as is the case if the cycle-detection detects a cycle: > Acceptable solution: Ignore the nonexistant field totally. > Thomas Egense -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13856) 8.x HdfsWriteToMultipleCollectionsTest jenkins failures due to TImeoutException
[ https://issues.apache.org/jira/browse/SOLR-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957238#comment-16957238 ] Kevin Risden commented on SOLR-13856: - Hmm, nothing jumps out at me immediately either. From the surface, at least, I didn't screw up anything on the master/8x branches related to the Hadoop upgrade. Any idea if this is a relatively new failure or if this is something that just has been hidden underneath and failing for a while? > 8.x HdfsWriteToMultipleCollectionsTest jenkins failures due to > TImeoutException > --- > > Key: SOLR-13856 > URL: https://issues.apache.org/jira/browse/SOLR-13856 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Chris M. Hostetter >Priority: Major > Attachments: 8.3.fail1.log.txt, 8.3.fail2.log.txt, 8.3.fail3.log.txt, > 8.x.fail1.log.txt, 8.x.fail2.log.txt, 8.x.fail3.log.txt, > apache_Lucene-Solr-NightlyTests-8.3_25.log.txt, > apache_Lucene-Solr-repro_3681.log.txt > > > I've noticed a trend in jenkins failures where > HdfsWriteToMultipleCollectionsTest... > * does _NOT_ ever seem to fail on master even w/heavy beasting > * fails on 8.x (28c1049a258bbd060a80803c72e1c6cadc784dab) and 8.3 > (25968e3b75e5e9a4f2a64de10500aae10a257bdd) easily > ** failing seeds frequently reproduce, but not 100% > ** seeds reproduce even when tested using newer (ie: java11) JVMs > ** doesn't fail when commenting out HDFS aspects of test > *** suggests failure cause is somehow specific to HDFS, not differences in > the 8x/master HTTP/solr indexing stack... > *However:* There are currently zero differences between the *.hdfs.* packaged > solr code (src or test) on branch_8x vs master; likewise 8x and master also > use the exact same hadoop jars. > So what the hell is different? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13403) Terms component fails for DatePointField
[ https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-13403: Attachment: SOLR-13403.patch > Terms component fails for DatePointField > > > Key: SOLR-13403 > URL: https://issues.apache.org/jira/browse/SOLR-13403 > Project: Solr > Issue Type: Bug > Components: SearchComponents - other >Reporter: Munendra S N >Assignee: Munendra S N >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13403.patch, SOLR-13403.patch, SOLR-13403.patch, > SOLR-13403.patch > > > Getting terms for PointFields except DatePointField. For DatePointField, the > request fails NPE -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13403) Terms component fails for DatePointField
[ https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-13403: Status: Patch Available (was: Reopened) > Terms component fails for DatePointField > > > Key: SOLR-13403 > URL: https://issues.apache.org/jira/browse/SOLR-13403 > Project: Solr > Issue Type: Bug > Components: SearchComponents - other >Reporter: Munendra S N >Assignee: Munendra S N >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13403.patch, SOLR-13403.patch, SOLR-13403.patch, > SOLR-13403.patch > > > Getting terms for PointFields except DatePointField. For DatePointField, the > request fails NPE -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13403) Terms component fails for DatePointField
[ https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957242#comment-16957242 ] Munendra S N commented on SOLR-13403: - [^SOLR-13403.patch] The failure occurs when the collection/core doesn't contain any docs. It is possible in distrib mode that a shard might not have any docs. I have added tests for an empty index. > Terms component fails for DatePointField > > > Key: SOLR-13403 > URL: https://issues.apache.org/jira/browse/SOLR-13403 > Project: Solr > Issue Type: Bug > Components: SearchComponents - other >Reporter: Munendra S N >Assignee: Munendra S N >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13403.patch, SOLR-13403.patch, SOLR-13403.patch, > SOLR-13403.patch > > > Getting terms for PointFields except DatePointField. For DatePointField, the > request fails NPE -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
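As a rough illustration of the empty-index situation described above (the exact origin of the NPE is an assumption here, not a quote of the patch): on a core with no documents a field has no terms at the Lucene level, so the terms lookup returns null, and anything that dereferences it without a guard fails.

{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.MultiTerms;
import org.apache.lucene.index.Terms;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class EmptyIndexTermsSketch {
  public static void main(String[] args) throws Exception {
    Directory dir = new ByteBuffersDirectory();
    // Commit an empty index: no documents, therefore no terms for any field.
    try (IndexWriter w = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
      w.commit();
    }
    try (DirectoryReader reader = DirectoryReader.open(dir)) {
      Terms terms = MultiTerms.getTerms(reader, "a_date_field");
      if (terms == null) {
        // A guard like this (treating "no terms" as an empty result) is the
        // kind of handling the empty-index case needs instead of an NPE.
        System.out.println("no terms for field; return an empty terms response");
      } else {
        System.out.println("docCount=" + terms.getDocCount());
      }
    }
  }
}
{code}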
[jira] [Commented] (SOLR-13857) QueryParser.jj produces code that will not compile
[ https://issues.apache.org/jira/browse/SOLR-13857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957247#comment-16957247 ] Gus Heck commented on SOLR-13857: - AFAICT the interface is fundamental to javacc, and not determined by the jj file. Probably takes javacc update, so for now I'm figuring on adding an explanatory tombstone comment at every retained edit, so that a diff after regeneration effectively tells you what to do. > QueryParser.jj produces code that will not compile > -- > > Key: SOLR-13857 > URL: https://issues.apache.org/jira/browse/SOLR-13857 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: query parsers >Reporter: Gus Heck >Assignee: Gus Heck >Priority: Major > > There are 2 problems that have crept into the parser generation system. > # SOLR-8764 removed deprecated methods that are part of a generated > interface (and the implementation thereof). It's kind of stinky that Javacc > is generating an interface that includes deprecated methods, but deleting > them from the generated class means that re-generation causes compiler > errors, so this should probably be reverted. > # SOLR-11662 changed the signature of > org.apache.solr.parser.QueryParser#newFieldQuery to add a parameter, but did > not update the corresponding portion of the QueryParser.jj file, and so the > method signature reverts upon regeneration, causing compile errors. > # There are a few places where string concatenation was turned to .append() > The pull request to be attached soon fixes these two issues such that running > ant javacc-QueryParser will once again produce code that compiles. > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
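As a purely hypothetical illustration of that approach (the class name, method and wording below are invented, not taken from the PR), a tombstone next to a retained hand-edit in generated code might look like this:

{code:java}
// Invented example of a "tombstone" marker: every hand-edit that must survive
// regeneration carries a comment, so the diff produced after re-running javacc
// points directly at the edits that need to be re-applied.
public class GeneratedParserStub {

  // TOMBSTONE: hand-edited after generation (extra 'raw' parameter added).
  // If regeneration removes this signature, re-apply it before committing.
  protected String buildFieldClause(String field, String text, boolean raw) {
    return field + ":" + (raw ? text : text.trim());
  }
}
{code}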
[jira] [Commented] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git
[ https://issues.apache.org/jira/browse/LUCENE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957251#comment-16957251 ] Jan Høydahl commented on LUCENE-9020: - {quote}The process to publish today is as simple as {{svn import }}, which is rather smooth from that point of view, so I feel like you may be trying to articulate a different problem? {quote} Yes, that is true. If we're going to abandon the SVN based CMS for our website (it is deprecated) and migrate to GIT for hosting the site, we would probably face issues if we continue to dump and check in gigabytes of small HTML files into the git repo, as it won't handle it as well as SVN. That's also the reason we have a hack today to import those static files directly into the production tree instead of checking them into staging, which would kill the CMS. So instead of limiting ourselves to doing things the same way as before which was due to how the old CMS worked, we should instead ask ourselves "How would we do it today?". Tons of open source projects host their docs on ReadTheDocs and similar and they offer nice services with versioning, search, building docs from git branch etc. We should probably collaborate with INFRA to see if they have some advice. We could of course just start with the equivalent of what we do now and check in the ref guide HTML into the new lucene-site git repo and improve on that if it does not scale or whatever. > Find a way to publish Solr RefGuide without checking into git > - > > Key: LUCENE-9020 > URL: https://issues.apache.org/jira/browse/LUCENE-9020 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in all versions of RefGuide (hundreds of small html files) > into svn to publish as part of the site. With new site we should find a > smoother way to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-12368) in-place DV updates should no longer have to jump through hoops if field does not yet exist
[ https://issues.apache.org/jira/browse/SOLR-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957256#comment-16957256 ] ASF subversion and git services commented on SOLR-12368: Commit d5fab53bafb7844293421a42c19146715f909819 in lucene-solr's branch refs/heads/branch_8_3 from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d5fab53 ] SOLR-12368: fix indent in changes entry > in-place DV updates should no longer have to jump through hoops if field does > not yet exist > --- > > Key: SOLR-12368 > URL: https://issues.apache.org/jira/browse/SOLR-12368 > Project: Solr > Issue Type: Improvement >Reporter: Chris M. Hostetter >Assignee: Munendra S N >Priority: Major > Fix For: 8.3 > > Attachments: SOLR-12368.patch, SOLR-12368.patch, SOLR-12368.patch, > SOLR-12368.patch > > > When SOLR-5944 first added "in-place" DocValue updates to Solr, one of the > edge cases thta had to be dealt with was the limitation imposed by > IndexWriter that docValues could only be updated if they already existed - if > a shard did not yet have a document w/a value in the field where the update > was attempted, we would get an error. > LUCENE-8316 seems to have removed this error, which i believe means we can > simplify & speed up some of the checks in Solr, and support this situation as > well, rather then falling back on full "read stored fields & reindex" atomic > update -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13823) ClassCastException when using group.query and return score
[ https://issues.apache.org/jira/browse/SOLR-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Munendra S N updated SOLR-13823: Status: Patch Available (was: Reopened) > ClassCastException when using group.query and return score > -- > > Key: SOLR-13823 > URL: https://issues.apache.org/jira/browse/SOLR-13823 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: search >Affects Versions: 8.2, 8.1.1 >Reporter: Uwe Jäger >Assignee: Munendra S N >Priority: Major > Labels: Grouping > Attachments: SOLR-13823.patch > > Time Spent: 50m > Remaining Estimate: 0h > > When grouping the results of a query with group.query there is a > ClassCastException in org.apache.solr.search.Grouping.CommandQuery.finish > (line 890) since the collector is wrapped in a MultiCollector. > The wanted topCollector is available in the inner class so it can be used > directly and the cast is not necessary at all. After that change there are > still the scores missing in the result, so populating the scores is > necessary, too. > I will create a PR showing the error and also containing a fix. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
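The quoted description boils down to a type mismatch that can be demonstrated outside Solr. The sketch below is not the Grouping.CommandQuery code; it only illustrates, with stand-in collectors, why casting the wrapped collector fails once MultiCollector is involved, and why keeping a direct reference to the original collector avoids the cast entirely:

{code:java}
import org.apache.lucene.search.Collector;
import org.apache.lucene.search.MultiCollector;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.search.TotalHitCountCollector;

public class MultiCollectorCastSketch {
  public static void main(String[] args) {
    TopScoreDocCollector topCollector = TopScoreDocCollector.create(10, Integer.MAX_VALUE);

    // Wrapping more than one delegate yields a MultiCollector, not the
    // original TopScoreDocCollector.
    Collector wrapped = MultiCollector.wrap(topCollector, new TotalHitCountCollector());

    System.out.println(wrapped.getClass().getSimpleName());      // MultiCollector
    System.out.println(wrapped instanceof TopScoreDocCollector); // false
    // A blind cast of 'wrapped' back to the collector type is the
    // ClassCastException from the report; holding on to topCollector itself,
    // as the description suggests, sidesteps it.
    System.out.println(topCollector.getClass().getSimpleName());
  }
}
{code}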
[jira] [Updated] (SOLR-13856) 8.x HdfsWriteToMultipleCollectionsTest jenkins failures due to TImeoutException
[ https://issues.apache.org/jira/browse/SOLR-13856?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris M. Hostetter updated SOLR-13856: -- Attachment: HdfsWriteToMultipleCollectionsTest.fails.txt Status: Open (was: Open) {quote}Any idea if this is a relatively new failure or if this is something that just has been hidden underneath and failing for a while {quote} Well, it's a nightly test, so it doesn't run that frequently in general... I don't have the logs going back more than 7 days, so i can't confirm how long these *exact* errors have occurred, but i can tell you what jobs have seen _any_ asserts fail in {{HdfsWriteToMultipleCollectionsTest.test}} during the past ~2years .. [http://fucit.org/solr-jenkins-reports/reports/archive/] {noformat} reports/archive/daily$ ls -1 | grep method-failures.csv.gz | xargs zgrep -H 'HdfsWriteToMultipleCollectionsTest,test' | perl -ple 'if (/^(\d+-\d+-\d+)\..+,(.*)$/) { $_ = "$1\t$2"}' > ~/tmp/HdfsWriteToMultipleCollectionsTest.fails.txt {noformat} (Note: that rules out suite level failures, like suite timeouts, object leaks, etc... – it should just be assertion failures inside the {{test()}} method) I've attached that file, but: * if we ignore the 'repro' builds which could be on any branch * and if we focus solely on a single jenkins box at a time for apples to apples comparison Then it definitely looks like there has been an uptick in 8.x failures on the apache jenkins server since 2019-03-15 ... right around the time of SOLR-13330 (although that may just be coincidence) ... 6 total fails in 2019 compared to 2 in 2018 (although it looks like this test was BadApple'd for ~4months in 2018 .. so maybe that's just noise) {noformat} $ grep -v repro HdfsWriteToMultipleCollectionsTest.fails.txt | grep apache | sort 2018-01-01 apache/Lucene-Solr-NightlyTests-master/1443/ 2018-03-28 apache/Lucene-Solr-NightlyTests-7.x/185/ 2019-03-19 apache/Lucene-Solr-NightlyTests-8.x/48/ 2019-04-30 apache/Lucene-Solr-NightlyTests-8.x/86/ 2019-05-31 apache/Lucene-Solr-NightlyTests-8.1/40/ 2019-09-24 apache/Lucene-Solr-NightlyTests-master/1969/ 2019-10-16 apache/Lucene-Solr-NightlyTests-8.3/25/ {noformat} On the other hand, Uwe's jenkins hasn't seen any failures of this test method *ever* (i did check the original data though, he does occasionally report suite level failures, so he's definitely running the hdfs tests) ... {noformat} $ grep -v repro HdfsWriteToMultipleCollectionsTest.fails.txt | grep thetaphi | sort {noformat} And on steve's servers: no apparent pattern (but i also don't think steve's box runs full time ... there may be months when it wasn't running any tests at all)... {noformat} $ grep -v repro HdfsWriteToMultipleCollectionsTest.fails.txt | grep sarowe | sort 2018-12-26 sarowe/Lucene-Solr-Nightly-master/982/ 2019-01-28 sarowe/Lucene-Solr-Nightly-7.x/473/ 2019-04-30 sarowe/Lucene-Solr-Nightly-master/1055/ {noformat} ...so ... yeah. no clue. > 8.x HdfsWriteToMultipleCollectionsTest jenkins failures due to > TImeoutException > --- > > Key: SOLR-13856 > URL: https://issues.apache.org/jira/browse/SOLR-13856 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Chris M. 
Hostetter >Priority: Major > Attachments: 8.3.fail1.log.txt, 8.3.fail2.log.txt, 8.3.fail3.log.txt, > 8.x.fail1.log.txt, 8.x.fail2.log.txt, 8.x.fail3.log.txt, > HdfsWriteToMultipleCollectionsTest.fails.txt, > apache_Lucene-Solr-NightlyTests-8.3_25.log.txt, > apache_Lucene-Solr-repro_3681.log.txt > > > I've noticed a trend in jenkins failures where > HdfsWriteToMultipleCollectionsTest... > * does _NOT_ ever seem to fail on master even w/heavy beasting > * fails on 8.x (28c1049a258bbd060a80803c72e1c6cadc784dab) and 8.3 > (25968e3b75e5e9a4f2a64de10500aae10a257bdd) easily > ** failing seeds frequently reproduce, but not 100% > ** seeds reproduce even when tested using newer (ie: java11) JVMs > ** doesn't fail when commenting out HDFS aspects of test > *** suggests failure cause is somehow specific to HDFS, not differences in > the 8x/master HTTP/solr indexing stack... > *However:* There are currently zero differences between the *.hdfs.* packaged > solr code (src or test) on branch_8x vs master; likewise 8x and master also > use the exact same hadoop jars. > So what the hell is different? -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] dsmiley commented on issue #953: LUCENE-9006: WDGF catenateAll should come before parts
dsmiley commented on issue #953: LUCENE-9006: WDGF catenateAll should come before parts URL: https://github.com/apache/lucene-solr/pull/953#issuecomment-545111987 Thanks for the approval. Is there a simple test somewhere that illustrates that the graph is incorrect? Or if you were to write one that does, maybe `org.apache.lucene.analysis.BaseTokenStreamTestCase#getGraphStrings(org.apache.lucene.analysis.TokenStream)` would show the problem for "8-input"?I edited testCatenateAllEmittedBeforeParts with this code: ```Set graphStrings = getGraphStrings(a, "8-input"); System.out.println(graphStrings.toString());``` which produced: [input, 8-input, 8input] which looks correct; no? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] dsmiley edited a comment on issue #953: LUCENE-9006: WDGF catenateAll should come before parts
dsmiley edited a comment on issue #953: LUCENE-9006: WDGF catenateAll should come before parts URL: https://github.com/apache/lucene-solr/pull/953#issuecomment-545111987 Thanks for the approval. Is there a simple test somewhere that illustrates that the graph is incorrect? Or if you were to write one that does, maybe `org.apache.lucene.analysis.BaseTokenStreamTestCase#getGraphStrings(org.apache.lucene.analysis.TokenStream)` would show the problem for "8-input"? I edited testCatenateAllEmittedBeforeParts with this code: ``` Set<String> graphStrings = getGraphStrings(a, "8-input"); System.out.println(graphStrings.toString()); ``` which produced: [input, 8-input, 8input], which looks correct; no? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
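For reference, a more self-contained version of that experiment could look like the sketch below. The analyzer setup (word parts plus catenateAll, as in the issue description) is my assumption and may not match testCatenateAllEmittedBeforeParts exactly, and no particular output is asserted:

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.BaseTokenStreamTestCase;
import org.apache.lucene.analysis.MockTokenizer;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.miscellaneous.WordDelimiterGraphFilter;

public class TestWdgfGraphStringsSketch extends BaseTokenStreamTestCase {
  public void testGraphStringsForEightInput() throws Exception {
    // Flags from the LUCENE-9006 description: word parts and catenateAll,
    // but no number parts.
    final int flags = WordDelimiterGraphFilter.GENERATE_WORD_PARTS
        | WordDelimiterGraphFilter.CATENATE_ALL;
    try (Analyzer a = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer t = new MockTokenizer(MockTokenizer.WHITESPACE, false);
        return new TokenStreamComponents(t, new WordDelimiterGraphFilter(t, flags, null));
      }
    }) {
      // Prints the set of paths through the token graph for "8-input".
      System.out.println(getGraphStrings(a, "8-input"));
    }
  }
}
```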
[jira] [Commented] (SOLR-13854) Remove deprecated SolrMetricProducer.initializeMetrics API
[ https://issues.apache.org/jira/browse/SOLR-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957298#comment-16957298 ] ASF subversion and git services commented on SOLR-13854: Commit 1d7cd6157570ca12ba3480b082479a21dd5aa660 in lucene-solr's branch refs/heads/master from Andrzej Bialecki [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1d7cd61 ] SOLR-13854: Remove deprecated SolrMetricProducer.initializeMetrics API. > Remove deprecated SolrMetricProducer.initializeMetrics API > -- > > Key: SOLR-13854 > URL: https://issues.apache.org/jira/browse/SOLR-13854 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Fix For: master (9.0) > > > SOLR-13677 introduced an improved API for registration and cleanup of metrics > for Solr components. The previous API has been deprecated in 8x and it should > be removed in 9.0. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8987) Move Lucene web site from svn to git
[ https://issues.apache.org/jira/browse/LUCENE-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957310#comment-16957310 ] Adam Walz commented on LUCENE-8987: --- [~janhoy] Do you need any help with this issue? I'm interested in getting started contributing to Lucene, and the docs seem like a good place to get familiar with the community. > Move Lucene web site from svn to git > > > Key: LUCENE-8987 > URL: https://issues.apache.org/jira/browse/LUCENE-8987 > Project: Lucene - Core > Issue Type: Task > Components: general/website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: lucene-site-repo.png > > > INFRA just enabled [a new way of configuring website > build|https://s.apache.org/asfyaml] from a git branch, [see dev list > email|https://lists.apache.org/thread.html/b6f7e40bece5e83e27072ecc634a7815980c90240bc0a2ccb417f1fd@%3Cdev.lucene.apache.org%3E]. > It allows for automatic builds of both staging and production site, much > like the old CMS. We can choose to auto publish the html content of an > {{output/}} folder, or to have a bot build the site using > [Pelican|https://github.com/getpelican/pelican] from a {{content/}} folder. > The goal of this issue is to explore how this can be done for > [http://lucene.apache.org|http://lucene.apache.org/] by, by creating a new > git repo {{lucene-site}}, copy over the site from svn, see if it can be > "Pelicanized" easily and then test staging. Benefits are that more people > will be able to edit the web site and we can take PRs from the public (with > GitHub preview of pages). > Non-goals: > * Create a new web site or a new graphic design > * Change from Markdown to Asciidoc -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9020) Find a way to publish Solr RefGuide without checking into git
[ https://issues.apache.org/jira/browse/LUCENE-9020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957329#comment-16957329 ] Jan Høydahl commented on LUCENE-9020: - To be fair, our RefGuide is only 39MB across 260 HTML files & 178 images, and we host 11 versions of the guide on our site now. So git would probably cope quite well with a plain move from {{svn import}} to simply checking in a new folder in lucene-site git (the publish branch, not master). > Find a way to publish Solr RefGuide without checking into git > - > > Key: LUCENE-9020 > URL: https://issues.apache.org/jira/browse/LUCENE-9020 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Jan Høydahl >Priority: Major > > Currently we check in all versions of RefGuide (hundreds of small html files) > into svn to publish as part of the site. With new site we should find a > smoother way to do this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster
[ https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957332#comment-16957332 ] Jonathan J Senchyna commented on SOLR-13141: As a temporary workaround, it appears that bootstrapping with 1 replica, and then scaling out to multiple replicas after CDCR is enabled appears to work. > CDCR bootstrap does not replicate index to the replicas of target cluster > - > > Key: SOLR-13141 > URL: https://issues.apache.org/jira/browse/SOLR-13141 > Project: Solr > Issue Type: Bug > Components: CDCR >Affects Versions: 7.5, 7.6 > Environment: This is system independent problem - exists on windows > and linux - reproduced by independent developers >Reporter: Krzysztof Watral >Assignee: Shalin Shekhar Mangar >Priority: Critical > Fix For: master (9.0), 8.3 > > Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication > wasnt working at all.txt, type 2 - only few documents were being > replicated.txt > > > i have encountered some problems with CDCR that are related to the value of > {{replicationFactor}} param. > I ran the solr cloud on two datacenters with 2 nodes on each: > * dca: > ** dca_node_1 > ** dca_node_2 > * dcb > ** dcb_node_1 > ** dcb_node_2 > Then in sequence: > * I configured the CDCR on copy of *_default* config set named > *_default_cdcr* > * I created collection "customer" on both DC from *_default_cdcr* config set > with the following parameters: > ** {{numShards}} = 2 > ** {{maxShardsPerNode}} = 2 > ** {{replicationFactor}} = 2 > * I disabled cdcr buffer on collections > * I ran CDCR on both DC > CDCR has started without errors in logs. During indexation I have encountered > problem [^type 2 - only few documents were being replicated.txt], restart > didn't help (documents has not been synchronized between DC ) > Then: > * I stopped CDCR on both DC > * I stopped all solr nodes > * I restarted zookeepers on both DC > * I started all solr nodes one by one > * few minutes later I stared CDCR on both DC > * CDCR has starded with errors (replication between DC is not working) - > [^type 1 - replication wasnt working at all.txt] > {panel} > I've also discovered that problems appears only in case, when the > {{replicationFactor}} parameter is higher than one > {panel} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster
[ https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957332#comment-16957332 ] Jonathan J Senchyna edited comment on SOLR-13141 at 10/22/19 8:27 PM: -- As a temporary workaround, it bootstrapping with 1 replica, and then scaling out to multiple replicas after CDCR is enabled appears to work. was (Author: jsench): As a temporary workaround, it appears that bootstrapping with 1 replica, and then scaling out to multiple replicas after CDCR is enabled appears to work. > CDCR bootstrap does not replicate index to the replicas of target cluster > - > > Key: SOLR-13141 > URL: https://issues.apache.org/jira/browse/SOLR-13141 > Project: Solr > Issue Type: Bug > Components: CDCR >Affects Versions: 7.5, 7.6 > Environment: This is system independent problem - exists on windows > and linux - reproduced by independent developers >Reporter: Krzysztof Watral >Assignee: Shalin Shekhar Mangar >Priority: Critical > Fix For: master (9.0), 8.3 > > Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication > wasnt working at all.txt, type 2 - only few documents were being > replicated.txt > > > i have encountered some problems with CDCR that are related to the value of > {{replicationFactor}} param. > I ran the solr cloud on two datacenters with 2 nodes on each: > * dca: > ** dca_node_1 > ** dca_node_2 > * dcb > ** dcb_node_1 > ** dcb_node_2 > Then in sequence: > * I configured the CDCR on copy of *_default* config set named > *_default_cdcr* > * I created collection "customer" on both DC from *_default_cdcr* config set > with the following parameters: > ** {{numShards}} = 2 > ** {{maxShardsPerNode}} = 2 > ** {{replicationFactor}} = 2 > * I disabled cdcr buffer on collections > * I ran CDCR on both DC > CDCR has started without errors in logs. During indexation I have encountered > problem [^type 2 - only few documents were being replicated.txt], restart > didn't help (documents has not been synchronized between DC ) > Then: > * I stopped CDCR on both DC > * I stopped all solr nodes > * I restarted zookeepers on both DC > * I started all solr nodes one by one > * few minutes later I stared CDCR on both DC > * CDCR has starded with errors (replication between DC is not working) - > [^type 1 - replication wasnt working at all.txt] > {panel} > I've also discovered that problems appears only in case, when the > {{replicationFactor}} parameter is higher than one > {panel} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-13141) CDCR bootstrap does not replicate index to the replicas of target cluster
[ https://issues.apache.org/jira/browse/SOLR-13141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957332#comment-16957332 ] Jonathan J Senchyna edited comment on SOLR-13141 at 10/22/19 8:27 PM: -- As a temporary workaround, bootstrapping with 1 replica, and then scaling out to multiple replicas after CDCR is enabled appears to work. was (Author: jsench): As a temporary workaround, it bootstrapping with 1 replica, and then scaling out to multiple replicas after CDCR is enabled appears to work. > CDCR bootstrap does not replicate index to the replicas of target cluster > - > > Key: SOLR-13141 > URL: https://issues.apache.org/jira/browse/SOLR-13141 > Project: Solr > Issue Type: Bug > Components: CDCR >Affects Versions: 7.5, 7.6 > Environment: This is system independent problem - exists on windows > and linux - reproduced by independent developers >Reporter: Krzysztof Watral >Assignee: Shalin Shekhar Mangar >Priority: Critical > Fix For: master (9.0), 8.3 > > Attachments: SOLR-13141.patch, SOLR-13141.patch, type 1 - replication > wasnt working at all.txt, type 2 - only few documents were being > replicated.txt > > > i have encountered some problems with CDCR that are related to the value of > {{replicationFactor}} param. > I ran the solr cloud on two datacenters with 2 nodes on each: > * dca: > ** dca_node_1 > ** dca_node_2 > * dcb > ** dcb_node_1 > ** dcb_node_2 > Then in sequence: > * I configured the CDCR on copy of *_default* config set named > *_default_cdcr* > * I created collection "customer" on both DC from *_default_cdcr* config set > with the following parameters: > ** {{numShards}} = 2 > ** {{maxShardsPerNode}} = 2 > ** {{replicationFactor}} = 2 > * I disabled cdcr buffer on collections > * I ran CDCR on both DC > CDCR has started without errors in logs. During indexation I have encountered > problem [^type 2 - only few documents were being replicated.txt], restart > didn't help (documents has not been synchronized between DC ) > Then: > * I stopped CDCR on both DC > * I stopped all solr nodes > * I restarted zookeepers on both DC > * I started all solr nodes one by one > * few minutes later I stared CDCR on both DC > * CDCR has starded with errors (replication between DC is not working) - > [^type 1 - replication wasnt working at all.txt] > {panel} > I've also discovered that problems appears only in case, when the > {{replicationFactor}} parameter is higher than one > {panel} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] dsmiley closed pull request #953: LUCENE-9006: WDGF catenateAll should come before parts
dsmiley closed pull request #953: LUCENE-9006: WDGF catenateAll should come before parts URL: https://github.com/apache/lucene-solr/pull/953 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9006) Ensure WordDelimiterGraphFilter always emits catenateAll token early
[ https://issues.apache.org/jira/browse/LUCENE-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957336#comment-16957336 ] ASF subversion and git services commented on LUCENE-9006: - Commit 517bfd0ab75adb59ad85797118d263bebcf11f52 in lucene-solr's branch refs/heads/master from David Smiley [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=517bfd0 ] LUCENE-9006: WDGF catenateAll should come before parts Fixes #953 > Ensure WordDelimiterGraphFilter always emits catenateAll token early > > > Key: LUCENE-9006 > URL: https://issues.apache.org/jira/browse/LUCENE-9006 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Ideally, the first token of WDGF is the preserveOriginal (if configured to > emit), and the second should be the catenateAll (if configured to emit). The > deprecated WDF does this but WDGF can sometimes put the first other token > earlier when there is a non-emitted candidate sub-token. > Example input "8-other" when only generateWordParts and catenateAll -- *not* > generateNumberParts. WDGF internally sees the '8' but moves on. Ultimately, > the "other" token and the catenated "8other" will appear at the same internal > position, which by luck fools the sorter to emit "other" first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9006) Ensure WordDelimiterGraphFilter always emits catenateAll token early
[ https://issues.apache.org/jira/browse/LUCENE-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957337#comment-16957337 ] ASF subversion and git services commented on LUCENE-9006: - Commit bb3bcddedacc2a09e02077496d2c98cf12ec89e8 in lucene-solr's branch refs/heads/branch_8x from David Smiley [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bb3bcdd ] LUCENE-9006: WDGF catenateAll should come before parts Fixes #953 (cherry picked from commit 517bfd0ab75adb59ad85797118d263bebcf11f52) > Ensure WordDelimiterGraphFilter always emits catenateAll token early > > > Key: LUCENE-9006 > URL: https://issues.apache.org/jira/browse/LUCENE-9006 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Ideally, the first token of WDGF is the preserveOriginal (if configured to > emit), and the second should be the catenateAll (if configured to emit). The > deprecated WDF does this but WDGF can sometimes put the first other token > earlier when there is a non-emitted candidate sub-token. > Example input "8-other" when only generateWordParts and catenateAll -- *not* > generateNumberParts. WDGF internally sees the '8' but moves on. Ultimately, > the "other" token and the catenated "8other" will appear at the same internal > position, which by luck fools the sorter to emit "other" first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8987) Move Lucene web site from svn to git
[ https://issues.apache.org/jira/browse/LUCENE-8987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957347#comment-16957347 ] Jan Høydahl commented on LUCENE-8987: - Sure, all help welcome! This issue is about migrating the site from svn and an in-house static site builder over to git and a new supported static site builder. Feel free to engage in the discussions on any of the sub-tasks. It should be possible to contribute code through PRs towards the lucene-site repo, and then one of us committers will review and merge. > Move Lucene web site from svn to git > > > Key: LUCENE-8987 > URL: https://issues.apache.org/jira/browse/LUCENE-8987 > Project: Lucene - Core > Issue Type: Task > Components: general/website >Reporter: Jan Høydahl >Assignee: Jan Høydahl >Priority: Major > Attachments: lucene-site-repo.png > > > INFRA just enabled [a new way of configuring website > build|https://s.apache.org/asfyaml] from a git branch, [see dev list > email|https://lists.apache.org/thread.html/b6f7e40bece5e83e27072ecc634a7815980c90240bc0a2ccb417f1fd@%3Cdev.lucene.apache.org%3E]. > It allows for automatic builds of both staging and production site, much > like the old CMS. We can choose to auto publish the html content of an > {{output/}} folder, or to have a bot build the site using > [Pelican|https://github.com/getpelican/pelican] from a {{content/}} folder. > The goal of this issue is to explore how this can be done for > [http://lucene.apache.org|http://lucene.apache.org/] by, by creating a new > git repo {{lucene-site}}, copy over the site from svn, see if it can be > "Pelicanized" easily and then test staging. Benefits are that more people > will be able to edit the web site and we can take PRs from the public (with > GitHub preview of pages). > Non-goals: > * Create a new web site or a new graphic design > * Change from Markdown to Asciidoc -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13808) Query DSL should let to cache filter
[ https://issues.apache.org/jira/browse/SOLR-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13808: Status: Patch Available (was: Open) > Query DSL should let to cache filter > > > Key: SOLR-13808 > URL: https://issues.apache.org/jira/browse/SOLR-13808 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13808.patch > > > Query DSL let to express Lucene BQ's filter > > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar"}}} }}{code} > However, it might easily catch the need in caching it in filter cache. This > might rely on ExtensibleQuery and QParser: > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar", cache:true}}} }} > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13808) Query DSL should let to cache filter
[ https://issues.apache.org/jira/browse/SOLR-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13808: Attachment: SOLR-13808.patch Status: Open (was: Open) > Query DSL should let to cache filter > > > Key: SOLR-13808 > URL: https://issues.apache.org/jira/browse/SOLR-13808 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13808.patch > > > Query DSL let to express Lucene BQ's filter > > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar"}}} }}{code} > However, it might easily catch the need in caching it in filter cache. This > might rely on ExtensibleQuery and QParser: > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar", cache:true}}} }} > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-13859) ADDREPLICA stuck in OverseerCollectionMessageHandler.waitToSeeReplicasInState
Tomas Eduardo Fernandez Lobbe created SOLR-13859: Summary: ADDREPLICA stuck in OverseerCollectionMessageHandler.waitToSeeReplicasInState Key: SOLR-13859 URL: https://issues.apache.org/jira/browse/SOLR-13859 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud Reporter: Tomas Eduardo Fernandez Lobbe I noticed this every now and then in tests, ADDREPLICA command timeouts and it seems like the exceptions shows the command is stuck in {{OverseerCollectionMessageHandler.waitToSeeReplicasInState(OverseerCollectionMessageHandler.java:699)}}. There is section of the log {noformat} [junit4] 2> 160264 INFO (qtp1234431125-235) [n:127.0.0.1:56754_solr ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :addreplica with params action=ADDREPLICA&collection=tlog_replica_test_remove_leader&shard=shard1&type=TLOG&wt=javabin&version=2 and sendToOCPQueue=true [junit4] 2> 160269 INFO (OverseerThreadFactory-14-thread-5-processing-n:127.0.0.1:56754_solr) [n:127.0.0.1:56754_solr c:tlog_replica_test_remove_leader s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Node Identified 127.0.0.1:56754_solr for creating new replica of shard shard1 for collection tlog_replica_test_remove_leader [junit4] 2> 160271 INFO (OverseerThreadFactory-14-thread-5-processing-n:127.0.0.1:56754_solr) [n:127.0.0.1:56754_solr c:tlog_replica_test_remove_leader s:shard1 ] o.a.s.c.a.c.AddReplicaCmd Returning CreateReplica command. [junit4] 2> 160274 INFO (OverseerStateUpdate-72113680894263303-127.0.0.1:56754_solr-n_00) [n:127.0.0.1:56754_solr ] o.a.s.c.o.SliceMutator createReplica() { [junit4] 2> "operation":"addreplica", [junit4] 2> "collection":"tlog_replica_test_remove_leader", [junit4] 2> "shard":"shard1", [junit4] 2> "core":"tlog_replica_test_remove_leader_shard1_replica_t5", [junit4] 2> "state":"down", [junit4] 2> "base_url":"http://127.0.0.1:56754/solr";, [junit4] 2> "node_name":"127.0.0.1:56754_solr", [junit4] 2> "type":"TLOG"} [junit4] 2> 160385 INFO (zkCallback-163-thread-3) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/tlog_replica_test_remove_leader/state.json] for collection [tlog_replica_test_remove_leader] has occurred - updating... (live nodes size: [2]) [junit4] 2> 160385 INFO (zkCallback-163-thread-2) [ ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent state:SyncConnected type:NodeDataChanged path:/collections/tlog_replica_test_remove_leader/state.json] for collection [tlog_replica_test_remove_leader] has occurred - updating... 
(live nodes size: [2]) [junit4] 2> 210134 INFO (qtp1234431125-603) [n:127.0.0.1:56754_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used} status=0 QTime=1 [junit4] 2> 210269 INFO (qtp50249358-694) [n:127.0.0.1:56755_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.tlog_replica_test_remove_leader.shard1.replica_t2:INDEX.sizeInBytes&key=solr.core.tlog_replica_test_remove_leader.shard1.replica_t2:UPDATE./update.requests&key=solr.core.tlog_replica_test_r emove_leader.shard1.replica_t2:QUERY./select.requests} status=0 QTime=131 [junit4] 2> 210272 INFO (qtp50249358-689) [n:127.0.0.1:56755_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used} status=0 QTime=1 [junit4] 2> 250262 INFO (TEST-TestTlogReplica.testRemoveLeader-seed#[9E36ECDD7B3349CD]) [ ] o.a.s.c.TestTlogReplica tearDown deleting collection [junit4] 2> 250265 INFO (qtp1234431125-603) [n:127.0.0.1:56754_solr ] o.a.s.h.a.CollectionsHandler Invoked Collection Action :delete with params name=tlog_replica_test_remove_leader&action=DELETE&wt=javabin&version=2 and sendToOCPQueue=true [junit4] 2> 270281 INFO (qtp1234431125-600) [n:127.0.0.1:56754_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used} status=0 QTime=1 [junit4] 2> 270334 INFO (qtp50249358-689) [n:127.0.0.1:56755_solr ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/metrics params={wt=javabin&version=2&key=solr.core.tlog_replica_test_remove_leader.shard1.repli
[jira] [Updated] (SOLR-13808) Query DSL should let to cache filter
[ https://issues.apache.org/jira/browse/SOLR-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13808: Attachment: (was: SOLR-13808.patch) > Query DSL should let to cache filter > > > Key: SOLR-13808 > URL: https://issues.apache.org/jira/browse/SOLR-13808 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13808.patch > > > Query DSL let to express Lucene BQ's filter > > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar"}}} }}{code} > However, it might easily catch the need in caching it in filter cache. This > might rely on ExtensibleQuery and QParser: > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar", cache:true}}} }} > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13808) Query DSL should let to cache filter
[ https://issues.apache.org/jira/browse/SOLR-13808?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mikhail Khludnev updated SOLR-13808: Attachment: SOLR-13808.patch Status: Patch Available (was: Patch Available) > Query DSL should let to cache filter > > > Key: SOLR-13808 > URL: https://issues.apache.org/jira/browse/SOLR-13808 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mikhail Khludnev >Priority: Major > Attachments: SOLR-13808.patch > > > Query DSL let to express Lucene BQ's filter > > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar"}}} }}{code} > However, it might easily catch the need in caching it in filter cache. This > might rely on ExtensibleQuery and QParser: > {code:java} > { query: {bool: { filter: {term: {f:name,query:"foo bar", cache:true}}} }} > {code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (SOLR-13860) Enable back TestTlogReplica
Tomas Eduardo Fernandez Lobbe created SOLR-13860: Summary: Enable back TestTlogReplica Key: SOLR-13860 URL: https://issues.apache.org/jira/browse/SOLR-13860 Project: Solr Issue Type: Task Security Level: Public (Default Security Level. Issues are Public) Components: SolrCloud, Tests Reporter: Tomas Eduardo Fernandez Lobbe Assignee: Tomas Eduardo Fernandez Lobbe {{TestTlogReplica}} was disabled in the past due to random failures. This Jira is to fix those failures and re-enable the test. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] tflobbe opened a new pull request #965: SOLR-13860: Enable back TestTlogReplica
tflobbe opened a new pull request #965: SOLR-13860: Enable back TestTlogReplica URL: https://github.com/apache/lucene-solr/pull/965 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Updated] (SOLR-13831) Support defining arbitrary autoscaling simulation scenarios
[ https://issues.apache.org/jira/browse/SOLR-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki updated SOLR-13831: Attachment: SOLR-13831.patch > Support defining arbitrary autoscaling simulation scenarios > --- > > Key: SOLR-13831 > URL: https://issues.apache.org/jira/browse/SOLR-13831 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-13831.patch, SOLR-13831.patch > > > In many cases where the {{bin/solr autoscaling}} tool is used it would be > very useful to be able to specify a concrete scenario to play out, eg: > * load a snapshot (or create a simulated cluster of N nodes) > * calculate suggestions > * apply suggestions > * kill one or more nodes > * loop N times > * make some arbitrary SolrRequest-s > * save snapshot > * etc... > > This could be expressed as a very simple DSL that can be loaded from a text > file, with the following format: > {code:java} > # comments > // or comments > create_cluster numNodes=5 // inline comment > solr_request > /admin/collections?action=CREATE&name=testCollection&numShards=2&replicationFactor=2 > loop_start iterations=10 > calculate_suggestions > apply_suggestions > loop_end > save_snapshot path=/foo{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13831) Support defining arbitrary autoscaling simulation scenarios
[ https://issues.apache.org/jira/browse/SOLR-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957408#comment-16957408 ] Andrzej Bialecki commented on SOLR-13831: - Updated patch: * support for index_docs, set_node_metrics, set_shard_metrics and assert * some renaming: set_listener -> event_listener, wait_listener -> wait_event * support for other methods than GET and {{stream.body}} in solr_request. I think that at this point the functionality is complete enough to be useful, so I plan to commit this shortly. Further enhancements can be tracked separately. > Support defining arbitrary autoscaling simulation scenarios > --- > > Key: SOLR-13831 > URL: https://issues.apache.org/jira/browse/SOLR-13831 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-13831.patch, SOLR-13831.patch > > > In many cases where the {{bin/solr autoscaling}} tool is used it would be > very useful to be able to specify a concrete scenario to play out, eg: > * load a snapshot (or create a simulated cluster of N nodes) > * calculate suggestions > * apply suggestions > * kill one or more nodes > * loop N times > * make some arbitrary SolrRequest-s > * save snapshot > * etc... > > This could be expressed as a very simple DSL that can be loaded from a text > file, with the following format: > {code:java} > # comments > // or comments > create_cluster numNodes=5 // inline comment > solr_request > /admin/collections?action=CREATE&name=testCollection&numShards=2&replicationFactor=2 > loop_start iterations=10 > calculate_suggestions > apply_suggestions > loop_end > save_snapshot path=/foo{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13831) Support defining arbitrary autoscaling simulation scenarios
[ https://issues.apache.org/jira/browse/SOLR-13831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957413#comment-16957413 ] Andrzej Bialecki commented on SOLR-13831: - Example scenario that exercises the {{IndexSizeTrigger}}: {code} create_cluster numNodes=100 solr_request /admin/collections?action=CREATE&autoAddReplicas=true&name=testCollection&numShards=2&replicationFactor=2&maxShardsPerNode=2 wait_collection collection=testCollection&shards=2&replicas=2 solr_request /admin/autoscaling?httpMethod=POST&stream.body= {'set-trigger':{'name':'indexSizeTrigger','event':'indexSize','waitFor':'10s','aboveDocs':1000,'enabled':true, 'actions':[{'name':'compute_plan','class':'solr.ComputePlanAction'},{'name':'execute_plan','class':'solr.ExecutePlanAction'}]}} event_listener trigger=indexSizeTrigger&stage=SUCCEEDED index_docs collection=testCollection&numDocs=3000 run wait_event trigger=indexSizeTrigger&wait=60 assert condition=not_null&key=_trigger_event_indexSizeTrigger assert condition=equals&key=_trigger_event_indexSizeTrigger/eventType&expected=INDEXSIZE assert condition=equals&key=_trigger_event_indexSizeTrigger/properties/requestedOps[0]/action&expected=SPLITSHARD wait_collection collection=testCollection&shards=6&withInactive=true&requireLeaders=false&replicas=2 {code} > Support defining arbitrary autoscaling simulation scenarios > --- > > Key: SOLR-13831 > URL: https://issues.apache.org/jira/browse/SOLR-13831 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Attachments: SOLR-13831.patch, SOLR-13831.patch > > > In many cases where the {{bin/solr autoscaling}} tool is used it would be > very useful to be able to specify a concrete scenario to play out, eg: > * load a snapshot (or create a simulated cluster of N nodes) > * calculate suggestions > * apply suggestions > * kill one or more nodes > * loop N times > * make some arbitrary SolrRequest-s > * save snapshot > * etc... > > This could be expressed as a very simple DSL that can be loaded from a text > file, with the following format: > {code:java} > # comments > // or comments > create_cluster numNodes=5 // inline comment > solr_request > /admin/collections?action=CREATE&name=testCollection&numShards=2&replicationFactor=2 > loop_start iterations=10 > calculate_suggestions > apply_suggestions > loop_end > save_snapshot path=/foo{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8062) Never cache GlobalOrdinalQuery
[ https://issues.apache.org/jira/browse/LUCENE-8062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957454#comment-16957454 ] ASF subversion and git services commented on LUCENE-8062: - Commit 421718607ebcf448f9931b462cce87f1e86670ce in lucene-solr's branch refs/heads/jira/SOLR-13822 from jimczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4217186 ] LUCENE-8062: Update CHANGES entry after backport to 8.3 > Never cache GlobalOrdinalQuery > -- > > Key: LUCENE-8062 > URL: https://issues.apache.org/jira/browse/LUCENE-8062 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Fix For: 7.2, 8.0 > > Attachments: LUCENE-8062.patch, LUCENE-8062.patch > > > GlobalOrdinalsQuery holds a possibly large bitset of global ordinals that can > pollute the query cache because the size of the query is not accounted in the > memory usage of the cache. > Moreover two instances of this query must share the same top reader context > to be considered equal so they are not the ideal candidate for segment level > caching. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
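For context on the caching concern in LUCENE-8062: Lucene's LRU query cache is configured per IndexSearcher, so an application that wants to rule out this kind of cache pollution entirely can opt out of caching altogether. A minimal sketch using the standard IndexSearcher API (the index path is a placeholder):
{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.store.FSDirectory;

public class NoQueryCacheSketch {
  public static void main(String[] args) throws Exception {
    // "/path/to/index" is a placeholder.
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index"));
         DirectoryReader reader = DirectoryReader.open(dir)) {
      IndexSearcher searcher = new IndexSearcher(reader);
      // Opt out of the query cache for this searcher; queries such as the
      // global-ordinals join queries discussed above will then never be cached,
      // regardless of the caching policy.
      searcher.setQueryCache(null);
      // ... run join queries as usual with this searcher ...
    }
  }
}
{code}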
[jira] [Commented] (SOLR-13854) Remove deprecated SolrMetricProducer.initializeMetrics API
[ https://issues.apache.org/jira/browse/SOLR-13854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957456#comment-16957456 ] ASF subversion and git services commented on SOLR-13854: Commit 1d7cd6157570ca12ba3480b082479a21dd5aa660 in lucene-solr's branch refs/heads/jira/SOLR-13822 from Andrzej Bialecki [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1d7cd61 ] SOLR-13854: Remove deprecated SolrMetricProducer.initializeMetrics API. > Remove deprecated SolrMetricProducer.initializeMetrics API > -- > > Key: SOLR-13854 > URL: https://issues.apache.org/jira/browse/SOLR-13854 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Andrzej Bialecki >Assignee: Andrzej Bialecki >Priority: Major > Fix For: master (9.0) > > > SOLR-13677 introduced an improved API for registration and cleanup of metrics > for Solr components. The previous API has been deprecated in 8x and it should > be removed in 9.0. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13822) Isolated Classloading from packages
[ https://issues.apache.org/jira/browse/SOLR-13822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957458#comment-16957458 ] ASF subversion and git services commented on SOLR-13822: Commit da099834f586b24c2c61035eee229ef2363ba332 in lucene-solr's branch refs/heads/jira/SOLR-13822 from Noble Paul [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=da09983 ] Merge branch 'master' into jira/SOLR-13822 > Isolated Classloading from packages > --- > > Key: SOLR-13822 > URL: https://issues.apache.org/jira/browse/SOLR-13822 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Noble Paul >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Design is here: > https://docs.google.com/document/d/15b3m3i3NFDKbhkhX_BN0MgvPGZaBj34TKNF2-UNC3U8/edit?ts=5d86a8ad# -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8992) Share minimum score across segments in concurrent search
[ https://issues.apache.org/jira/browse/LUCENE-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957449#comment-16957449 ] ASF subversion and git services commented on LUCENE-8992: - Commit cfa49401671b5f9958d46c04120df8c7e3f358be in lucene-solr's branch refs/heads/jira/SOLR-13822 from jimczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cfa4940 ] LUCENE-8992: Update CHANGES after backport to 8x > Share minimum score across segments in concurrent search > > > Key: LUCENE-8992 > URL: https://issues.apache.org/jira/browse/LUCENE-8992 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Fix For: master (9.0), 8.4 > > Time Spent: 6h 10m > Remaining Estimate: 0h > > As a follow up of LUCENE-8978 we should share the minimum score in > concurrent search > for top field collectors that sort on relevance first. The same logic should > be applicable with the only condition that the primary sort is by relevance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
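For readers unfamiliar with concurrent search in Lucene (LUCENE-8992): passing an Executor to IndexSearcher makes it search groups of segments (slices) in parallel, which is the code path this change targets. A minimal sketch with an assumed index path and field name:
{code:java}
import java.nio.file.Paths;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.TermQuery;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.FSDirectory;

public class ConcurrentSearchSketch {
  public static void main(String[] args) throws Exception {
    ExecutorService executor = Executors.newFixedThreadPool(4);
    try (FSDirectory dir = FSDirectory.open(Paths.get("/path/to/index")); // placeholder path
         DirectoryReader reader = DirectoryReader.open(dir)) {
      // With an executor, index slices are searched concurrently; the change in this
      // issue lets those slices share the minimum competitive score when the primary
      // sort is by relevance.
      IndexSearcher searcher = new IndexSearcher(reader, executor);
      TopDocs top = searcher.search(new TermQuery(new Term("body", "lucene")), 10);
      System.out.println("totalHits: " + top.totalHits);
    } finally {
      executor.shutdown();
    }
  }
}
{code}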
[jira] [Commented] (SOLR-13824) JSON Request API to reject anything after JSON object is over
[ https://issues.apache.org/jira/browse/SOLR-13824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957450#comment-16957450 ] ASF subversion and git services commented on SOLR-13824: Commit afdb80069cc7a7972411b90dd08847cac574e3dd in lucene-solr's branch refs/heads/jira/SOLR-13822 from Mikhail Khludnev [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=afdb800 ] SOLR-13824: reject prematurely closed curly bracket in JSON. > JSON Request API to reject anything after JSON object is over > - > > Key: SOLR-13824 > URL: https://issues.apache.org/jira/browse/SOLR-13824 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: JSON Request API >Reporter: Mikhail Khludnev >Assignee: Mikhail Khludnev >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13824.patch, SOLR-13824.patch, SOLR-13824.patch, > SOLR-13824.patch > > > {code:java} > json={query:"content:foo", facet:{zz:{field:id}}} > {code} > this works fine, but if we mistype {{}}} instead of {{,}} > {code:java} > json={query:"content:foo"} facet:{zz:{field:id}}} > {code} > It's captured only partially, here's we have under debug > {code:java} > "json":{"query":"content:foo"}, > {code} > I suppose it should throw an error with 400 code. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13403) Terms component fails for DatePointField
[ https://issues.apache.org/jira/browse/SOLR-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957452#comment-16957452 ] ASF subversion and git services commented on SOLR-13403: Commit 597241a412a0a27fa3a915df2934de3fdb5a376f in lucene-solr's branch refs/heads/jira/SOLR-13822 from Munendra S N [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=597241a ] SOLR-13403: disable distrib test for point fields in terms > Terms component fails for DatePointField > > > Key: SOLR-13403 > URL: https://issues.apache.org/jira/browse/SOLR-13403 > Project: Solr > Issue Type: Bug > Components: SearchComponents - other >Reporter: Munendra S N >Assignee: Munendra S N >Priority: Major > Fix For: 8.4 > > Attachments: SOLR-13403.patch, SOLR-13403.patch, SOLR-13403.patch, > SOLR-13403.patch > > > Getting terms for PointFields except DatePointField. For DatePointField, the > request fails NPE -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
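For reproduction purposes on SOLR-13403, here is a hedged SolrJ sketch of the kind of terms request that hit the NPE; the collection and field names are assumptions, with the field presumed to be declared as a DatePointField in the schema and the /terms handler registered in solrconfig.xml.
{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class TermsOnDatePointSketch {
  public static void main(String[] args) throws Exception {
    // "techproducts" and "manufacturedate_dt" are illustrative names only.
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery();
      q.setRequestHandler("/terms");
      q.setTerms(true);
      q.addTermsField("manufacturedate_dt");
      QueryResponse rsp = client.query(q);
      // Before the fix, a request like this failed with an NPE for DatePointField,
      // while other point field types returned their terms.
      System.out.println(rsp.getTermsResponse().getTermMap());
    }
  }
}
{code}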
[jira] [Commented] (LUCENE-9006) Ensure WordDelimiterGraphFilter always emits catenateAll token early
[ https://issues.apache.org/jira/browse/LUCENE-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957457#comment-16957457 ] ASF subversion and git services commented on LUCENE-9006: - Commit 517bfd0ab75adb59ad85797118d263bebcf11f52 in lucene-solr's branch refs/heads/jira/SOLR-13822 from David Smiley [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=517bfd0 ] LUCENE-9006: WDGF catenateAll should come before parts Fixes #953 > Ensure WordDelimiterGraphFilter always emits catenateAll token early > > > Key: LUCENE-9006 > URL: https://issues.apache.org/jira/browse/LUCENE-9006 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Time Spent: 40m > Remaining Estimate: 0h > > Ideally, the first token of WDGF is the preserveOriginal (if configured to > emit), and the second should be the catenateAll (if configured to emit). The > deprecated WDF does this but WDGF can sometimes put the first other token > earlier when there is a non-emitted candidate sub-token. > Example input "8-other" when only generateWordParts and catenateAll -- *not* > generateNumberParts. WDGF internally sees the '8' but moves on. Ultimately, > the "other" token and the catenated "8other" will appear at the same internal > position, which by luck fools the sorter to emit "other" first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9010) extend TopGroups.merge test coverage
[ https://issues.apache.org/jira/browse/LUCENE-9010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957455#comment-16957455 ] ASF subversion and git services commented on LUCENE-9010: - Commit f8292f5372502598dc8cabc50642c4f783e1c811 in lucene-solr's branch refs/heads/jira/SOLR-13822 from Christine Poerschke [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f8292f5 ] LUCENE-9010: extend TopGroups.merge test coverage > extend TopGroups.merge test coverage > > > Key: LUCENE-9010 > URL: https://issues.apache.org/jira/browse/LUCENE-9010 > Project: Lucene - Core > Issue Type: Sub-task >Reporter: Christine Poerschke >Priority: Minor > Attachments: LUCENE-9010.patch, LUCENE-9010.patch > > > This sub-task proposes to add test coverage for the {{TopGroups.merge}} > method, separately from but as preparation for LUCENE-8996 fixing the > 'maxScore is sometimes missing' bug. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-8992) Share minimum score across segments in concurrent search
[ https://issues.apache.org/jira/browse/LUCENE-8992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957448#comment-16957448 ] ASF subversion and git services commented on LUCENE-8992: - Commit 066d324006507e9830179a9801bf8860d2ffc9b2 in lucene-solr's branch refs/heads/jira/SOLR-13822 from Jim Ferenczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=066d324 ] Merge pull request #904 from jimczi/shared_min_score LUCENE-8992: Share minimum score across segment in concurrent search > Share minimum score across segments in concurrent search > > > Key: LUCENE-8992 > URL: https://issues.apache.org/jira/browse/LUCENE-8992 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Fix For: master (9.0), 8.4 > > Time Spent: 6h 10m > Remaining Estimate: 0h > > As a follow up of LUCENE-8978 we should share the minimum score in > concurrent search > for top field collectors that sort on relevance first. The same logic should > be applicable with the only condition that the primary sort is by relevance. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (LUCENE-9022) Never cache GlobalOrdinalsWithScoreQuery
[ https://issues.apache.org/jira/browse/LUCENE-9022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957453#comment-16957453 ] ASF subversion and git services commented on LUCENE-9022: - Commit c68470e57764f9c954008887d302a07a8bf6ae14 in lucene-solr's branch refs/heads/jira/SOLR-13822 from Jim Ferenczi [ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c68470e ] LUCENE-9022: Never cache GlobalOrdinalsWithScoreQuery (#963) > Never cache GlobalOrdinalsWithScoreQuery > > > Key: LUCENE-9022 > URL: https://issues.apache.org/jira/browse/LUCENE-9022 > Project: Lucene - Core > Issue Type: Improvement >Reporter: Jim Ferenczi >Priority: Minor > Fix For: master (9.0), 8.3 > > Time Spent: 20m > Remaining Estimate: 0h > > Today we disable caching for GlobalOrdinalsQuery > (https://issues.apache.org/jira/browse/LUCENE-8062) but not > GlobalOrdinalsWithScoreQuery. However the GlobalOrdinalsWithScoreQuery is > sometimes executed in a filter context (despite the name, e.g. when min/max > are provided) so we should protect this query for the same reasons invoked in > LUCENE-8062. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Created] (LUCENE-9024) Optimize IntroSelector for worst case
Paul Sanwald created LUCENE-9024: Summary: Optimize IntroSelector for worst case Key: LUCENE-9024 URL: https://issues.apache.org/jira/browse/LUCENE-9024 Project: Lucene - Core Issue Type: Improvement Components: core/other Reporter: Paul Sanwald There is a TODO in IntroSelector.java to use the median of medians algorithm instead of HeapSort for the worst case, as median of medians offers a better time complexity. I've discussed this with [~jpountz] and he has agreed to review my work on this. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
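For reference on LUCENE-9024, here is a compact, self-contained sketch of the median-of-medians (BFPRT) selection idea; this is not the patch itself, just an illustration on a plain int[] (rather than Lucene's Selector abstraction) of how the pivot is chosen so the worst case stays linear instead of falling back to heap sort.
{code:java}
import java.util.Arrays;

public class MedianOfMediansSketch {

  /** Returns the k-th smallest element (0-based, relative to from) of a[from..to). */
  static int select(int[] a, int from, int to, int k) {
    while (true) {
      if (to - from <= 5) {
        Arrays.sort(a, from, to);
        return a[from + k];
      }
      int pivot = medianOfMedians(a, from, to);
      int p = partition(a, from, to, pivot);
      int leftSize = p - from;
      if (k < leftSize) {
        to = p;                       // answer is in the strictly-smaller partition
      } else if (k > leftSize) {
        from = p + 1;                 // answer is in the right partition
        k -= leftSize + 1;
      } else {
        return a[p];                  // the pivot itself is the k-th smallest
      }
    }
  }

  /** Chooses a pivot value: the median of the medians of groups of five. */
  private static int medianOfMedians(int[] a, int from, int to) {
    int numMedians = 0;
    for (int i = from; i < to; i += 5) {
      int groupEnd = Math.min(i + 5, to);
      Arrays.sort(a, i, groupEnd);
      // Move each group median into a contiguous region at the front.
      swap(a, from + numMedians, (i + groupEnd) / 2);
      numMedians++;
    }
    // Recursively select the median of the collected medians (on a copy, for simplicity).
    return select(Arrays.copyOfRange(a, from, from + numMedians), 0, numMedians, numMedians / 2);
  }

  /** Lomuto-style partition around the given pivot value; returns the pivot's final index. */
  private static int partition(int[] a, int from, int to, int pivot) {
    for (int i = from; i < to; i++) {        // move one occurrence of the pivot to the end
      if (a[i] == pivot) { swap(a, i, to - 1); break; }
    }
    int store = from;
    for (int i = from; i < to - 1; i++) {
      if (a[i] < pivot) { swap(a, i, store++); }
    }
    swap(a, store, to - 1);
    return store;
  }

  private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

  public static void main(String[] args) {
    int[] data = {9, 1, 8, 2, 7, 3, 6, 4, 5, 0, 11, 10};
    System.out.println(select(data.clone(), 0, data.length, 5)); // prints 5
  }
}
{code}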
[GitHub] [lucene-solr] pcsanwald opened a new pull request #966: LUCENE-9024 Optimize IntroSelector to use median of medians
pcsanwald opened a new pull request #966: LUCENE-9024 Optimize IntroSelector to use median of medians URL: https://github.com/apache/lucene-solr/pull/966 # Description Today, `IntroSelector.slowSelect` falls back to a heap sort; This PR moves that implementation to use a median of medians approach, suggested by a TODO in the code. # Solution Implemented a median of medians algorithm. # Tests I didn't add any new tests for this as I believe the current test cases, `TestIntroSelector`, should cover the basic functionality. However, one thing I'd like input on: Today, `select(int from, int to, int k)`, the `to` parameter is _exclusive_. Since it's also a specified parameter in the test, the test never checks the case where the full array should be used. `doTestSelect` has a call to `Arrays.sort(expected, from, to);`, but `to` will never be the last index in the array because the size of the array is allocated to `from + to + random().nextInt(5)`, so the actual size of the array will be a minimum of `from + to`. I'd be happy to address this problem in this PR, or in a subsequent PR if that's preferred. # Checklist Please review the following and check all that apply: - [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [ ] I have created a Jira issue and added the issue ID to my pull request title. - [ ] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute. - [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [ ] I have developed this patch against the `master` branch. - [ ] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] gus-asf opened a new pull request #967: SOLR-13857 - fix QueryParser.jj and FastCharStream such that generated
gus-asf opened a new pull request #967: SOLR-13857 - fix QueryParser.jj and FastCharStream such that generated URL: https://github.com/apache/lucene-solr/pull/967 code will compile, and required minor edits that need to be reproduced after regeneration to pass precommit etc are clearly marked. # Description Please provide a short description of the changes you're making with this pull request. # Solution Please provide a short description of the approach taken to implement your solution. # Tests Please describe the tests you've developed or run to confirm this patch implements the feature or solves the problem. # Checklist Please review the following and check all that apply: - [ ] I have reviewed the guidelines for [How to Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms to the standards described there to the best of my ability. - [ ] I have created a Jira issue and added the issue ID to my pull request title. - [ ] I am authorized to contribute this code to the ASF and have removed any code I do not have a license to distribute. - [ ] I have given Solr maintainers [access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork) to contribute to my PR branch. (optional but recommended) - [ ] I have developed this patch against the `master` branch. - [ ] I have run `ant precommit` and the appropriate test suite. - [ ] I have added tests for my changes. - [ ] I have added documentation for the [Ref Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) (for Solr changes only). This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Commented] (SOLR-13855) DistributedZkUpdateProcessor isn't propagating finish()
[ https://issues.apache.org/jira/browse/SOLR-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16957534#comment-16957534 ] David Smiley commented on SOLR-13855: - In about 9 hours from now (when I start my next work day) if I don't see your linked PR (or patch) then I will take over fixing it immediately because it appears this is holding up Solr 8.3. I'll probably manually check that it works by running Solr, for expediency. If you later do a test then I'll add it whenever you provide it. BTW note RunUpdateProcessor.finish() calls {{org.apache.solr.update.UpdateLog#finish}} and it's plausible that not doing so may mean the updateLog isn't doing its job of providing Solr fault-tolerance guarantees. :-( > DistributedZkUpdateProcessor isn't propagating finish() > --- > > Key: SOLR-13855 > URL: https://issues.apache.org/jira/browse/SOLR-13855 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UpdateRequestProcessors >Affects Versions: 8.1 >Reporter: David Smiley >Priority: Major > > In SOLR-12955, DistributedUpdateProcessorFactory was split up into a > subclass, DistributedZkUpdateProcessor. This refactoring has a bug in which > finish() is not propagated to the remaining URPs in the chain when > DistributedZkUpdateProcessor is in play. This is noticeable when > LogUpdateProcessorFactory is later down the line. > CC [~barrotsteindev] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
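For background on the chain contract at issue in SOLR-13855: each UpdateRequestProcessor wraps the next one, and lifecycle calls such as finish() are expected to be delegated down the chain so later processors (and ultimately the update log) get their end-of-request hook. A hedged sketch of a custom processor that does this correctly; the class name is illustrative, not from the patch.
{code:java}
import java.io.IOException;

import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

/** Illustrative processor: whatever work it does, it must delegate finish() onward. */
public class ForwardingUpdateProcessor extends UpdateRequestProcessor {

  public ForwardingUpdateProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    // ... custom per-document work would go here ...
    super.processAdd(cmd); // pass the document to the next processor in the chain
  }

  @Override
  public void finish() throws IOException {
    // ... any end-of-request work for this processor ...
    super.finish(); // crucial: propagates finish() to the rest of the chain,
                    // e.g. RunUpdateProcessor, which in turn finishes the UpdateLog
  }
}
{code}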
[jira] [Assigned] (SOLR-13855) DistributedZkUpdateProcessor isn't propagating finish()
[ https://issues.apache.org/jira/browse/SOLR-13855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley reassigned SOLR-13855: --- Fix Version/s: 8.3 Assignee: David Smiley Priority: Blocker (was: Major) > DistributedZkUpdateProcessor isn't propagating finish() > --- > > Key: SOLR-13855 > URL: https://issues.apache.org/jira/browse/SOLR-13855 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: UpdateRequestProcessors >Affects Versions: 8.1 >Reporter: David Smiley >Assignee: David Smiley >Priority: Blocker > Fix For: 8.3 > > > In SOLR-12955, DistributedUpdateProcessorFactory was split up into a > subclass, DistributedZkUpdateProcessor. This refactoring has a bug in which > finish() is not propagated to the remaining URPs in the chain when > DistributedZkUpdateProcessor is in play. This is noticeable when > LogUpdateProcessorFactory is later down the line. > CC [~barrotsteindev] -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-9006) Ensure WordDelimiterGraphFilter always emits catenateAll token early
[ https://issues.apache.org/jira/browse/LUCENE-9006?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Smiley resolved LUCENE-9006. -- Fix Version/s: 8.4 Resolution: Fixed > Ensure WordDelimiterGraphFilter always emits catenateAll token early > > > Key: LUCENE-9006 > URL: https://issues.apache.org/jira/browse/LUCENE-9006 > Project: Lucene - Core > Issue Type: Improvement > Components: modules/analysis >Reporter: David Smiley >Assignee: David Smiley >Priority: Major > Fix For: 8.4 > > Time Spent: 40m > Remaining Estimate: 0h > > Ideally, the first token of WDGF is the preserveOriginal (if configured to > emit), and the second should be the catenateAll (if configured to emit). The > deprecated WDF does this but WDGF can sometimes put the first other token > earlier when there is a non-emitted candidate sub-token. > Example input "8-other" when only generateWordParts and catenateAll -- *not* > generateNumberParts. WDGF internally sees the '8' but moves on. Ultimately, > the "other" token and the catenated "8other" will appear at the same internal > position, which by luck fools the sorter to emit "other" first. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages
noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages URL: https://github.com/apache/lucene-solr/pull/957#discussion_r337859625 ## File path: solr/core/src/java/org/apache/solr/handler/SolrConfigHandler.java ## @@ -245,10 +246,23 @@ private void handleGET() { if (componentName != null) { Map map = (Map) val.get(parts.get(1)); if (map != null) { -val.put(parts.get(1), makeMap(componentName, map.get(componentName))); +Object o = map.get(componentName); Review comment: This was just a formatting change introduced by the IDE. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages
noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages URL: https://github.com/apache/lucene-solr/pull/957#discussion_r337861358 ## File path: solr/core/src/java/org/apache/solr/core/PluginBag.java ## @@ -138,14 +140,27 @@ static void initInstance(Object inst, PluginInfo info) { log.debug("{} : '{}' created with startup=lazy ", meta.getCleanTag(), info.name); return new LazyPluginHolder(meta, info, core, core.getResourceLoader(), false); } else { - T inst = core.createInstance(info.className, (Class) meta.clazz, meta.getCleanTag(), null, core.getResourceLoader()); - initInstance(inst, info); - return new PluginHolder<>(info, inst); + if (info.pkgName != null) { Review comment: inlined This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages
noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages URL: https://github.com/apache/lucene-solr/pull/957#discussion_r337860209 ## File path: solr/core/src/java/org/apache/solr/core/SolrCore.java ## @@ -274,6 +282,15 @@ public SolrResourceLoader getResourceLoader() { return resourceLoader; } + public SolrResourceLoader getResourceLoader(String pkg) { Review comment: I've added javadocs for the same This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org
[GitHub] [lucene-solr] noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages
noblepaul commented on a change in pull request #957: SOLR-13822: Isolated Classloading from packages URL: https://github.com/apache/lucene-solr/pull/957#discussion_r337867647 ## File path: solr/core/src/java/org/apache/solr/update/processor/UpdateRequestProcessorChain.java ## @@ -339,27 +359,20 @@ public UpdateRequestProcessorFactory get() { return lazyFactory; } -public class LazyUpdateRequestProcessorFactory extends UpdateRequestProcessorFactory { - private final PluginBag.LazyPluginHolder holder; - UpdateRequestProcessorFactory delegate; +public static class LazyUpdateRequestProcessorFactory extends UpdateRequestProcessorFactory { + private final PluginBag.PluginHolder holder; Review comment: These classes existed before. I didn't want to change too much. We can probably have a refactoring later. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org For additional commands, e-mail: issues-h...@lucene.apache.org