Re: having solr generate and execute other related queries automatically

2009-11-12 Thread Tim Underwood
>
> Unfortunately no.  The 20+ queries are distinct from each other, even
> though they share some of the original query parameters (and some facet
> information from the original query facets).
>
> What I was envisioning was something that works like a facet, but instead
> of returning information about the first query, it would return
> information about queries similar to the first query.
>

Maybe I misunderstand what you are trying to do (or the facet.query
feature).  If I did an initial query on my dataset that left me with the
following questions:

1. How many products are in brand 1?
2. How many products are in brand 2?
3. How many products are in brand 5 and category 4051?
4. etc...  (however many other arbitrary queries I want to get counts for)

I could use facet.query parameters to answer those with something like:

http://localhost:8983/solr/select/?q=*%3A*&start=0&rows=0&facet=on&facet.query=brand_id:1&facet.query=brand_id:2&facet.query=+%2Bbrand_id:5+%2Bcategory_id:4051

Where the parameters are:

q=*:*
start=0
rows=0
facet=on
facet.query=brand_id:1
facet.query=brand_id:2
facet.query=+brand_id:5 +category_id:4051

My response looks like:

<?xml version="1.0" encoding="UTF-8"?>
<response>
 <lst name="responseHeader">...</lst>
 <result name="response" numFound="..." start="0"/>
 <lst name="facet_counts">
  <lst name="facet_queries">
   <int name="brand_id:1">1450</int>
   <int name="brand_id:2">1047</int>
   <int name="+brand_id:5 +category_id:4051">21</int>
  </lst>
  <lst name="facet_fields"/>
  <lst name="facet_dates"/>
 </lst>
</response>

Are you talking about a different problem?  Do you have a simple example?

-Tim


ArrayIndexOutOfBoundsException when highlighting (Solr 1.4)

2010-01-21 Thread Tim Underwood
I'm seeing a java.lang.ArrayIndexOutOfBoundsException when trying to
highlight for certain queries.  The error seems to be an issue with the
combination of the ShingleFilterFactory, PositionFilterFactory and
the LengthFilterFactory.

Here's my fieldType definition:

<fieldType name="..." class="solr.TextField" positionIncrementGap="100"
    omitNorms="true">
  <analyzer type="index">
    <tokenizer class="..."/>
    <filter class="solr.WordDelimiterFilterFactory"
        generateNumberParts="0" catenateWords="0" catenateNumbers="0"
        catenateAll="1" .../>
    ...
  </analyzer>
  <analyzer type="query">
    <tokenizer class="..."/>
    <filter class="solr.ShingleFilterFactory" outputUnigrams="true"/>
    <filter class="solr.PositionFilterFactory"/>
    <filter class="solr.WordDelimiterFilterFactory"
        generateNumberParts="0" catenateWords="0" catenateNumbers="0"
        catenateAll="1" .../>
    ...
    <filter class="solr.LengthFilterFactory" min="..." max="..."/>
  </analyzer>
</fieldType>

Here's the field definition:

<field name="sku_new" type="..." indexed="true" stored="true"
    omitNorms="true"/>

Here's a sample doc:

<add>
  <doc>
    <field name="id">1</field>
    <field name="sku_new">A 1280 C</field>
  </doc>
</add>

Doing a query for "A 1280 C" and requesting highlighting throws the
exception (full stack trace below):

http://localhost:8983/solr/select/?q=sku_new%3A%22A+1280+C%22&version=2.2&start=0&rows=10&indent=on&&hl=on&hl.fl=sku_new&fl=*

If I comment out the LengthFilterFactory from my query analyzer section
everything seems to work.  Commenting out just the PositionFilterFactory
also makes the exception go away and seems to work for this specific query.

Anybody else run into anything similar?  Anything wrong with my fieldType
definition?  Should I file this as a bug with Solr (or Lucene)?

-Tim



Full stack trace:

java.lang.ArrayIndexOutOfBoundsException: -1
at
org.apache.lucene.search.highlight.WeightedSpanTermExtractor.extract(WeightedSpanTermExtractor.java:202)
at
org.apache.lucene.search.highlight.WeightedSpanTermExtractor.getWeightedSpanTerms(WeightedSpanTermExtractor.java:414)
at
org.apache.lucene.search.highlight.QueryScorer.initExtractor(QueryScorer.java:216)
at
org.apache.lucene.search.highlight.QueryScorer.init(QueryScorer.java:184)
at
org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:226)
at
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:335)
at
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:89)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:195)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
at
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
at
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
at
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
at
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
at org.mortbay.jetty.Server.handle(Server.java:285)
at
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
at
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:821)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:513)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:208)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
at
org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
at
org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)


Re: ArrayIndexOutOfBoundsException when highlighting (Solr 1.4)

2010-01-22 Thread Tim Underwood
Issue created:  https://issues.apache.org/jira/browse/SOLR-1731


On Fri, Jan 22, 2010 at 5:42 AM, Koji Sekiguchi  wrote:

> Tim Underwood wrote:
>
>> I'm seeing a java.lang.ArrayIndexOutOfBoundsException when trying to
>> highlight for certain queries.  The error seems to be an issue with the
>> combination of the ShingleFilterFactory, PositionFilterFactory and
>> the LengthFilterFactory.
>>
>> Here's my fieldType definition:
>>
>> <fieldType name="..." class="solr.TextField" positionIncrementGap="100"
>>     omitNorms="true">
>>   <analyzer type="index">
>>     <tokenizer class="..."/>
>>     <filter class="solr.WordDelimiterFilterFactory"
>>         generateNumberParts="0" catenateWords="0" catenateNumbers="0"
>>         catenateAll="1" .../>
>>     ...
>>   </analyzer>
>>   <analyzer type="query">
>>     <tokenizer class="..."/>
>>     <filter class="solr.ShingleFilterFactory" outputUnigrams="true"/>
>>     <filter class="solr.PositionFilterFactory"/>
>>     <filter class="solr.WordDelimiterFilterFactory"
>>         generateNumberParts="0" catenateWords="0" catenateNumbers="0"
>>         catenateAll="1" .../>
>>     ...
>>     <filter class="solr.LengthFilterFactory" min="..." max="..."/>
>>   </analyzer>
>> </fieldType>
>>
>> Here's the field definition:
>>
>> <field name="sku_new" type="..." indexed="true" stored="true"
>>     omitNorms="true"/>
>>
>> Here's a sample doc:
>>
>> <add>
>>   <doc>
>>     <field name="id">1</field>
>>     <field name="sku_new">A 1280 C</field>
>>   </doc>
>> </add>
>>
>> Doing a query for "A 1280 C" and requesting highlighting throws the
>> exception (full stack trace below):
>>
>>
>> http://localhost:8983/solr/select/?q=sku_new%3A%22A+1280+C%22&version=2.2&start=0&rows=10&indent=on&&hl=on&hl.fl=sku_new&fl=*
>>
>> If I comment out the LengthFilterFactory from my query analyzer section
>> everything seems to work.  Commenting out just the PositionFilterFactory
>> also makes the exception go away and seems to work for this specific
>> query.
>>
>> Anybody else run into anything similar?  Anything wrong with my fieldType
>> definition?  Should I file this as a bug with Solr (or Lucene)?
>>
>> -Tim
>>
>>
>>
> ArrayIndexOutOfBoundsException is not good. Please file the issue;
> including minimal data to reproduce the problem would be a great help
> for us. :)
>
> Koji
>
> --
> http://www.rondhuit.com/en/
>
>


Re: Using solr to store data

2010-02-04 Thread Tim Underwood
We just switched over to storing our data directly in Solr as
compressed JSON fields at http://frugalmechanic.com.  So far it's
working out great.  Our detail pages (e.g.:
http://frugalmechanic.com/auto-part/817453-33-2084-kn-high-performance-air-filter)
now make a single Solr request to grab the part data, pricing data,
and fitment data.  Before, we'd make a call to Solr and then probably
3-4 DB calls to load the data.

As Lance pointed out, the downside is that whenever any of our part
data changes we have to re-index the entire document.  So updating
pricing for some of our larger retailers means reindexing a large
portion of our dataset.  But that's the tradeoff we were willing to
make going into the change, and so far a daily re-index of the data
that takes 30-60 minutes isn't a big deal.  Later on we may split out
the data that changes frequently from the data that doesn't change
often.

We're working with about 2 million documents and our optimized index
files are currently at 3.2 GB.  Using compression on the large text
fields really helps keep the size down.
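
On the schema side there's not much to it -- just a stored-only field
with compression turned on.  A minimal sketch (the field name and type
here are made up for illustration, not our actual schema):

<!-- schema.xml: stored-only field holding the compressed JSON blob;
     indexed="false" keeps it out of the inverted index, and
     compressed="true" is the Solr 1.4 option for compressing large
     stored values -->
<field name="part_json" type="text" indexed="false" stored="true"
    compressed="true"/>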

-Tim

On Wed, Feb 3, 2010 at 9:26 PM, Tommy Chheng  wrote:
> Hey AJ,
> For simplicity's sake, I am using Solr to serve as storage and search for
> http://researchwatch.net.
> The dataset is 110K NSF grants from 1999 to 2009. The faceting is all
> dynamic fields, and I use a catch-all to copy all fields to a default text
> field. All fields are also stored and used for the individual grant view.
> The performance seems fine for my purposes. I haven't done any extensive
> benchmarking with it. The site was built using a light ROR/rsolr layer on
> a small EC2 instance.
>
> Feel free to bang against the site with jmeter if you want to stress test a
> sample server to failure.  :)
>
> --
> Tommy Chheng
> Developer & UC Irvine Graduate Student
> http://tommy.chheng.com
>
> On 2/3/10 5:41 PM, AJ Asver wrote:
>>
>> Hi all,
>>
>> I work on search at Scoopler.com, a real-time search engine which uses
>> Solr.  We currently use Solr for indexing but then fetch data from our
>> couchdb cluster using the IDs Solr returns.  We are now considering
>> storing a larger portion of data in Solr's index itself so we don't have
>> to hit the DB too.  Assuming that we are still storing data in the DB
>> (for backend and backup purposes), are there any significant
>> disadvantages to using Solr as a data store too?
>>
>> We currently run a master-slave setup on EC2 using x-large slave
>> instances to allow the disk cache to use as much memory as possible.  I
>> imagine we would definitely have to add more slave instances to
>> accommodate the extra data we're storing (and make sure it stays in
>> memory).
>>
>> Any tips would be really helpful.
>> --
>> AJ Asver
>> Co-founder, Scoopler.com
>>
>> +44 (0) 7834 609830 / +1 (415) 670 9152
>> a...@scoopler.com
>>
>>
>> Follow me on Twitter: http://www.twitter.com/_aj
>> Add me on Linkedin: http://www.linkedin.com/in/ajasver
>> or YouNoodle: http://younoodle.com/people/ajmal_asver
>>
>> My Blog: http://ajasver.com
>>
>>
>


Re: Distributed search and haproxy and connection build up

2010-02-11 Thread Tim Underwood
Have you played around with the "option httpclose" or the "option
forceclose" configuration options in HAProxy (both documented here:
http://haproxy.1wt.eu/download/1.3/doc/configuration.txt)?
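
Something along these lines, for example (server names and addresses are
placeholders, not your actual config):

listen solr :8983
  # close each connection once the response is sent, instead of leaving
  # it for the OS-level close/timeout to clean up
  option httpclose
  server solr01 10.0.16.181:8984 check inter 5000
  server solr02 10.0.16.182:8984 check inter 5000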

-Tim

On Wed, Feb 10, 2010 at 10:05 AM, Ian Connor  wrote:
> Thanks,
>
> I bypassed haproxy as a test and it did reduce the number of connections -
> but it did not seem as though these connections were hurting anything.
>
> Ian.
>
> On Tue, Feb 9, 2010 at 11:01 PM, Lance Norskog  wrote:
>
>> This goes through the Apache Commons HTTP client library:
>> http://hc.apache.org/httpclient-3.x/
>>
>> We used 'balance' at another project and did not have any problems.
>>
>> On Tue, Feb 9, 2010 at 5:54 AM, Ian Connor  wrote:
>> > I have been using distributed search with haproxy but noticed that I am
>> > suffering a little from tcp connections building up waiting for the OS
>> level
>> > closing/time out:
>> >
>> > netstat -a
>> > ...
>> > tcp6       1      0 10.0.16.170%34654:53789 10.0.16.181%363574:8893
>> > CLOSE_WAIT
>> > tcp6       1      0 10.0.16.170%34654:43932 10.0.16.181%363574:8890
>> > CLOSE_WAIT
>> > tcp6       1      0 10.0.16.170%34654:43190 10.0.16.181%363574:8895
>> > CLOSE_WAIT
>> > tcp6       0      0 10.0.16.170%346547:8984 10.0.16.181%36357:53770
>> > TIME_WAIT
>> > tcp6       1      0 10.0.16.170%34654:41782 10.0.16.181%363574:
>> > CLOSE_WAIT
>> > tcp6       1      0 10.0.16.170%34654:52169 10.0.16.181%363574:8890
>> > CLOSE_WAIT
>> > tcp6       1      0 10.0.16.170%34654:55947 10.0.16.181%363574:8887
>> > CLOSE_WAIT
>> > tcp6       0      0 10.0.16.170%346547:8984 10.0.16.181%36357:54040
>> > TIME_WAIT
>> > tcp6       1      0 10.0.16.170%34654:40030 10.0.16.160%363574:8984
>> > CLOSE_WAIT
>> > ...
>> >
>> > Digging a little into the haproxy documentation, it seems that they do
>> not
>> > support persistent connections.
>> >
>> > Does solr normally persist the connections between shards (would this
>> > problem happen even without haproxy)?
>> >
>> > Ian.
>> >
>>
>>
>>
>> --
>> Lance Norskog
>> goks...@gmail.com
>>
>
>
>
> --
> Regards,
>
> Ian Connor
>


Re: Healthcheck. Too many open files

2010-04-12 Thread Tim Underwood
I'm using HAProxy with 5 second healthcheck intervals and haven't seen
any problems on Solr 1.4.

My HAProxy config looks like this:

listen solr :5083
  option httpchk GET /solr/parts/admin/ping HTTP/1.1\r\nHost:\ www
  server solr01 192.168.0.101:9983 check inter 5000
  server solr02 192.168.0.102:9983 check inter 5000

Have you tried hitting /admin/ping (which handles checking for the
existence of your health file) instead of
/admin/file?file=healthcheck.txt?
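
If I remember right, that file-based enable/disable behavior is wired up
in the <admin> section of solrconfig.xml, roughly like this (the query
and filename are examples; use whatever you have configured):

<admin>
  <defaultQuery>solr</defaultQuery>
  <!-- /admin/ping returns an error while this file is missing, which
       takes the slave out of the load balancer's rotation -->
  <healthcheck type="file">healthcheck.txt</healthcheck>
</admin>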

-Tim

On Sat, Apr 10, 2010 at 9:26 PM, Blargy  wrote:
>
> Lance,
>
> We have thousands of searches per minute so a minute of downtime is out
> of the question. If for whatever reason one of our solr slaves goes down I
> want to remove it ASAP from the loadbalancer's rotation, hence the 2 second
> check.
>
> Maybe I am doing something wrong, but my HAProxy healthcheck is as
> follows:
> ...
> option  httpchk GET /solr/items/admin/file?file=healthcheck.txt
> ...
> so basically I am requesting that file to determine if that particular slave
> is up or not. Is this the preferred way of doing this? I kind of like the
> "Enable/Disable" feature of this healthcheck file.
>
> You mentioned:
>
> "It should not run out of file descriptors from doing this. The code
> does a 'new File(healthcheck file name).exists()' and throws away the
> descriptor. This should not be a resource leak for file desciptors."
>
> yet if i run the following on the command line:
> # lsof -p <xxx>
> Where xxx is the pid of the Solr process, I get the following output:
>
> ...
> java    4408 root  220r   REG               8,17  56085252  817639
> /var/solr/home/items/data/index/_4y.tvx
> java    4408 root  221r   REG               8,17  10499759  817645
> /var/solr/home/items/data/index/_4y.tvd
> java    4408 root  222r   REG               8,17 296791079  817647
> /var/solr/home/items/data/index/_4y.tvf
> java    4408 root  223r   REG               8,17   7010660  817648
> /var/solr/home/items/data/index/_4y.nrm
> java    4408 root  224r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  225r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  226r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  227r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  228r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  229r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  230r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> java    4408 root  231r   REG               8,17         0  817622
> /var/solr/home/items/conf/healthcheck.txt
> ... and it keeps going ...
>
> and I've see it as high as 3000. I've had to update my ulimit to 1 to
> overcome this problem however I feel this is really just a bandaid to a
> deeper problem.
>
> Am I doing something wrong (Solr or HAProxy) or is this a possible resource
> leak?
>
> Thanks for any input!
> --
> View this message in context: 
> http://n3.nabble.com/Healthcheck-Too-many-open-files-tp710631p711141.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Unbuffered entity enclosing request can not be repeated.

2010-05-11 Thread Tim Underwood
I've run into this also while trying to index ~4 million documents
using CommonsHttpSolrServer.add(Iterator<SolrInputDocument>
docIterator) to stream all the documents.  It worked great when we
only had ~2 million documents, but eventually it would take ~5 tries
for a full indexing job to complete without running into the
"Unbuffered entity enclosing request can not be repeated" error.  I'd
say about 25% of the errors were within the first 1,000 documents and
the rest were usually when indexing was 80-90% complete.

I tried switching to the StreamingUpdateSolrServer but ran into issues
with it locking up (which I think have been fixed since then).  Now I
use CommonsHttpSolrServer.add(Collection<SolrInputDocument> docs) to
batch up 250-500 documents per call with wrapped exception handling to
retry on errors.
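
The wrapper is nothing fancy.  Roughly something like this (a sketch,
not our actual code -- class name, batch size, and retry count are
arbitrary):

import java.util.ArrayList;
import java.util.Collection;

import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BatchingIndexer {
  private static final int BATCH_SIZE = 500; // we use 250-500
  private static final int MAX_TRIES = 3;

  private final CommonsHttpSolrServer server;

  public BatchingIndexer(CommonsHttpSolrServer server) {
    this.server = server;
  }

  public void index(Iterable<SolrInputDocument> docs) throws Exception {
    Collection<SolrInputDocument> batch =
        new ArrayList<SolrInputDocument>(BATCH_SIZE);
    for (SolrInputDocument doc : docs) {
      batch.add(doc);
      if (batch.size() >= BATCH_SIZE) {
        addWithRetry(batch);
        batch.clear();
      }
    }
    if (!batch.isEmpty()) {
      addWithRetry(batch);
    }
    server.commit();
  }

  // The "Unbuffered entity enclosing request can not be repeated" error
  // is transient, so re-send the same small batch a few times before
  // giving up.
  private void addWithRetry(Collection<SolrInputDocument> batch)
      throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        server.add(batch);
        return;
      } catch (Exception e) {
        if (attempt >= MAX_TRIES) throw e;
      }
    }
  }
}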

-Tim

On Tue, May 11, 2010 at 8:57 AM, Satish Kumar
 wrote:
> I upload only 50 documents per call. We have about 200K documents to index,
> and we index every night. Any suggestions on how to handle this? (I can
> catch this exception and do a retry.)
>
> On Mon, May 10, 2010 at 8:33 PM, Lance Norskog  wrote:
>
>> Yes, these occasionally happen with long indexing jobs. You might try
>> limiting the number of documents per upload call.
>>
>> On Sun, May 9, 2010 at 9:16 PM, Satish Kumar
>>  wrote:
>> > Found these errors in Tomcat's log file:
>> >
>> > May 9, 2010 10:57:24 PM org.apache.solr.common.SolrException log
>> > SEVERE: java.lang.RuntimeException: [was class
>> > java.net.SocketTimeoutException] Read timed out
>> >        at
>> >
>> com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
>> >        at
>> > com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731)
>> >        at
>> >
>> com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3657)
>> >        at
>> > com.ctc.wstx.sr.BasicStreamReader.getText(BasicStreamReader.java:809)
>> >        at org.apache.solr.handler.XMLLoader.readDoc(XMLLoader.java:279)
>> >        at
>> > org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:138)
>> >        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
>> >        at
>> >
>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
>> >
>> >        at
>> >
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>> >
>> >        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
>> >
>> >
>> >        at
>> >
>> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
>> >
>> >        at
>> >
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
>> >
>> >        at
>> >
>> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
>> >
>> >        at
>> >
>> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>> >
>> >        at
>> >
>> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>> >
>> >
>> >
>> >
>> > May 9, 2010 10:57:24 PM org.apache.solr.core.SolrCore execute
>> >
>> >
>> > INFO: [] webapp=/solr path=/update params={wt=javabin&version=1}
>> status=500
>> > QTime=25938
>> >
>> > May 9, 2010 10:57:24 PM org.apache.solr.common.SolrException log
>> >
>> >
>> > SEVERE: java.lang.RuntimeException: [was class
>> > java.net.SocketTimeoutException] Read timed out
>> >
>> >        at
>> >
>> com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
>> >
>> >        at
>> > com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731)
>> >
>> >        at
>> >
>> com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3657)
>> >
>> >        at
>> > com.ctc.wstx.sr.BasicStreamReader.getText(BasicStreamReader.java:809)
>> >
>> >        at org.apache.solr.handler.XMLLoader.readDoc(XMLLoader.java:279)
>> >
>> >
>> >        at
>> > org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:138)
>> >
>> >        at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:69)
>> >
>> >
>> >        at
>> >
>> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
>> >
>> >        at
>> >
>> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>> >
>> >        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1316)
>> >
>> >
>> >        at
>> >
>> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:338)
>> >
>> >        at
>> >
>> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
>> >
>> >
>> >
>> >
>> > May 9, 2010 10:57:33 PM
>> org.apache.solr.update.processor.LogUpdateProcessor
>> > finish
>> >
>> > INFO: {} 0 2
>> >
>> >
>> > May 9, 2010 10:57:33 PM org.apache.solr.common.SolrException log
>> >
>> >
>> > SEVERE: org.apache.solr.common.SolrException: Invalid chunk header
>> >
>> >
>> >        at org.apache.solr.handler.XMLLoader.load(XMLLoader.

Re: How to handle List in Solr 6.6

2018-11-06 Thread Tim Underwood
Hi,

It sounds like you are looking for the "Nested Child Documents"[1] and
"Block Join Query Parsers"[2] features in Solr.  The terminology is weird
(block join, child/of, parent/which) but it should do what you want.

Do take note of the warning in the docs:

One limitation of indexing nested documents is that the whole block of
> parent-children documents must be updated together whenever any changes are
> required. In other words, even if a single child document or the parent
> document is changed, the whole block of parent-child documents must be
> indexed together.


What this note does not include is that if you delete a parent document you
must also explicitly delete the child documents, otherwise they end up
being attached to another parent document.  I forget whether this applies
when you re-index a document or not, but to be safe I always explicitly
delete the parent and child documents.  There are a number of JIRA tickets
floating around related to cleaning up the user experience for this.
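
To make that concrete, here's a rough sketch of the data from your
question (quoted below) indexed as parent/child documents, plus a block
join query against it.  Field names come from your example JSON;
"content_type" is a discriminator field you would add yourself:

<add>
  <doc>
    <field name="id">doc1</field>
    <field name="content_type">parent</field>
    <field name="document">Fuzzy based semantic search.pdf</field>
    <doc>
      <field name="id">doc1-rating1</field>
      <field name="user">John</field>
      <field name="rating">2</field>
    </doc>
    <doc>
      <field name="id">doc1-rating2</field>
      <field name="user">Terrion</field>
      <field name="rating">6</field>
    </doc>
  </doc>
</add>

Then "all documents rated 5 or higher by Terrion" becomes a block join
parent query:

q={!parent which="content_type:parent"}(+user:Terrion +rating:[5 TO *])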

-Tim

[1]
https://lucene.apache.org/solr/guide/6_6/uploading-data-with-index-handlers.html#UploadingDatawithIndexHandlers-NestedChildDocuments
[2]
https://lucene.apache.org/solr/guide/6_6/other-parsers.html#OtherParsers-BlockJoinQueryParsers

On Tue, Nov 6, 2018 at 12:01 AM waseem-farooqui 
wrote:

> I am new to Solr and am using Spring-data-solr to store my complete **pdf**
> files with their contents. Now a situation has arisen in which I want to
> store the file rating, which can be set by a list of users. That means I
> would have an object something like `List<FileRating>` in my **DataModel**,
> in which `FileRating` would have `user, comments, date, rating`. The
> response JSON structure should look like this:
>
> {
>   "document": "Fuzzy based semantic search.pdf",
>   "md5Hash": "md5",
>   "rated": [
>     {
>       "user": "John",
>       "comments": "Not Very useful",
>       "rating": 2,
>       "date": "20/10/2018"
>     },
>     {
>       "user": "Terrion",
>       "comments": "Useful with context to semantic based fuzzy logics.",
>       "rating": 6,
>       "date": "20/10/2018"
>     }
>   ]
> }
>
> I am not getting any idea how this is possible in Solr. I have looked at
> the `multiValued` type, but I don't think it would work in my scenario,
> because at the end of the day I want to search all documents with their
> ratings, and a file could be rated by specific users.
>
> `Solr 6.6`
>
>
>
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


Re: Java 9 & solr 7.7.0

2019-03-23 Thread Tim Underwood
We are successfully running Solr 7.6.0 (and 7.5.0 before it) on OpenJDK 11
without problems.  We are also using G1.  We do not use Solr Cloud but do
rely on the legacy replication.

-Tim

On Sat, Mar 23, 2019 at 10:13 AM Erick Erickson 
wrote:

> I am, in fact, trying to get a summary of all this together, we’ll see how
> successful I am.
>
> I can say that Solr is tested (and has been for quite some time) against
> JDK 8,9,10,11,12 and even 13. JDK9, from a 10,000 foot perspective, has a
> success rate in our automated tests that’s in line with all the other JDKs.
>
> That said, people seem to be settling on JDK11 anecdotally, what’s your
> reason for using 9 .vs. 11?
>
> Finally, there was one issue with JDK 9 and Kerberos that I’m unsure what
> the resolution is, if there is any. If you use Kerberos, be sure to test
> that first.
>
> Best,
> Erick
>
> > On Mar 23, 2019, at 9:47 AM, Jay Potharaju 
> wrote:
> >
> > Thanks I missed that info. Will try running with jdk9 and see if it
> addresses the issue.
> > Jay
> >
> >> On Mar 23, 2019, at 9:00 AM, Shawn Heisey  wrote:
> >>
> >>> On 3/23/2019 8:12 AM, Jay Potharaju wrote:
> >>> Can I use java 9 with 7.7.0. I am planning to test if fixes issue with
> high cpu that I am running into.
> >>> https://bugs.openjdk.java.net/browse/JDK-8129861
> >>> Was solr 7.7 tested with java 9?
> >>
> >> The info for the 7.0.0 release said it was qualified with Java 9, so
> you should be fine running 7.7.x in Java 9 as well.  I do not know if it
> works with Java 10, 11, or 12.
> >>
> >> Thanks,
> >> Shawn
>
>


Re: Java 9 & solr 7.7.0

2019-03-25 Thread Tim Underwood
Yes, we run on OpenJDK 11 (the Oracle open source build, not the commercial
version).

There are several free OpenJDK distributions to choose from:

Oracle Built:  https://jdk.java.net/11/
AdoptOpenJDK: https://adoptopenjdk.net/
Amazon Corretto: https://aws.amazon.com/corretto/
RedHat: https://developers.redhat.com/products/openjdk/overview/
Zulu: https://www.azul.com/downloads/zulu/

There are possibly more but those are the ones I know about.

The commercial versions from Oracle under a non-open-source license are the
ones here:
https://www.oracle.com/technetwork/java/javase/downloads/index.html

-Tim


On Mon, Mar 25, 2019 at 10:51 AM Jay Potharaju 
wrote:

> I just learnt that java 11 is commercial. Is anyone using open jdk11 in
> production?
> Thanks
>
>
> > On Mar 23, 2019, at 5:15 PM, Jay Potharaju 
> wrote:
> >
> > I have not kept up with jdk versions ...will try with jdk 11 and see if
> it addresses the high cpu issue. Thanks
> >
> >
> >> On Mar 23, 2019, at 11:48 AM, Jay Potharaju 
> wrote:
> >>
> >> Thanks for that info Tim
> >>
> >>> On Mar 23, 2019, at 11:26 AM, Tim Underwood 
> wrote:
> >>>
> >>> We are successfully running Solr 7.6.0 (and 7.5.0 before it) on
> OpenJDK 11
> >>> without problems.  We are also using G1.  We do not use Solr Cloud but
> do
> >>> rely on the legacy replication.
> >>>
> >>> -Tim
> >>>
> >>> On Sat, Mar 23, 2019 at 10:13 AM Erick Erickson <
> erickerick...@gmail.com>
> >>> wrote:
> >>>
> >>>> I am, in fact, trying to get a summary of all this together, we’ll
> see how
> >>>> successful I am.
> >>>>
> >>>> I can say that Solr is tested (and has been for quite some time)
> against
> >>>> JDK 8,9,10,11,12 and even 13. JDK9, from a 10,000 foot perspective,
> has a
> >>>> success rate in our automated tests that’s in line with all the other
> JDKs.
> >>>>
> >>>> That said, people seem to be settling on JDK11 anecdotally, what’s
> your
> >>>> reason for using 9 .vs. 11?
> >>>>
> >>>> Finally, there was one issue with JDK 9 and Kerberos that I’m unsure
> what
> >>>> the resolution is, if there is any. If you use Kerberos, be sure to
> test
> >>>> that first.
> >>>>
> >>>> Best,
> >>>> Erick
> >>>>
> >>>>> On Mar 23, 2019, at 9:47 AM, Jay Potharaju 
> >>>> wrote:
> >>>>>
> >>>>> Thanks I missed that info. Will try running with jdk9 and see if it
> >>>> addresses the issue.
> >>>>> Jay
> >>>>>
> >>>>>>> On Mar 23, 2019, at 9:00 AM, Shawn Heisey 
> wrote:
> >>>>>>>
> >>>>>>> On 3/23/2019 8:12 AM, Jay Potharaju wrote:
> >>>>>>> Can I use java 9 with 7.7.0. I am planning to test if fixes issue
> with
> >>>> high cpu that I am running into.
> >>>>>>> https://bugs.openjdk.java.net/browse/JDK-8129861
> >>>>>>> Was solr 7.7 tested with java 9?
> >>>>>>
> >>>>>> The info for the 7.0.0 release said it was qualified with Java 9, so
> >>>> you should be fine running 7.7.x in Java 9 as well.  I do not know if
> it
> >>>> works with Java 10, 11, or 12.
> >>>>>>
> >>>>>> Thanks,
> >>>>>> Shawn
> >>>>
> >>>>
>