Consider this user-entered query: *one AND (two OR
field2:three)*
This will need to be non-trivially translated into:
*en:one AND (en:two OR field2:en|three)*.
Is there another conventional way to pass a language string (one per search
request) to the query analyzer/tokenizer?
Thanks in advance,
Michael
Blargy,
I've been experimenting with this myself for a work project. What I
did was use a combination of the two, running the indexed terms through
the shingle factory and then through the edge n-gram filter. I did this
in order to be able to match permutations like the following (a schema
sketch follows the examples):
.net asp c#
asp .net c#
c# asp .net
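A minimal sketch of that index-time chain in schema.xml (the field type
name and parameter values here are illustrative, not necessarily the
exact ones used):

  <fieldType name="text_shingle_edge" class="solr.TextField">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- build multi-word shingles first ... -->
      <filter class="solr.ShingleFilterFactory" maxShingleSize="3"
              outputUnigrams="true"/>
      <!-- ... then front n-grams of each token/shingle for prefix matching -->
      <filter class="solr.EdgeNGramFilterFactory" minGramSize="1"
              maxGramSize="25" side="front"/>
    </analyzer>
  </fieldType>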
Thanks for the reply, Michael. I'll definitely try that out and let you know
> how it goes. Your solution sounds similar to the one I've read here:
> http://www.lucidimagination.com/blog/2009/09/08/auto-suggest-from-popular-queries-using-edgengrams/
>
> There are some good comments in there.
Hi everyone,
I'm indexing several documents that contain words that the StandardTokenizer
cannot detect as tokens. These are words like
C#
.NET
C++
which are important for users to be able to search for, but get treated as
"C", "NET", and "C".
How can I create a list of words that should be kept intact as single tokens?
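One common approach (a sketch of one way to do it, assuming a
protwords.txt file you maintain yourself) is a whitespace tokenizer plus
WordDelimiterFilterFactory with a protected-words list:

  <fieldType name="text_code" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- protwords.txt lists terms to pass through unsplit,
           e.g. C#, .NET, C++ -->
      <filter class="solr.WordDelimiterFilterFactory"
              protected="protwords.txt"
              generateWordParts="1" generateNumberParts="1"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>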
tries a normal
"kill", of course), I'd like to hear the answer straight from the horse's
mouth...
I'm using Solr 1.4 nightly from about a month ago. Can I kill -9 without
fear of having to rebuild my index?
Thanks!
Michael
On Thu, Aug 6, 2009 at 11:38 AM, Michael _ wrote:
> Hi everyone,
> I'm indexing several documents that contain words that the
> StandardTokenizer cannot detect as tokens. These are words like
> C#
> .NET
> C++
> which are important for users to be able to sear
ore
so I have no idea if this is possible.
Any suggestions on what approach I should take? The less I have to modify
Solr, the better -- I'd prefer a query-side solution over writing a plugin
over forking the standard query handler.
Thanks in advance!
Michael
Anybody have any suggestions or hints? I'd love to score my queries in a
way that pays attention to how close together terms appear.
Michael
On Thu, Aug 13, 2009 at 12:01 PM, Michael wrote:
> Hello,
> I'd like to score documents higher when they have the user's search term
That's great for my needs,
assuming the 100 slop doesn't increase query time horribly.
Michael
On Mon, Aug 17, 2009 at 10:15 AM, Mark Miller wrote:
> Dismax QueryParser with pf and ps params?
>
> http://wiki.apache.org/solr/DisMaxRequestHandler
>
> --
> - Mark
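For reference, such a request might look roughly like this (field name
and slop value are illustrative):

  http://localhost:8983/solr/select?defType=dismax&q=one+two+three
      &qf=text&pf=text&ps=100

pf re-runs the query terms as an implicit phrase against the listed
fields, and ps is the slop allowed on that phrase, so documents whose
terms appear close together score higher.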
Great, thank you Mark!
Michael
On Mon, Aug 17, 2009 at 10:48 AM, Mark Miller wrote:
> PhraseQuery's do score higher if the terms are found closer together.
>
> does that imply that during the computation of the score for "a b
>>> c"~100, sloppyFr
https://issues.apache.org/jira/secure/IssueNavigator.jspa?reset=true&&pid=12310230&fixfor=12313351&resolution=-1&sorter/field=issuekey&sorter/order=DESC
Michael
On Tue, Aug 18, 2009 at 9:05 AM, Constantijn Visinescu
wrote:
> Last I heard
Are there caches that I should turn off if I can get my entire
index in RAM? Filter cache, query results cache, etc?
Thanks!
Michael
You could post-process the response and remove urls that don't match your
domain pattern.
On Mon, Aug 31, 2009 at 9:45 AM, Olivier H. Beauchesne wrote:
> Hi Mike,
>
> No, my problem is that the field article_outlinks is multivalued thus it
> contains several urls not related to my search. I would
Thanks, Avlesh! I'll try the filter cache.
Anybody familiar enough with the caching implementation to chime in?
Michael
On Mon, Aug 31, 2009 at 10:02 PM, Avlesh Singh wrote:
> Good question!
> The application level cache, say filter cache, would still help because it
> not only
Hi,
I have a Solr+Tomcat installation on an 8 CPU Linux box, and I just tried
sending parallel requests to it and measuring response time. I would expect
that it could handle up to 8 parallel requests without significant slowdown
of any individual request.
Instead, I found that Tomcat is serializing the requests.
The index is in a ramfs.
- Michael
On Tue, Sep 22, 2009 at 8:08 PM, Yonik Seeley wrote:
> What version of Solr are you using?
> Solr1.3 and Lucene 2.4 defaulted to an index reader implementation
> that had to synchronize, so search operations that are IO "heavy"
> can't pr
numbers except that Tomcat is not
multithreading as well as it should :)
Your thoughts?
Michael
On Wed, Sep 23, 2009 at 10:48 AM, Fuad Efendi wrote:
> For 8-CPU load-stress testing of Tomcat you are probably making a mistake:
> - you should execute load-stress software and wait 5-30 minutes (depends on
> index size)
all 8 CPUs! While my test corpus is
45K docs, my actual corpus will be 30MM, and so I'd like to get all the
performance I can out of my box.
Michael
numbers except that Tomcat is not
> > multithreading as well as it should :)
>
> Hi Michael, I think it is very natural; 8 single processes not sharing
> anything are faster than 8 threads sharing something.
>
8 threads sharing something may have *some* overhead versus 8 proces
I'm
querying 8 processes or 8 threads in 1 process, right?
Michael
able to
work in a multicore environment?
Michael
On Wed, Sep 23, 2009 at 11:55 AM, Walter Underwood wrote:
> This sure seems like a good time to try LucidGaze for Solr. That would give
> some Solr-specific profiling data.
>
> http://www.lucidimagination.com/Downloads/LucidGaze-for-Solr
On Wed, Sep 23, 2009 at 12:05 PM, Yonik Seeley
wrote:
> On Wed, Sep 23, 2009 at 11:47 AM, Michael wrote:
> > If this were IO bound, wouldn't I see the same results when sending my 8
> > requests to 8 Tomcats? There's only one "disk" (well, RAM) whether I'
s or something, then send a
"remove core 'core3weeksold' " command.
See http://wiki.apache.org/solr/CoreAdmin#CoreAdminHandler .
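The unload request would look something like this (host and core name
illustrative):

  http://localhost:8983/solr/admin/cores?action=UNLOAD&core=core3weeksold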
Michael
On Thu, Sep 24, 2009 at 12:31 AM, Silent Surfer wrote:
> Hi,
>
> Is there any way to dynamically point the Solr servers to an index/da
and a large
boost in speed after cutting extraneous storage from Solr -- the stored data
is mixed in with the index data and so it slows down searches.
You could also put all 200G onto one Solr instance rather than 10 for >7 days
of data, and accept that those searches will be slower.
Michael
On Fri
Thank you Grant and Lance for your comments -- I've run into a separate snag
which puts this on hold for a bit, but I'll return to finish digging into
this and post my results. - Michael
On Thu, Sep 24, 2009 at 9:23 PM, Lance Norskog wrote:
> Are you on Java 5, 6 or 7? Each rele
Thanks for the suggestions!
Michael
On Fri, Sep 25, 2009 at 10:02 AM, Michael wrote:
> Thank you Grant and Lance for your comments -- I've run into a separate
> snag which puts this on hold for a bit, but I'll return to finish digging
> into this and post my results. - Michael
>
> On Thu,
If I index a bunch of email documents, is there a way to say "show me all
email documents, but only one per To: email address"
so that if there are a total of 10 distinct To: fields in the corpus, I get
back 10 email documents?
I'm aware of http://wiki.apache.org/solr/Deduplication but I want to re
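A sketch of how result grouping (available in later Solr releases)
expresses this, assuming the To: address is indexed in a field named
"to":

  /select?q=*:*&group=true&group.field=to&group.limit=1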
I'd like to have 5 cores on my box. core0 should automatically shard to
cores 1-4, which each have a quarter of my corpus.
I tried this in my solrconfig.xml:
<str name="shards">${solr.core.shardsParam:}</str>
and this in my solr.xml:
Unfortunately, this doesn't work, because cores
On Wed, Oct 7, 2009 at 1:46 PM, Michael wrote:
> Is there a way to not have the shards param at all for most cores, and for
> core0 to specify it?
E.g. core0 requests always get a "&shards=foo" appended, while other
cores don't have an "&shards" param at
On Fri, Oct 9, 2009 at 6:03 AM, Shalin Shekhar Mangar
wrote:
> Michael, the last line does not seem right. The <core> tag has nothing
> called shardParam. If you want to add a core property called shardParam, you
> need to add something like this:
>
> <property name="shardParam" value="l
specify a &shards query parameter.) Clearly
I'm doing something wrong, but I'm in the dark as to how to do it
right.
Any help would be appreciated!
Michael
On Fri, Oct 9, 2009 at 10:13 AM, Michael wrote:
> On Fri, Oct 9, 2009 at 6:03 AM, Shalin Shekhar Mangar
> wrote:
>> M
On Fri, Oct 9, 2009 at 10:26 AM, Michael wrote:
> Hm... still no success. Can anyone point me to a doc that explains
> how to define and reference core properties? I've had no luck
> searching Google.
OK, definition is described here:
http://wiki.apache.org/solr/CoreAdmin#prop
core0 to the one specifying a &shards=
parameter:
I don't like the duplication of config, but at least it accomplishes my goal!
Michael
On Fri, Oct 9, 2009 at 10:37 AM, Michael wrote:
> On Fri, Oct 9, 2009 at 10:26 AM, Michael wrote:
>> Hm... still no success. Can anyone p
I'm not writing an entire plugin -- just a <str> tag specifying a default
parameter to a <requestHandler>. Individual <str> tags don't have an
"enable" flag for me to conditionally set to false. Maybe I'm
misunderstanding what you're suggesting?
Thanks again,
Michael
OK, a hacky but working solution for making one core shard to all the
others: have the default parameter *name* vary, so that one core gets
"&shards=foo" and all other cores get "&dummy=foo".
# solr.xml
...
# solrconfig.xml
${shardsVal
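The stripped-out config might have looked roughly like this (core,
property, and host names are illustrative):

  # solr.xml
  <core name="core0" instanceDir="core0">
    <property name="shardsParamName" value="shards"/>
    <property name="shardsVal"
              value="localhost:8983/solr/core1,localhost:8983/solr/core2"/>
  </core>
  <core name="core1" instanceDir="core1"/>
  ...

  # solrconfig.xml (shared by all cores)
  <requestHandler name="standard" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <!-- resolves to name="shards" on core0, name="dummy" elsewhere -->
      <str name="${shardsParamName:dummy}">${shardsVal:}</str>
    </lst>
  </requestHandler>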
What tokenizer and filters are you using in what order? See schema.xml.
Also, you may wish to use ASCIIFoldingFilter, which covers more cases
than ISOLatin1AccentFilter.
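A minimal sketch of where such a filter sits in schema.xml (the field
type name is illustrative):

  <fieldType name="text_folded" class="solr.TextField">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
      <!-- folds accented characters to their ASCII equivalents -->
      <filter class="solr.ASCIIFoldingFilterFactory"/>
    </analyzer>
  </fieldType>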
Michael
On Mon, Oct 12, 2009 at 12:42 PM, R. Tan wrote:
> Hi,
> I'm querying with an accented keyword such as
rors.
How can I get insight into what is causing the failure? I assume it's
some configuration problem but don't know where to start.
Thanks in advance for any help! Config files are below.
Michael
Here is my solr.xml:
And here's the
, even manually clicking
"Replicate Now" on the slave will show failures without explaining
why.
With replicateAfter="startup" and "commit" I was able to get a slave
core in the same Solr instance to replicate upon startup and upon
add-doc-and-commit.
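A sketch of a master-side solrconfig.xml with those settings (the
confFiles value is illustrative):

  <requestHandler name="/replication" class="solr.ReplicationHandler">
    <lst name="master">
      <str name="replicateAfter">startup</str>
      <str name="replicateAfter">commit</str>
      <str name="confFiles">schema.xml,stopwords.txt</str>
    </lst>
  </requestHandler>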
Michael
On Tue, N
ction. I also tried putting
"${replicateCore:${solr.core.name}}" in the solrconfig.xml, but the
default in that case is literally "${solr.core.name}" -- the variable
expansion isn't recursive.
Thanks in advance for any pointers.
Michael
But between my commit and the stop, more documents can come in
that would need *another* commit...
Lots of people must have had this problem already, so I know the
answer is simple; I just can't find it!
Thanks.
Michael
Thanks, gwk! This doesn't exactly meet our needs, but helped us get
to a solution. In short, we are manually committing in our outside
updater process (instead of letting Solr autocommit), and marking
which documents have been updated before a successful commit. Now
stopping solr is as easy as kill -9.
Michael
catalina stop instead of kill -9.
It's good to know about the enabled feature -- my team was just
discussing whether something like that existed that we could use --
but as we'd also like to recover cleanly from power failures and other
Solr terminations, I think we'll track which docs are uncommitted
outside of Solr.
Michael
. It stops accepting connections, but java refuses
to actually die. Not sure what we're doing wrong on our end, but I
see this frequently and end up having to do a kill (usually not -9!).
I guess we'll stick with externally tracking which docs have
committed, so that when we inevitably have to kill Solr it doesn't
cause a problem.
Michael
a, Lance. I certainly agree with the idea of backstop
janitors. We don't have a good way of polling Solr for what's in
there or not -- we have a kind of asynchronous, multithreaded updating
system sending docs to Solr -- but we always can find out *externally*
which docs have been committed or not.
Michael
get purged. Am
I misunderstanding you?
Michael
PS: The extra 1.5G actually matters, as this is one of 8 cores and I'm
trying to keep it all in RAM.
On Tue, Nov 17, 2009 at 2:37 PM, Israel Ekpo wrote:
> On Tue, Nov 17, 2009 at 2:24 PM, Chris Hostetter
> wrote:
>
>>
>> : Bas
On Fri, Nov 20, 2009 at 12:35 PM, Yonik Seeley
wrote:
> On Fri, Nov 20, 2009 at 12:24 PM, Michael wrote:
>> So -- I thought I understood you to mean that if I frequently merge,
>> it's basically the same as an optimize, and cruft will get purged. Am
>> I misunders
Hello,
I've got a stored, indexed field that contains some actual text, and some
metainfo, like this:
one two three four [METAINFO] oneprime twoprime threeprime fourprime
I have written a Tokenizer that skips past the [METAINFO] marker and uses
the last four words as the tokens for the field,
/solr/toplevel/select/&q=field:value.
Is this a known bug, or am I just doing something wrong?
Thanks in advance!
- Michael
PS: The NPE, which is thrown by the midlevel cores:
Jan 7, 2010 4:01:02 PM org.apache.solr.common.SolrException log
a complex analyzer). When I
search for foo:value where foo is an analyzer that uses
StandardTokenizer
LowerCaseFilter
WordDelimiterFilter
TrimFilter
I *don't* get an NPE.
Thanks,
Michael
On Thu, Jan 7, 2010 at 4:25 PM, Yonik Seeley wrote:
> On Thu, Jan 7, 2010 at 4:17 PM, Mich
I have two datasets
I've been testing with - one with 56 U.S. cities (the "sparse" set)
and one with over 197000 towns and cities (the "dense" set). The dense
set exhibited no problems with consistency searching at various radii,
but the sparse set exhibited the same issues
(). In this case, the "for" loop is
skipped altogether and the method returns a CartesianShape object with
an empty boxIds list.
I notice the problem when I have small, geographically sparse datasets.
I'm going to shoot the jteam an email regarding this.
Michael D.
On Tue, Mar 3
the same test on two sets of
data, one searching progressively outward from a point in the US and
from one in Russia. The Russia test showed the inconsistent results
while the U.S. didn't.
Mike D.
On Wed, Mar 31, 2010 at 4:57 PM, Mccleese, Sean W (388A)
wrote:
> Michael,
>
> This w
getDocIdSet it seems. I also tried using
NumericRangeQueries in a QueryWrapperFilter but still no luck. Do I
have to customize the getDocIdSet methods in some way? Any help or
pointers in the right direction would be appreciated.
Regards,
Michael Donelan
Ekpo wrote:
> He is referring to the org.apache.lucene.search.Filter classes.
>
> Michael,
>
> I did a search too and I could not really find any useful tutorials on the
> subject.
>
> You can take a look at how this is implemented in the Spatial Solr Plugin by
> the JTea
Try using "geo_distance" in the return fields.
On Thu, Apr 29, 2010 at 9:26 AM, Jean-Sebastien Vachon
wrote:
> Hi All,
>
> I am using JTeam's Spatial Plugin RC3 to perform spatial searches on my index
> and it works great. However, I can't seem to get it to return the computed
> distances.
>
>
Hello all,
I downloaded 5.4 and started doing a rolling upgrade from a 5.0
solrcloud cluster and discovered that there seems to be a compatibility
issue when doing a rolling upgrade from pre-5.4, which causes the 5.4 nodes
to fail with "unable to determine leader" errors.
Is there a workaround that
ok,
I just found the 5.4.1 RC2 download, it seems to work ok for a rolling
upgrade.
I will see about downgrading back to 5.4.0 afterwards to be on an
official release ...
On 01/19/2016 04:27 PM, Michael Joyner wrote:
Hello all,
I downloaded 5.4 and started doing a rolling upgrade from a
fix release. It should be out
around the weekend.
On Tue, Jan 19, 2016 at 1:48 PM, Michael Joyner wrote:
ok,
I just found the 5.4.1 RC2 download, it seems to work ok for a rolling
upgrade.
I will see about downgrading back to 5.4.0 afterwards to be on an official
release ...
On 01/19/2016 04
On 01/21/2016 01:22 PM, Ishan Chattopadhyaya wrote:
Perhaps you could stay on 5.4.1 RC2, since that is what 5.4.1 will be
(unless there are last moment issues).
On Wed, Jan 20, 2016 at 7:50 PM, Michael Joyner wrote:
Unfortunately, it really couldn't wait.
I did a rolling upgrade t
The reason for this is that we have some logic at the Solr
server which heavily depends on these other Java objects. Unfortunately we
cannot easily shift that logic to the client side.
Thank you!
Michael
We're running solr 4.4.0 running in this software
(https://github.com/CDRH/nebnews - Django based newspaper site). Solr is
running on Ubuntu 12.04 in Jetty. The site occasionally (once a day) goes down
with a Connection Refused error. I’m having a hard time troubleshooting the
issue and was loo
rs:
"hl=true&hl.fl=normal_text&hl.simple.pre=&hl.simple.post="
and return:
> "highlighting": { "chikora.com": {} }
>
("chikora.com" it's the id of the parent document)
it looks like this is already solved here:
https://issues.apache.org/jira/browse/LUCENE-5929
ore 2 because the distance between the words...)
Thank you,
Michael
Thank you, @Doug Turnbull. I tried http://splainer.io but it doesn't work for my
query (it doesn't show an explain for the docs..).
here the picture again...
https://drive.google.com/file/d/0B-7dnH4rlntJc2ZWdmxMS3RDMGc/view?usp=sharing
On Tue, Mar 1, 2016 at 10:06 PM, Doug Turnbull <
dturnb...@opensourceconnections.com>
of the parent document)
it looks like this is already solved here:
https://issues.apache.org/jira/browse/LUCENE-5929
but I don't understand how to use it.
Thanks,
Michael
P.S: sorry about my English.. working on it :)
Hi Emir,
This morning I deleted those documents and now added them again to re-run the
query.. and now it behaves as I expect (0_0) and I can't reproduce the
problem... this is weird.. :\
On Wed, Mar 2, 2016 at 11:38 AM, Emir Arnautovic <
emir.arnauto...@sematext.com> wrote:
> Hi
because of which child this parent was returned as a result (the child
with the highest score?).
3. Am I boosting right?:
>{!parent which="is_parent:true" score="max"}
>(
>normal_text:("clients home"~1000)
>h_titles:("clients home"~1000)^3
>title:("clients home"~1000)^5
>depth:0^1.1
>)
Thank you,
Michael
Hi,
I have a collection with one shard in solrcloud (for development before
scaling) and when I'm trying to update new documents it takes about 20 sec
for 12MB of data.
What's wrong with my config?
VM RAM - 28gb
JVM-Memory - 10gb
What else can I do?
Thanks,
Michael
wrote:
> This really doesn't have much information to go on.
>
> Have you reviewed: http://wiki.apache.org/lucene-java/ImproveIndexingSpeed
> ?
>
> What is "slow"? How are you updating? Are you batching updates? Are
> you committing often?
>
> Details m
Hi,
how can I *highlight* and *return* the most relevant child with
BlockJoinQuery?
for this:
> {!parent which="is_parent:*" score=max}(title:(terms))
I expect to get:
...
docs: [
  {
    doc parent
    _childDocuments_: {the most relevant child}
  }
  {
    doc parent2
    _childDocuments_: {the most rel
can't comment, I only saw that there is some
> highlighting case for {!parent} queries. Sorry.
>
> On Mon, Mar 14, 2016 at 6:13 PM, michael solomon
> wrote:
>
> > Hi,
> > how can I *highlight* and *return* the most relevant child with
> > BlockJoinQuery.
> > for
What query did you try?
On Thu, Mar 17, 2016 at 12:22 PM, Anil wrote:
> HI,
>
> We are using solrcloud with zookeeper and each collection has 5 shards and
> 2 replicas.
> we are seeing "org.apache.solr.client.solrj.SolrServerException: No live
> SolrServers available to handle this request". i d
+ rank/MaxRank = final
score (between 0-1)
Thanks,
Michael
I want the collations to be ordered by numFound, and obviously
"predictive analytics" has more results than "positive analytic".
Thanks,
Michael
Hi,
how can I index-time boost nested documents in JSON format?
Should I
simply ignore it?
Thanks, Michael
Thanks, and what can we do about that?
On Apr 2, 2016 5:28 PM, "Reth RM" wrote:
> Afaik, such a feature doesn't exist currently, but it looks like a nice one to have.
>
>
>
>
> On Thu, Mar 31, 2016 at 8:33 PM, michael solomon
> wrote:
>
> > Hi,
> > It
Hi,
image:
http://s24.postimg.org/u457bhzr9/Untitled.png
why does the suggestion return "analytics" (great!) while the collation takes
"analtics"?
Thanks,
Michael
done :)
https://issues.apache.org/jira/browse/SOLR-8934
On Sun, Apr 3, 2016 at 2:08 PM, Reth RM wrote:
> Maybe open a jira under improvement.
> https://issues.apache.org/jira/login.jsp?
>
>
> On Sat, Apr 2, 2016 at 11:30 PM, michael solomon
> wrote:
>
> > Thanks, an
Hi,
I'm using a Block Join query - {!parent which="is_parent:true"}
I need to return the most relevant child for each parent.
I saw on Google that there are two options for this:
1. Expand
2. ChildDocTransformerFactory
What is the difference between them? Which one should I use?
Thanks a lot,
Michael
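For reference, a sketch of the transformer approach (the parentFilter
and limit values are illustrative; it returns the top children inline
with each parent document):

  fl=*,[child parentFilter=is_parent:true limit=1]

The ExpandComponent instead returns matching children in a separate
"expanded" section of the response.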
Hey all,
I am using lucene and solr version 4.2, and was wondering what would
be the best way to disallow regex queries with very large repetition counts,
something like blah{1234567} or blah{1234, 123445678}.
t be other ways
> of doing this.
>
> If you're allowing regexes on numeric fields, using real
> number fields (trie) and using range queries is a much
> better way to go.
>
> Best,
> Erick
>
>> On Sun, Apr 10, 2016 at 9:28 AM, Michael Harkins wrote:
>>
parent_field:"bla bla"^10
Thanks,
Michael
> {!parent which="is_parent:true"
> score=max}(child_field:bla)
> or
> parent_field:"bla bla"^10 +{!parent which="is_parent:true"
> score=max}(child_field:bla)
>
> there should be no spaces in the child clause; otherwise extract it to a param
> and refer to it via v=$param
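Concretely, the second form with the child clause extracted to a
parameter might look like this (the param name "childq" is
illustrative):

  q=parent_field:"bla bla"^10
    +{!parent which="is_parent:true" score=max v=$childq}
  &childq=child_field:(bla bla)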
query and omit the
> closing bracket.
>
> On Tue, Apr 12, 2016 at 3:30 PM, michael solomon
> wrote:
>
> > Thanks,
> > when I'm trying:
> > city:"walla walla"^10 {!parent which="is_parent:true"
> > score=max}(normal_text:walla)
> >
error?
Should I simply ignore it?
Thanks,
Michael
Hey,
I am encountering an issue which looks a lot like
https://issues.apache.org/jira/browse/SOLR-6763.
However, it seems like the fix for that does not address the entire problem.
That fix will only work if we hit the zkClient.getChildren() call before the
reconnect logic has finished reconne
I added a comment on the INFRA issue.
I don't understand why it periodically "gets stuck".
Mike McCandless
http://blog.mikemccandless.com
On Fri, Oct 23, 2015 at 11:27 AM, Kevin Risden
wrote:
> It looks like both Apache Git mirror (git://git.apache.org/lucene-solr.git)
> and GitHub mirror (ht
StandardTokenizer splits your text into tokens, and the suggester
suggests tokens independently. It sounds as if you want the suggestions
to be based on the entire text (not just the current word), and that
only adjacent words in the original should appear as suggestions.
Assuming that's what
On 02/17/2015 03:46 AM, Volkan Altan wrote:
First of all thank you for your answer.
You're welcome - thanks for sending a more complete example of your
problem and expected behavior.
I don't want to use KeywordTokenizer, because as long as the compound words
written by the user are availabl
There is also PostingsHighlighter -- I recommend it, if only for the
performance improvement, which is substantial, but I'm not completely
sure how it handles this issue. The one drawback I *am* aware of is
that it is insensitive to positions (so words from phrases get
highlighted even in isol
October 2014, Apache Solr™ 4.10.4 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.10.4
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted
We're using Tomcat 6.X with Solr 4.8.X
Cheers,
Michael
Hi,
I’m attempting to facet on the results of a custom solr function. I’ve been
trying all kinds of combinations that I think would work, but keep getting
errors. I’m starting to wonder if it is possible.
I’m using Solr 4.0 and here is how I am calling:
&facet.query={!func}myCustomSolrQuer
en 2 etc
Not sure if there’s a better way, but this works
From: Michael Motulewicz <michael.motulew...@healthsparq.com>
Reply-To: "solr-user@lucene.apache.org" <solr-user@lucene.apache.org>
Date: Monday, April
                      Index Version    Gen  Size
...634                                 27   287.19 GB
Master (Replicable)   1430107573634    27   -
Slave (Searching)     1429762011916    23   287.14 GB
Any idea why the replication is not triggered here or what I could try
to fix it?
Solr Version is 4.10.3.
-Michael
Hi,
I have a SolrCloud setup, running 4.10.3. The setup consists of several cores,
each with a single shard and initially each shard has a single replica (so,
basically, one machine). I am using core discovery, and my deployment tools
create an empty core on newly provisioned machines.
The sce
Hi,
I am seeing some unexpected behavior when adding a new machine to my cluster. I
am running 4.10.3.
My setup has multiple collections, each collection has a single shard. I am
using core auto discovery on the hosts (my deployment mechanism ensures that
the directory structure is created and
make sure that the node with data
>becomes the leader. So just keep the cluster running while adding a new
>node.
>
>Also, stop relying on core discovery for setting up a node. At some point
>we will stop supporting this feature. Use the collection API to add new
>replicas.
>
IBM's J9 JVM unfortunately still has a number of nasty bugs affecting
Lucene; most likely you are hitting one of these. We used to test J9
in our continuous Jenkins jobs, but there were just too many
J9-specific failures and we couldn't get IBM's attention to resolve
them, so we stopped. For now