Hi Shawn,
Need your help:
I am using a master-slave architecture in my system, and here is the
replication section of my solrconfig.xml:
    <requestHandler name="/replication" class="solr.ReplicationHandler">
      <lst name="master">
        <str name="enable">${enable.master:false}</str>
        <str name="replicateAfter">startup</str>
        <str name="replicateAfter">commit</str>
        <str name="commitReserveDuration">00:00:10</str>
        <str name="confFiles">managed-schema</str>
      </lst>
      <lst name="slave">
        <str name="enable">${enable.slave:false}</str>
        <str name="masterUrl">http://${MASTER_CORE_URL}/${solr.core.name}</str>
        <str name="pollInterval">${POLL_TIME}</str>
      </lst>
    </requestHandler>
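(The ${enable.master:false} and ${enable.slave:false} toggles above are normally
flipped per node via system properties at startup; for example, with hypothetical
values matching the placeholders in this config:

    bin/solr start -Denable.master=true
    bin/solr start -Denable.slave=true -DMASTER_CORE_URL=masterhost:8983/solr -DPOLL_TIME=00:00:30
)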
Problem:
I am noticing that my slaves are not able to use proper caching:
1. I am indexing on my master and committing frequently; what I am noticing
is that my slaves are committing very frequently and the cache is not being
built properly and ...
I didn't see a real Java project there, but the directions to compile on
Linux are almost always applicable to Windows with Java. If you find a
project that says it uses Ant or Maven, all you need to do is download Ant
or Maven and the Java Development Kit, and put both of them on the Windows
PATH. The ...
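(For example, a minimal sketch of that setup on Windows - the install paths
here are placeholders:

    set JAVA_HOME=C:\tools\jdk8
    set PATH=%JAVA_HOME%\bin;C:\tools\apache-maven\bin;%PATH%
    mvn -version
    mvn package

The same pattern applies to Ant, pointing at ANT_HOME and Ant's bin directory.)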
Hi,
in
solr-6.3.0/solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
there is the following code starting at line 107:
//TODO default to 2; but need to debug why BasicDistributedZk2Test fails
early on
String znode = message.getInt(DocCollection.STATE_FORMAT, 1) == 1
The problem is that we would like to run without downtime. Rolling updates
have worked fine so far, except when creating a collection at the wrong time.
I just did another test with stateFormat=2. This seems to greatly
improve the situation. One collection creation got stuck but other
creations still w ...
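(For reference, a minimal SolrJ sketch of requesting the state format explicitly
at creation time, as in the test above - this assumes the setStateFormat setter
on CollectionAdminRequest.Create in SolrJ 6.x, so verify it against your version:

    CloudSolrClient client = new CloudSolrClient.Builder()
        .withZkHost("zk1:2181,zk2:2181,zk3:2181")   // placeholder ZK ensemble
        .build();
    CollectionAdminRequest.Create create =
        CollectionAdminRequest.createCollection("test_collection", "test_config", 2, 2);
    create.setStateFormat(2);   // explicitly pin stateFormat=2
    create.process(client);

With stateFormat=2 the collection's state lives in its own
/collections/<name>/state.json znode instead of the shared clusterstate.json.)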
Thanks, Mike, for emphasizing that point. I put that point in the blog post
as well - it's certainly the recommended approach when it's sufficient.
Erik
> On Jan 4, 2017, at 07:36, Mike Thomsen wrote:
>
> I didn't see a real Java project there, but the directions to compile on
> Linux are almo
On 1/4/2017 6:23 AM, Hendrik Haddorp wrote:
> Problem is that we would like to run without down times. Rolling
> updates worked fine so far except when creating a collection at the
> wrong time. I just did another test with stateFormat=2. This seems to
> greatly improve the situation. One collectio
On 1/4/2017 3:45 AM, kshitij tyagi wrote:
> Problem:
>
> I am noticing that my slaves are not able to use proper caching:
>
> 1. I am indexing on my master and committing frequently; what I am noticing
> is that my slaves are committing very frequently and the cache is not being
> built properly and ...
Hello
Is it possible to override the ExtractClass for a specific document?
I would like to upload an XML document, but this XML is not well-formed.
I need this XML because it is part of a project where a corrupt XML file is
needed, for testing purposes.
The update/extract process fails every time wi ...
Can anyone explain how to get rid of this error?
java.lang.Exception: Assertions mismatch: -ea was not specified but
-Dtests.asserts=true
at __randomizedtesting.SeedInfo.seed([5B25E606A72BD541]:0)
at
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsR
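(For what it's worth, the mismatch the Lucene test framework is complaining
about is between the JVM's assertion status and the tests.asserts system
property: the run had -Dtests.asserts=true, but the JVM was started without
-ea. The usual fix is to add -ea to the JVM arguments of the test run
configuration, or alternatively to pass -Dtests.asserts=false, so that the
two sides agree.)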
Actually, the state format has defaulted to 2 for many releases now (all of
6.x at least). This default is enforced in CollectionsHandler well
before the code in ClusterStateMutator is executed.
On Wed, Jan 4, 2017 at 6:16 PM, Hendrik Haddorp wrote:
> Hi,
>
> in
> solr-6.3.0/solr/core/src/java/org/apache
This issue is resolved for Solr 6.4:
https://issues.apache.org/jira/browse/SOLR-9919
I also created an issue to help prevent future bugs of this nature:
https://issues.apache.org/jira/browse/SOLR-9924
Thanks for the bug report!
Joel Bernstein
http://joelsolr.blogspot.com/
On Tue, Jan 3, 2017 at 9:04
On 1/4/2017 8:29 AM, Jennifer Coston wrote:
> Can anyone explain how to get rid of this error?
>
> java.lang.Exception: Assertions mismatch: -ea was not specified but
> -Dtests.asserts=true
> at __randomizedtesting.SeedInfo.seed([5B25E606A72BD541]:0)
> at
> org.apache.lucene.util.Test
On 1/4/2017 8:12 AM, sn0...@ulysses-erp.com wrote:
> Is it possible to override the ExtractClass for a specific document?
> I would like to upload an XML document, but this XML is not well-formed.
>
> I need this XML because it is part of a project where a corrupt XML file is
> needed, for testing purposes.
I get an exception
"org.apache.tika.exception.TikaException: Zip bomb
detected!"
when I try to parse an HTML file - and I think I know why:
there are many, many divs nested in cascade, over 200 deep, with
spans inside each of them.
Is it correct that there is this limit for HTML files?
---
You might get a more knowledgeable response from the Tika folks;
that's really not something Solr controls.
Best,
Erick
On Wed, Jan 4, 2017 at 8:50 AM, wrote:
> i get an exception "org.apache.tika.exception.TikaException:
> Zip bomb detected!
> if i would like to parse a html file - and i thin
You are right, the code looks like it. But why did I then see collection
data in the clusterstate.json file? If version 1 is not used, I would
assume that no data ends up in there. When explicitly setting state
format 2, the system seemed to behave differently. And if the code always
uses version ...
This came up back in September [1] and [2]. Same trigger...crazy number of
divs.
I think we could modify the AutoDetectParser to enable configuration of maximum
zip-bomb depth via tika-config.
If there's any interest in this, re-open TIKA-2091, and I'll take a look.
Best,
Tim
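(In the meantime, a rough workaround sketch for anyone calling Tika directly -
it is AutoDetectParser that wraps the handler in the SecureContentHandler that
raises this error, so parsing with HtmlParser sidesteps the depth check; class
names are per Tika 1.x and worth double-checking:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import org.apache.tika.metadata.Metadata;
    import org.apache.tika.parser.ParseContext;
    import org.apache.tika.parser.html.HtmlParser;
    import org.apache.tika.sax.BodyContentHandler;

    HtmlParser parser = new HtmlParser();
    BodyContentHandler handler = new BodyContentHandler(-1);  // -1 disables the write limit
    try (InputStream in = Files.newInputStream(Paths.get("deep.html"))) {
        // parses the deeply nested HTML without the zip-bomb guard
        parser.parse(in, handler, new Metadata(), new ParseContext());
    }
    String text = handler.toString();

Note this bypasses a safety check, so it only makes sense for input you trust.)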
I had success doing something like this, which I found in some of the Solr
tests...
SolrResourceLoader loader = new SolrResourceLoader(solrHomeDir.toPath());
Path configSetPath = Paths.get(configSetHome).toAbsolutePath();
// the snippet was cut off mid-constructor; the builder name and the
// setConfigSetBaseDirectory call below are a plausible completion, not verbatim
final NodeConfig config = new NodeConfig.NodeConfigBuilder("embeddedSolrServer", loader)
    .setConfigSetBaseDirectory(configSetPath.toString())
    .build();
Hello,
while creating a new collection, it fails to spin up solr cores on some
nodes due to "insufficient direct memory".
Here is the error:
- 3044_01_17_shard42_replica1:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
The max direct memory is likely too low.
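(Rough arithmetic behind that error, assuming the default HDFS block cache
geometry: each slab is 16384 blocks x 8 KB = 128 MB of direct memory, so with
solr.hdfs.blockcache.slab.count=20 and a per-core cache
(solr.hdfs.blockcache.global=false), every core that starts tries to allocate
about 20 x 128 MB = 2.5 GB off-heap. -XX:MaxDirectMemorySize has to cover that
times the number of cores landing on the node, or core creation fails exactly
like this.)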
Hendrik:
Historically in 4.x, there was code that would reconstruct the
clusterstate.json contents, so you would see "deleted" collections come
back. One scenario was:
- Have a Solr node offline that had a replica for a collection.
- Delete that collection
- Bring the node back
- It would register it
Hi Erik,
I have actually also seen that behavior already, so I will check what
happens when I set that property.
I still believe the clusterstate.json is getting set before the
node comes back up, but I will try to verify that further tomorrow.
thanks,
Hendrik
On 04/01/17 22:10, Erick Er
On 1/4/2017 1:43 PM, Chetas Joshi wrote:
> while creating a new collection, it fails to spin up solr cores on some
> nodes due to "insufficient direct memory".
>
> Here is the error:
>
>- *3044_01_17_shard42_replica1:*
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Let us know how it goes. You'll probably want to remove the _contents_
of clusterstate.json and just leave it as a pair of braces, i.e. {},
if for no other reason than that it's confusing otherwise.
In times past the node needed to be there even if empty, although I just
tried removing it completely on 6x and I w ...
Hi Shawn
Thanks for the explanation!
I have slab count set to 20 and I did not have global block cache.
I have a follow-up question. Does setting slab count=1 affect the
write/read performance of Solr while reading the indices from HDFS? Is this
setting just used while creating new cores?
Thanks
Thanks for your response.
We definitely use solrQuery.set("json.facet", "the json query here");
Btw we are using Solr 5.2.1.
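(For reference, a minimal SolrJ sketch of that pattern - since 5.x SolrJ has no
typed accessor for JSON facets, the result has to be read from the raw response;
the collection and field names here are placeholders:

    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);
    q.set("json.facet", "{categories:{type:terms,field:cat}}");
    QueryResponse rsp = client.query("techproducts", q);
    // JSON facet output arrives under the top-level "facets" section
    NamedList<?> facets = (NamedList<?>) rsp.getResponse().get("facets");
    System.out.println(facets);
)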