This is sounding like an XY problem. What are you measuring
when you say RAM usage is 99%? Is this virtual memory? See:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
What errors are you seeing when you say "my node stops receiving
documents"?
How are you sending 10M documents?
I ran a test on my SolrCloud. I tried to send 100 million documents via
Hadoop into a node that has no replica. When the number of documents sent
to that node reached around 30 million, RAM usage on my machine hit 99%
(Solr heap usage is not at 99%; it uses just 3-4 GB of RAM). A while
later, my node stops receiving documents.
bq: but the uniqueId is generated by me. But when Solr indexes and there
is an update to a doc, it deletes the doc and creates a new one, so it
generates a new UUID.
Right, this is why I was saying that a UUID field may not fit your use
case. The _point_ of a UUID field is to generate a unique ID
automatically each time a document is indexed, which is exactly why an
update produces a new one.
Solr does not index arbitrary XML, it only indexes XML in a very
specific format.
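For reference, the specific format in question is Solr's <add>/<doc>
update XML; the field names below are only placeholders:

  <add>
    <doc>
      <field name="id">doc-1</field>
      <field name="title">An example title</field>
      <field name="text">Body text for the example document</field>
    </doc>
  </add>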
You could write some kind of SolrJ program that parsed your XML
docs and constructed the appropriate SolrInputDocuments.
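A very rough sketch of what such a SolrJ program could look like,
assuming 4.x-era SolrJ (HttpSolrServer) and a made-up input layout of
<record id="..."> elements containing <title> and <body>; the field
names, tag names, and URL are placeholders, not your actual schema:

  import java.io.File;
  import java.util.ArrayList;
  import java.util.List;

  import javax.xml.parsers.DocumentBuilder;
  import javax.xml.parsers.DocumentBuilderFactory;

  import org.apache.solr.client.solrj.impl.HttpSolrServer; // SolrJ 4.x-era client
  import org.apache.solr.common.SolrInputDocument;
  import org.w3c.dom.Document;
  import org.w3c.dom.Element;
  import org.w3c.dom.NodeList;

  public class XmlIndexer {
    public static void main(String[] args) throws Exception {
      HttpSolrServer server =
          new HttpSolrServer("http://localhost:8983/solr/collection1");

      DocumentBuilder builder =
          DocumentBuilderFactory.newInstance().newDocumentBuilder();
      Document xml = builder.parse(new File(args[0]));

      List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
      NodeList records = xml.getElementsByTagName("record");
      for (int i = 0; i < records.getLength(); i++) {
        Element rec = (Element) records.item(i);
        SolrInputDocument doc = new SolrInputDocument();
        // Set the id on the client so re-sending the same record updates
        // the existing Solr document instead of creating a new one.
        doc.addField("id", rec.getAttribute("id"));
        doc.addField("title", text(rec, "title"));
        doc.addField("body", text(rec, "body"));
        batch.add(doc);

        if (batch.size() >= 1000) {   // send in batches, not one at a time
          server.add(batch);
          batch.clear();
        }
      }
      if (!batch.isEmpty()) {
        server.add(batch);
      }
      server.commit();                // one commit at the end, not per doc
      server.shutdown();
    }

    private static String text(Element rec, String tag) {
      NodeList nodes = rec.getElementsByTagName(tag);
      return nodes.getLength() == 0 ? null : nodes.item(0).getTextContent();
    }
  }

Batching the adds and committing once at the end (rather than per
document) makes a big difference to indexing throughput.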
You could use DIH with some of the XML/XSL transformations,
but be aware that the XSLT bits do
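If you do go the DIH route, the XML/XSL variant is usually wired up
with XPathEntityProcessor, roughly along these lines (the paths and
file names here are placeholders):

  <dataConfig>
    <dataSource type="FileDataSource" encoding="UTF-8"/>
    <document>
      <!-- With useSolrAddSchema="true", the XSLT is expected to
           transform your XML into Solr's <add><doc> format. -->
      <entity name="xmlFile"
              processor="XPathEntityProcessor"
              stream="true"
              url="/path/to/your-docs.xml"
              xsl="xslt/your-transform.xsl"
              useSolrAddSchema="true"/>
    </document>
  </dataConfig>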
bq: Also, my boss told me it unequivocally has to be this way :p
Pesky bosses.
But how often is the index changing? If you're not doing any updates
to it, then the problem is moot. The other way to approach this problem
is to just control when the index changes. Would it suffice to only have
Well, "it depends". If the tomcat that went down contains all
the replicas (leader and follower) then indexing will halt,
searching should continue with indications that you're getting
partial results back.
If at least one node for each shard is still active, you should
be fine. There may be some
Usually that error means you have a mix of old and new jars
in your classpath somehow. Why that's only being triggered
when you have multiple nodes, I'm not sure. By any chance, have
you copied any jars into different places?
Best
Erick
On Fri, Aug 23, 2013 at 2:48 AM, 兴涛孙 wrote:
> hello,guy
Hi,
The UUID that is being used as the id of a document is generated by
Solr using an update chain.
I just use the recommended method to generate UUIDs.
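By "recommended method" I mean an update chain along these lines in
solrconfig.xml (a rough sketch; the exact chain in use may differ):

  <updateRequestProcessorChain name="uuid">
    <!-- Fills the id field with a generated UUID when a document
         arrives without one. -->
    <processor class="solr.UUIDUpdateProcessorFactory">
      <str name="fieldName">id</str>
    </processor>
    <processor class="solr.LogUpdateProcessorFactory"/>
    <processor class="solr.RunUpdateProcessorFactory"/>
  </updateRequestProcessorChain>

with update.chain=uuid referenced from the /update handler. Because the
UUID is assigned on the server at add time, re-sending a document
without its existing id produces a brand-new document with a brand-new
UUID, which is the behaviour described above.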
I think an atomic update is not suitable for me, because I want Solr to
index the feeds, not me. I don't want to send information to