http://www.solr-start.com/javadoc/solr-lucene/org/apache/solr/update/processor/TruncateFieldUpdateProcessorFactory.html
Regards,
Alex.
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On 26 June 2016 at 15:09, asteiner wrote:
> Hi
>
> I have a fi
Thanks Alex,
This processor truncates the fields during indexing (I just checked it), but I
need something different.
Let's say I have content of 100 characters. I want the whole field to
be indexed, but I want to store only the first 50 characters, which I will
use for highlighting.
i.e: conten
Do you need to store the entire field or just the first part for highlighting?
If the latter, then define the field you search as
indexed="true" stored="false"
and the field you want to highlight (truncated) as
indexed="false" stored="true".
True that defines one extra field but the cost of one
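The two-field setup described above might look like this in schema.xml (the field and type names here are illustrative, not from the original thread):

```xml
<!-- searched but not stored: the full content is indexed here -->
<field name="content" type="text_general" indexed="true" stored="false"/>
<!-- stored but not indexed: a truncated copy used only for highlighting -->
<field name="content_hl" type="text_general" indexed="false" stored="true"/>
```

You would search against `content` and request highlighting on `content_hl`.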
Yes, you should see errors in your log about running out of disk space.
However, you'll have lots of other problems. In the background,
merging segments requires extra disk space. Say seg1 and seg2
are being merged into seg3. First seg1 and seg2 are copied
into seg3, then seg1 and seg2 are deleted.
But i
My guess is that you are hitting garbage collection issues on those shards
that are going into recovery. If a leader tries to contact a follower
in a single
shard and times out it effectively says "that one must be gone,
let's put it into recovery". Look for LeaderInitiatedRecovery (don't remember
One thing to add to Shawn's comments: How long does
it take to _acquire_ the data? I've often seen, say,
pulling the data from a RDBMS be the bottleneck
rather than Solr per-se. If you're using SolrJ (with
CloudSolrClient) try commenting out the line where you
add the docs to Solr (i.e. something l
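A minimal sketch of the measurement Erick is suggesting: time the acquisition loop with the Solr add commented out, so you can see whether fetching the data is the real bottleneck. `fetchBatch` is a hypothetical stand-in for your own data source (e.g. the RDBMS query), not a SolrJ API.

```java
import java.util.ArrayList;
import java.util.List;

public class IndexThroughputCheck {
    // Placeholder for the (possibly expensive) data acquisition step.
    static List<String> fetchBatch(int n) {
        List<String> docs = new ArrayList<>();
        for (int i = 0; i < n; i++) docs.add("doc-" + i);
        return docs;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        int total = 0;
        for (int batch = 0; batch < 100; batch++) {
            List<String> docs = fetchBatch(1000);
            total += docs.size();
            // solrClient.add(collection, docs);  // <-- comment this out for the test
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(total + " docs acquired in " + elapsedMs + " ms");
    }
}
```

If the timing barely changes with the add commented out, the bottleneck is acquisition, not Solr.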
Thanks a ton Erick for the help!
Thanks
Deeksha
From: Erick Erickson
Sent: Sunday, June 26, 2016 8:58 AM
To: solr-user
Subject: Re: SolrCloud trying to upload documents and shards do not have
storage anymore
Yes. you should see errors in your log about o
1> The difference is that the factory returns the original field
from a "sidecar" index. If you're content with just firing a standard
query at your main index and returning the associated fields
then you can do this from the main index. You won't be able
to do the sophisticated stuff with weights
Can anybody help me with this case, please?
On Jun 26, 2016 3:38 AM, "tkg_cangkul" wrote:
> Hi William,
>
> Thanks for your reply.
> Is there any link, article, or document that I can read on this?
> Please let me know if you have any suggestions.
>
> On 26/06/16 02:58, William Bell wrote:
>
>> It depends on
Hello Erick,
I have four collections in SolrCloud.
There are heavy inserts on each collection, but the issue below is observed for one
collection.
I tried increasing the ZK session timeout up to 2 minutes, but no luck.
I am not sure about the reason for the session timeout at ZooKeeper when inserts are made.
do we need to ex
The NPE shows that the expression clause is null. Are you in SolrCloud
mode? This is required for Streaming Expressions.
I would try sending the query via your browser also, just to make sure
there isn't something we're missing in the curl syntax.
You can call the /stream handler directly an
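Calling the /stream handler directly from the command line might look like the fragment below (the collection name and the expression itself are illustrative; this assumes a SolrCloud node running on localhost:8983):

```
curl --data-urlencode 'expr=search(mycollection,
                                   q="*:*",
                                   fl="id,score",
                                   sort="score desc")' \
     http://localhost:8983/solr/mycollection/stream
```

Using `--data-urlencode` avoids the quoting problems that often creep into hand-built curl commands for Streaming Expressions.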
Hi, Alex,
Thanks. I checked the wrong folder and missed it.
BR,
Liu Peng
- Original Message -
From: Alexandre Rafalovitch
To: solr-user
Subject: Re: Re: Does Solr 6.0 support indexing and querying for HUNGARIAN, KOREAN,
SLOVAK, VIETNAMESE and Traditional Chinese documents?
Date: 24 June 2016, 19:25
You can jump
Hi, Alex,
You are right. I have just begun trying ICU.
Thanks
Liu Peng
- Original Message -
From: Alexandre Rafalovitch
To: solr-user
Subject: Re: Re: Does Solr 6.0 support indexing and querying for HUNGARIAN, KOREAN,
SLOVAK, VIETNAMESE and Traditional Chinese documents?
Date: 24 June 2016, 19:37
Also,
Solr also
Hi
We are seeing the error below (No files to download for index generation),
followed by an InterruptedException.
org.apache.solr.handler.SnapPuller; No files to download for index
generation:
org.apache.solr.common.SolrException; SnapPull failed
:org.apache.solr.common.SolrException: Index fetch
Thanks Erick.
As far as I understand (from debugging highlighting), a field must have
indexed="true" in order to be highlighted, so the truncated field would need to be
both indexed and stored. I would like to avoid indexing the field twice.