Hi,
We are upgrading Solr from 4.x to 7.4.0.
We query Solr through the /select path and a collection alias name, using the
HTTP POST method because our queries are long.
In version 4.x this worked fine.
But in 7.4.0 it fails with the exception below:
SolrException: Could not find collection :
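For reference, a minimal SolrJ sketch of the kind of request described (the base URL, alias name, and query are placeholders, not the poster's actual setup):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

// Query an alias with POST so a long query string fits.
public class AliasQuery {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("field:(very long boolean query ...)");
      // "myAlias" stands in for the collection alias; POST avoids URL length limits
      QueryResponse rsp = client.query("myAlias", q, SolrRequest.METHOD.POST);
      System.out.println(rsp.getResults().getNumFound());
    }
  }
}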
Hi all,
Is there a way to return a String buffer instead of a character array? This is
regarding the example below. Basically, I want to return a StringTermAttribute
instead of a CharTermAttribute.
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.T
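Lucene has no StringTermAttribute, but CharTermAttribute implements CharSequence, so toString() gives you a String copy of the term. A minimal sketch (the filter class name is made up):

import java.io.IOException;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// A TokenFilter that reads each term as a String rather than a char[].
public final class StringTermFilter extends TokenFilter {

  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);

  public StringTermFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // CharTermAttribute implements CharSequence; toString() copies to a String
    String term = termAtt.toString();
    // ... work with 'term'; to write back: termAtt.setEmpty().append(term);
    return true;
  }
}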
Right. That’s the whole point of hashing the key in the first place.
I’ve never seen much imbalance in how documents are distributed using
compositeId, maybe a percent or two.
Do be aware that you can’t really extrapolate from, say, 100 docs over 10
shards. With such low numbers you can get some anomalies
@Erick
Actually, I thought about it further and realized what you were saying. I am
hoping to rely on the murmur3 hash of the routing key to find the destination
shard.
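A rough sketch of how the default compositeId router combines hashes for a two-part key like "tenantA!doc42" (simplified; the real logic lives in Solr's CompositeIdRouter, and the key and class names here are made up):

import org.apache.solr.common.util.Hash;

// Roughly: the high 16 bits come from the murmur3 hash of the route key,
// the low 16 bits from the hash of the doc id.
public class CompositeHashSketch {
  static int compositeHash(String routeKey, String docId) {
    int routeHash = Hash.murmurhash3_x86_32(routeKey, 0, routeKey.length(), 0);
    int idHash = Hash.murmurhash3_x86_32(docId, 0, docId.length(), 0);
    return (routeHash & 0xFFFF0000) | (idHash & 0x0000FFFF);
  }

  public static void main(String[] args) {
    // Each shard owns a contiguous hash range; the doc lands on the shard
    // whose range (from the cluster state) contains this value.
    System.out.printf("hash=%08x%n", compositeHash("tenantA", "doc42"));
  }
}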
On Sun, Jun 30, 2019 at 3:32 AM Nawab Zada Asad Iqbal
wrote:
> Hi Erick,
>
> I plan to use the composite-id routing. And I can use th
On 6/30/2019 2:08 PM, derrick cui wrote:
Good point Erick, I will try it today, but I have already used cursorMark in my
query for deep pagination.
Also, I noticed that my CPU usage is pretty high: with 8 cores, usage is over
700%. I am not sure whether an SSD disk would help.
That depends on whether
Good point Erick, I will try it today, but I have already used cursorMark in my
query for deep pagination.
Also, I noticed that my CPU usage is pretty high: with 8 cores, usage is over
700%. I am not sure whether an SSD disk would help.
Sent from Yahoo Mail for iPhone
On Sunday, June 30, 2019, 2:57
Well, the first thing I’d do is see what’s taking the time: querying or
updating? It should be easy enough to comment out whatever it is that sends docs
to Solr.
If it’s querying, it sounds like you’re paging through your entire data set and
may be hitting the “deep paging” problem. Use cursorMark
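A minimal SolrJ cursorMark paging loop for reference (the URL, collection, and field names are placeholders; cursorMark requires a sort that includes the uniqueKey field, "id" here):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorPager {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setRows(1000);
      q.setSort("id", SolrQuery.ORDER.asc); // sort on uniqueKey is required
      String cursor = CursorMarkParams.CURSOR_MARK_START;
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = client.query(q);
        // ... process rsp.getResults() ...
        String next = rsp.getNextCursorMark();
        if (cursor.equals(next)) {
          break; // cursor did not advance: no more results
        }
        cursor = next;
      }
    }
  }
}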
Only thing I can think of is to check whether you can do in-place
rather than atomic updates:
https://lucene.apache.org/solr/guide/8_1/updating-parts-of-documents.html#in-place-updates
But the conditions are quite restrictive: non-indexed
(indexed="false"), non-stored (stored="false"), single value
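For reference, a sketch of what such an update looks like from SolrJ (collection and field names are made up; assumes "popularity_f" meets the in-place criteria, i.e. single-valued numeric with docValues, indexed="false", stored="false" — otherwise Solr falls back to a full atomic update):

import java.util.Collections;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class InPlaceUpdate {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc1");
      // atomic "set"; applied in-place if the field qualifies
      doc.addField("popularity_f", Collections.singletonMap("set", 42.0f));
      client.add(doc);
      client.commit();
    }
  }
}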
Thanks Alex,
My usage is this:
1. I execute a query and get the result, returning ids only.
2. Add a value to a dynamic field.
3. Save to Solr with a batch size of 1000.
I have defined 50 queries and run them in parallel. Also, I have disabled hard
commits and soft-commit per 1000 docs.
I am wondering whether any configuration
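A sketch of the pipeline described above (collection, URL, and field names are made up): each update is an atomic "set" flushed in batches of 1000, with no explicit per-batch commits.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class BatchTagger {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFields("id"); // 1) fetch ids only
      q.setRows(1000);
      List<SolrInputDocument> batch = new ArrayList<>();
      for (SolrDocument hit : client.query(q).getResults()) {
        SolrInputDocument update = new SolrInputDocument();
        update.addField("id", hit.getFieldValue("id"));
        // 2) add a value to a dynamic field via atomic "set"
        update.addField("tag_s", Collections.singletonMap("set", "someValue"));
        batch.add(update);
        if (batch.size() == 1000) { // 3) flush in batches of 1000
          client.add(batch);        // rely on autoCommit, not per-batch commits
          batch.clear();
        }
      }
      if (!batch.isEmpty()) {
        client.add(batch);
      }
    }
  }
}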
Indexing new documents just adds additional segments.
Adding a new field to an existing document means:
1) Reading the existing document (which may not always be possible, depending on
field configuration)
2) Marking the existing document as deleted
3) Creating a new document with the reconstructed plus the new fields
4) Possib
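A conceptual client-side sketch of steps 1-3 (names are made up; atomic updates do the equivalent work server-side, this is only to illustrate the cost):

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class RebuildWithNewField {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/myCollection").build()) {
      SolrDocument existing = client.getById("doc1"); // 1) read the existing document
      SolrInputDocument rebuilt = new SolrInputDocument();
      for (String field : existing.getFieldNames()) {
        if (!"_version_".equals(field)) { // let Solr assign a fresh version
          rebuilt.addField(field, existing.getFieldValue(field));
        }
      }
      rebuilt.addField("newField_s", "value"); // 3) reconstructed + new field
      client.add(rebuilt); // 2) old doc is marked deleted, new doc is indexed
      client.commit();
    }
  }
}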
I have 400k documents. Indexing is pretty fast, taking only 10 minutes, but
adding a dynamic field to all documents according to query results is very
slow, taking about 1.5 hours.
Does anyone know what the reason could be?
Thanks
Sent from Yahoo Mail for iPhone
Hi Erick,
I plan to use the composite-id routing, and I can use the same routing part
of the key to determine the shard number in the ADDREPLICA command (using the
route parameter). I think this solution will work for me.
Thanks
Nawab
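A sketch of what that could look like from SolrJ, if I read it right (collection name, route key, and URL are placeholders): Solr resolves the shard that owns the hash range for the route key.

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class AddReplicaByRoute {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // equivalent to ADDREPLICA with the _route_ parameter
      CollectionAdminRequest.addReplicaByRouteKey("myCollection", "tenantA!")
          .process(client);
    }
  }
}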
On Sat, Jun 29, 2019 at 8:55 AM Erick Erickson
wrote:
> What