Need to also make sure the Velocity writer and its dependencies are declared in
solrconfig.xml
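For reference, those declarations in solrconfig.xml typically look something like this (a sketch based on Solr's sample configs; the lib paths depend on your install layout):

```xml
<!-- Pull in the Velocity contrib jars (adjust dirs/regex to your layout) -->
<lib dir="${solr.install.dir:../../../..}/contrib/velocity/lib" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-velocity-\d.*\.jar"/>

<!-- Register the writer itself; startup="lazy" delays loading until first use -->
<queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" startup="lazy"/>
```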
> On May 19, 2020, at 02:30, Prakhar Kumar
> wrote:
>
> Hello Team,
>
> I am using Solr 8.5.0 and here is the full log for the error which I am
> getting:
>
> SolrConfigHandler Error checking plugin : =
Dear all,
We are starting to upgrade a huge SolrCloud cluster from 5.4.1 to the latest
version, 8.5.1.
Context :
. Ubuntu 16.04, 64b, JVM Oracle 8 101 and now OpenJDK 8 252
. We can't reindex documents because the old ones don't exist anymore, so we
have no other choice than upgrading the indexes.
Our
Hello Solr users,
I’m quite puzzled about how shingles work. The way tokens are analysed looks
fine to me, but the query seems too restrictive.
Here’s the sample use-case. I have three documents:
mona lisa smile
mona lisa
mona
I have a shingle filter set up like this (both index- and query-tim
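For context, a typical shingle setup along those lines might look like this (a sketch; the field type name and tokenizer choice are illustrative):

```xml
<fieldType name="text_shingle" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- Emit two-word shingles in addition to the original single tokens -->
    <filter class="solr.ShingleFilterFactory" minShingleSize="2" maxShingleSize="2"
            outputUnigrams="true"/>
  </analyzer>
</fieldType>
```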
Hi list,
I seem to be unable to get REINDEXCOLLECTION to work on a collection alias
(running Solr 8.2.0). The documentation seems to state that that should be
possible:
https://lucene.apache.org/solr/guide/8_2/collection-management.html#reindexcollection
"name
Source collection name, may be an ali
This will not work. Lucene has never promised this upgrade path would work, the
“one major version back-compat” means that Lucene X has special handling for
X-1, but for X-2, all bets are off. Starting with Solr 6, a marker is written
into the segments recording the version of Lucene the segment
Solr experts, is there any easy way to read other Solr docs from a Solr custom
function?
Many thanks for your answers Erik.
Indeed, I've read in many different threads that the migration path is not
guaranteed but, what's strange is that there is no formal documentation of this
impossibility, because clearly we can't migrate to v8 if the indexes are not
"pure" v7 indexes.
I get the following exception:
Caused by: org.apache.lucene.index.CorruptIndexException: length should be
104004663 bytes, but is 104856631 instead
(resource=MMapIndexInput(path="path_to_index\index\_jlp.cfs"))
What may be the cause of this?
How can the length of the .cfs file change so it become
Which query parser is used if my query length is large?
My query is
https://drive.google.com/file/d/1P609VQReKM0IBzljvG2PDnyJcfv1P3Dz/view
Regards,
Vishal Patel
Hi, I don't think query size affects which parser is chosen. I remember
there is a maximum number of boolean clauses (maxBooleanClauses),
but that's a slightly different thing.
If the query is too large, you can get an HTTP error (bad request?), I
don't remember; well, just change the HTTP me
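For reference, the maxBooleanClauses limit mentioned above is configured in solrconfig.xml (1024 is the long-standing default):

```xml
<!-- Upper bound on the number of clauses in a BooleanQuery; raise with care -->
<maxBooleanClauses>1024</maxBooleanClauses>
```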
Hi
Is it possible to normalize the per-field score before applying the boosts?
Let's say 2 documents match my search criteria on the query fields *title* and
*description* using the DisMax parser with individual boosts.
q=cookie&qf=text^2 description^1
Let's say below are the TF-IDF scores for the d
I believe the issue is that under the covers this feature uses the
"topic" streaming expression, which was just reported not to work with
aliases. This is something that will get fixed, but for the current release
there isn't a workaround for this issue.
Joel Bernstein
http://joelsolr.blo
Hmm, might be able to hack this with optimize (forced merge).
First, you would have to add enough extra documents to force a rewrite of all
segments. That might be as many documents as are already in the index. You
could set a “fake:true” field and filter them out with an fq. Or make sure they
Thanks Walter, but I can't imagine that will work, because if it could, then
the IndexUpgrader should work, and that is not the case ☹
Because of the format, a v6 index can't be rewritten, whatever process you
use (add replica, optimize, etc.).
The only way I have is full reindexing!
Hi Phill,
What is the RAM config you are referring to, JVM size? How is that related
to the load balancing, if each node has the same configuration?
Thanks,
Wei
On Mon, May 18, 2020 at 3:07 PM Phill Campbell
wrote:
> In my previous report I was configured to use as much RAM as possible.
> With
Jean-Louis:
One explanation is here:
https://lucene.apache.org/solr/guide/8_5/indexupgrader-tool.html, but then
again the reference guide is very long, I’m not sure how to make it more
findable. Or, for that matter, whether it should be part of the
IndexUpgraderTool section or not. Please feel
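For anyone following along, the IndexUpgrader invocation looks roughly like this (the jar versions and index path are illustrative; note it only upgrades from the immediately previous major version, so a 6->7->8 path means running it once per hop with the matching jars):

```shell
java -cp lucene-core-8.5.1.jar:lucene-backward-codecs-8.5.1.jar \
  org.apache.lucene.index.IndexUpgrader -delete-prior-commits /path/to/index
```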
Usually this is caused by one of
1> the file on disk getting corrupted, i.e. the disk going bad.
2> the disk getting full at some point and writing a partial segment
No, you cannot delete the cfs file and re-index only the documents
that were in it because you have no way of knowing exactly what
t
Erick
I just suggest a dedicated page for the upgrade path, because from reading the
page about the IndexUpgrader tool, we understand well that we can't upgrade in
one step; 6->7->8 must be done, and nowhere is it specified that from Lucene 6,
the segments are marked v6 forever.
Naively, by transitivity,
Hello,
We have a requirement to get terms in ascending and descending order.
We are using qt=/terms, but this only gives terms in ascending order if I set
terms.sort=index.
Is there a way or workaround to get terms in descending order?
We are also providing terms.lower and terms.upper to restrict
I have quite a few numeric / metadata-type fields in my schema and
pretty much only use them in fq=, sort=, and friends. Should I always
use docValues on these if I never plan to q=search on them? Are there
any drawbacks?
Thanks,
Matt
Yes. You should also index them….
Here’s the way I think of it.
For questions “For term X, which docs contain that value?” means index=true.
This is a search.
For questions “Does doc X have value Y in field Z”, means docValues=true.
What’s the difference? Well, the first one is to get the res
You can index AND docvalue? For some reason I thought they were exclusive
On Tue, May 19, 2020 at 5:36 PM Erick Erickson wrote:
>
> Yes. You should also index them….
>
> Here’s the way I think of it.
>
> For questions “For term X, which docs contain that value?” means index=true.
> This is a se
In a word, “no”. The Terms component is intended
to look forward through the terms list.
You could always specify terms.limit=-1 and only display
the last N of the returned list, but the list may be very long.
Best,
Erick
> On May 19, 2020, at 3:58 PM, Gajjar, Jigar wrote:
>
> Hello,
>
> We
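As a sketch, that workaround would look something like this (the collection and field names are hypothetical), keeping only the last N entries client-side:

```
/solr/mycollection/terms?terms.fl=name&terms.sort=index&terms.limit=-1
```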
They are _absolutely_ able to be used together. Background:
“In the bad old days”, there was no docValues. So whenever you needed
to facet/sort/group/use function queries Solr (well, Lucene) had to take
the inverted structure resulting from “index=true” and “uninvert” it on the
Java heap.
docValu
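Concretely, a field declaration enabling both might look like this (the field name and type here are hypothetical):

```xml
<!-- indexed=true for searching/filtering; docValues=true for sorting/faceting -->
<field name="price" type="pint" indexed="true" stored="true" docValues="true"/>
```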
Jean-Louis:
One of the great advantages of open source is that it allows people to look at
a problem with “fresh eyes” and add to the project in a way that helps other
people who aren’t steeped in the arcana of Lucene/Solr. So it’d be great if you
could go ahead and make a patch and JIRA to put