On 1/10/2017 5:28 PM, Chetas Joshi wrote:
> I have got 2 shards having hash range set to null due to some index
> corruption.
>
> I am trying to manually get, edit and put the file.
> ./zkcli.sh -zkhost ${zkhost} -cmd putfile ~/colName_state.json
> /collections/colName/state.json
>
> I am getting
Obviously deleting and rebuilding the core will work but is there another way?
K
-Original Message-
From: KRIS MUSSHORN [mailto:mussho...@comcast.net]
Sent: Tuesday, January 10, 2017 12:00 PM
To: solr-user@lucene.apache.org
Subject: reset version number
SOLR 5.4.1 web admin interface has
Hello,
I have got 2 shards having hash range set to null due to some index
corruption.
I am trying to manually get, edit and put the file.
./zkcli.sh -zkhost ${zkhost} -cmd getfile /collections/colName/state.json
~/colName_state.json
./zkcli.sh -zkhost ${zkhost} -cmd clear /collections/colName
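When repairing those null ranges by hand, the edited state.json must give every shard a "range", and the ranges together must tile the full signed 32-bit hash space. A rough sketch of how even ranges can be computed for N shards (my own approximation of compositeId-style even partitioning, not code from Solr; Solr itself may print the hex without leading zeros):

```python
def shard_hash_ranges(num_shards):
    """Split the signed 32-bit hash space into num_shards contiguous ranges,
    formatted like the "range" values in state.json (e.g. "80000000-ffffffff")."""
    lo = -(1 << 31)          # smallest 32-bit signed value
    total = 1 << 32          # size of the whole hash space
    ranges = []
    start = lo
    for i in range(1, num_shards + 1):
        end = lo + (i * total) // num_shards - 1
        # Mask to unsigned 32 bits so the hex matches state.json's formatting.
        ranges.append("%08x-%08x" % (start & 0xFFFFFFFF, end & 0xFFFFFFFF))
        start = end + 1
    return ranges

# Two shards split the space at zero:
# shard_hash_ranges(2) -> ['80000000-ffffffff', '00000000-7fffffff']
```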
I really don't understand what you mean by "compress", maybe
provide a couple of samples?
Best,
Erick
On Tue, Jan 10, 2017 at 10:20 AM, dinesh naik wrote:
> Thanks Erick,
> I tried making it to String, but i need to compress the part first and then
> look for wild card search?
>
> With string i can not do that.
Hi,
I want to extend the update(Tuple tuple) method in the MaxMetric, MinMetric,
SumMetric, and MeanMetric classes.
Can you please make the below-mentioned variables and methods in the above-
mentioned classes protected, so that they are easy to extend?
variables
---
longMax
doubleMax
colu
Just as with a normal query, we usually want to apply multiple filter queries when
running auto-completion.
It would be great if the suggester could return (the title of) docs that are
meaningful to the current user, where we need multiple filters.
I am wondering whether this is possible in the current Solr (6.4)
implementation
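For a single filter, recent 6.x suggesters do support context filtering: a contextField in the suggester definition plus a suggest.cfq filter query at request time (for the infix-based lookups, as far as I know), and suggest.cfq accepts a boolean query over the context field, which may already cover the multi-filter case. A hypothetical sketch (field and suggester names below are made up):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">titleSuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="contextField">acl_group</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```

Requested with something like suggest.q=harry&suggest.cfq=groupA OR groupB.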
Want to add a couple of things
1) Shards were not deleted using the delete replica collection API
endpoint.
2) instanceDir and dataDir exist for all 20 shards.
On Tue, Jan 10, 2017 at 11:34 AM, Chetas Joshi
wrote:
> Hello,
>
> The following is my config
>
> Solr 5.5.0 on HDFS (SolrCloud of 25 n
Hello,
The following is my config
Solr 5.5.0 on HDFS (SolrCloud of 25 nodes)
collection with shards=20, maxShards per node=1, replicationFactor=1,
autoAddReplicas=true
The ingestion process had been working fine for the last 3 months.
Yesterday, the ingestion process started throwing the following
Hi, kamaci:
That's great :) It's so nice of you to create the patch and implement the
feature which has been wanted for a long time :)
Best,
Jeffery Yuan
--
View this message in context:
http://lucene.472066.n3.nabble.com/Query-Elevation-Component-as-a-Managed-Resource-tp4312089p4313380.html
Sent
I know that we have never set the schedule parameter to 1 millisecond. We
have specified either 100 or 1000. I wondered why it was writing so
frequently. I suspect a bug somewhere
However, we will have multiple collections using cdcr, and in some cases
the source collection will have multiple targets.
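If it helps anyone reading later: multiple targets are configured by repeating the replica block inside the cdcr request handler. A sketch under the assumption of two hypothetical target clusters (all zkHost, source, and target values below are placeholders):

```xml
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">target1-zk:2181</str>
    <str name="source">sourceCollection</str>
    <str name="target">targetCollectionA</str>
  </lst>
  <lst name="replica">
    <str name="zkHost">target2-zk:2181</str>
    <str name="source">sourceCollection</str>
    <str name="target">targetCollectionB</str>
  </lst>
</requestHandler>
```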
SOLR 5.4.1 web admin interface has a version number in the selected core's
overview.
How does one reset this number?
Kris
Looking at the cdcr API and documentation, I wondered if the source and
target collection names could be aliases. This is not discussed in the cdcr
documentation; when I have time I am going to test this, but if someone
knows for certain it might save some time.
No one has any input on my post below about the spelling suggestions? I just
find it a bit frustrating not being able to understand this feature better, and
why it doesn't give the expected results. A built in "explain" feature really
would have helped.
/Jimi
-Original Message-
From: j
Hi Erick.
> But that's not the most important bit. Have you considered something like
> MappingCharFilterFactory?
> Unfortunately that's a charFilter which transforms everything before it gets
> to the repeatFilter, so you'd have to use two fields.
Yes, that is actually what I tried after giving
Thanks Erick,
I tried making it a String, but I need to compress the part first and then
look for a wildcard search.
With a String I cannot do that.
How do I achieve this?
On Wed, Jan 4, 2017 at 2:52 AM, Erick Erickson
wrote:
> My guess is that you're searching on a _tokenized_ field and that
>
Aha, I am stupid indeed. I forgot I also had to change slf4j-nop to
slf4j-simple in my pom.xml:
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>1.7.21</version>
  <scope>test</scope>
</dependency>
Sorry for the noise!
Markus
-Original message-
> From:Markus Jelsma
> Sent: Tuesday 10th January 2017 15:10
> To: s
Indeed, there were some changes recently, but I also can't get logging to work
on older versions such as 6.0.
Thanks,
Markus
-Original message-
> From:Pushkar Raste
> Sent: Tuesday 10th January 2017 14:53
> To: solr-user@lucene.apache.org
> Subject: Re: Debug logging in Maven projec
Seems like you have enabled only the console appender. I remember there was a
change made to disable the console appender if Solr is started in background
mode.
On Jan 10, 2017 5:55 AM, "Markus Jelsma" wrote:
> Hello,
>
> I used to enable debug logging in my Maven project's unit tests by just
> setting
The download links should work properly. Maybe try another mirror. I can
confirm the download works fine:
http://manifoldcf.apache.org/en_US/download.html#Latest+2.x+release+%28Apache+ManifoldCF+2.6%2C+2016+Dec+30%29
-Original message-
> From:puneetmishra2555
> Sent: Tuesday 10th Janu
Hi Team
Thanks for your response, but I am not able to download ManifoldCF, so please
help me with the download so that I can check whether it will work for FileNet
or not, because the download link given is not working.
Jimi:
The critical line for the KeywordRepeatFilter is "This is useful if
used with a stem filter that respects the KeywordAttribute to index
the stemmed and the un-stemmed version of a term into the same
field.". There is no guarantee that all filters _after_ the
KeywordRepeatFilter respect the KeywordAttribute.
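The usual shape of that pattern, for reference, is an analyzer chain where KeywordRepeatFilterFactory comes right before the stemmer and duplicates are removed afterwards. A hypothetical field type sketch (the type name is made up, and whether every later filter honors KeywordAttribute still has to be checked case by case):

```xml
<fieldType name="text_stem_plus_exact" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.KeywordRepeatFilterFactory"/>
    <filter class="solr.PorterStemFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
```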
Hi Jeffery,
I was checking whether an issue had been raised for it or not. Thanks for
pointing it out. I'm planning to create a patch.
Kind Regards,
Furkan KAMACI
On Mon, Jan 9, 2017 at 6:44 AM, Jeffery Yuan wrote:
> I am looking for same things.
>
> Seems Solr doesn't support this.
>
> Maybe you can vo
As for the question about different weights, down the page of this
article there's an explanation of why stats are different on different
replicas in the same shard:
https://support.lucidworks.com/hc/en-us/articles/115000888308-Getting-different-results-while-issuing-a-query-multiple-times-in-SolrC
Minor pedantic point (I like those).
"equiv to the order in which they were added to the index" depends on
the merge policy. That was true when Yonik wrote it, but other merge
policies added since then may or may not preserve insertion order in
terms of the internal Lucene ID. The default TieredMe
Hi,
I'm getting this error when I try to do a division in a JSON Facet.
"error":{
"msg":"org.apache.solr.search.SyntaxError: Unknown aggregation
agg_div in ('div(4,2)', pos=4)",
"code":400}}
Is this division function supported in JSON Facet?
I'm using this in Solr 5.4.0
Regards,
Edw
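As far as I know, div() is a function-query function, not a JSON Facet aggregation, so 5.4 rejects it; the facet aggregations are the built-in ones such as sum, avg, min, max, and unique. A request shaped like the following should be accepted where the div attempt fails (the field names are made up):

```json
{
  "categories": {
    "type": "terms",
    "field": "category",
    "facet": {
      "total": "sum(price)",
      "mean": "avg(price)"
    }
  }
}
```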
Hi Kshitij,
Quoting Yonik, the creator of Solr:
"Ties are the same as in lucene... internal docid (equiv to the order in which
they were added to the index)."
Also, you can have multiple sort clauses, where score can be the first one.
Like sort=score DESC, publishDate DESC. But I think the rec
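The tie-break behaviour is easy to picture with a stable client-side sort: equal-score docs keep their relative (insertion) order, and an explicit secondary key makes the order deterministic. A small plain-Python illustration (not Solr code; the field names are made up):

```python
docs = [
    {"id": "a", "score": 1.0, "publishDate": 2015},
    {"id": "b", "score": 2.0, "publishDate": 2016},
    {"id": "c", "score": 1.0, "publishDate": 2017},
]

# Python's sort is stable, like Lucene breaking ties on internal docid:
# the equal-score docs "a" and "c" keep their original relative order.
by_score = sorted(docs, key=lambda d: -d["score"])

# With a secondary key, like sort=score desc, publishDate desc in Solr,
# the tie is resolved explicitly instead.
by_score_then_date = sorted(docs, key=lambda d: (-d["score"], -d["publishDate"]))
```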
Hello,
I used to enable debug logging in my Maven project's unit tests by just setting
log4j's global level to DEBUG, which was very handy, especially when debugging
some SolrCloud start-up issues. For a while now, not sure how long, I don't seem
to be able to get any logging at all. This project depends on
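For anyone hitting the same wall, the plain-log4j setup that used to be enough is a minimal log4j.properties on the test classpath (a sketch, assuming log4j 1.2 is actually resolvable there, which per the follow-ups in this thread is exactly the part that silently breaks):

```properties
log4j.rootLogger=DEBUG, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{HH:mm:ss} %-5p %c - %m%n
```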
Hi,
I need to understand what is the order of listing the documents from query
in case there is same score for all documents.
Regards,
Kshitij
Hi all,
We are using Suggester (and Solr 6.3.0) to implement autocomplete. We are
using TSTLookupFactory lookup implementation and
HighFrequencyDictionaryFactory dictionary implementation. If our index
consists of only one shard, everything works perfectly fine. However, when
the index is split into 2 shards