The joy was short-lived.
Tonight our environment was “down/slow” a bit longer than usual. It looks like
two of our nodes never recovered, even though clusterstate says everything is
active. All nodes are throwing this in the log (the nodes they have trouble
reaching are the ones that are affected) - the er
Hi,
I sometimes get this exception too; the recovery goes into a loop, and I can
only finish it by restarting the replica that has the stuck core.
In my case I have SSDs, but the replicas are 40 or 50 GB. If I have 3 replicas in
recovery mode and they are replicating from
Thank you, Peter!
Last weekend I was up until 4am trying to understand why Solr was starting so
so sooo slowly, when I had given it enough memory to fit the entire index.
And then I remembered your trick used on the m3.xlarge machines, tried it
and it worked like a charm!
Thank you again!
-
Thanks,
Thanks for the comments Shalin, I ended up doing just that, reindexing from
the ground up.
-
Thanks,
Michael
--
View this message in context:
http://lucene.472066.n3.nabble.com/Merging-shards-and-replicating-changes-in-SolrCloud-tp407p4100255.html
Sent from the Solr - User mailing list arch
Hey
I am running Solr 4.3.1 and am implementing spellcheck using
solr.DirectSolrSpellChecker. Everything seems to be working fine, but I have
one issue.
If I search for
http://localhost:8765/solr/MainIndex/spell?q=kim%20AND%20larsen
the result is some hits and the spell component re
Hi
We have a SOLRCloud cluster of 3 solr servers (v4.5.0 running under tomcat)
with 1 shard. We added a new SOLR server (v4.5.1) by simply starting tomcat
and pointing it at the zookeeper ensemble used by the existing cluster. My
understanding was that this new server would handshake with zookeep
Try manually creating shard replicas on the new server. I think the new
server is only used automatically when you start your Solr server instance
with the correct command-line option (i.e. -DnumShards) - I never liked
this kind of behaviour.
The server is not present in clusterstate.json file,
Thanks.
If I understand what you are saying, it should automatically register itself
with the existing cluster if we start SOLR with the correct command line
options. We tried adding the numShards option to the command line but still
get the same outcome.
We start the new SOLR server using
/usr
There are 2 parameters you want to consider:
First is "spellcheck.maxResultsForSuggest". Because you have an "OR" query,
you'll get hits if only 1 query term is in the index. This parameter lets you
tune it to make it suggest if the query returns n or fewer hits. My memory
tells me, however,
The socket read timeouts are actually fairly short for recovery - we should
probably bump them up. Can you file a JIRA issue? It may be a symptom rather
than a cause, but given a slow env, bumping them up makes sense.
- Mark
> On Nov 11, 2013, at 8:27 AM, Henrik Ossipoff Hansen
> wrote:
According to the wiki pages it should, but I have not really tried it yet
- I like to do the "bookkeeping" myself :)
I am sorry, but someone with more knowledge of Solr will have to answer
your question.
Primoz
From: ade-b
To: solr-user@lucene.apache.org
Date: 11.11.2013 15:44
Subj
Thanks Upayavira
It seems it needs too much work. I will have several more fields that will
have unit values.
Is there a quicker way of implementing it?
We have the Currency field that comes by default with SOLR. Can we use it?
Creating a conversion rate table for each field? What I am expecting from
I will file a JIRA later today.
What I don’t get though (I haven’t looked much into any actual Solr code) is
that at this point our systems are running fine, so timeouts shouldn’t be an
issue. Those two nodes, though, are somehow left in a state where their response
time is up to around 120k ms
I think Upayavira's suggestion of writing a filter factory fits what you're
asking for. However, the other end of cleverness is to simply use
solr.TrieIntField and store everything in MB. So for 1TB you'd
write 1048576. A range query for 256MB to 1GB would be field:[256 TO 1024].
Conversion from
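A minimal ingestion-side sketch of that normalization, assuming the sizes arrive as strings like "256MB" or "1TB" (the to_mb helper and its unit table are mine, not from the thread):

```python
import re

# Hypothetical helper: normalize human-readable sizes to whole megabytes so
# they can be indexed in a plain integer field (e.g. solr.TrieIntField)
# and range-queried like field:[256 TO 1024].
_UNITS_IN_MB = {"MB": 1, "GB": 1024, "TB": 1024 * 1024}

def to_mb(size: str) -> int:
    """Convert strings like '256MB', '50 GB' or '1TB' to integer megabytes."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*(MB|GB|TB)\s*", size, re.IGNORECASE)
    if not m:
        raise ValueError("unrecognized size: %r" % size)
    value, unit = float(m.group(1)), m.group(2).upper()
    return int(value * _UNITS_IN_MB[unit])

print(to_mb("256MB"))  # 256
print(to_mb("1GB"))    # 1024
```

Indexed this way, "256MB to 1GB" is the plain integer range query field:[256 TO 1024], with no payload handling at query time.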
On 11/10/2013 10:12 PM, subacini Arunkumar wrote:
> We are upgrading from Solr 3.5 to Solr 4.4. The response from 3.5 and 4.4
> are different. I have attached request and response [highlighted the major
> difference in RED]
>
>
> Can you please let me know how to change "parsedQuery" from *MatchA
A custom token filter may indeed be the right way to go, but an alternative
is the combination of an update processor and a query preprocessor.
The update processor, which could be a JavaScript script, could normalize the
string into a simple integer byte count. You might also want to keep
sepa
Ryan and Upayavira,
Is there an example skeleton for doing this in schema.xml and solrconfig.xml?
And an example Java class that would help in building the
UnitResolvingFilterFactory class?
Thanks
Erol Akarsu
We have an internal Solr collection with ~1 billion documents. It's
split across 24 shards and uses ~3.2TB of disk space. Unfortunately
we've triggered an 'optimize' on the collection (via a restarted browser
tab), which has raised the disk usage to 4.6TB, with 130GB left on the
disk volume.
As
Hi,
I'm new to Solr and want to group data by week. Is there any built-in date
rounding function, so that I can give a date to it and it returns the week of
the year?
For example, if I query Solr with the date ("01/01/2013") it should return
(1st week of 2013).
Like I have the following document
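If the rounding logic itself is the question, here is a small sketch (the function name is mine) using ISO week numbering, which Python exposes directly:

```python
from datetime import date

def week_of_year(d: date):
    """Map a date to its ISO (year, week) pair for grouping by week."""
    iso = d.isocalendar()
    return (iso[0], iso[1])

print(week_of_year(date(2013, 1, 1)))  # (2013, 1), i.e. 1st week of 2013
```

On the Solr side, the usual tools for this are date math (e.g. rounding with NOW/DAY) and range faceting with a +7DAYS gap, as mentioned elsewhere in the thread.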
Hi Gil,
(we spoke in Dublin, didn't we?)
Short of stopping Solr I have a feeling there isn't much you can
do... hm. Or, I wonder if you could somehow get a thread dump,
get the PID of the thread (since I believe threads in Linux are run as
processes), and then kill that thread... Feels scary
From my understanding, if your already existing cluster satisfies your
collection (already live nodes >= nr shards * replication factor) there
wouldn't be any need for creating additional replicas on the new server,
unless you directly ask for them, after startup.
I usually just add the machine to
You're probably looking at "date math", see:
http://lucene.apache.org/solr/4_5_1/solr-core/org/apache/solr/util/DateMathParser.html
You're probably going to be faceting to get these counts, see "facet
ranges" here:
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_by_Range
So the start is s
Hi Otis, thanks for the response. I could stop the whole Solr service since,
as yet, there's no audience access to it, but might it be left in an
incomplete state and thus try to complete the optimisation when the service
is restarted?
[Yes, we did speak in Dublin - you can see we need that monitoring
serv
I replaced the frange filter with the following filter and got the correct
no. of results and it was 3X faster:
select?qq={!edismax v='news' qf='title^2
body'}&scaledQ=scale(product(query($qq),1),0,1)&q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))&fq={!edismax
v='news' qf='title
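For readers puzzling over that function query: scale() min-max-normalizes the relevance score into [0, 1], and the outer sum blends it with a stored field value at 0.75/0.25 weights. A sketch of that arithmetic (the Python function names mirror the Solr functions; the numbers are invented):

```python
def scale(x, lo, hi, new_lo=0.0, new_hi=1.0):
    """Min-max scale x from [lo, hi] into [new_lo, new_hi], like Solr's scale()."""
    if hi == lo:
        return new_lo
    return new_lo + (x - lo) * (new_hi - new_lo) / (hi - lo)

def blended(score, min_score, max_score, field_val):
    # sum(product(0.75, scaledQ), product(0.25, field(myfield)))
    return 0.75 * scale(score, min_score, max_score) + 0.25 * field_val

print(blended(8.0, 2.0, 10.0, 0.4))
```

The separate fq with the same edismax query restricts the result set, which is why it is so much faster than the frange filter over the function itself.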
On Mon, Nov 11, 2013 at 11:39 AM, Peter Keegan wrote:
> fq=$qq
>
> What is the proper syntax?
fq={!query v=$qq}
-Yonik
http://heliosearch.com -- making solr shine
Has someone at least got an idea how I could do a year/month date tree?
The Solr wiki mentions that facet.date.gap=+1DAY,+2DAY,+3DAY,+10DAY
should create 4 buckets, but it doesn't work.
-Original Message-
From: Andreas Owen [mailto:a...@conx.ch]
Sent: Donnerstag, 7. November 2013 18
Thanks
On Mon, Nov 11, 2013 at 11:46 AM, Yonik Seeley wrote:
> On Mon, Nov 11, 2013 at 11:39 AM, Peter Keegan
> wrote:
> > fq=$qq
> >
> > What is the proper syntax?
>
> fq={!query v=$qq}
>
> -Yonik
> http://heliosearch.com -- making solr shine
>
While doing a search like:
q=great+gatsby&defType=edismax&qf=title^1.8
records with a title of "great gatsby / great gatsby" always score higher than
those with "great gatsby" just a single time.
How do I express that a single match should be just as important as having the
query match multiple times in
Hi:
I was encouraged to explore the Solr mailing list, specifically regarding the
fl parameter. What is that parameter for, and can it accomplish my original
task of crawling/indexing specific HTML components versus parsing the entire
page?
My original question is listed below (previously on the
Can DelimitedPayloadTokenFilterFactory be used to store unit dimension
information? This factory class can store extra information for a field.
Hello,
When dropping noise words we have a scenario where we have to drop only the
ending noise words. For e.g. "160 Associates LP", the noise words here are
Associates and LP, but we only want to drop LP, which is the ending noise word.
If we use stop words, it will drop both words and make search
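One ingestion-side sketch (not from the thread; the helper name and noise list are made up) that drops a noise word only when it is the final token:

```python
# Hypothetical noise list; real deployments would load this from config.
NOISE = {"associates", "lp", "llc", "inc"}

def strip_trailing_noise(name: str) -> str:
    """Remove the last token if (and only if) it is a noise word."""
    tokens = name.split()
    if tokens and tokens[-1].lower().strip(".,") in NOISE:
        tokens.pop()
    return " ".join(tokens)

print(strip_trailing_noise("160 Associates LP"))  # 160 Associates
```

Whether to keep stripping after the first removal (which would also drop "Associates" here) is a design choice; this version removes only the final token, matching the example above.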
Hi All,
I am working on a custom analyzer in Solr to post content to Apache Stanbol
for enhancement during indexing. To post content to Stanbol, inside my
custom analyzer's incrementToken() method I have written below code using
Jersey client API sample [1];
public boolean incrementToken() throws
Thanks Joel, appreciate your help. Is Solr 4.6 due this year ?
This seems to be a weird intermittent issue when I use the Analysis UI (
http://localhost:8983/solr/#/collection1/analysis) for testing my Analyzer.
It works fine when I hard-code the input value in the Analyzer and index. I
gave the same input: "Tim Bernes Lee is a professor at MIT" hard-coded in
You seem to be consistently missing the problem that your queries will not
work as expected. How would you do a range query without writing some
kind of custom code that looks at the payloads to determine the normalized
units?
The simplest way to do this is probably to have your ingestion side nor
In fact, there's some movement towards starting the release process this
week, stay tuned!
Erick
On Mon, Nov 11, 2013 at 4:12 PM, shamik wrote:
> Thanks Joel, appreciate your help. Is Solr 4.6 due this year ?
On 11/11/2013 2:12 PM, shamik wrote:
Thanks Joel, appreciate your help. Is Solr 4.6 due this year ?
The job of release manager for 4.6 has already been claimed. There
should be a release candidate posted on the dev list sometime on
November 12th (tomorrow) in the USA timezones, unless a seri
On Mon, Nov 11, 2013 at 11:28 AM, Hoggarth, Gil wrote:
> I could stop the whole Solr service as
> as yet there's no audience access to it, but might it be left in an
> incomplete state and thus try to complete optimisation when the service
> is restarted?
Should be fine.
Lucene has a write-once
Hello,
I keep seeing, here and on Stack Overflow, people trying to deploy Solr to
Tomcat. We don't usually ask why, we just help where we can.
But the question comes up often enough that I am curious. What is the actual
business case? Is it because Tomcat is well known? Is it because other
apps
Hi All,
In my custom filter, I need to index the processed token into a different
field. The processed token is a Stanbol enhancement response.
The solution I have so far found is to use a Solr client (SolrJ) to add a
new Document with my processed field into Solr. Below is the sample code
segment
Howdy.
We are testing an upgrade from Solr 4.3 to 4.5.1. We're using SolrCloud,
and our problem is that the core does not appear to be loaded anymore.
We've set logging to DEBUG and we've found lots of these:
2013-11-12 06:30:43,339 [pool-2-thread-1-SendThread(our.zookeeper.com:2181)]
DEBUG org.a
Happy to see someone has a similar solution to ours.
We have a similar multi-language search feature, and we index different
language content into _fr, _en fields like you've done.
But for search, we need a language code as a parameter to specify the
language the client wants to search on, which is normally
Hi
I want to create a Solr cloud setup like this:
one Solr cloud in location A, and another in location B. How do I make the
Solr cloud in location B replicate the Solr cloud in location A, so that if
all nodes in cloud A die, cloud B still works, and vice versa?
Does anybody know how to
Like Erick said, merging data from different data sources can be very
difficult. SolrJ is much easier to use, but you may need another application
to handle the indexing process if you don't want to extend Solr much.
I eventually ended up with a customized request handler which uses SolrWriter
from the DIH package