On Fri, Jul 25, 2014 at 6:59 PM, Jack Krupansky wrote:
> OTOH, how many people are out there who want to become Solr
> consultants, but aren't already doing it, aren't at least in the process of
> coming up to speed, or maybe just aren't cut out for it?
Well, I would target two groups
: Thank you very much Erik. This is exactly what I was looking for. While at
: the moment I have no clue about these numbers, the ruby formatting makes it
: much easier to understand.
Just to be clear, regardless of *which* response writer you use (xml,
ruby, json, etc...) the default beha
Looks to me like you are, or were, hitting the replication handler's
backup function:
http://wiki.apache.org/solr/SolrReplication#HTTP_API
i.e., http://master_host:port/solr/replication?command=backup
You might not have been doing it explicitly, there's some support for a
backup being triggered wh
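A minimal sketch of exercising that backup API by hand with curl, keeping the
placeholder host/port from the wiki URL above and assuming the replication
handler is enabled on the master:

# trigger an index backup explicitly
curl "http://master_host:port/solr/replication?command=backup"
# command=details reports replication status, including the most recent backup
curl "http://master_host:port/solr/replication?command=details"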
It's a command like this just prior to jetty startup:
# read every index file once so the OS page cache is warm before Jetty serves queries
find -L -type f -exec cat {} > /dev/null \;
On 7/24/14, 2:11 PM, "Toke Eskildsen" wrote:
>Jeff Wartes [jwar...@whitepages.com] wrote:
>> Well, I'm not sure what to say. I've been observing a noticeable latency
>> decrease over the first fe
Hi, we have a SolrCloud instance with 8 nodes and 4 shards. We are starting
to see that the index size is growing huge, and when we looked at the file
system, Solr has created several copies of the index.
However, using the Solr admin, I could see it is using only one of them.
This is what I see in the Solr admi
Steve Rowe [sar...@gmail.com] wrote:
> 1 Lakh (aka Lac) = 10^5 is written as 1,00,000
>
> It’s used in Bangladesh, India, Myanmar, Nepal, Pakistan, and Sri Lanka,
> by roughly 1/4 of the world’s population.
Yet still it causes confusion and distracts from the issue. Let's just stick to
metric, okay?
The formatting is one thing, but ultimately it is just a giant expression,
one for each document. The expression is computing the score, based on your
chosen or default "similarity" algorithm. All the terms in the expressions
are detailed here:
http://lucene.apache.org/core/4_9_0/core/org/apac
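For the default TF-IDF similarity, those terms combine into Lucene's practical
scoring function, roughly (a sketch, per the Lucene 4.x TFIDFSimilarity docs):

score(q,d) = coord(q,d) * queryNorm(q)
             * sum over terms t in q of [ tf(t in d) * idf(t)^2 * t.getBoost() * norm(t,d) ]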
Thank you Anshum!
The links help.
Daniel
On Fri, Jul 25, 2014 at 3:07 PM, Anshum Gupta
wrote:
> Hi,
>
> These might help you:
>
> https://issues.apache.org/jira/browse/SOLR-4414
> https://issues.apache.org/jira/browse/SOLR-5480
>
> and
>
> https://issues.apache.org/jira/browse/SOLR-6248.
>
>
Using Tika to extract documents or content is something I don't have experience
with, but it looks like your issue is in that process. If you're able to
reproduce this issue near the same place every time, maybe you've got a document
that has a lot of nested fields in it or otherwise causes the ex
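To help isolate a failing document, one option is to push the suspect file
through the extracting handler on its own; the URL, id, and file name below are
placeholders and assume the default /update/extract mapping:

curl "http://localhost:8983/solr/update/extract?literal.id=doc1&commit=true" \
     -F "myfile=@suspect-document.pdf"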
Hi,
These might help you:
https://issues.apache.org/jira/browse/SOLR-4414
https://issues.apache.org/jira/browse/SOLR-5480
and
https://issues.apache.org/jira/browse/SOLR-6248.
On Fri, Jul 25, 2014 at 11:58 AM, Donglin Chen
wrote:
> Hi,
>
> I issued a MoreLikeThis query using the uniquekey of a so
Hi,
I issued a MoreLikeThis query using the uniquekey of a source document, and I
got no match as below (but I can select this document fine in Solr).
0
0
The query is like this:
http://localhost:8080/solr/dbcollection_1/mlt?&q=uniquekey:20320
However, using select in stea
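One common cause of zero MoreLikeThis matches is that no similarity fields are
given via mlt.fl, or the default minimum term/document frequencies (mlt.mintf=2,
mlt.mindf=5) filter everything out. A hedged sketch, with placeholder field
names that should be stored or have termVectors enabled:

http://localhost:8080/solr/dbcollection_1/mlt?q=uniquekey:20320&mlt.fl=title,body&mlt.mintf=1&mlt.mindf=1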
Please find below entire stack trace:
ERROR - 2014-07-25 13:14:22.202; org.apache.solr.common.SolrException;
null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Requested
array size exceeds VM limit
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:790)
        at o
You might consider looking at your internal Solr cache configuration
(solrconfig.xml). These caches occupy heap space, and from my understanding do
not overflow to disk. So if there is not enough heap memory to support the
caches, an OOM error will be thrown.
I also believe these caches live i
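For reference, the entries in question look roughly like this in solrconfig.xml
(the sizes are placeholders, not recommendations); lowering size and
autowarmCount shrinks the heap those caches can hold:

<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512"/>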
Would you include the entire stack trace for your OOM message? Are you seeing
this on the client or server side?
Thanks,
Greg
On Jul 25, 2014, at 10:21 AM, Ameya Aware wrote:
> Hi,
>
> I am in the process of indexing a lot of documents, but after around 9
> documents I am getting the error below:
>
Hi,
I am in the process of indexing a lot of documents, but after around 9
documents I am getting the error below:
java.lang.OutOfMemoryError: Requested array size exceeds VM limit
I am passing below parameters with Solr :
java -Xms6144m -Xmx6144m -XX:MaxPermSize=512m
-Dcom.sun.management.jmxremote -X
Thank you very much Erik. This is exactly what I was looking for. While at
the moment I have no clue about these numbers, the ruby formatting makes it
much easier to understand.
Thanks to you Koji. I'm sorry I did not acknowledge you before. I think
Erik's solution is what I was looking for.
I have Apache Solr hosted on my Apache Tomcat server with a SQL Server backend.
Details:
*Solr Version:*
Solr Specification Version: 3.4.0.2012.01.23.14.08.01
Solr Implementation Version: 3.4
Lucene Specification Version: 3.4
Lucene Implementation Version: 3.4
*Tomcat version:*
Apache Tomcat/6.0.1
I've built and installed the latest snapshot of Solr 4.10 using the same
SolrCloud configuration and that gave me a tenfold increase in throughput,
so it certainly looks like SOLR-6136 was the issue that was causing my slow
insert rate/high latency with shard routing and replicas. Thanks for your
On Jul 25, 2014, at 9:13 AM, Shawn Heisey wrote:
> On 7/24/2014 7:53 AM, Ameya Aware wrote:
> The odd location of the commas at the start of this thread makes it hard
> to understand exactly what numbers you were trying to say
On Jul 24, 2014, at 9:32 AM, Ameya Aware wrote:
> I am in process o
The format of the XML explain output is not indented or very readable. When I
really need to see the explain indented, I use wt=ruby&indent=true (I don’t
think the indent parameter is relevant for the explain output, but I use it
anyway)
Erik
On Jul 25, 2014, at 10:11 AM, O. Olson wr
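Concretely, a request along these lines (the collection name and query are
placeholders) returns the explain section in that more readable layout:

http://localhost:8983/solr/collection1/select?q=some+query&debugQuery=true&wt=ruby&indent=true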
Thank you Uwe. Unfortunately, I could not get your explain solr website to
work. I always get an error saying "Ops. We have internal server error. This
event was logged. We will try fix this soon. We are sorry for
inconvenience."
At this point, I know that I need to have some technical background
On 7/25/2014 1:06 AM, Yavar Husain wrote:
> I have most of my experience working on Solr with Tomcat. However, I recently
> started with Jetty. I am using Solr 4.7.0 on Windows 7. I have configured
> Solr properly and am able to see the admin UI as well as velocity browse.
> The DataImportHandler screen is a
On 7/24/2014 8:45 PM, YouPeng Yang wrote:
> To Matt
>
> Thank you, your opinion is very valuable, so I have checked the source
> code for how the cache warming works. It seems to just put items from the
> old caches into the new caches.
> I will pull Mark Miller into this discussion. He is the on
On 7/24/2014 7:53 AM, Ameya Aware wrote:
> I did not make any other change than this.. rest of the settings are
> default.
>
> Do i need to set garbage collection strategy?
The collector chosen and its tuning params can have a massive impact
on performance, but it will make no difference at a
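As one illustration of the kind of tuning meant here (a sketch, not a
recommendation for this particular heap), CMS settings often used as a starting
point with Solr 4.x on a 6 GB heap look like:

java -Xms6144m -Xmx6144m -XX:MaxPermSize=512m \
     -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
     -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly \
     -verbose:gc -Xloggc:gc.log ...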
Any or all of the above, and more.
OTOH, how many people are out there who want to become Solr
consultants, but aren't already doing it, aren't at least in the process of
coming up to speed, or maybe just aren't cut out for it?
But then there are the kids in school. Maybe we need
Query ReRanking is built on the RankQuery API. With the RankQuery API you
can build and plug in your own ranking algorithms.
Here's a blog describing the RankQuery API:
http://heliosearch.org/solrs-new-rankquery-feature/
Joel Bernstein
Search Engineer at Heliosearch
On Fri, Jul 25, 2014 at 4:11
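The built-in ReRanking parser added in Solr 4.9 is one ready-made RankQuery; a
hedged example of its request parameters (both query strings are placeholders):

q=greetings
rq={!rerank reRankQuery=$rqq reRankDocs=1000 reRankWeight=3}
rqq=(hi hello hey)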
I have this requirement where I want to limit the number of concurrent calls
to Solr to, say, 50. So I am trying to implement connection pooling in the HTTP
client, which is then used by the Solr client object HttpSolrServer. Please find
the code below:
HttpClient httpclient = new De
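A sketch of one way to do that with the HttpClient 4.2-era API that ships with
SolrJ 4.x; the class name, base URL, and limit are placeholders, and the
deprecated DefaultHttpClient is used only because that is what the snippet
above starts with:

import org.apache.http.impl.client.DefaultHttpClient;
import org.apache.http.impl.conn.PoolingClientConnectionManager;
import org.apache.solr.client.solrj.impl.HttpSolrServer;

public class PooledSolrClientSketch {
    public static HttpSolrServer create(String baseUrl, int maxConnections) {
        // one shared pool; both limits matter because Solr is a single host/route here
        PoolingClientConnectionManager cm = new PoolingClientConnectionManager();
        cm.setMaxTotal(maxConnections);
        cm.setDefaultMaxPerRoute(maxConnections);
        // e.g. create("http://localhost:8080/solr/collection1", 50)
        return new HttpSolrServer(baseUrl, new DefaultHttpClient(cm));
    }
}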
Thanks a lot for your answer David!
I'll check that out.
Elisabeth
2014-07-24 20:28 GMT+02:00 david.w.smi...@gmail.com <
david.w.smi...@gmail.com>:
> Hi Elisabeth,
>
> Sorry for not responding sooner; I forgot.
>
> You’re in need of some spatial nearest-neighbor code I wrote but it isn’t
> ope
Well, if we do it in England, we could hire out a castle, I bet. :-) I
am flexible on my "holiday" locations. And probably easier to do the
first one in English.
We can continue this on direct email, on the LinkedIn group (perfect
place probably) and/or on the margins of the Solr Revolution. Targe
Dear Jack,
Actually, I am going to do a benefit-cost analysis of in-house development
versus going for sqrrl support.
Best regards.
On Thu, Jul 24, 2014 at 11:48 PM, Jack Krupansky
wrote:
> Like I said, you're going to have to be a real, hard-core gunslinger to do
> that well. Sqrrl uses Lucene dire
On 24/07/2014 01:54, Alexandre Rafalovitch wrote:
On Thu, Jul 24, 2014 at 2:44 AM, Jack Krupansky wrote:
All the great Solr guys I know are quite busy.
Sounds like an opportunity for somebody to put together a training
hacker camp, similar to https://hackerbeach.org/ . Cross-train
consultants
From what I gather about the reranking query, it would further fine-pick
results rather than disperse similarities, or am I looking at it the wrong
way?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Shuffle-results-a-little-tp1891206p4149169.html
Sent from the Solr - U
I have most of my experience working on Solr with Tomcat. However, I recently
started with Jetty. I am using Solr 4.7.0 on Windows 7. I have configured
Solr properly and am able to see the admin UI as well as velocity browse.
The DataImportHandler screen is also getting displayed. However, when I do a
full im