Hi,
We are getting the results for the query but the spellchecker component is
returning 500. Please help us out.
*query*: http://localhost:8111/solr/srch/select?q=malerkotla&qt=search
*Error:*
> "trace":"java.lang.StringIndexOutOfBoundsException: String index out of
> range: -5
>
> \tat java
Note that this problem can also happen if the RealTimeGet handler is
missing from your solrconfig.xml, because PeerSync will always fail and a
full replication will be triggered. I added warn-level logging to complain
when this happens, but it is possible that you are using an older version of
Solr.
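If the handler is indeed missing, the definition from the stock 4.x example
solrconfig.xml looks like this; adding it back should be enough:

```xml
<requestHandler name="/get" class="solr.RealTimeGetHandler">
  <lst name="defaults">
    <str name="omitHeader">true</str>
    <str name="wt">json</str>
    <str name="indent">true</str>
  </lst>
</requestHandler>
```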
Dominik:
If you optimize your index, then the entire thing will be replicated
from the master to the slave every time. In general, optimizing isn't
necessary even though it sounds like something that's A Good Thing.
I suspect that's the nub of the issue.
Erick
On Tue, Jun 24, 2014 at 11:14 PM,
Hello All
If I set the fetcher.threads.per.queue property to more than 1, I believe the
behavior would be to have that many threads per host in Nutch. In that case,
would Nutch still respect the Crawl-Delay directive in robots.txt and not
crawl at a faster pace than what is specified i
: I see that result is affected by sorting order (ASC/DESC change order) but
: result is not precise. For example for query
:
:
params={mm=2&pf=tags^10+title^5&sort=created+asc&q=query&qf=tags^10+title^5&wt=javabin&version=2&defType=edismax&rows=10}
those results don't really make sense -- can
Hey
I am trying to sort my documents by creation date.
I see that the result is affected by the sorting order (ASC/DESC change the
order), but the result is not precise. For example, for the query
params={mm=2&pf=tags^10+title^5&sort=created+asc&q=query&qf=tags^10+title^5&wt=javabin&version=2&defType=edismax&rows=10}
On 6/25/2014 3:27 PM, Michael Della Bitta wrote:
> The subject line kind of says it all... this is the latest thing we have
> noticed that doesn't seem to have made it in. Am I missing something?
This code:
SolrServer server;
server = new HttpSolrServer("http://server:port/solr/c
FYI: The current plan is to call a vote for the 4.9 Solr Ref Guide
sometime tomorrow (2014-06-26) morning (~11AM UTC-0500 maybe?)
The main thing we are currently waiting on is that sarowe is working on a
simple page to document using Solr with SSL -- but now would be a great
time for folks to he
I have just created https://issues.apache.org/jira/browse/SOLR-6205
I hope the description makes sense.
Thanks.
Arcadius.
On 23 June 2014 18:49, Mark Miller wrote:
> We have been waiting for that issue to be finished before thinking too
> hard about how it can improve things. There have been
The subject line kind of says it all... this is the latest thing we have
noticed that doesn't seem to have made it in. Am I missing something?
Other awkwardness was doing a deleteByQuery against a collection other than
the defaultCollection, and trying to share a CloudSolrServer among
different ob
Thank you for your help!
I wrote an article on Performance Testing Solr filterCache "Shedding Light
on Apache Solr filterCache for VuFind" that I am hoping to get published.
https://docs.google.com/document/d/1vl-nmlprSULvNZKQNrqp65eLnLhG9s_ydXQtg9iML10
Anyone can comment and I would highly appr
Thanks, I tried your suggestion today:
1. Define a text_num fieldType
2. Define a new text field to capture numerical data and link it to the
text field via a copyField
3. Restart the server and reindex my test data
As you can see from a simple analysis test o
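For reference, those three steps might look like this in schema.xml (a
sketch: the analyzer chain, the field name "numbers", and the source field
"description" are assumptions, not from the thread; substitute your own):

```xml
<!-- Step 1: a text_num fieldType -->
<fieldType name="text_num" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<!-- Step 2: a field for the numerical data, fed from an existing field -->
<field name="numbers" type="text_num" indexed="true" stored="false"/>
<copyField source="description" dest="numbers"/>
```

After editing the schema, step 3 (restart and reindex) is required for the
copyField to take effect on existing documents.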
Hi,
Is it possible to get facet counts for calculated values, along with regular
columns?
e.g. I have Price & MSRP; I'd like to get how many are on sale (Price < MSRP):
Onsale (10)
Jeans (20)
Shirts (50)
Above, Jeans & Shirts are in schema.xml, so I can add them to the facet
fields. How
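A computed condition like the "Onsale" row above can't be a plain facet
field, but Solr's facet.query parameter can count the results of an
arbitrary query, including a function range (frange) query over
sub(Price,MSRP). A minimal sketch of building such a request in Java (the
"category" field name and the exact frange expression are assumptions, not
from the thread):

```java
import java.net.URLEncoder;

public class OnSaleFacet {
    // Builds the query-string portion of a Solr select request that
    // combines a regular field facet with a computed facet.query count.
    public static String buildQuery() throws Exception {
        // Count documents where Price < MSRP using a function range
        // query (frange) over sub(Price,MSRP): upper bound 0, exclusive.
        String onSale = "{!frange u=0 incu=false}sub(Price,MSRP)";
        return "q=*:*&facet=true"
                + "&facet.field=category"  // hypothetical field holding Jeans, Shirts, ...
                + "&facet.query=" + URLEncoder.encode(onSale, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildQuery());
    }
}
```

Each facet.query comes back as its own count under facet_counts/facet_queries
in the response, alongside the regular field facets.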
Ravi,
The POST should work. Here's an example that works within tomcat.
curl -X POST --data "q=*:*&rows=1"
http://localhost:8080/solr/collection1/select
Sameer.
On Mon, Jun 23, 2014 at 10:37 AM, EXTERNAL Taminidi Ravi (ETI,
Automotive-Service-Solutions) wrote:
> Hi, I am executing a solr que
Hi Modassar,
I ran into the same issue (Solr 4.8.1) with an existing collection set
to "implicit" routing but with no "router.field" defined. I managed to
set the "router.field" by modifying /clusterstate.json and pushing it
back to Zookeeper. For in
My configs below are not returning anything in suggest! Any pointers
please?
In solrconfig.xml:
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">mysuggestion</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookupFactory</str>
    <str name="field">mysuggestion</str>
    <float name="threshold">0.0</float>
    <str name="buildOnCommit">true</str>
    <str name="buildOnOptimize">true</str>
  </lst>
</searchComponent>
: I repro'd using the example config (with sharding). I was missing one
: necessary condition: the schema needs a "*" dynamic field.
: It looks like serializeSearchGroup matches the sort expression as the
: "*" field, thus marshalling the double as TextField.
:
: Should I enter a ticket with the
> Can you provide some sample data to demonstrates the problem? (ideally
> using the 4.x example configs - but if you can't reproduce with that
> then providing your own configs would be helpful)
I repro'd using the example config (with sharding). I was missing one
necessary condition: the schema
-----Original message-----
> From:johnmu...@aol.com
> Sent: Wednesday 25th June 2014 20:13
> To: solr-user@lucene.apache.org
> Subject: How much free disk space will I need to optimize my index
>
> Hi,
>
>
> I need to de-fragment my index. My question is, how much free disk space I
> ne
Hi,
I need to de-fragment my index. My question is: how much free disk space do
I need before I can do so? My understanding is that I need free disk space
equal to 1X my current un-optimized index size before I can optimize it. Is
this true? That is, let's say my index is 20 GB (un-optimized), then
Hi Paul,
I'm not using it on S3. But yes, I don't think S3 would be ideal for Solr
at all. There are several other Hadoop-compatible file systems, however,
some of which might be ideal for certain types of SolrCloud workloads.
Anyways... would love to see a Solr wiki page on FileSystem compati
How big are your requests from client to server?
I ran into OOM problems too. For me the reason was that I was sending big
requests (1+ docs) at too fast a pace.
So I put a throttle on the client to control the throughput of the requests
it sends to the server, and that got rid of the OOM e
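A minimal sketch of such a client-side throttle (a hypothetical class, not
the poster's actual code): it enforces a minimum gap between update
requests so the client can't flood the server faster than it can index.

```java
public class UpdateThrottle {
    private final long minIntervalMillis;
    private long lastSendMillis = 0;

    public UpdateThrottle(int maxRequestsPerSecond) {
        this.minIntervalMillis = 1000L / maxRequestsPerSecond;
    }

    // Blocks until at least minIntervalMillis has passed since the
    // previous send, capping the rate of outgoing update requests.
    public synchronized void acquire() throws InterruptedException {
        long now = System.currentTimeMillis();
        long wait = lastSendMillis + minIntervalMillis - now;
        if (wait > 0) {
            Thread.sleep(wait);
        }
        lastSendMillis = System.currentTimeMillis();
    }
}
```

Call acquire() before each update request. A token bucket or Guava's
RateLimiter would do the same job; the point is simply to bound how fast
the client hands work to Solr.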
Thanks Shawn!
In this case I will use operators everywhere.
Johannes
On 25.06.2014 15:09, Shawn Heisey wrote:
On 6/25/2014 1:05 AM, Johannes Siegert wrote:
I have defined the following edismax query parser:
<lst name="defaults">
  <str name="mm">100%</str>
  <str name="defType">edismax</str>
  <float name="tie">0.01</float>
  <int name="ps">100</int>
  <str name="q.alt">*:*</str>
  <str name="q.op">AND</str>
  <str name="qf">field1^2.0 field2</str>
  <int name="rows">10</int>
  <str name="fl">*</str>
</lst>
My search query looks like:
q=(wo
Vicky - were you able to get the results page formatted how you'd like? You
may want to tweak results_list.vm or a sub- (or maybe parent?) template from
there to achieve what you want.
Erik
On Jun 18, 2014, at 10:02 AM, vicky wrote:
> Hi Everyone,
>
> I just installed solr 4.8 re
25 June 2014, Apache Solr™ 4.9.0 available
The Lucene PMC is pleased to announce the release of Apache Solr 4.9.0
Solr is the popular, blazing fast, open source NoSQL search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting, faceted se
Hi,
I am getting an exception while saving a SolrInputDocument object from Java
on the client server, but it works fine on my local machine.
org.apache.solr.common.SolrException: Unexpected EOF in prolog
at [row,col {unknown-source}]: [1,0]
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoa
On 24.06.14 17:33, Erick Erickson wrote:
Hmmm. It would help if you posted a couple of other pieces of
information. BTW, if this is new code, are you considering donating it
back? If so, please open a JIRA so we can track it; see:
http://wiki.apache.org/solr/HowToContribute
All my other langua
On 6/25/2014 1:05 AM, Johannes Siegert wrote:
> I have defined the following edismax query parser:
>
> <lst name="defaults">
>   <str name="mm">100%</str>
>   <str name="defType">edismax</str>
>   <float name="tie">0.01</float>
>   <int name="ps">100</int>
>   <str name="q.alt">*:*</str>
>   <str name="q.op">AND</str>
>   <str name="qf">field1^2.0 field2</str>
>   <int name="rows">10</int>
>   <str name="fl">*</str>
> </lst>
>
> My search query looks like:
>
> q=(word1 word2) OR (word3 word4)
>
I made two tests, one with MaxRamBuffer=128 and the second with
MaxRamBuffer=256.
In both I got OOM.
I also made two tests on autocommit:
one with a commit every 5 min, and the second with a commit every 100,000
docs (softcommit disabled).
In both I got OOM.
Merge policy - Tiered (max segment size o
Thanks for the answers.
I will try to solve my problem by extracting the affected text and indexing
that part into another string field, where the wildcard query works as
expected. The Solr queries will be extended by an "OR" on that new field;
that should work for my case.
Yours truly, Sven
Hi,
I have defined the following edismax query parser:
<lst name="defaults">
  <str name="mm">100%</str>
  <str name="defType">edismax</str>
  <float name="tie">0.01</float>
  <int name="ps">100</int>
  <str name="q.alt">*:*</str>
  <str name="q.op">AND</str>
  <str name="qf">field1^2.0 field2</str>
  <int name="rows">10</int>
  <str name="fl">*</str>
</lst>
My search query looks like:
q=(word1 word2) OR (word3 word4)
Since I specified AND as default query operator, the query shoul