> Yes - I am using edismax but the
> reason is not obvious to me can you give
> me a pointer?
Probably this is the cause. See Hoss's explanation:
https://issues.apache.org/jira/browse/SOLR-2649
By the way, if you want to boost docs (using Lucene queries) with (e)dismax, bq
is the way to go.
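For example, a boost query can be passed alongside the user query like this (the field name `popularity` is only an illustration; use whatever field exists in your schema):

```
http://localhost:8983/solr/select?defType=edismax&q=ipod&bq=popularity:[10 TO *]^2.0
```

Documents matching the bq clause get their score boosted; documents that don't match it are still returned.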
Were you upgrading from Solr 1.4?
By default SolrJ uses the javabin format for querying (the wt parameter).
The javabin format is not compatible between 1.4 and 3.1 and above.
So if your clients were running SolrJ 1.4 versions, I would expect
errors to occur.
Martijn
On 25 July 2011 12:15, Tarj
Hello All
I built a web application in Java/JSP, which calls a Solr Servlet during
search process. While I am able to retrieve and display my search results in
the format I want, my console is filled with Debug messages, printing all
the content of the pages I retrieve.
An example Debug line on m
Hi!
I am using SolrCloud with a ZooKeeper ensemble of 3.
I noticed that SolrCloud stores information directly under the root dir in the
ZooKeeper file system:
/config /live_nodes /collections
In my setup ZooKeeper is also used by other modules, so I would like SolrCloud
to store everything under /s
You can use the Luke request handler, but to improve speed set the
numTerms parameter to zero, like
http://localhost:8983/solr/admin/luke?numTerms=0
It will give you information about the optimized state of the index.
More about this on Solr wiki: http://wiki.apache.org/solr/LukeRequestHandler
How do you write a Solr query to specify proximity between two phrases?
dance jockey should appear within 10 words before video jockey
"("dance jockey") ("video jockey")"~10
This isn't working. Can someone suggest a way?
-JAME
> How do you write a Solr query to specify
> proximity between two phrases?
>
> dance jockey should appear within 10 words before video
> jockey
>
> "("dance jockey") ("video jockey")"~10
>
> This isn't working. Can someone suggest a way?
This is not possible with out-of-the-box Solr, though
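To make the requested semantics concrete (phrase A must end at most 10 words before phrase B starts), here is a small Solr-independent model of the matching logic in Python. This is only an illustration of what the query is supposed to mean, not something Solr executes:

```python
def phrase_positions(tokens, phrase):
    """Return start indices where `phrase` (a token list) occurs in `tokens`."""
    n = len(phrase)
    return [i for i in range(len(tokens) - n + 1) if tokens[i:i + n] == phrase]

def within_before(tokens, phrase_a, phrase_b, max_gap):
    """True if some occurrence of phrase_a ends at most `max_gap` words
    before some occurrence of phrase_b starts (ordered proximity)."""
    a_ends = [i + len(phrase_a) for i in phrase_positions(tokens, phrase_a)]
    b_starts = phrase_positions(tokens, phrase_b)
    return any(0 <= b - a <= max_gap for a in a_ends for b in b_starts)

doc = "the dance jockey played while the video jockey cut the film".split()
print(within_before(doc, ["dance", "jockey"], ["video", "jockey"], 10))  # True
```

In Lucene terms this corresponds to an ordered span-near query over two phrase spans, which the standard Solr query parsers of that era did not expose directly.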
It seems to work now.
We simply added
ulimit -v unlimited
to our Tomcat startup script.
@Yonik: Thanks again!
Best regards,
Sebastian
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-3-3-Exception-in-thread-Lucene-Merge-Thread-1-tp3185248p3200105.html
Sen
Hi Tobias,
try this, it works for us (Solr 3.3):
solrconfig.xml:
word
suggestion
org.apache.solr.spelling.suggest.Suggester
org.apache.solr.spelling.suggest.fst.FSTLookup
wordCorpus
score
./suggester
false
true
0.005
true
true
true
true
suggestion
50
50
suggest
Query like this:
htt
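The XML markup of the solrconfig.xml snippet above was stripped by the mail archive. As a rough reconstruction (element placement and parameter names below are assumptions based on the surviving values, not the poster's verbatim config), a Solr 3.3 FSTLookup suggester setup looks approximately like:

```xml
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggestion</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
    <str name="field">wordCorpus</str>
    <str name="storeDir">./suggester</str>
    <str name="buildOnCommit">false</str>
    <float name="threshold">0.005</float>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggestion</str>
    <str name="spellcheck.count">50</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```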
Hi Floyd, I don't think the feature that allows using multiple gaps for a
range facet has been committed. See
https://issues.apache.org/jira/browse/SOLR-2366
You can achieve similar functionality by using facet.query. See:
http://wiki.apache.org/solr/SimpleFacetParameters#Facet_Fields_and_Facet_Querie
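With facet.query the gaps can be made explicit; for example, assuming a numeric `price` field (field name and ranges are only an illustration):

```
facet=true
facet.query=price:[0 TO 100]
facet.query=price:[100 TO 500]
facet.query=price:[500 TO *]
```

Each facet.query gets its own count in the response, so uneven bucket widths are no problem.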
Hi, you are not giving us much information. What's your default operator?
What do you mean with "results are not correct"?
On Tue, Jul 26, 2011 at 3:04 AM, deniz wrote:
> Here is the situation..
>
> when I search with 3 or more words, the results are correct; however, if
> I make a search by
On 07/26/2011 09:26 AM, Martijn v Groningen wrote:
> Were you upgrading from Solr 1.4?
Yep.
> By default SolrJ uses the javabin format for querying (the wt parameter).
> The javabin format is not compatible between 1.4 and 3.1 and above.
> So if your clients were running SolrJ 1.4 versions I wou
Using ShingleFilterFactory and PositionFilterFactory I get some results, but
never as a useful collation.
So I tried to see what the results with spellcheck.maxCollations=2 would be, but
I never got this to work, neither on 3.3 nor on 4.0. Even lowering
maxCollationEvaluations had no effect. I never get a re
I agree! It should be noted in the documentation.
I just wanted to say that SolrJ doesn't depend on Java serialization, but
uses its own serialization format:
http://lucene.apache.org/solr/api/solrj/org/apache/solr/common/util/JavaBinCodec.html
Martijn
On 26 July 2011 15:31, Tarjei Huse wrote:
> On 07/
Have a look:
http://stackoverflow.com/questions/2271600/elasticsearch-sphinx-lucene-solr-xapian-which-fits-for-which-usage
http://karussell.wordpress.com/2011/05/12/elasticsearch-vs-solr-lucene/
Regards,
Peter.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-vs-Elastic
Is this possible to do? If so, how?
On 7/25/11, Brian Lamb wrote:
> Yes and that's causing some problems in my application. Is there a way to
> truncate the 7th decimal place in regards to sorting by the score?
>
> On Fri, Jul 22, 2011 at 4:27 PM, Yonik Seeley
> wrote:
>
>> On Fri, Jul 22, 2011 a
If you're getting OOM's, double-check that you're on 3.3. There was a nasty
bug in 3.0 - 3.2 that would cause OOM in conjunction with spellcheck collations
in some cases. Ditto if Solr hangs, as you might be in a Garbage Collection
"loop". If you have your JVM running with verbose GC you'll
Hi, finally I now have all the field names of each document using the
Luke Request Handler (http://wiki.apache.org/solr/LukeRequestHandler),
and by making HTTP requests to Solr I can get all the fields that contain
the word that I am searching for.
I'll keep looking for a better solution.
Thanks!
Regards
Hi,
I'm coming back to trying to get the synonyms working (alongside the
spellchecker). Here is what I have:
Then in synonyms.txt, I have:
pixima => pixma
cellpne => cellphone
computer => laptop
, car, food,
a
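For one-way mappings like these to take effect, the SynonymFilterFactory has to be in the field type's analyzer chain. A minimal sketch in Solr 3.x syntax (the field type name and tokenizer choice are assumptions, not from the post):

```xml
<fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="false"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Applying the filter at index time only (as here) avoids the synonym being expanded twice; either side works, but the filter must appear in at least one of the two chains.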
On Mon, Jul 25, 2011 at 10:12 AM, Brian Lamb
wrote:
> Yes and that's causing some problems in my application. Is there a way to
> truncate the 7th decimal place in regards to sorting by the score?
Not built in.
With some Java coding, you could create a post filter that manipulates scores.
http:/
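As a model of what such score truncation would do (this is just the arithmetic a custom Java post filter might apply, shown in Python for brevity; it is not Solr code):

```python
import math

def truncate_score(score, places=6):
    """Drop digits beyond `places` decimals so that near-equal scores
    compare equal when sorting, letting a secondary sort key decide."""
    factor = 10 ** places
    return math.floor(score * factor) / factor

print(truncate_score(0.12345678))  # 0.123456
```

Scores truncated this way tie at the chosen precision, so a secondary sort field becomes the effective tie-breaker.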
Well, it appears to be some issue with the analysis. You can check
http://localhost:8983/solr/admin/analysis.jsp (the admin page of your
instance, the analysis section) to see how the analysis is applied and see
the end result of "aaa"
You should work with the index and the query analysi
Dear Team,
We tried changing the schema.xml to our own XML format, but it
shows an error. Kindly give me a solution to carry out this process.
Thank You.
Regards,
Vignesh.V
: Hi, I recently went through a little hell when I upgraded my Solr
: servers to 3.2.0. What I didn't anticipate was that my Java SolrJ
: clients depend on the server version.
:
: I would like to add a note about this in the SolrJ docs:
: http://wiki.apache.org/solr/Solrj#Streaming_documents_for_
: I built a web application in Java/JSP, which calls a Solr Servlet during
...
: An example Debug line on my console looks like this:
: DEBUG ["http-bio-8080"-exec-1] (Wire.java:70) - << "Quick recipe finder[\n]"
...
: Are there any suggestions on how to work around with this, sinc
On Tue, Jul 26, 2011 at 3:55 PM, Vignesh.v wrote:
> Dear Team,
>
> We tried changing the schema.xml to our own XML format, but it
> shows an error. Kindly give me a solution to carry out this process.
[...]
Sorry, what does that mean exactly? Please provide details
of what you tried, a
: Hi, you are not giving us much information. What's your default operator?
: What do you mean with "results are not correct"?
To elaborate, please note...
http://wiki.apache.org/solr/UsingMailingLists
:
: On Tue, Jul 26, 2011 at 3:04 AM, deniz wrote:
:
: > Here is the situation..
: >
: > wh
I'm using 4.0 for testing this.
I'm not sure what to expect, but as soon as I increase maxCollationTries to 1
or more, even with maxCollationEvaluations set to a low value like 10, it just
hangs.
With maxCollationTries set to 0 it works just fine.
--
View this message in context:
http://lucene.47206
It sounds like that could be a bug. Could you provide some details on how
you're building your dictionary (config snippets), and what parameters you're
using to query, etc. ? Your jvm settings and a rough estimate of how big your
index is would be helpful too. It would be nice to try and figu
That didn't help. It seems like another case where I should get matches but don't,
and this time it is only for some documents. Others with similar content do
match just fine. The debug output's 'explain other' section for a non-matching
document seems to say the term frequency is 0 for my problema
I will try to duplicate the behavior in 3.3, as I can't get logging to a file
working in 4.0 like in other releases:
http://globalgateway.wordpress.com/2010/01/06/configuring-solr-1-4-logging-with-log4j-in-tomcat/
Solr logging (maybe you know how to fix this?)
Config is pretty normal I think:
Hi all,
I'm new to solr.
I installed Solr 3.3 with GlassFish 3.1 on Ubuntu 10.04.
It worked fine until I set the security manager in GlassFish, since I don't want
everyone to be able to reach Solr's admin page. The error message was as follows:
Severe errors in solr configuration.
Check your log files
> I will try to duplicate the behavior in 3.3 as I can't get logging to a file
> working in 4.0 like in other releases
> http://globalgateway.wordpress.com/2010/01/06/configuring-solr-1-4-logging-
> with-log4j-in-tomcat/ Solr logging (maybe you know how to fix this?)
You're most likely caught by t
You can use the Solr analysis tool from the admin page to see how analysis
and querying are done for a specific term.
On Sat, Jul 23, 2011 at 1:33 PM, Romi wrote:
> I am using Solr for search. I am facing a problem with wildcard searches.
> When I search for dia?mond I get results for diamond
> bu
Could you provide the relevant sections of the logs pertaining to this
error?
On Tue, Jul 26, 2011 at 12:13 PM, Xue-Feng Yang wrote:
> Hi all,
>
> I'm new to solr.
>
> I installed solr 3.3 with glassfish 3.1 in ubuntu 10.4.
>
> It works fine until I set security manager in glassfish since I don'
Here is the message from server.log
[#|2011-07-26T12:17:37.591-0400|SEVERE|glassfish3.1|org.apache.solr.core.CoreContainer|_ThreadID=10;_ThreadName=Thread-1;|java.security.AccessControlException:
access denied (javax.management.MBeanServerPermission findMBeanServer)
at
java.security.AccessCo
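If running Solr under the GlassFish security manager is a hard requirement, the specific denial above can usually be resolved by granting the permission in the domain's policy file. A sketch (the path and codeBase below assume a default domain1 install with the app deployed as "solr"; adjust to your layout):

```
// in domains/domain1/config/server.policy
grant codeBase "file:${com.sun.aas.instanceRoot}/applications/solr/-" {
    permission javax.management.MBeanServerPermission "findMBeanServer";
};
```

Note that further denials for other permissions may surface one by one as Solr initializes; each would need its own grant line.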
I don't know GlassFish; the error you're reporting is a low-level security
exception (method access) and doesn't seem to be related to web application
(JAAS) security.
Did you change the web.xml of the Solr war to include security constraints,
security collections, login-config, roles and so o
Sorry, my previous email was truncated.
Setting up security for a web application has nothing to do with the security
manager, which is related to the JVM and low-level permissions.
(Continued from the previous email)
But anyway, I don't know GlassFish or how its security config is workin
No, I don't have any info on setting this up for Solr with GlassFish. If anyone
has such a doc for any other application server, such as Tomcat, that would be a
great help.
From: Andrea Gazzarini
To: solr-user@lucene.apache.org; Xue-Feng Yang
Sent: Tuesday, July
: Subject: Severe errors in solr configuration
Adding log4j-1.2.16.jar and deleting slf4j-jdk14-1.6.1.jar does not fix
logging for 4.0 for me.
Anyway, I tried it on 3.3 and Solr just hangs there also. No logging, no
exceptions.
I'll let you know if I manage to find the source of the problem.
--
View this message in context:
http://lucene.472066.n3.na
Hello
I've been looking for a definitive answer to the question: "is it possible to run Solr on multiple servers with a shared index folder instead of the native
master/slave configuration?"
So far I have found two threads on this mailing list about the topic:
http://lucene.472066.n3.nabble.com/Multiple
FWIW, here is the process I follow to create a log4j-aware version of the
Apache Solr war file and the corresponding log4j.properties files.
Have fun :)
François
##
#
# Log4J configuration for SOLR
#
# http://wiki.apache.org/solr/Sol
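A minimal log4j.properties along those lines (the appender choice, file names, and levels below are only an example, not the original poster's file):

```
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=logs/solr.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.MaxBackupIndex=9
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d %p [%c] - %m%n
```

This goes in WEB-INF/classes/ inside the war, alongside the slf4j-log4j12 and log4j jars in WEB-INF/lib described in the steps above.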
François Schiettecatte wrote:
>
> #
> # 4) Copy:
> # slf4j-1.6.1/slf4j-log4j12-1.6.1.jar ->
> WEB-INF/lib
> # log4j.properties (this file)->
> WEB-INF/classes/ (needs to be
> created)
> #
>
Don't you mean log4j-1.2.16/slf4j-log
So I've got Solr 1.4. I've got replication going on.
Once a day, before replication, I optimize on master. Then I replicate.
I'd expect that optimization before replication would basically replace all
files on the slave; this is expected.
But that means I'd also expect that the index files on the slave wo
I get slf4j-log4j12-1.6.1.jar from
http://www.slf4j.org/dist/slf4j-1.6.1.tar.gz, it is what interfaces slf4j to
log4j, you will also need to add log4j-1.2.16.jar to WEB-INF/lib.
François
On Jul 26, 2011, at 3:40 PM, O. Klein wrote:
>
> François Schiettecatte wrote:
>>
>> #
>> # 4) Copy:
Regarding point 4, you will have to reload the indexes to preserve
consistency among them.
When you perform a commit in Solr you have (for an instant) two versions of
the index. The commit produces new segments (with new documents, new
deletions, etc.). After creating these new segments a new
I see you use this for Solr 3.3.
In 3.3 there is no problem with logging. Have you tried to do the same thing
for 4.0?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Spellcheck-compounded-words-tp3192748p3201597.html
Sent from the Solr - User mailing list archive at Nabble.c
Hi all,
I am a little confused as to why the scoring is working the way it is:
I have a field defined as:
And I have several documents where that value is:
RECORD 1
Fred
Fred (the coolest guy in town)
OR
RECORD 2
Fred Anderson
What happens when I do a search for
http://localhost:
That is caused by the size of the documents. The principle is pretty
intuitive: if one of your documents is the entire three volumes of The Lord
of the Rings and you search for "tree", I know that The Lord of the Rings
will be in the results, and I haven't memorized the entire text of that book
:p
I
Finally got to running these tests.
Here are the basics...
Core i7 - 960
24GB RAM
Solr index on its own drive
Solr 3.3.0 running under tomcat 7.0.19, jdk1.6.0_26, java opts are:
JAVA_OPTS="-Xmx4096M -XX:-UseGCOverheadLimit"
Raw data is 80GB of Solr markup for adding, sample below:
Hi All
I am stuck with an issue with delta-import while configuring solr in an
environment where multiple databases exist.
My schema looks like this:
Names exist in one DB and keywords in a table in the other DB (with id as a
foreign key).
For delta import, I would need to check against the updat
Here's an idea: if you index the full text of your XML document using
XmlCharFilter - available as a patch (or HtmlCharFilter), and then
highlight the entire document (you will need to fiddle with highlighter
parameters a bit to make sure you get 1 fragment that covers the entire
file) with som
It's not clear to me (from the wiki, or the jira issue) whether the
compatibility break goes both ways - maybe I should just try and see,
but just to get this out there on the list: is the 3.X javabin client
able to talk to 1.4 servers? If so, then there is a nicely decoupled
upgrade path: get
On 7/26/2011 6:26 PM, Michael Sokolov wrote:
It's not clear to me (from the wiki, or the jira issue) whether the
compatibility break goes both ways - maybe I should just try and see,
but just to get this out there on the list: is the 3.X javabin client
able to talk to 1.4 servers? If so, then
I find that if I do not restart the master's Tomcat for some days,
the load average keeps rising to a high level and Solr becomes slow
and unstable, so I added a crontab entry to restart Tomcat every day.
Do you boys restart your Tomcat? And is there any way to avoid restarting it?
I often restart the Tomcat service before the memory reaches the OS limit.
Usually it eats up only 4 GB, but eventually it eats up 11 GB.
On Wed, Jul 27, 2011 at 8:42 AM, Bing Yu wrote:
> I find that, if I do not restart the master's tomcat for some days,
> the load average will keep rising to
On 27/07/11 11:42, Bing Yu wrote:
do you boys restart your tomcat ? and is there any way to avoid restart tomcat?
Our female sysadmin takes care of managing our server.
I want to let the system do the job instead of the sysadmin, because I'm lazy ~ ^__^
But I just want a better way to fix the problem. Restarting the server
causes other problems, like having to redo the changes that happened
during the restart.
2011/7/27 Dave Hall :
> On 27/07/11 11:42, Bing Yu wrote:
On 7/26/2011 7:42 PM, Bing Yu wrote:
I find that, if I do not restart the master's tomcat for some days,
the load average will keep rising to a high level, solr become slow
and unstable, so I add a crontab to restart the tomcat everyday.
do you boys restart your tomcat ? and is there any way to
Hi Tomás
Do facet queries support the following query?
facet.query=onlinedate:[NOW/YEAR-3YEARS TO NOW/YEAR+5YEARS]
I tried this but the returned result was not correct.
Am I missing something?
Floyd
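Date math in range endpoints is valid for Solr date fields. If the combined range misbehaves, one thing worth trying (assuming `onlinedate` is a (Trie)DateField) is splitting it into explicit facet.query parameters, one per bucket:

```
facet=true
facet.query=onlinedate:[NOW/YEAR-3YEARS TO NOW/YEAR]
facet.query=onlinedate:[NOW/YEAR TO NOW/YEAR+5YEARS]
```

Comparing the two per-bucket counts against the single wide range can also help isolate whether the date math or the faceting is at fault.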
2011/7/26 Tomás Fernández Löbbe
> Hi Floyd, I don't think the feature that allows to use multiple
This may be a trivial question - I am a noob :).
In the dataimport of a CSV file, I am trying to assign a field based on a
conditional check on another field.
E.g.
this works well. However, I need to create another field A that is
assigned a value based on X.
Something like this
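One way to derive a field from another during a DataImportHandler run is a ScriptTransformer. A sketch (the entity, field names, and the 'yes'/'flagged' values are hypothetical placeholders, not from the post):

```xml
<dataConfig>
  <script><![CDATA[
    // Hypothetical helper: set field A from the value of field X.
    function deriveA(row) {
      var x = row.get('X');
      row.put('A', (x != null && String(x) == 'yes') ? 'flagged' : 'normal');
      return row;
    }
  ]]></script>
  <document>
    <!-- attach the script to whatever entity reads your CSV rows -->
    <entity name="csvRow" transformer="script:deriveA">
    </entity>
  </document>
</dataConfig>
```

The function receives each row as a map, mutates it, and returns it; any JavaScript conditional logic can decide the derived value.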
Hi all,
when I use autocomplete to suggest like Google
(http://www.google.com/webhp?complete=1&hl=en) and follow this URL,
http://solr.pl/en/2010/11/15/solr-and-autocomplete-part-2/, to configure my
project, it works, but when I test with more than two terms in my query it's
not right, and I don't know why.
Can anyone
Until now I used Jetty and got 2 weeks as the longest uptime until OOM.
I just switched to Tomcat 6 and will see how that one behaves, but
I think it's not a problem of the servlet container.
Solr is pretty unstable when you have a huge database.
Actually this can't be blamed directly on Solr; it is a probl