Did you by any chance set up multicore? Try passing in the path to the Solr
home directory as -Dsolr.solr.home=/path/to/solr/home while you start Solr.
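For reference, with the example Jetty setup that ships with Solr the full start
command would look something like this (the path is only a placeholder):

  cd apache-solr-1.4.0/example
  java -Dsolr.solr.home=/path/to/solr/home -jar start.jar

For multicore, that home directory needs to contain a solr.xml listing the cores.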
On Mon, Apr 26, 2010 at 1:04 PM, Jon Drukman wrote:
> What does this error mean?
>
> SEVERE: Could not start SOLR. Check solr/home property
>
>
If it's worth mentioning here, in my case disk read speeds seemed to have
a really noticeable effect on query times. What disks are you planning
to use? Also, as Otis has already pointed out, I doubt a single box of
that capacity can handle 100-700 queries per second.
On Fri, Apr 23, 2
I think you can use something like "q=hello world -books". Should do.
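For example, against the standard request handler that would be something like
(the localhost URL and the default search field are assumptions about your setup):

  curl 'http://localhost:8983/solr/select?q=hello+world+-books'

The leading "-" marks "books" as a prohibited term, so documents containing it
in the default search field are excluded.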
On Wed, Mar 31, 2010 at 7:34 PM, Sebastian Funk wrote:
> Hey there,
>
> I'm sure this is a pretty easy thing, but I can't find the solution:
> can I search for text that specifically does not contain a certain word (e.g. "books")?
> so so
Gentle bounce
On Sun, Mar 28, 2010 at 11:31 AM, Siddhant Goel wrote:
> Hi everyone,
>
> The output of "jmap -histo:live 27959 | head -30" is something like the
> following :
>
> num #instances #bytes class name
> -
Hi everyone,
The output of "jmap -histo:live 27959 | head -30" is something like the
following :
 num     #instances         #bytes  class name
----------------------------------------------
   1:        448441      180299464  [C
   2:          5311      135734480  [I
   3:          3623      683
the
> throughput doesn't change (much).
>
> Adding threads after p+1 *decreases* throughput while
> *increasing* individual response time, as your processors start
> spending way too much time context switching and/or memory
> swapping.
>
> The trick is finding out what n and p ar
Probably that's the number.
When the number of threads is around 10, response times average around 60ms
(and 95% of the queries fall within 100ms of that value).
Thanks,
>
> Erick
>
> On Fri, Mar 12, 2010 at 3:39 AM, Siddhant Goel wrote:
>
> > I've alloc
on-words-part-1
> for more background on our experience.
>
> Tom Burton-West
> University of Michigan Library
> www.hathitrust.org
>
>
>
> >
> > On Thu, Mar 11, 2010 at 9:39 AM, Siddhant Goel wrote:
> >
> > > Hi everyone,
> > >
Did you reindex after setting omitNorms to false? I'm not sure whether or
not it is needed, but it makes sense.
On Thu, Mar 11, 2010 at 5:34 PM, muneeb wrote:
>
> Hi,
>
> In my schema, the document title field has "omitNorms=false", which, if I
> am
> not wrong, causes length of titles to be cou
scribing
> just because it was running for a while.
>
> HTH
> Erick
>
> On Thu, Mar 11, 2010 at 9:39 AM, Siddhant Goel wrote:
>
> > Hi everyone,
> >
> > I have an index corresponding to ~2.5 million documents. The index size
> is
> > 43GB. The con
Hi everyone,
I have an index corresponding to ~2.5 million documents. The index size is
43GB. The machine running Solr has two quad-core Xeon 5430 (Harpertown)
processors at 2.66GHz with 2 x 12MB cache, 8GB RAM, and a 250GB HDD.
I'm observing a strange trend in the queri
value is encoded as a single byte, so there is some
> precision
> loss]
>
> When omitNorms=true no norm calculation is done, so fieldNorm will always
> be
> one on those fields.
>
> You can also use the Luke utility to view the document in the index, and it
> will
Hi everyone,
Is the fieldNorm calculation altered by the omitNorms factor? I saw on this
page (http://old.nabble.com/Question-about-fieldNorm-td17782701.html) the
formula for calculation of fieldNorms (fieldNorm =
fieldBoost/sqrt(numTermsForField)).
Does this mean that for a document containing a
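(As a toy example of that formula: with fieldBoost = 1.0 and a four-term field,
fieldNorm = 1.0 / sqrt(4) = 0.5, which Lucene then encodes into a single byte,
losing some precision.)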
Since I missed attending it, where can I view it? :-)
Thanks
On Fri, Feb 26, 2010 at 10:11 PM, Jay Hill wrote:
> Yes, it will be recorded and available to view after the presentation.
>
> -Jay
>
>
> On Thu, Feb 25, 2010 at 2:19 PM, Bernadette Houghton <
> bernadette.hough...@deakin.edu.au> wrote:
Can you provide the error message that you got?
On Sat, Mar 6, 2010 at 11:13 AM, Suram wrote:
>
> Hi,
>
>
> how can I send the XML file to Solr after creating the multicore? I tried,
> but it refuses to accept it.
Did you send a commit after indexing those files?
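If not, a commit along these lines should make them visible (URL assumes the
stock example setup on localhost):

  curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '<commit/>'

Newly added documents only show up in search results after a commit.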
On Thu, Mar 4, 2010 at 6:30 PM, Suram wrote:
>
> Hi,
>
> I newly indexed some XML files, but they are not found in search or
> autosuggestion.
>
> My XML index file: http://old.nabble.com/file/p27780413/Nike.xml (Nike.xml)
>
> and my scheme is htt
There is an HTML filter documented here, which might be of some help -
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.HTMLStripCharFilterFactory
Control characters can be eliminated using code like this -
http://bitbucket.org/cogtree/python-solr/src/tip/pythonsolr/pysolr.py#cl-44
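If you'd rather strip them before the documents ever reach Solr, plain tr works
too; the ranges below are the control characters that are illegal in XML
(tab, CR, and LF are kept):

  tr -d '\000-\010\013\014\016-\037' < input.xml > cleaned.xml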
I think that's because of the internal tokenization that Solr does. If a
document contains HP1, and you're using the default text field type, Solr
would tokenize that into HP and 1, so the document figures in the list of
documents containing HP, and hence it appears in the search
results.
Yep. I think an update in Lucene means a deletion followed by an addition,
so the entire document needs to be re-sent to update it.
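In practice that means posting the whole document again, for example (the field
names are just the ones from the example schema and are assumptions about your
setup):

  curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '
  <add>
    <doc>
      <field name="id">EN7800GTX/2DHTV/256M</field>
      <field name="manu">ASUS Computer Inc.</field>
      <field name="cat">electronics</field>
      <field name="cat">graphics card</field>
      <!-- ...every other field, re-sent even if unchanged... -->
    </doc>
  </add>'
  curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '<commit/>'

The existing document with the same uniqueKey is deleted and replaced by
whatever you send.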
On Mon, Mar 1, 2010 at 7:24 PM, Israel Ekpo wrote:
> Unfortunately, because of how Lucene works internally, you will not be able
> to update just one or two fields. Yo
Yes. You can just re-add the document with your changes, and the rest of the
fields in the document will remain unchanged.
On Mon, Mar 1, 2010 at 5:09 PM, Suram wrote:
>
> Hi,
>
>
> EN7800GTX/2DHTV/256M
> ASUS Computer Inc.
> electronics
> graphics card
> NVIDIA GeForce 7800 GTX GPU/VPU c
Hi,
Did you *really* go through this page -
http://wiki.apache.org/solr/CoreAdmin ?
On Thu, Feb 25, 2010 at 7:40 PM, Sudhakar_Thangavel wrote:
>
> Hi,
> I am new to Solr and am not able to follow the wiki clearly. Can anyone tell
> me how to configure CoreAdmin? I need step-by-step instructions.
>
>
>
On Wed, Jan 20, 2010 at 4:19 PM, Erik Hatcher wrote:
> Where are you getting your solr-ruby code from? You can simply "gem
> install" it to pull in an already pre-built gem.
>
I'm just picking it up from the 1.4 release. I also tried checking out the
latest copy from svn, but the results were the same.
Hi,
I'm using Solr 1.4 (and trying to use the Ruby client (solr-ruby) to access
it). The problem is that I just can't get it to work. :-)
If I run the tests (rake test), it fails giving me the following output -
/path/to/solr-ruby/test/unit/delete_test.rb:52: invalid multibyte char
(US-ASCII)
/pat
Hi,
Thanks for the responses.
q.alt did the job. Turns out that the dismax query parser was at fault and
wasn't able to handle queries of the type *:*. Putting the query in q.alt,
or adding defType=lucene (as pointed out to me on the IRC channel), worked.
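For the record, the two workarounds look roughly like this (URLs assume the
stock example setup, where a dismax handler is registered as qt=dismax in
solrconfig.xml):

  curl 'http://localhost:8983/solr/select?qt=dismax&q.alt=*:*'
  curl 'http://localhost:8983/solr/select?q=*:*&defType=lucene'

q.alt is parsed with the standard (lucene) parser when q is absent, and
defType=lucene overrides a handler whose default parser is dismax.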
Thanks,
--
- Siddhant
Hi all,
Any query I make of the form field:value does not return any documents, and
the *:* query doesn't return any results either. The index size is close to
1GB now, so it should be returning some documents. The rest of the queries are
functioning properly. A
On Tue, Jan 5, 2010 at 2:24 PM, Peter A. Kirk wrote:
> Thanks for the answer. How does one "reload" a core? Is there an API, or a
> url one can use?
>
I think this should be it - http://wiki.apache.org/solr/CoreAdmin#RELOAD
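In short it's just a request to the CoreAdmin handler, something like this
(the core name "core0" is only a placeholder):

  curl 'http://localhost:8983/solr/admin/cores?action=RELOAD&core=core0'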
--
- Siddhant
On Tue, Dec 22, 2009 at 12:01 PM, Ryan Kennedy wrote:
> This approach will be limited to applying a "global" rank to all the
> documents, which may have some unintended consequences. The most
> popular document in your index will be the most popular, even for
> queries for which it was never clic
to user clicks"? There are quite a few things in my head.
> Do you maybe have a citation that inspired you here?
>
> paul
>
>
> On 17 Dec 2009, at 13:52, Siddhant Goel wrote:
>
>
> Does Solr provide adaptive searching? Can it adapt to user clicks within
>> the
>>
Hi,
Does Solr provide adaptive searching? Can it adapt to user clicks within the
search results it provides? Or does that have to be done externally?
I couldn't find anything by googling for it.
Thanks,
--
- Siddhant