I'm excited to announce the release of Lucid's certified distribution
for Solr. Below is the marketing blurb, but a bit of a personal take
first.
The reference guide is a must-have for all of us (including myself);
many great details about how all the config options work, etc. And
it's
On Jan 7, 2010, at 10:18 PM, Andy wrote:
Oh I see.
Is "popularityboost" the name of the parameter?
{!boost b=$popularityboost v=$qq}
log(popularity)
popularityboost is entirely arbitrary here. Use whatever name you
like, it's a simple substitution in the q p
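Put together, the full set of request parameters would look something like this (the qq value and the popularity field are just illustrative):

```
q={!boost b=$popularityboost v=$qq}
&qq=ipod
&popularityboost=log(popularity)
```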
Hello,
I would like to know: by just copying the solr.war file into my existing
Solr 1.3 installation, is Lucene also upgraded to the current 2.9?
I believe a reindex is not necessary; is that correct?
Is there anything else apart from this that I need to do to upgrade to the
latest Lucen
Can anyone help me out with this? It'd be really appreciated :)
2010/1/7 Oliver Beattie
> Hey,
>
> I'm doing a query which involves using a frange in the filter query — and
> I was wondering if there is a way of combining the frange with other
> parameters. Something like ({!frange l=x u=y)*do st
You can mix and match all sorts of things with Solr's flexible query
parsing capabilities.
See the great blog entry by Yonik here:
http://www.lucidimagination.com/blog/2009/03/31/nested-queries-in-solr/
Erik
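For example, here is a sketch of combining a frange clause with a regular query via the nested _query_ syntax from that post (the field name and bounds are made up):

```
q=ipod AND _query_:"{!frange l=0 u=100}log(popularity)"
```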
On Jan 8, 2010, at 5:30 AM, Oliver Beattie wrote:
Can anyone help me out
On Jan 8, 2010, at 4:14 AM, revas wrote:
I would like to know if by just copying the solr.war file to my
existing
solr 1.3 installation ,lucene version is also upgraded to current
2.9 ?
Yes, Lucene 2.9 is built into solr.war, so you're automatically
upgrading that too.
I believe rei
If you're using SolrJ, QueryResponse#getLimitingFacets() does what you
want, but still it is doing client-side filtering of the values less
than the number found.
Currently that's the only way to do it.
Erik
On Jan 7, 2010, at 5:32 PM, joeMcElroy wrote:
Hi there
If i have two do
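The client-side filtering Erik describes can be sketched in a few lines of Python (a hypothetical stand-in for what SolrJ's QueryResponse#getLimitingFacets() does, assuming the facet counts arrive as a value-to-count mapping):

```python
def limiting_facets(facet_counts, num_found):
    """Keep only facet values that would actually narrow the result set:
    a value whose count equals the total number of hits matches every
    document, so it is not 'limiting' and gets filtered out."""
    return {value: count
            for value, count in facet_counts.items()
            if count < num_found}

# Hypothetical facet counts for a query that matched 10 documents:
print(limiting_facets({"electronics": 10, "music": 4, "memory": 2}, 10))
# → {'music': 4, 'memory': 2}
```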
Thanks ,Erik.
On Fri, Jan 8, 2010 at 4:34 PM, Erik Hatcher wrote:
>
> On Jan 8, 2010, at 4:14 AM, revas wrote:
>
>> I would like to know if by just copying the solr.war file to my existing
>> solr 1.3 installation ,lucene version is also upgraded to current 2.9 ?
>>
>
> Yes, Lucene 2.9 is b
kalidoss wrote:
Hi,
I would like to split the existing index into 2 indexes (the inverse of
the merge index function).
My index directory size around 20G and 10 Million documents.
-Kalidoss.m,
I think IndexSplitter and/or MultiPassIndexSplitter are what you are
looking for:
http://hudson.zo
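A rough invocation sketch for MultiPassIndexSplitter (jar names and flags from the Lucene contrib misc usage message; double-check against your version):

```
java -cp lucene-core-2.9.jar:lucene-misc-2.9.jar \
  org.apache.lucene.index.MultiPassIndexSplitter \
  -out /path/to/output -num 2 /path/to/solr/data/index
```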
Hi Erik,
the first link on page 33 of the Solr Reference Guide is wrong.
The text of the link is http://localhost:8983/solr/select?q=video, but the
link itself points to http://localhost:8080/solr/select?q=video (the
difference is port 8983 vs. 8080).
Péter
Thanks Koji,
http://wiki.apache.org/solr/MergingSolrIndexes - this will be useful for
merging 2 different indexes; I am looking for a tool to split an index
directory into 2.
Kalidoss.m,
Koji Sekiguchi wrote:
kalidoss wrote:
Hi,
I would like to split the existing index by 2 index, ( in
Hi,
I am using solr for my faceted search implementation. It is an e-commerce
site and the facets are based on product attributes.
I have added product code to my index. The search works well if the user
enters the complete product code. However, if someone enters a partial code,
the search is fa
Dear all,
happy New Year!
For my custom searchHandler (called /list) that uses the
VelocityResponseWriter I would like to use the existing dismax
configuration. Therefore I did NOT specify ANY dismax configuration in
that custom handler.
In short: it's not using (my existing) dismax at all,
On Jan 8, 2010, at 7:03 AM, Király Péter wrote:
Hi Erik,
the first link on page 33 on the Solr Reference Guide is a wrong one.
The text of the link http://localhost:8983/solr/select?q=video, but
the
link itself points to http://localhost:8080/solr/select?q=video (the
difference
is port 89
Chantal -
You'll have to copy the configuration to your various handlers.
However, with the VelocityResponseWriter, you can use your existing
handlers and simply set wt=velocity on them by default, or set it from
the client with &wt=velocity appended to the query string. The
VelocityResp
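As a sketch, setting wt=velocity as a handler default in solrconfig.xml might look like this (the handler name and template name here are assumptions):

```
<requestHandler name="/list" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="wt">velocity</str>
    <str name="v.template">list</str>
  </lst>
</requestHandler>
```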
Let's see the schema definitions for the field in question and
query handler. I suspect that your indexing process is
splitting the word on the letter-number transitions but your
query processing isn't (WordDelimiterFactory?), but that's
only a guess in the absence of any data.
Erick
On Fri,
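For reference, the kind of index/query asymmetry Erick is guessing at would look roughly like this in schema.xml (an illustration, not Raja's actual schema):

```
<fieldType name="text_code" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- splits e.g. AB1234 into AB / 1234 at index time -->
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1" catenateAll="1"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- if the query side omits the filter, AB1234 won't match -->
  </analyzer>
</fieldType>
```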
Ah sorry - didn't realize attachments were stripped.
Here's a web version:
http://img.skitch.com/20100108-t99a1emmar32w9gkcfcius8afm.png
-Peter
On Thu, Jan 7, 2010 at 9:53 PM, Otis Gospodnetic
wrote:
> I'd love to see the screenshot, but it didn't come through - got st
Okay, you're right. It really would be cleaner if I did such stuff in the
code which populates the document to Solr.
Is there a way to prepare a document in the described way with Lucene/Solr
before I analyze it?
My use case is to categorize several documents in an automatic way, which
includes tha
Thank you for your quick answer, Erik!
I've copied the dismax configuration to the custom handler, and it works
fine, now.
I have different velocity templates (=>different custom handlers), so
from quickly reading through your other suggestion I think that would
work with only one template/vel
On Jan 8, 2010, at 9:31 AM, Chantal Ackermann wrote:
I've copied the dismax configuration to the custom handler, and it
works fine, now.
I have different velocity templates (=>different custom handlers),
so from quickly reading through your other suggestion I think that
would work with only
I get the feeling what I need to accomplish isn't necessarily in the spirit
of what solr is meant to do, but it's the problem I'm facing. Of course,
I'm a solr newbie, so this may not be as challenging as I think it is.
Domain is a little tricky, so I'll make one up. Let's say I have the
followi
I think the StatsComponent will do everything you're asking for here:
http://wiki.apache.org/solr/StatsComponent
On Jan 8, 2010, at 9:37 AM, dbashford wrote:
I get the feeling what I need to accomplish isn't necessarily in the
spirit
of what solr is meant to do, but it's the problem I'm f
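A sketch of such a StatsComponent request (the field names are made up):

```
http://localhost:8983/solr/select?q=*:*&stats=true&stats.field=price&stats.facet=category
```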
Somewhere, you have to create the document XML you
send to SOLR. Just add the calculated data to
your new field there...
HTH
Erick
On Fri, Jan 8, 2010 at 9:30 AM, MitchK wrote:
>
> Okay, you're right. It really would be cleaner, if I do such stuff in the
> code which populates the document to S
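In other words, the calculated value just becomes one more field in the add document you post (field names here are hypothetical):

```
<add>
  <doc>
    <field name="id">1</field>
    <field name="price">9.99</field>
    <!-- computed by the indexing code before posting -->
    <field name="price_with_tax">10.79</field>
  </doc>
</add>
```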
Hi Erick,
Thanks for replying. Please find the schema definitions below.
Regards,
Raja
Erick Erickson wrote:
>
> Let's see the sc
hi, thank you for your answer
i posted corrected versions on jira for both of them
On Wed, Jan 6, 2010 at 6:01 PM, Erik Hatcher wrote:
> You probably aren't doing anything wrong, other than those patches are a
> bit out of date with trunk. You might have to fight through getting them
> current
I thought this was fixed...
http://issues.apache.org/jira/browse/SOLR-1292
http://www.lucidimagination.com/search/document/57103830f0655776/stats_page_slow_in_latest_nightly
-Yonik
http://www.lucidimagination.com
It's definitely still an issue. I've seen this with at least four different
Solr implementations. It clearly seems to be a problem when there is a large
field cache. It would be bad enough if the stats.jsp was just slow to load
(usually takes 1 to 2 minutes), but when monitoring memory usage with
j
Yonik Seeley wrote:
> I thought this was fixed...
> http://issues.apache.org/jira/browse/SOLR-1292
> http://www.lucidimagination.com/search/document/57103830f0655776/stats_page_slow_in_latest_nightly
>
>
> -Yonik
> http://www.lucidimagination.com
>
It should be fixed in trunk, but that was after
Mark Miller wrote:
> Yonik Seeley wrote:
>
>> I thought this was fixed...
>> http://issues.apache.org/jira/browse/SOLR-1292
>> http://www.lucidimagination.com/search/document/57103830f0655776/stats_page_slow_in_latest_nightly
>>
>>
>> -Yonik
>> http://www.lucidimagination.com
>>
>>
> It
Hi,
I'm puzzled by this issue and was wondering if anyone knows why. Basically
I am trying to get hit counts from my solr.log.* files for analysis purposes.
However, I noticed that sometimes for a request I don't get a "hits=xyz"
shown.
Here are 2 example log snippets from my solr.log.2010_01_
Update: from my further investigation, it appears that any time I am using the
collapse field feature (I am running the collapse field patch on 1.4), the
hits= count is not shown in the log. Can anyone confirm?
michael8 wrote:
>
> Hi,
> I'm puzzled by this issue and was wondering if anyone kno
On Fri, Jan 8, 2010 at 1:03 PM, Mark Miller wrote:
> It should be fixed in trunk, but that was after 1.4. Currently, it
> should only do it if it sees insanity - which there shouldn't be any
> with stock Solr.
http://svn.apache.org/viewvc/lucene/solr/tags/release-1.4.0/src/java/org/apache/solr/se
Yonik Seeley wrote:
> On Fri, Jan 8, 2010 at 1:03 PM, Mark Miller wrote:
>
>> It should be fixed in trunk, but that was after 1.4. Currently, it
>> should only do it if it sees insanity - which there shouldn't be any
>> with stock Solr.
>>
>
> http://svn.apache.org/viewvc/lucene/solr/tags/
: >> 2009-05-28) to the Solr 1.4.0 released code. Every 3 hours we have a
: >> cron task to log some of the data from the stats.jsp page from each
: >> core (about 100 cores, most of which are small indexes).
1) what stats are you actually interested in? ... in Jay's case the
LukeRequestHandler
: We should prob switch to never calculating the size unless an explicit
: param is passed to the stats page.
I don't think that's as easy as it sounds considering stats.jsp is just
iterating over MBeans - that level of control over the Insanity checker is
abstracted away.
-Hoss
Chris Hostetter wrote:
> : We should prob switch to never calculating the size unless an explicit
> : param is passed to the stats page.
>
> I don't think that's as easy as it sounds considering stats.jsp is just
> iterating over MBeans - that level of control over the Insanity checker is
> abstra
: Is there a way to prepare a document the described way with Lucene/Solr,
: before I analyze it?
: My use case is to categorize several documents in an automatic way, which
: includes that I have to "create" data from the given input doing some
: information retrieval.
As Ryan mentioned earlier:
: That's just setting the solr/home environment not the user.dir variable.
: I have that already set. But when I got to the solr/admin page, at the
: top it shows the Solr Admin(schemaname), hostname, and cwd=/root,
: SolrHome=/opt/solr.
:
: How do I get cwd=/root not to be that but to be se
: If you're using SolrJ, QueryResponse#getLimitingFacets() does what you want,
: but still it is doing client-side filtering of the values less than the number
: found.
:
: Currently that's the only way to do it.
I've opened an issue to track this as a new feature, and posted some ideas
I have
Actually my cases were all with customers I work with, not just one case. A
common practice is to monitor cache stats to tune the caches properly. Also,
noting the warmup times for new IndexSearchers, etc. I've worked with people
who have excessive auto-warm counts, which causes extremely
Why don't we create a patch that allows one to check a box to get the Field
Cache stats and then reload? Off by default.
On Jan 8, 2010, at 2:33 PM, Jay Hill wrote:
> Actually my cases were all with customers I work with, not just one case. A
> common practice is to monitor cache stats to tune
On Fri, Jan 8, 2010 at 1:38 PM, Mark Miller wrote:
> Yonik Seeley wrote:
> So people seeing this should also be seeing an insanity count over one.
>
> I'd think that would be rarer than this sounds like, though ... what's
> left that could cause insanity?
faceting on a field, ord(field), and
Yonik Seeley wrote:
> ord(field)
I thought you used those leaf readers to get around top level for ord?
--
- Mark
http://www.lucidimagination.com
On Jan 8, 2010, at 3:03 PM, Jonas Bosson wrote:
>
>>
>> Being all about the open source myself, why would I want to use it? The
>> advantage to me personally include the pre-configured features such as
>> clustering (which doesn't come fully functional with Solr itself due to
>> Apache licen
On Fri, Jan 8, 2010 at 3:02 PM, Mark Miller wrote:
> Yonik Seeley wrote:
>> ord(field)
> I thought you used those leaf readers to get around top level for ord?
ord()/rord() now works via top()... by popping back to the top level reader.
-Yonik
http://www.lucidimagination.com
: working well. The only caveat to this is that the reverse sort results
: don't include 0-count facets (see notes in SOLR-1672), so reverse sort
...
: believe patching to include 0 counts could open a can of worms in terms
: of b/w compat and performance, as 0 counts look to be skipped
>
> now i'm totally confused: what are you suggesting this new param would
> do, what does the name mean?
>
Sorry, I wasn't clear - there isn't a new parameter, except the one added in the
patch. What I was suggesting here is to do the work
to remove the new parameter I just put in (facet.sorto
I've seen many mentions of the Lucene CheckIndex tool, but where can I find it?
Is there any documentation on how to use it?
I noticed Luke has it built in, but I can't get Luke to open my index with the
"Don't open IndexReader (when opening corrupted index)" option checked. Opening
even an index
When I needed to use it, I couldn't find docs for it either, but it's
straightforward. Here's what I did:
un-jar the solr war file to find the lucene jar that solr was using and run
CheckIndex like this
java -cp lucene-core-2.9-dev.jar org.apache.lucene.index.CheckIndex
/path/to/solr/data/index
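The same recipe as a shell sketch (jar names vary by release; CheckIndex also has a -fix option, which deletes documents in corrupt segments, so back up the index first):

```
# extract the Lucene core jar bundled in the war
unzip -j apache-solr-1.4.0.war 'WEB-INF/lib/lucene-core-*.jar'
java -cp lucene-core-2.9.1.jar org.apache.lucene.index.CheckIndex \
  /path/to/solr/data/index
```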
Hello all, what do I need to do to make a query parser that works just
like the standard query parser but also runs analyzers/tokenizers on a
wildcarded term? Specifically, I'm looking to wildcard only the last
token.
I've tried the edismax qparser and the prefix qparser and neither is
exactly what I
Yeah that worked. Thanks!
-Original Message-
From: Ian Kallen [mailto:spidaman.l...@gmail.com]
Sent: Friday, January 08, 2010 5:32 PM
To: solr-user@lucene.apache.org
Subject: Re: checkindex
When I needed to use it, I couldn't find docs for it either but it's straight
forward. Here's wha
: To be brief, looking at query string,
: 1) if there are no fields supplied as part of the query, I need to
: search for the input-query terms in a set of default field list
: including support for field boosting.
:
: 2) If the user has supplied some field name as part of the query then
: I
: My 'inspiration' for the ord method was actually the Solr 1.4 Enterprise
: Search server book. Page 126 has a section 'using reciprocals and rord
: with dates'. You should let those guys know what's up!
1) Although it's not 100% obvious, those examples in the book use the
"DateField" type --
: I want to modify scoring to ignore term frequency > 1. This is useful
...
: fields. What's the best way to do this? Is there a way to use a
: field-specific similarity class, or to evaluate field names/parameters
: inside a Similarity class?
The Similarity API lets you customize the