Will merely adding fl=score make a difference in the search results? I mean,
will I get the desired results now?
-
Thanks & Regards
Romi
Add score to the fl parameter.
fl=*,score
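For illustration, a minimal query sketch (hostname, port and the query term are assumptions; qt names the dismax handler configured in this thread):

  http://localhost:8983/solr/select?q=diamond&qt=dismax&fl=*,score

Note that returning the score does not change the ranking itself; it only makes the computed score visible on each returned document, so you can check whether the description matches really rank higher.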
On 7/4/11 11:09 PM, "Romi" wrote:
>I am not returning the score for the queries, as I supposed it should be
>reflected in the search results, i.e. a doc having the query string in the
>description field should come higher than one having the query string in the
>name field.
>
>And
I want to enable boosting for queries and search results. My dismax
requestHandler configuration is as follows:
  <str name="echoParams">explicit</str>
  <str name="defType">dismax</str>
  <float name="tie">0.01</float>
  <str name="qf">text^0.5 name^1.0 description^1.5</str>
  <str name="fl">UID_PK,name,price,description,score</str>
  <str name="mm">2&lt;-1 5&lt;-2 6&lt;90%</str>
  <int name="ps">100</int>
Hi,
I am getting the following two errors in my Solr log file:
SEVERE: SolrIndexWriter was not closed prior to finalize(), indicates a bug --
POSSIBLE RESOURCE LEAK!!!
and
SEVERE: org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
out: NativeFSLock@/home/solr/simplify360/mult
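Not a diagnosis of the root cause (the usual suspects are two Solr instances or cores writing to the same index directory, or an unclean shutdown that left a stale lock file), but for reference, solrconfig.xml has a setting that tells Solr to clear a leftover lock on startup; use it only if you are sure no other writer is using that index:

  <mainIndex>
    <unlockOnStartup>true</unlockOnStartup>
  </mainIndex>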
I am not returning the score for the queries, as I supposed it should be
reflected in the search results, i.e. a doc having the query string in the
description field should come higher than one having the query string in the
name field.
And yes, I restarted Solr after making the changes in the configuration.
-
Thanks & Regards
Romi
Yes, I agree with Filype Pereira. Please describe your problem in detail and
check everything he says. Please also check port 8080.
-
Regards
Nilay Tiwari
Hi all,
I want to provide full-text search for some "small" websites.
It seems cloud computing is popular now, and it would save costs
because it doesn't require employing an engineer to maintain
the machine.
For now, there are many services such as Amazon S3, Google App
Engine, MS Azure, etc. I am
: I will be more clear on the steps that I would like to take:
: 1) Call the analyzer of Solr that returns me an XML response in the
: following format (just a snippet as example)
...
: 2) now I would like to be able to extract the info that I need from there
: and tell Solr directly whic
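If step 1 refers to Solr's field analysis request handler, here is a hedged sketch of the kind of call that returns the per-tokenizer/per-filter breakdown as XML (host, port and field name are assumptions; the handler must be registered in solrconfig.xml, as it is in the example configuration):

  http://localhost:8983/solr/analysis/field?analysis.fieldname=text&analysis.fieldvalue=The+Quick+Brown+Fox&wt=xml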
Gee, I was about to post. I figured my issue is that of computing the unique
terms per document. One approach to compute that value is to run the
analyzer on the document before calling addDocument and count the
number of tokens.
Then I can invoke addDocument with the value of the field co
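A rough sketch of that idea, assuming direct access to the Analyzer used for the field (the class name and method are made up for illustration):

  import java.io.IOException;
  import java.io.StringReader;
  import java.util.HashSet;
  import java.util.Set;

  import org.apache.lucene.analysis.Analyzer;
  import org.apache.lucene.analysis.TokenStream;
  import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

  public class UniqueTermCounter {
    /** Runs the analyzer over the text and counts the distinct tokens it produces. */
    public static int countUniqueTerms(Analyzer analyzer, String fieldName, String text)
        throws IOException {
      Set<String> terms = new HashSet<String>();
      TokenStream ts = analyzer.tokenStream(fieldName, new StringReader(text));
      CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        terms.add(termAtt.toString());
      }
      ts.end();
      ts.close();
      return terms.size();
    }
  }

The resulting count can then be added as an extra field on the document before addDocument is called.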
: Sorry for the double post but in this case, is it possible for me to access
: the queryResultCache in my component and play with it? Ideally what I want
: is this:
it could be possible to do what you're describing, but it would probably be
fairly brittle.
i know you said earlier that you can't
You can create a custom update processor. The passed AddUpdateCommand object
has an accessor to the SolrInputDocument you're about to add. In the
processAdd method you can add a new field with whatever you want.
The wiki has a good example:
http://wiki.apache.org/solr/UpdateRequestProcessor
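A minimal sketch of such a processor (class and field names are placeholders; it is wired into an update chain in solrconfig.xml as described on the wiki page above):

  import java.io.IOException;

  import org.apache.solr.common.SolrInputDocument;
  import org.apache.solr.request.SolrQueryRequest;
  import org.apache.solr.response.SolrQueryResponse;
  import org.apache.solr.update.AddUpdateCommand;
  import org.apache.solr.update.processor.UpdateRequestProcessor;
  import org.apache.solr.update.processor.UpdateRequestProcessorFactory;

  public class AddMyFieldProcessorFactory extends UpdateRequestProcessorFactory {
    @Override
    public UpdateRequestProcessor getInstance(SolrQueryRequest req,
        SolrQueryResponse rsp, UpdateRequestProcessor next) {
      return new AddMyFieldProcessor(next);
    }

    static class AddMyFieldProcessor extends UpdateRequestProcessor {
      AddMyFieldProcessor(UpdateRequestProcessor next) {
        super(next);
      }

      @Override
      public void processAdd(AddUpdateCommand cmd) throws IOException {
        // Grab the document about to be indexed and enrich it.
        SolrInputDocument doc = cmd.getSolrInputDocument();
        doc.addField("my_computed_field", "whatever you want");
        // Hand off to the rest of the processor chain.
        super.processAdd(cmd);
      }
    }
  }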
>
It's hard to tell what is happening without more details about your setup.
I would start by asking the questions:
- Do you have a firewall installed?
- What operating system do you run Solr on?
- Can you ping the hostname "localhost"?
Filype
On Tue, Jul 5, 2011 at 4:49 AM, wrote:
> I use nut
: I recently upgraded all systems for indexing and searching to lucene/solr 3.1,
: and unfortunately it seems there are a lot more changes under the hood than
: there used to be.
it sounds like you are saying you had a system that was working fine for
you, but when you tried to upgrade it stopped working
On Tue, Jul 5, 2011 at 12:03 AM, Chris Hostetter
wrote:
>
> : The index version shown on the dashboard is the time at which the most
> : recent index segment was created. I'm not sure why it has a value older
> than
> : a month if a commit has happened after that time.
>
> I'm fairly certain that'
: The index version shown on the dashboard is the time at which the most
: recent index segment was created. I'm not sure why it has a value older than
: a month if a commit has happened after that time.
I'm fairly certain that's false.
last time i checked, newly created indexes are assigned a v
On Mon, Jul 4, 2011 at 5:47 PM, Engy Morsy wrote:
>
> What is the workflow of solr starting from submitting an xml document to be
> indexed? Is there any default analyzer that is called before the analyzer
> specified in my solr schema for the text field. I have a situation where the
> words of t
From my exploration so far, I understood that we can opt for Solr straightaway
if the index changes are kept to a minimum. However, mine is absolutely the
opposite. I'm still vague about the right solution for the scenario
mentioned.
Please share.
On Mon, Jul 4, 2011 at 6:28 PM, fire fox wrote:
>
I use nutch as a search engine. Until now nutch did the crawl and the
search functions. The newest version, however, delegated the search to
solr. I know almost nothing about programming, but I'm able to
follow a recipe. So I went to the solr site, downloaded solr and
tried to follow
Hi,
I've tried to add the params group=true and group.field=myfield by
using SolrQuery, but the result is null. Do I have to configure something?
In the wiki page on field collapsing I couldn't find anything.
Thanks
Per
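A hedged SolrJ sketch of setting those parameters (the server URL and field name are assumptions, and the 3.x-era CommonsHttpSolrServer is used). Field collapsing needs Solr 3.3+ on the server side, and older SolrJ releases don't expose dedicated grouping objects, so this reads the raw response rather than a typed accessor:

  import org.apache.solr.client.solrj.SolrQuery;
  import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
  import org.apache.solr.client.solrj.response.QueryResponse;
  import org.apache.solr.common.util.NamedList;

  public class GroupingExample {
    public static void main(String[] args) throws Exception {
      CommonsHttpSolrServer server =
          new CommonsHttpSolrServer("http://localhost:8983/solr");

      SolrQuery query = new SolrQuery("*:*");
      query.set("group", true);              // enable result grouping
      query.set("group.field", "myfield");   // group on this field

      QueryResponse rsp = server.query(query);

      // The grouped section may not be parsed into typed objects by older SolrJ,
      // so pull it out of the raw NamedList response.
      NamedList<Object> grouped = (NamedList<Object>) rsp.getResponse().get("grouped");
      System.out.println(grouped);
    }
  }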
Hi Marian,
I guess that your problem isn't related to the number of results, but to the
component's configuration. The configuration that you show is meant to set
up an autocomplete component that will suggest terms from an incomplete user
input (something similar to what google does while you're
Hello Tommaso,
I noticed that though I can see the Solr Admin interface, when I click on
the links "schema" and "conf", it's not taking me to the pages inside the
solr/conf/ folder of the webapp; again, I guess because of Eclipse paths.
This is the stack trace on console:
INFO: Solr home set to 'solr/./'
On Mon, Jul 4, 2011 at 2:07 AM, arian487 wrote:
> I guess I'll have to use something other than SolrCache to get what I want
> then. Or I could use SolrCache and just change the code (I've already done
> so much of this anyways...). Anyway, thanks for the reply.
You can specify a regenerator f
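For reference, a rough sketch of what a regenerator looks like; the class name below is a placeholder:

  import java.io.IOException;

  import org.apache.solr.search.CacheRegenerator;
  import org.apache.solr.search.SolrCache;
  import org.apache.solr.search.SolrIndexSearcher;

  public class MyRegenerator implements CacheRegenerator {
    @Override
    public boolean regenerateItem(SolrIndexSearcher newSearcher, SolrCache newCache,
        SolrCache oldCache, Object oldKey, Object oldVal) throws IOException {
      // Recompute (or simply copy) the entry for the new searcher.
      newCache.put(oldKey, oldVal);
      return true; // keep regenerating further items
    }
  }

It is wired up via the regenerator attribute on a user cache declaration in solrconfig.xml, something like <cache name="myUserCache" class="solr.LRUCache" size="4096" initialSize="1024" autowarmCount="1024" regenerator="com.example.MyRegenerator"/> (cache name and sizes are assumptions).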
Hello Tommaso,
It was indeed a relative path issue inside Eclipse. I keyed in the full
path instead of ../../ and it ran without throwing an error.
However, when I gave the path for the index as an old Lucene index directory's
path and modified schema.xml accordingly, it still says numDocs = 0,
Hi all,
There were several places where I could find a discussion on this, but I
failed to find one suited to my case.
I'd like to be clear about my requirements, so that you may suggest the
best solution.
-> A project deals with tons of database tables (with millions of records)
out of which
Hello Sowmya,
I've just made a fresh checkout from
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_3_3/ then I've
done the following:
1. cd solr
2. ant example
3. cd solr/contrib/uima
4. ant dist
5. cd ../../example
6. edit solr/conf/solrconfig.xml
7. copied-pasted lib directives:
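The lib directives themselves got cut off above; they typically look something like the following (the exact dir paths and regex depend on your layout and Solr version, so treat these as assumptions and check the contrib/uima README):

  <lib dir="../../contrib/uima/lib" />
  <lib dir="../../dist/" regex="apache-solr-uima-\d.*\.jar" />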
Hi Tommaso,
I am using Solr 3.3, which got released last week.
The README in the Solr version I have has the same info as the README
on that link.
There exists a lib element in my solrconfig.xml.
Here is my trace; from this, it seems like a class-not-found exception.
The server encou
On Mon, Jul 4, 2011 at 13:11, Romi wrote:
> I want to apply a boost for searching. I want that, if a query term occurs both
> in the description and name fields, then docs having the query term in the
> description field come higher in the search results. For this I configured the dismax request handler as:
>
> <requestHandler name="dismax" default="true">
>
Hi,
What is the workflow of solr starting from submitting an xml document to be
indexed? Is there any default analyzer that is called before the analyzer
specified in my solr schema for the text field. I have a situation where the
words of the text field that will be analyzed if somehow splitte
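As far as I know there is no hidden analyzer applied before the one in schema.xml: at index time the analyzer chain declared for the field's fieldType is what processes the text. A hedged sketch of such a chain (the type, field and filter choices here are just examples, not your actual schema):

  <fieldType name="text" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
  <field name="description" type="text" indexed="true" stored="true"/>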
On Mon, 2011-07-04 at 13:51 +0200, Jame Vaalet wrote:
> What would be the maximum size of a single SOLR index file for resulting in
> optimum search time ?
There is no clear answer. It depends on the number of (unique) terms,
number of documents, bytes on storage, storage speed, query complexity,
Hi!
I can't help you with the question about the limit to the number of
fields. But until now I haven't read anywhere that there is a limit.
So I'd assume that there is none.
For your second question:
"Another question: Is it possible to add the FACET fields automatically to my
query? facet.fiel
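Regarding adding the facet fields automatically: facet parameters can be placed in the defaults of your request handler so every query gets them without the client passing them. A sketch (handler name and field names are assumptions):

  <requestHandler name="search" class="solr.SearchHandler" default="true">
    <lst name="defaults">
      <str name="facet">true</str>
      <str name="facet.field">category</str>
      <str name="facet.field">manufacturer</str>
    </lst>
  </requestHandler>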
Hello Sowmya,
Is the problem a ClassNotFoundException?
If so, check that there exists a <lib/> element referencing the solr-uima jar.
Otherwise it may be some configuration error.
By the way, which version of Solr are you using? I ask since you're seeing
the README for trunk but you may be using Solr jars with dif
There are solutions for indexing huge data, e.g. SolrCloud,
ZooKeeper integration, multi-core and multi-shard setups.
Depending on your requirements you can choose one or the other.
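As one concrete illustration of the multi-shard route, a single query can be spread over several cores/hosts with the shards parameter (hosts, ports and core paths below are assumptions):

  http://host1:8983/solr/select?q=foo&shards=host1:8983/solr,host2:8983/solr,host3:8983/solr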
On 4 July 2011 17:21, Jame Vaalet wrote:
> Hi,
>
> What would be the maximum size of a single SOLR index file for resulting in
> o
Hi,
What would be the maximum size of a single SOLR index file for resulting in
optimum search time ?
In case I have to index all the documents in my repository (which is in the TB
range), what would be the ideal architecture to follow: distributed Solr?
Regards,
JAME VAALET
Software Developer
Hi All
I tried integrating UIMA into Solr, following the instructions here:
https://svn.apache.org/repos/asf/lucene/dev/trunk/solr/contrib/uima/README.txt
However, I get a solrconfig error when I try to run Solr as a webapp in
Eclipse.
org.apache.solr.common.SolrException: Error loading clas
Nobody? I'm still confused about this
I want to apply a boost for searching. I want that, if a query term occurs both
in the description and name fields, then docs having the query term in the
description field come higher in the search results. For this I configured the dismax request handler as:
  <str name="echoParams">explicit</str>
  <float name="tie">0.01</float>
  <str name="qf">text^0.5 na
Hi!
I want my spellchecker component to return search query suggestions,
regardless of the number of items in the search results. (Actually I'd
find it most useful in zero-hit cases...)
Currently I only get suggestions if the search returns one or more hits.
Example: q=place
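For reference, these are the standard SpellCheckComponent request parameters to experiment with; whether suggestions appear for zero-hit queries depends mostly on the component and dictionary configuration, not on these alone (the handler path and query term below are assumptions):

  http://localhost:8983/solr/spell?q=place&spellcheck=true&spellcheck.count=5&spellcheck.extendedResults=true&spellcheck.onlyMorePopular=false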
Hi,
I'm wondering, does the SolrJ @Field annotation support embedded child objects? E.g.:
class A {
  @Field
  String somefield;
  @Embedded
  B b;
}
regards,
kiwi
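I don't believe the @Field binding of that era walks into nested objects; the commonly documented pattern is a flat bean, something like the sketch below (class and field names are made up):

  import org.apache.solr.client.solrj.beans.Field;

  public class Product {
    @Field
    String id;

    @Field("name_t")      // maps this member to the name_t field in the schema
    String name;

    @Field("price_f")
    float price;
  }

  // Usage (server is a SolrServer instance):
  //   server.addBean(new Product(...));
  //   server.commit();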
How long is an average query?
I have noticed that a query with contents such as you specified can take a
while to return the hits. How big is your index?
On Mon, Jul 4, 2011 at 8:48 AM, Jason, Kim wrote:
> Hi All
> I have complex phrase queries including wildcard.
> (ex. q="conn* pho
Markus, I did it like this:
default
true
false
1
default
true
false
1
I hope I have done things correctly.
But when I run the Solr server I am getting the exception
org.apache.solr.common.SolrExceptio
Hi, I have a problem with the WordDelimiterFilterFactory and the
DelimitedPayloadTokenFilterFactory.
It seems that the payloads are applied only to the original word that I
index and the WordDelimiterFilter doesn't apply the payloads to the tokens
it generates.
For example, imagine I index the str