Hi,
I am asking about filtering after clustering. Faceting is based on a
single field, so if we need to filter we can search on the related field. But
a cluster is created from multiple fields, so how can we create a
filter for that?
Example:
after clustering you get the following
Hi Walter,
That makes sense, but this has always been a multi-core setup, so the paths
have not changed, and the clustering component worked fine for core0. The
only thing new is that I have fine-tuned core1 (to begin implementing it).
Previously the solrconfig.xml file was very basic. I replaced
The "docs" array contained in each cluster contains ids of documents
belonging to the cluster, so for each id you need to look up the document's
content, which comes earlier in the response (in the response/docs array).
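For illustration, a minimal SolrJ sketch of that lookup (assuming rsp is the
QueryResponse of the clustering request and the unique key field is "id"; the
"clusters"/"labels"/"docs" entry names follow the clustering component's response):

  // Build an id -> document map from response/docs ...
  SolrDocumentList results = rsp.getResults();
  Map<String, SolrDocument> byId = new HashMap<String, SolrDocument>();
  for (SolrDocument doc : results) {
    byId.put(String.valueOf(doc.getFieldValue("id")), doc);
  }

  // ... then resolve each cluster's doc ids against it.
  List<NamedList<Object>> clusters =
      (List<NamedList<Object>>) rsp.getResponse().get("clusters");
  for (NamedList<Object> cluster : clusters) {
    List<String> labels = (List<String>) cluster.get("labels");   // cluster labels, e.g. for display
    List<Object> docIds = (List<Object>) cluster.get("docs");
    for (Object docId : docIds) {
      SolrDocument doc = byId.get(String.valueOf(docId));
      // doc now holds the full content of this cluster member
    }
  }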
Cheers,
Staszek
On Thu, Jun 30, 2011 at 11:50, Romi wrote:
> wanted to use
Hello Erik,
Thank you for your help.
I understand that we need to delete the folder, but how do I undeploy solr.war,
and where can I find it?
If anyone can send me a document on how to uninstall the Solr software, that would be great.
Regards,
Gaurav Pareek
--
Sent via Nokia Email
--Original message
Hi François,
it is indeed being stemmed, thanks a lot for the heads up. It appears
that stemming is also configured for the query so it should work just
the same, no?
Thanks again.
Regards,
Celso
2011/6/30 François Schiettecatte :
> I would run that word through the analyzer, I suspect that th
Hi again,
read (past tense) TFM :-) and:
"On wildcard and fuzzy searches, no text analysis is performed on the
search word."
Thanks a lot François!
Regards,
Celso
On Fri, Jul 1, 2011 at 10:02 AM, Celso Pinto wrote:
> Hi François,
>
> it is indeed being stemmed, thanks a lot for the heads up.
I want to include both clustering and the spellchecker in my search results, but
I am able to include only one at a time: only the one for which
I am setting default=true. How can I include both
clustering and the spellchecker in my results?
-
Thanks & Regards
Romi
Use a custom request handler and define both components in it, as shown in the
examples for the individual request handlers.
> I want to include both clustering and spellchecker in my search results.
> but at a time i am able to include only one. Only one, with which
> i am setting default=true. than how ca
Would you please give me an example of a custom request handler?
-
Thanks & Regards
Romi
My index files are these; I want to see the effect of mergeFactor and maxMergeDocs
on these indexes. How can I do it?
_0.fdt 3310 KB
_0.fdx 23 KB
_0.fnm 1 KB
_0.frq 857 KB
_0.nrm 31 KB
_0.prx 1748 KB
_0.tii 5 KB
_0.tis 350 KB
I mean, what test cases for mergeFactor and maxMergeDocs can I run?
This example loads two fictional components. Use spellcheck and clustering
instead.
[solrconfig.xml excerpt, lines 704-764; the XML itself was stripped by the list
archive -- only the source line numbers and the values "explicit" and "10" survive]
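A hedged sketch of what such a handler can look like (the handler name "/combined"
is made up; it assumes searchComponents named "spellcheck" and "clustering" are
defined elsewhere in solrconfig.xml, as in the example configuration):

  <requestHandler name="/combined" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
      <!-- turn both components on by default -->
      <str name="spellcheck">true</str>
      <bool name="clustering">true</bool>
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
      <str>clustering</str>
    </arr>
  </requestHandler>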
> would you please give me an example for custom request handler
>
> -
I think that's all you can do, although there is a callback-style
interface that might save some time (or space). You still need to
iterate over all of the vectors, at least until you get the one you want.
-Mike
On 6/30/2011 4:53 PM, Jamie Johnson wrote:
Perhaps a better question, is this po
I am using Solr for indexing and searching in my application. I am facing
a strange problem when querying with wildcards. When I search for di?mo?d, I
get results for diamond, but when I search for diamo?? I get no results. What
could be the reason? Please tell me.
-
Thanks & Regards
Romi
How would I know which ones were the ones I wanted? I don't see how,
from a query, I could match up the term vectors that met the query.
It seems like what needs to be done is to do the highlighting on the Solr
end, where you have more access to the information I'm looking for.
Sound about right?
On
I don't use dismax, but I do something similar with a regular query. I have a
field defined in my schema.xml called 'dummy' (not sure why it's called that,
actually), and it defaults to 1 on every document indexed. So, say I want to
give a score bump to documents that have an image, I can do queries like
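(The archived message is cut off here. As a purely hypothetical illustration of
the trick, with a made-up has_image field:)

  q=(canon powershot) AND (has_image:true^5 OR dummy:1)

Every document matches the second clause via dummy:1, but documents that also
match has_image:true pick up the extra boost.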
I believe that is not a setting; it's not telling you that you have 'optimize
turned on', it's a state: your index is currently optimized. If you index a
new document or delete an existing document, and don't issue an optimize
command, then your index should show optimize=false.
Hello,
I have made my own SQL function (isSoccerClub). In my SQL query browser this
works fine. My query looks like:
select *
from soccer
where isSoccerClub(id,name) = 1;
Now I try to use this with the DIH. It looks like this:
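(The data-config.xml snippet did not survive in the archive. Roughly, a DIH entity
using that query would look something like the following, with the data source
details and field mappings adjusted to the actual setup:)

  <dataConfig>
    <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
                url="jdbc:mysql://localhost/soccer" user="..." password="..."/>
    <document>
      <entity name="soccer"
              query="select * from soccer where isSoccerClub(id,name) = 1">
        <field column="id" name="id"/>
        <field column="name" name="name"/>
      </entity>
    </document>
  </dataConfig>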
Now I get an error with the full-import: Indexing failed. Rolled
That doesn't matter for Solr; it's just executing your query via
JDBC, so the complete error message would be interesting. Have a look
at the error log of your SQL server too (especially for the timeframe
while the dataimport is running).
regards
Stefan
On Fri, Jul 1, 2011 at 2:52 PM, roySolr
On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley wrote:
> Hello-
>
> I'm looking for a way to find all the links from a set of results. Consider:
>
>
> id:1
> type:X
> link:a
> link:b
>
>
>
> id:2
> type:X
> link:a
> link:c
>
>
>
> id:3
> type:Y
> link:a
>
>
> Is there a way to sea
Hello!
Is it possible to have an optional nested query? I have 2 nested queries and
would like to have the first query mandatory but the second optional, i.e.
if there is a match on the second query, I would like it to improve the
score, but it is not required.
A sample query I am currently using
Put an OR between your two nested queries to ensure you're using that operator.
Also, those hl params in your first dismax don't really belong there and
should be separate parameters globally.
Erik
On Jul 1, 2011, at 06:19 , joelmats wrote:
> Hello!
>
> Is it possible to have an opti
OK, I checked my error logs and found some problems.
SET NAMES latin1
SET character_set_results = NULL
SHOW VARIABLES
SHOW COLLATION
SET autocommit=1
SET sql_mode='STRICT_TRANS_TABLES'
SET autocommit=0
select * from soccer where isSoccerClub(id,name) = 1;
I see that the sql_mode is set to ST
Celso
You are very welcome and yes I should have mentioned that wildcard searches are
not analyzed (which is a recurring theme). This also means that they are not
downcased, so the search TEST* will probably not find anything either in your
set up.
Cheers
François
On Jul 1, 2011, at 5:16 AM
I have found the problem. Some records have incorrect data. Thanks for your
help so far!
Hi guys,
For the last several days I have been trying to find a fast way to obtain all
possible values for a given field, but none of the solutions I tried were
fast enough.
I have several million documents indexed in a single Solr instance, around
7 million for now, but I want to see how far I can go.
Every
There's no general documentation on that, because it depends on exactly what
container you are using (Tomcat? Jetty? Something else?) and how you are using
it. It is confusing, but blame Java for that, nothing unique to Solr.
So since there's really nothing unique to Solr here, you could try l
On 7/1/2011 4:43 AM, Romi wrote:
My indexes are these, i want to see the effect of merge factor and maxmerge
docs. on These indexes how can i do it.
*
_0.fdt 3310 KB
_0.fdx 23 KB
_0.fnm 1 KB
_0.frq 857 KB
_0.nrm 31 KB
_0.prx 1748 KB
_0.tii 5 KB
_0.tis 350 Kb*
I mean what test cases for m
I'm working on upgrading to v3.2 from v1.4.1. I think I've got
everything working, but when I try to do a data import using
dataimport.jsp I'm rolling back and getting a class-not-found exception on
the above referenced class.
I thought that Tika was packaged up with the base Solr build now, but
On Fri, Jul 1, 2011 at 9:06 AM, Yonik Seeley wrote:
> On Thu, Jun 30, 2011 at 6:19 PM, Ryan McKinley wrote:
>> Hello-
>>
>> I'm looking for a way to find all the links from a set of results. Consider:
>>
>>
>> id:1
>> type:X
>> link:a
>> link:b
>>
>>
>>
>> id:2
>> type:X
>> link:a
>>
Hi,
when I restart my Solr server it performs two warming queries.
When a request occurs during this warm-up there is an exception, and then
there are always exceptions until I restart Solr.
Logfile:
INFO: Added SolrEventListener:
org.apache.solr.core.QuerySenderListener{queries=[{q=solr,start=0,rows=10},
{q=rocks,start
I recently upgraded all systems for indexing and searching to Lucene/Solr 3.1,
and unfortunately it seems there are a lot more changes under the hood than
there used to be.
I have a Java-based indexer and a Solr-based searcher; on the Java end, for
the indexing, this is what I have:
Set nostopwords =
I guess what I'm asking is how to set up Solr/Lucene to find
yale l.j.
yale l. j.
yale l j
as all the same thing.
On 7/1/2011 9:23 AM, Tod wrote:
I'm working on upgrading to v3.2 from v 1.4.1. I think I've got
everything working but when I try to do a data import using
dataimport.jsp I'm rolling back and getting class not found exception
on the above referenced class.
I thought that tika was packaged up
On 07/01/2011 12:59 PM, Shawn Heisey wrote:
On 7/1/2011 9:23 AM, Tod wrote:
I'm working on upgrading to v3.2 from v 1.4.1. I think I've got
everything working but when I try to do a data import using
dataimport.jsp I'm rolling back and getting class not found exception
on the above referenced cl
So it seems the entries in the queryResultCache have no TTL. I'm just curious
how it works if I reindex something with new info. I am going to be
reindexing things often (I'd sort by last login, and this changes fast).
I've been stepping through the code, and of course if the same queries come
in it
I'm a bit puzzled while trying to adapt some pagination code in
JavaScript to a grouped query.
I'm using:
'group' : 'true',
'group.limit' : 5, // something to show ...
'group.field' : [ 'bt.nearDupCluster', 'bt.nearStoryCluster' ]
and displaying each field's worth in a tab. How do I work 'star
Hi,
I am beginning to learn Solr. I am trying to read data from Solr MoreLikeThis
through Java. My query is
http://localhost:8983/solr/select?q=repository_id:20&mlt=true&mlt.fl=filename&mlt.mindf=1&mlt.mintf=1&debugQuery=on&mlt.interestingTerms=detail&indent=true
http://localhost:8983/solr/select
Hi,
I am a beginner in Solr. I am trying to read data from Solr MoreLikeThis
through Java. My query is
http://localhost:8983/solr/select?q=repository_id:20&mlt=true&mlt.fl=filename&mlt.mindf=1&mlt.mintf=1&debugQuery=on&mlt.interestingTerms=detail
I wanted to read the data of the field "moreLikeThis"
I'm not sure I understand what you want to do. To paginate with groups you
can use "start" and "rows" as with ungrouped queries. With "group.ngroups"
(something I found a couple of days ago) you can show the total number of
groups. "group.limit" tells Solr how many documents (max) you want to see
for each group.
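For example, a grouped request could look something like this (using one of the
group fields from your original message):

  /select?q=*:*&group=true&group.field=bt.nearDupCluster&group.limit=5&group.ngroups=true&start=0&rows=10

Here start and rows page through the groups, group.limit caps the documents
returned per group, and group.ngroups adds the total group count to the response.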
Hello all,
What are we doing incorrectly with this query?
http://10.0.0.121:8080/solr/select?q=(description:rifle)&fq=(transactionDate:[NOW-30DAY/DAY TO NOW/DAY] AND {!bbox sfield=storeLocation pt=32.73,-96.97 d=20})
If we leave the transactionDate field out of the filter query string, the
query
I'm a Solr novice, so I hope I'm missing something obvious. When I run a
search in the Admin view, everything works fine. When I do the same search in
http://localhost:8983/solr/browse, I invariably get "0 results found". What
am I missing? Are these not supposed to be searching the same in
Hi, currently in Solr, updated documents don't actually change until you
issue a "commit" operation (the same happens with new and deleted documents).
After the commit operation, all caches are flushed. That's why there is no
TTL: all documents in the cache remain up to date with the index until a com
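(For reference, a commit can be issued with something like the following,
assuming the default example host and port:)

  curl http://localhost:8983/solr/update -H 'Content-type:text/xml' --data-binary '<commit/>'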
What takes the place of response.response.numFound?
2011/7/1 Tomás Fernández Löbbe :
> I'm not sure I understand what you want to do. To paginate with groups you
> can use "start" and "rows" as with ungrouped queries. with "group.ngroups"
> (Something I found a couple of days ago) you can show t
Are you using group.main=true?
I didn't see the code for this and the documentation doesn't specify it, but
I tried "group.ngroups=true", and when using "group.main=true" the "ngroups"
attribute is not brought back. If you are not using "group.main=true", then
by setting "group.ngroups=true" you'll get the "ngroups" attribute in the response.
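For illustration, with group.ngroups=true (and without group.main) the counts
show up under the grouped section of the response rather than in
response.numFound; roughly like this (field name from the earlier message,
numbers made up):

  <lst name="grouped">
    <lst name="bt.nearDupCluster">
      <int name="matches">1234</int>
      <int name="ngroups">87</int>
      <arr name="groups">
        ...
      </arr>
    </lst>
  </lst>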
: when I restart my solr server it performs two warming queries.
: When a request occures within this there is an exception and always
: exceptions until i restart solr.
what type of request?
what is the initial exception?
what are the subsequent exceptions until restart?
what do the logs looks l
I am trying to import from one Solr index to another (with a different schema)
using the DataImportHandler via HTTP. However, there are dynamic fields in the
source that I need to import. In schema.xml, this field has been declared as:
[the dynamicField declaration was stripped by the list archive]
When I query Solr, this comes up:
2011-05-31T00:00:00Z201
Thanks!
I was wondering why my highlighting wasn't working either.
Hi,
As far as I know, there's no specific method to get the MoreLikeThis section
from the response. Anyway, you can retrieve the results with a piece of code
like the following:
// "moreLikeThis" is a NamedList of SolrDocumentLists
NamedList mltResult = (NamedList) response.getResponse().get("moreLikeThis");
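From there, a possible way to walk it (a hedged sketch rather than a dedicated
API; each entry's key is the id of a document from the main result set and its
value is the SolrDocumentList of similar documents):

  for (int i = 0; i < mltResult.size(); i++) {
    String docId = mltResult.getName(i);                    // id of the source document
    SolrDocumentList similar = (SolrDocumentList) mltResult.getVal(i);
    for (SolrDocument doc : similar) {
      // doc is one of the MoreLikeThis matches for docId
    }
  }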
Thanks for the quick reply! I see there's no way to access the result cache.
I actually want to access the result cache in a new component I have,
which runs after the query, but it seems this is impossible. I guess I'm
just going to rebuild the code to make it public or something, as I need the
Yes, that's right. But at the moment the HL code basically has to
reconstruct and re-run your query - it doesn't have any special
knowledge. There's some work going on to try and fix that, but it seems
like it's going to require some fairly major deep re-plumbing.
-Mike
On 07/01/2011 07:54
By default, /browse is using the following config (the XML tags were stripped
by the list archive; only the parameter values survive):
explicit
velocity
browse
layout
Solritas
edismax
*:*
10
*,score
text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4
text,fea
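Mapping those values back onto the stock example solrconfig.xml of this release
(an assumption, since the element names are gone), the handler is roughly:

  <requestHandler name="/browse" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">velocity</str>
      <str name="v.template">browse</str>
      <str name="v.layout">layout</str>
      <str name="title">Solritas</str>
      <str name="defType">edismax</str>
      <str name="q.alt">*:*</str>
      <str name="rows">10</str>
      <str name="fl">*,score</str>
      <!-- the remaining stripped values are the field boosts
           (text^0.5 features^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1 cat^1.4),
           used for qf/mlt.qf, and a field list that is cut off ("text,fea...") -->
    </lst>
  </requestHandler>

In other words, /browse queries with edismax against the example schema's fields,
which may explain why it behaves differently from a plain query on the admin page.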
SOLR-1499 is a DIH plugin that reads from another Solr.
https://issues.apache.org/jira/browse/SOLR-1499
It is not in active development, but is being updated to current source trees.
Lance
On Fri, Jul 1, 2011 at 12:51 PM, randolf.julian
wrote:
> I am trying to import from one SOLR index to ano
I'm using a version taken from the trunk some time ago. I'm not
setting groups.main, I just started setting groups.ngroups, and
nothing doing. So I guess I don't have a new enough grab from the
trunk.
2011/7/1 Tomás Fernández Löbbe :
> are you using group.main=true?
> I didn't see the code fo
Hello to all,
Is it possible to make Solr return only documents that contain all or
most of my query terms in a specific field? Or will I need some
post-processing of the results?
So, for example, if I search for (a b c), I would like the following documents
returned:
a b c
a' c b (
I am trying to index CSV data in multicore setup using post.jar.
Here is what I have tried so far:
1) Started the server using "java -Dsolr.solr.home=multicore -jar
start.jar"
2a) Tried to post to "localhost:8983/solr/core0/update/csv" using "java
-Dcommit=no -Durl=http://localhost:8983/solr/core
Hi.
By the way, your uses of parentheses are completely superfluous.
You can't just plop that "{!" syntax anywhere you please; it only works at
the beginning of a query, to establish the query parser for the rest of the
string and/or to set "local-params". There is a hacky sub-query syntax:
... AN
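(The archived reply is cut off here. For illustration only, the filter from the
earlier message could be written either as two separate fq parameters or with
the _query_ hack:)

  fq=transactionDate:[NOW-30DAY/DAY TO NOW/DAY]&fq={!bbox sfield=storeLocation pt=32.73,-96.97 d=20}

or

  fq=transactionDate:[NOW-30DAY/DAY TO NOW/DAY] AND _query_:"{!bbox sfield=storeLocation pt=32.73,-96.97 d=20}"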