In case it helps, there are two Solr indexes (160GB and 700GB) on the machine.
Also, these are separate indexes, not shards, so would it help to put them on two separate Tomcat servers on the same machine? That way one index won't be affecting the other's cache.
On Wed, Jan 19, 2011 at 12:00 PM, S
Hi Isan,
It seems your index size (25GB) is much larger than your total RAM size of 4GB.
You have to do 2 things to avoid Out Of Memory problems:
1. Buy more RAM; add at least 12 GB more.
2. Increase the memory allocated to Solr by setting the Xmx value; allocate at least 12 GB to Solr.
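As a rough sketch of step 2 (assuming Solr runs inside Tomcat and is started via a setenv.sh or similar script; the path and the exact values are illustrative, not from this thread):
    # bin/setenv.sh (hypothetical): raise the JVM heap available to Solr
    export CATALINA_OPTS="$CATALINA_OPTS -Xms4g -Xmx12g"
Keep Xmx comfortably below physical RAM so the OS still has room to cache index files.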
B
I was wondering if there are binary operation filters? I haven't seen any in the book, nor was I able to find any using Google.
So if I had 0600 (octal) in a permission field, and I wanted to return any records where 'permission & 0400 (octal) == TRUE', how would I filter that?
Dennis Gearon
Signature W
Hi,
I know this is a subjective topic, but from what I have read it seems more RAM should be spared for OS caching and much less for Solr/Tomcat, even on a dedicated Solr server.
Can someone give me an idea about the theoretically ideal proportion between them for a dedicated Windows server with 32GB R
Hi Grijesh,all,
We have only a single master and are using a multicore environment, with the various indexes sized at 675MB, 516MB, 3GB, and 25GB.
The number of documents in the 3GB index is roughly around 14 lakh (1.4 million), and in the 25GB index roughly around 7 lakh (0.7 million).
Queries are fired very frequently.
ramBufferSize
Where do you get your Lucene/Solr downloads from?
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your c
On Wed, Jan 19, 2011 at 2:34 AM, Grant Ingersoll wrote:
[...]
> Where do you get your Lucene/Solr downloads from?
[...]
[X] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] I/we build them from source via an SVN/Git checkout.
Regards,
Gora
On which server [master/slave] does the Out of Memory occur?
What is your index size [GB]?
How many documents do you have?
What is your query rate per second?
How are you indexing?
What is your ramBufferSize?
-
Thanx:
Grijesh
I've run into that "GC overhead limit exceeded" message before. Try
-XX:-UseGCOverheadLimit to avoid that one; but it basically means you're running low on memory, and turning off GCOverheadLimit may just lead to a real heap space error a few seconds later.
Also, if you have free memory on the host
Hi markus,
We don't have any Xmx memory settings as such. Our Java version is 1.6.0_19 and our Solr version is the 1.4 development version. Can you please help us out?
Thanks,
Isan.
On 18 January 2011 19:54, Markus Jelsma wrote:
> Hi
>
> I haven't seen one like this before. Please provide JVM settings and Solr
Hi Erick,
Thanks for the fast reply. I kind of figured it was not supposed to be that way.
But it would have some benefits when we need to migrate from Lucene to Solr: we wouldn't have to rewrite the query-building part, right? Is there any parser that can do that?
2011/1/18 Ahmet Arslan
> > what's the alte
Hi!
I would like to announce Solr-RA, Solr with RankingAlgorithm. Solr-RA
uses the RankingAlgorithm, a new scoring and ranking algorithm instead
of Lucene to rank the searches. Solr with RA seems to enable Solr
searches to be comparable to Google site search results, and much better
than Luce
I don't fully follow, but it seems that you need only faceting on fields, not facet queries.
something like:
facet=on&q=*:*&start=0&rows=0&facet.field=category
Is this what you want?
> I'm fairly new to solr, but I'm
> running into some behaviour I can't explain.
>
> I have about 30 documents
I'm fairly new to Solr, but I'm running into some behaviour I can't explain.
I have about 30 documents in the index that are in one specific category, and no others. If I run my query and facet query with *:*, this category is not represented in the facet counts. If I search for a word in those
> I am building a faceted search and
> want the default view to show all of the facet counts.
>
> When I try submitting just a wild card like that, I get an
> error.
>
>
> '*' or '?' not allowed as first character in WildcardQuery
*:* should be just fine. It is a special match-all-docs query.
On Jan 18, 2011, at 2:24 PM, Glen Newton wrote:
> Where do you get your Lucene/Solr downloads from?
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [] I/we build them from source via a
I am building a faceted search and want the default view to show all of the
facet counts.
When I try submitting just a wild card like that, I get an error.
'*' or '?' not allowed as first character in WildcardQuery
-Original message-
From: Erick Erickson erickerick...@gmail.com
Date:
Mark your calendars today! The largest worldwide conference dedicated to Lucene
and Solr will take place in the San Francisco/Bay Area May 25-26.
The 2011 conference will build on the success of last year's Lucene Revolution
in Boston. Sponsored by Lucid Imagination with additional su
This is usually a bad idea, but if you really must, use
q=*:*&start=0&rows=1000000
assuming that there are fewer than 1,000,000 documents in your index.
And if there are more, you won't like the performance anyway.
Why do you want to do this? There might be a better solution.
Best
Erick
On
Is there a way I can simply tell the index to return its entire record set?
I tried starting and ending with just a "*" but no dice.
You could try field collapsing as a way to group fields with identical signatures together and only display one of the results.
Best
Erick
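(A hedged sketch: field collapsing was still a patch against Solr 1.4, but with the grouping support that later landed in trunk, a request along these lines would collapse duplicates on the hash field; the field name md5sum is made up:
    q=*:*&group=true&group.field=md5sum&group.limit=1
Each group then carries one representative document per distinct hash value.)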
On Tue, Jan 18, 2011 at 7:07 PM, Dan Baughman wrote:
> Hi,
>
> When I index my content, one of the fields I store is a MD5 hash sum of
> another field.
>
Hi,
When I index my content, one of the fields I store is an MD5 hash sum of another field.
I don't want this field to be used as the "key", but when I search I want to specify that I don't want duplicate results for this field.
Is there a way to do that?
Thanks,
Dan
Grant Ingersoll wrote:
> Where do you get your Lucene/Solr downloads from?
>
> [x] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [x] I/we build them from source via an SVN/Git checkout.
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
On Tue, Jan 18, 2011 at 11:53 PM, Chris Male wrote:
>>
>>
>> [X] ASF Mirrors (linked in our release announcements or via the Lucene
>> website)
>>
>> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>>
>> [] I
>
>
> [X] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your company mirrors them internally or via a
> downst
> [X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
> [X] I/we build them from source via an SVN/Git checkout.
> [] Other (someone in your company mirrors them internally or via a
downstream project)
> [] ASF Mirrors (linked in our release announcements or via
> the Lucene website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy,
> Buildr, etc.)
>
> [] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your company mirrors them internally
> or via a downst
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
> [X] I/we build them from source via an SVN/Git checkout.
> [X] Other (someone in your company mirrors them internally or via a
> downstream proje
On 18.01.2011 22:33, Steven A Rowe wrote:
>> [] ASF Mirrors (linked in our release announcements or via the Lucene
>> website)
>>
>> [x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>>
>> [x] I/we build them from source via an SVN/Git checkout.
Make sure your browser is set to UTF-8 encoding.
- Original Message
From: Otis Gospodnetic
To: solr-user@lucene.apache.org; bing...@asu.edu
Sent: Tue, January 18, 2011 10:39:16 AM
Subject: Re: Indexing and Searching Chinese with SolrNet
Bing Li,
Go to your Solr Admin page and use the
THX, Chris!
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'
[X] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a downstream
project)
K
On 18.01.2011, at 22:04, Grant Ingersoll wrote:
> As devs of Lucene/Solr, due to the way ASF mirrors, etc. works, we really
> don't have a good sense of how people get Lucene and Solr for use in their
> application. Because of this, there has been some talk of dropping Maven
> support for Luc
>
> Where do you get your Lucene/Solr downloads from?
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [X] I/we build them from source via an SVN/Git checkout.
>
Depending on the project, I either pull from ASF Mirrors or Source. However, I
do reference Maven repository when writing Java code that is built by Maven.
And it's often a pain getting it to work!
On Jan 18, 2011, at 4:23 PM, Ryan Aylward wrote:
> [X] ASF Mirrors (linked in our release annou
> Where do you get your Lucene/Solr downloads from?
>
> [x] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [x] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in
> That seems to delete the entire index and replace it with
> only the contents of
> that one entity. Is there no way to leave the index
> alone for the other
> entities and just redo that one?
>
Yes, there is a parameter named clean for that.
solr/dataimport?command=full-import&entity=myEntit
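(Assuming clean is passed like any other DataImportHandler parameter, a request that reimports one entity without wiping the rest would look something like:
    solr/dataimport?command=full-import&entity=myEntity&clean=false
With clean=false the handler does not wipe the index first, so documents from the other entities are left alone.)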
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors
> [x] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [x] I/we build them from source via an SVN/Git checkout.
Where do you get your Lucene/Solr downloads from?
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirr
[x] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
On Tue, Jan 18, 2011 at 1:24 PM, Glen Newton wrote:
> Where do you get your Lucene/Solr
>
> [X] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your company mirrors them internally or via a downst
[] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream project)
Where do you get your Lucene/Solr downloads from?
[x] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
-Glen Newton
--
-
[] ASF Mirrors (linked in our release announcements or via the Lucene website)
[X] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a downstream
project)
--
Where do you get your Lucene/Solr downloads from?
[X] ASF Mirrors (linked in our release announcements or via the Lucene website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors
Ahmet Arslan writes:
>
> > I've got a DataImportHandler set up
> > with 5 entities. I would like to do a full
> > import on just one entity. Is that possible?
> >
>
> Yes, there is a parameter named entity for that.
> solr/dataimport?command=full-import&entity=myEntity
That seem
And here's mine:
On Jan 18, 2011, at 4:04 PM, Grant Ingersoll wrote:
>
> Where do you get your Lucene/Solr downloads from?
>
> [] ASF Mirrors (linked in our release announcements or via the Lucene website)
>
> [x] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [x] I/we bui
> [X] ASF Mirrors (linked in our release announcements or via the Lucene
> website)
>
> [] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
>
> [X] I/we build them from source via an SVN/Git checkout.
>
> [] Other (someone in your company mirrors them internally or via a
> downst
As devs of Lucene/Solr, due to the way ASF mirrors, etc. works, we really don't
have a good sense of how people get Lucene and Solr for use in their
application. Because of this, there has been some talk of dropping Maven
support for Lucene artifacts (or at least make them external). Before we
> if i restart it, will i lose any data that is in memory? if so, is there a
> way around it?
Usually I've restarted the process, and on restart Solr with <unlockOnStartup>true</unlockOnStartup> in solrconfig.xml will automatically remove the lock file (actually I think it may be removed automatically when the process dies).
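(For reference, a sketch of where that setting lives in a stock solrconfig.xml of that era:
    <mainIndex>
      <unlockOnStartup>true</unlockOnStartup>
    </mainIndex>
Use it with care: it is only safe when you are certain no other process is writing to the index.)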
I have not restarted the process yet.
If I restart it, will I lose any data that is in memory? If so, is there a way around it?
Is there a way to know if there is any data waiting to be written? (If not, I will just restart...)
Thanks.
On Tue, Jan 18, 2011 at 12:23 PM, Jason Rutherglen <
jason.ru
Hi,
You get an error because LocalParams need to be at the beginning of a parameter's value, so no parenthesis first. The second query should not give an error because it's a valid query.
Anyway, i assume you're looking for :
http://wiki.apache.org/solr/SimpleFacetParameters#Multi-Select_Facet
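(The pattern from that wiki page, roughly; the field and tag names here are made up for illustration:
    q=*:*&facet=on&fq={!tag=st}state:Texas&facet.field={!ex=st}state
The {!tag=...} local param labels the filter and {!ex=...} excludes it while computing counts for that facet field, which is what gives you multi-select behaviour. Note the local params sit at the very start of the parameter value, per the above.)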
Hello, and thanks for the reply.
I've been over that page, and it doesn't seem like it helps with the pivoting aspect.
That is, if I am sorting via an existing pivot 'sum(student_id,test_grade)', I want my groups of student_id sorted by the sum of test_grade for that student_id.
The data is all
> btw where will i find the writes that have not been committed? are they all
> in memory or are they in some temp files somewhere?
The writes will be gone if they haven't been committed yet and the process fails.
> org.apache.lucene.store.LockObtainFailedException: Lock obtain timed
If it's remov
: problem, disk space is cheap. What I wanted to know was whether it is best
: to make the single field multiValued="true" or not. That is, should my
: 'content' field hold data like:
...
: or would it be better to make it a concatenated, single value field like:
functionally, the only di
:
:
: The above won't generate a UUID on its own, right?
correct.
-Hoss
The EBS volume is operational and I cannot see any errors in dmesg etc.
The only errors in catalina.out are the lock-related ones (even though I removed the lock file), and when I do a commit everything looks fine in the log.
I am using the following for the commit:
curl http://localhost:8983/solr/up
Too bad for me I guess! I was hoping there was a hidden field, perhaps an offset, one could query on. That one thing would have made this possible to do by simply querying on it.
On Jan 18, 2011, at 7:06 AM, Erick Erickson wrote:
> Ahhh, I see. I don't know of any way to do what you want.
>
> Be
Udi,
It's hard for me to tell from here, but it looks like your writes are really
not
going in at all, in which case there may be nothing (much) to salvage.
The EBS volume is mounted? And fast (try listing a bigger dir or doing
something that involves some non-trivial disk IO)?
No errors anyw
Dear Jelsma,
After configuring the Tomcat URIEncoding, Chinese characters can be processed correctly. Thank you so much for your help!
Best,
LB
On Wed, Jan 19, 2011 at 3:02 AM, Markus Jelsma
wrote:
> Hi,
>
> Yes but Tomcat might need to be configured to accept, see the wiki for more
> inform
I have not stopped writing, so I am getting this error all the time.
The commit actually seems to go through with no errors, but it does not seem to write anything to the index files (I can see this because they are old and I cannot see new stuff in search results).
My index folder is on an Amazon E
Hi,
Yes, but Tomcat might need to be configured to accept it; see the wiki for more information on this subject.
http://wiki.apache.org/solr/SolrTomcat#URI_Charset_Config
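(The relevant bit from that page is the URIEncoding attribute on the HTTP connector in Tomcat's server.xml; the port and protocol shown are just stock defaults:
    <Connector port="8080" protocol="HTTP/1.1" URIEncoding="UTF-8"/>
Without it, Tomcat decodes query-string bytes as ISO-8859-1 and multibyte characters arrive mangled.)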
Cheers,
> Dear Jelsma,
>
> My servlet container is Tomcat 7. I think it should accept Chinese
> characters. But I am not sure
Dear Jelsma,
My servlet container is Tomcat 7. I think it should accept Chinese
characters. But I am not sure how to configure it. From the console of
Tomcat, I saw that the Chinese characters in the query are not displayed
normally. However, it is fine in the Solr Admin page.
I am not sure eithe
Udi,
Hm, I don't know off the top of my head, but it sounds like "an interesting problem".
Are you getting this error while still writing to the index, or did you stop all writing?
Do you get this error when you issue a commit, or?
Is the index on the local disk, or?
Otis
Sematext :: http://sematex
It's FFRT (pronounced ...) - Far From Real Time.
To help the OP, there is a page on the Solr Wiki about what one can do with Solr and NRT search today.
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http://search-lucene.com/
- Original Message
Bing Li,
Go to your Solr Admin page and use the Analysis functionality there to enter
some Chinese text and see how it's getting analyzed at index and at search
time. This will tell you what is (or isn't) going on.
Here it looks like you just defined index-time analysis, so you should see your
Why create two threads for the same problem? Anyway, is your servlet container capable of accepting UTF-8 in the URL? Also, is SolrNet capable of handling those characters? To confirm, try a tool like curl.
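(A hedged example of such a test; host, port and query term are placeholders, and the term is percent-encoded UTF-8 for 中文:
    curl "http://localhost:8983/solr/select?q=%E4%B8%AD%E6%96%87"
If this returns the expected hits while the SolrNet client does not, the client-side encoding is the likely suspect.)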
> Dear all,
>
> After reading some pages on the Web, I created the index with the foll
Bing Li,
You can configure different analyzers in your Solr schema.xml. Have a look at the example Solr schema.xml to see how that's done.
http://search-lucene.com/?q=%2Bchinese+analyzer+schema&fc_project=Solr&fc_type=wiki
There is also SmartCN Analyzer in Lucene that you could configure in
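(A minimal sketch of such a field type; the name text_cn is made up, and the analyzer class is the SmartCN one shipped in Lucene's contrib analyzers of that era:
    <fieldType name="text_cn" class="solr.TextField">
      <analyzer class="org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer"/>
    </fieldType>
Fields of this type would then be declared with type="text_cn" in schema.xml.)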
Dear all,
After reading some pages on the Web, I created the index with the following
schema.
..
..
It must be correct, right? However, when sending a query through SolrNet
Oh, and this should not have the INFO level in my opinion. Other log lines
indicating a problem with the master (such as a time out or unreachable host)
are not flagged as INFO.
Maybe you could file a Jira ticket? Don't forget to specify your Solr version.
Also, please check the master log f
Hi,
This is a slave polling the master for its index version, but it seems the master fails to respond.
From the javadoc:
> public class NoHttpResponseException
> extends IOException
>
> Signals that the target server failed to respond with a valid HTTP
> response.
Cheers,
> I see a large numb
Sorry, never did find a solution to that.
If you do happen to figure it out, please post a reply to this thread. Thanks.
I would like to use the following field declaration to store my own COMB UUIDs (same length and format, a kind of cross between version 1 and version 4). If I leave out the default value in the declaration, would that work? I.e.:
The above won't generate a UUID on its own, right?
Dennis Ge
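(The declaration itself did not survive, but a typical Solr UUID setup looks roughly like this; whether a value is auto-generated hinges on the default="NEW" attribute, which is presumably what "leave out the default value" refers to:
    <fieldType name="uuid" class="solr.UUIDField" indexed="true"/>
    <field name="id" type="uuid" indexed="true" stored="true"/>
With default="NEW" Solr fills in a fresh UUID per document; without it, as confirmed above, nothing is generated, so a client-supplied COMB UUID passes through untouched.)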
Whoops, picked the wrong email to send my thanks to. It wasn't actually in this thread.
Dennis Gearon
- Original Message
From: Dennis Gearon
To: solr-user@lucene.apache.org
Sent: Tue, January 18, 2011 8:25:04 AM
Subject: Re: Does Solr supports indexing & search for Hebrew.
Thanks Ofer :-)
>>Schemas are very differents, i can't group them.
In contrast to what you're saying above, you may want to rethink the option of combining both types of documents in a single core.
It's a perfectly valid approach to combine heterogeneous documents in a single core in Solr (and use a specific field - say 't
On 18/01/2011 18:31, Jonathan Rochkind wrote:
Solr can't do that. Two cores are two separate cores; you have to do two separate queries and get two separate result sets.
Solr is not an RDBMS.
Yes, Solr can't do that, but what if I want this:
1. Core 1 calls Core 2 to get the label
2. Core 1 uses
Hi, all,
Now I cannot search the index when querying with Chinese keywords.
Before using Solr, I used Lucene for some time. Since I needed to crawl some Chinese sites, I used ChineseAnalyzer in the code that ran Lucene.
I know Solr is a server for Lucene. However, I have no idea how to conf
Hi,
I have a Solr server that is failing to acquire a lock, with the exception below. I think the server has a lot of uncommitted data (I am not sure how to verify this), and if so I would like to salvage it.
Any suggestions on how to proceed?
(BTW, I tried removing the lock file but it did not hel
Solr can't do that. Two cores are two separate cores; you have to do two separate queries and get two separate result sets.
Solr is not an RDBMS.
On 1/18/2011 12:24 PM, Damien Fontaine wrote:
I want to execute this query:
Schema 1 :
Schema 2 :
Query :
select?facet=true&fl=title&q=title
I want to execute this query:
Schema 1:
(the field declarations were stripped in transit; only required="true" attributes survive)
Schema 2:
(field declarations stripped here as well)
Query:
select?facet=true&fl=title&q=title:*&facet.field=UUID_location&rows=10&qt=standard
Result:
(the XML response body was stripped; only a few header values - 0, 0, true, title, title:* - survive)
Okay... and now you're trying to do what? Perhaps you could give us an example with real data: sample queries and results. Because actually I cannot imagine what you want to achieve, sorry.
On Tue, Jan 18, 2011 at 5:24 PM, Damien Fontaine wrote:
> On my first schema, there are informations
Erick,
The qt parameter does not specify the parser but the request handler to use.
Except for the confusion between parser and request handler, you're entirely right.
Cheers
On Tuesday 18 January 2011 17:37:41 Erick Erickson wrote:
> If you're trying to get to a dismax parser (named "dismax" in
>
These are legacy types that aren't, frankly, very useful in recent Solr. So
you can probably safely ignore them.
BTW, you probably want to go with Trie fields (tint, tfloat, etc) as a first
choice unless you have a definite reason not to.
Hope this helps
Erick
On Tue, Jan 18, 2011 at 10:35 AM, S
If you're trying to get to a dismax parser (named "dismax" in solrconfig.xml), you need to specify qt=dismax. NOTE: the wiki is a bit confusing on this point; the fact that the dismax parser is *named* dismax in the solrconfig.xml file is coincidence - you could name it "erick" and specify qt=erick
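(A sketch of what such a named handler looks like in solrconfig.xml; the defaults shown are illustrative:
    <requestHandler name="dismax" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="defType">dismax</str>
        <str name="qf">title^2 body</str>
      </lst>
    </requestHandler>
Rename it to "erick" and qt=erick reaches it just the same, as noted above.)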
OK thanks for bringing closure!
The "tokens" output is the total number of indexed tokens (ie, as if
you had a counter that counted all tokens produced by analysis as the
indexer consumes them).
My guess is the faulty server's hardware problem also messed up this count?
Mike
On Tue, Jan 18, 201
Thanks Ofer :-)
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e0
In my first schema, there is information about a document, like title, lead, text, etc., and many UUIDs (each UUID is a taxon's ID).
My second schema contains my taxonomies, with auto-complete and facets.
On 18/01/2011 17:06, Stefan Matheis wrote:
Search on two cores but combine the results afterw
Thanks Robert.
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036
Thanks Otis
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better
idea to learn from others’ mistakes, so you do not have to make them yourself.
from 'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'
Search on two cores but combine the results afterwards to present them in one group - or what exactly are you trying to do, Damien?
On Tue, Jan 18, 2011 at 5:04 PM, Damien Fontaine wrote:
> Hi,
>
> I would like make a search on two core with differents schemas.
>
> Sample :
>
> Schema Core1
> - ID
Hi,
I would like to make a search on two cores with different schemas.
Sample :
Schema Core1
- ID
- Label
- IDTaxon
...
Schema Core2
- IDTaxon
- Label
- Hierarchy
...
The schemas are very different; I can't group them. Do you have an idea of how to realize this search?
Thanks,
Damien
> Is there an example of how to use dismax with embedded
> Solr? I am currently
> creating my query like this:
> QueryParser parser = new
> QueryParser(Version.LUCENE_CURRENT,"content", new
> StandardAnalyzer(Version.LUCENE_CURRENT));
> Query q = parser.parse(query);
> search
Hi Marc,
Have you looked at the grouping stuff that has been committed?
http://wiki.apache.org/solr/FieldCollapsing
-Grant
On Jan 17, 2011, at 5:11 AM, Marc Sturlese wrote:
>
> I need to dive into search grouping / field collapsing again. I've seen there
> are lot's of issues about it now.
What version of Solr are you on?
On Jan 13, 2011, at 8:23 PM, Adam Estrada wrote:
> According to the documentation here:
> http://wiki.apache.org/solr/SpatialSearch the field that identifies the
> spatial point data is "sfield". See the console output below.
>
> Jan 13, 2011 6:49:40 PM org.apach
Hi,
Is there an example of how to use dismax with embedded Solr? I am currently creating my query like this:
QueryParser parser = new QueryParser(Version.LUCENE_CURRENT, "content",
    new StandardAnalyzer(Version.LUCENE_CURRENT));
Query q = parser.parse(query);
searcher.search(q
> what's the alternative?
q=kfc+mdc&defType=dismax&mm=1&qf=I_NAME_ENUM
See more: http://wiki.apache.org/solr/DisMaxQParserPlugin
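For the embedded case the same parameters can be set through SolrJ. A minimal sketch, assuming a Solr 1.4-era API and a core named "core0" (both the core name and the solr home setup are hypothetical):
    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.embedded.EmbeddedSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.core.CoreContainer;

    public class DismaxSearch {
        public static void main(String[] args) throws Exception {
            // Reads solr home from the solr.solr.home system property
            CoreContainer container = new CoreContainer.Initializer().initialize();
            EmbeddedSolrServer server = new EmbeddedSolrServer(container, "core0");

            SolrQuery q = new SolrQuery("kfc mdc");
            q.set("defType", "dismax");   // pick the dismax query parser
            q.set("qf", "I_NAME_ENUM");   // field(s) to search, per the thread
            q.set("mm", "1");             // minimum-should-match

            QueryResponse rsp = server.query(q);
            System.out.println(rsp.getResults().getNumFound() + " hits");
            container.shutdown();
        }
    }
This sidesteps Lucene's QueryParser entirely; the dismax parser builds the query from qf/mm on the Solr side.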
> Both solutions are working fine for
> me. I guess the fq performance is
> slower though, or?
http://wiki.apache.org/solr/FilterQueryGuidance
> So if my pivot term is:"student_id,test_grade"
> I'd want to be able to sort on the number of tests a
> student has taken. and also get an average. something like:
> :sort => sum( student_id,test_grade )/ count(
> student_id,test_grade )
>
> where the values would be summed and counted over al