Hi,
I saw this post:
http://lucene.472066.n3.nabble.com/Multiple-Facet-Dates-td495480.html
I didn't see any work in progress or plans for this feature on the list
or in the bug tracker.
Has someone already created a patch, proof of concept, ... that I wasn't
able to find?
From my naïve point of view the
@Michael, @Erick,
You both mention interesting things that got me thinking.
@Erick:
Your referenced page is very useful. It seems the whitespace tokenizer used
by the text_ws field type is causing the issues.
You do mention another interesting thing:
"And do be aware that fields you get back from a request (i.e. a
you should parse the XML and extract the value. Lots of libraries
undoubtedly exist for PHP to help you with that (I don't know PHP)
Moreover, if all you want from the result is AUC_CAT you should consider
using the fl=param like:
http://172.16.17.126:8983/search/select/?q=AUC_ID:607136&fl=AUC_CA
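(For reference, a minimal SolrJ sketch of the same fl-restricted query; the
host, core path, query, and field name are taken from the URL above, the
rest is an assumption, not the poster's actual code:)

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class AucCatLookup {
    public static void main(String[] args) throws Exception {
        // ask Solr for just the AUC_CAT field (same effect as ...&fl=AUC_CAT)
        CommonsHttpSolrServer server =
            new CommonsHttpSolrServer("http://172.16.17.126:8983/search");
        SolrQuery query = new SolrQuery("AUC_ID:607136");
        query.setFields("AUC_CAT");
        QueryResponse rsp = server.query(query);
        for (SolrDocument doc : rsp.getResults()) {
            System.out.println(doc.getFieldValue("AUC_CAT"));
        }
    }
}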
On Thu, Aug 5, 2010 at 3:07 AM, Dennis Gearon wrote:
> If data is stored in the index, isn't the index of Solr pretty much already
> a 'Big/Cassandra Table', except with tokenized columns to make searching
> easier?
>
> How are Cassandra/Big/Couch DBs doing text/weighted searching?
>
> Seems a rea
Can someone please answer this.
Is there a way of creating/adding a core and starting it without having to
reload Solr?
http://wiki.apache.org/solr/CoreAdmin
-Original message-
From: Karthik K
Sent: Thu 05-08-2010 12:00
To: solr-user@lucene.apache.org;
Subject: Re: Load cores without restarting/reloading Solr
Can someone please answer this.
Is there a way of creating/adding a core and starting it with
Given below are the steps for auto-suggest and spellcheck in a single query:
Make this change to the TermsComponent part of solrconfig.xml:
<requestHandler name="/terms" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="terms">true</bool>
  </lst>
  <arr name="components">
    <str>termsComponent</str>
    <str>spellcheck</str>
  </arr>
</requestHandler>
Use the query format given below for getting auto-suggest and spellcheck
suggestions.
http
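(The URL above was cut off; a hypothetical request against such a handler
might look like the following, with host and field names as placeholders:)

http://localhost:8983/solr/terms?terms.fl=name&terms.prefix=ipo&spellcheck=true&spellcheck.q=ipod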
That is not 100% true. I would think RDBMS and XML would be the most common
importers but the real flexibility is with the TikaEntityProcessor [1] that
comes w/ DIH ...
http://wiki.apache.org/solr/TikaEntityProcessor
I'm pretty sure it would be able to handle any type of serde (in the case of
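(A minimal data-config.xml sketch for the TikaEntityProcessor linked above;
the file path and field names here are made up for illustration:)

<dataConfig>
  <dataSource type="BinFileDataSource" />
  <document>
    <entity name="tika" processor="TikaEntityProcessor"
            url="/path/to/some-document.pdf" format="text">
      <field column="text" name="content" />
    </entity>
  </document>
</dataConfig>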
Hi,
I want to query Solr and convert my response object to a JSON string
using SolrJ.
When I query from my browser (with wt=json) I get the following result:
{
  "responseHeader": {
    "status": 0,
    "QTime": 0
  },
  "response": {
    "numFound": 0,
    "start": 0,
    "docs": []
  }
}
At the moment I am using google-gson (a th
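(One way to get that exact JSON from Java, sketched below, is to skip
SolrJ's javabin response entirely and fetch wt=json over plain HTTP, then
parse with Gson; the URL and field access here are assumptions:)

import com.google.gson.Gson;
import com.google.gson.JsonObject;
import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;

public class SolrJsonQuery {
    public static void main(String[] args) throws Exception {
        // fetch the raw JSON response instead of SolrJ's javabin format
        String url = "http://localhost:8983/solr/select?q=*:*&wt=json";
        InputStream in = new URL(url).openStream();
        String body = new Scanner(in, "UTF-8").useDelimiter("\\A").next();

        // parse with Gson and pull a value out of the "response" block
        JsonObject root = new Gson().fromJson(body, JsonObject.class);
        long numFound = root.getAsJsonObject("response")
                            .get("numFound").getAsLong();
        System.out.println("numFound = " + numFound);
    }
}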
Hi everybody,
I would like to know if it makes sense to use Solr in the following
scenario:
- search for large amounts of data (like 1,000, 10,000, 100,000 records)
- each record contains four or five fields (strings and integers)
- every request will be for the entire result set (I can pagin
On 8/5/10 5:59 AM, Karthik K wrote:
> Can someone please answer this.
>
> Is there a way of creating/adding a core and starting it without having to
> reload Solr?
>
Yes, see http://wiki.apache.org/solr/CoreAdmin
- Mark
lucidimagination.com
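(For example, with cores configured in a persistent solr.xml, a new core
can be created on the fly via the CoreAdmin CREATE action described on that
page; the host, core name, and instanceDir below are placeholders:)

http://localhost:8983/solr/admin/cores?action=CREATE&name=newcore&instanceDir=newcore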
I have UPPER12-lower and would like to be able to find it with queries
"UPPER" or "lower". What should break this up for the index? A
tokenizer or a filter such as WordDelimiterFilterFactory?
I have tried various combinations of parameters to
WordDelimiterFilterFactory and can't get it to split pro
In the size 'facet' you have values that may not be in red, but in the size
'field' of any individual document you won't. If you searched on
q=converse&fq=color:red the shoes returned would have appropriate sizes in
their field. Having a facet value for size 10 means at least 1 shoe in your
pote
> I have UPPER12-lower and would like
> to be able to find it with queries
> "UPPER" or "lower". What should break this up for the
> index? A
> tokenizer or a filter such as WordDelimiterFilterFactory?
If that's all you want, LowerCaseTokenizer alone will be enough.
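(If you do want the WordDelimiterFilterFactory route instead, here is a
sketch of a fieldType that should split UPPER12-lower into upper / 12 /
lower; the attribute set is an assumption and may vary by Solr version:)

<fieldType name="text_split" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory" />
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            splitOnCaseChange="1" />
    <filter class="solr.LowerCaseFilterFactory" />
  </analyzer>
</fieldType>

With the LowerCaseFilter at the end, queries for "UPPER" or "lower"
(analyzed the same way) should both match.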
I've got only one document per shoe, whatever its size or color.
My first try was to create one document per model/size/color, but when I
search for 'converse', for example, the same shoe is retrieved several
times, and I want to show only one record for each model. But I don't
succeed in groupi
Hi - I am trying to compile Solr source and during "ant dist" step, the
build times out on
get-colt:
[get] Getting:
http://repo1.maven.org/maven2/colt/colt/1.2.0/colt-1.2.0.jar
[get] To:
/opt/solr/apache-solr-1.4.0/contrib/clustering/lib/downloads/colt-1.2.0.
jar
After a while - the
This is the message I am getting:
Error getting
http://repo1.maven.org/maven2/colt/colt/1.2.0/colt-1.2.0.jar
-Original Message-
From: sai.thumul...@verizonwireless.com
[mailto:sai.thumul...@verizonwireless.com]
Sent: Thursday, August 05, 2010 1:15 PM
To: solr-user@lucene.apache.org
Subje
Thank you for all the help. Greatly appreciated. I have seen the related
issues and I see a lot of patches in the JIRA mentioned. I am really confused
about which patch to use (pls excuse my ignorance). Also, are the patches
production ready? I would greatly appreciate it if you can point me to the
correct patc
Hi,
We have a requirement to NOT display search results if the user query
contains terms that are in our anti-words field. For example, if the user
query is "I have swollen foot" and some records in our index have "swollen
foot" in the anti-words field, we don't want to display those records. How do I go a
Hello Mr. Horsetter,
I again tried the code from trunk '
https://svn.apache.org/repos/asf/lucene/dev/trunk' on solr 1.4 index and it
gave me the following IndexFormatTooOldException which in the first place
prompted me to think the indexes are incompatible. Any ideas?
j
Got it working - had to manually copy the jar files under the contrib
directories
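(If anyone else hits this, a possible workaround is to fetch the jar by
hand and drop it where ant expects it, using the URL and path from the
error above:)

wget http://repo1.maven.org/maven2/colt/colt/1.2.0/colt-1.2.0.jar \
  -O /opt/solr/apache-solr-1.4.0/contrib/clustering/lib/downloads/colt-1.2.0.jar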
-Original Message-
From: sai.thumul...@verizonwireless.com
[mailto:sai.thumul...@verizonwireless.com]
Sent: Thursday, August 05, 2010 2:00 PM
To: solr-user@lucene.apache.org
Subject: RE: get-colt
This is th
(10/08/06 2:14), sai.thumul...@verizonwireless.com wrote:
Hi - I am trying to compile Solr source and during "ant dist" step, the
build times out on
get-colt:
[get] Getting:
http://repo1.maven.org/maven2/colt/colt/1.2.0/colt-1.2.0.jar
[get] To:
/opt/solr/apache-solr-1.4.0/contrib/c
If I understand correctly:
1. Products have different product variants (in the case of shoes, a
combination of color and size plus some other fields).
2. Each product is shown once in the result set (so no multiple product
variants of the same product are shown).
This would solve that, IMO:
1. create 1 d
Mickael Magniez wrote:
Thanks for your response.
Unfortunately, I don't think it'll be enough. In fact, I have many other
products than shoes in my index, with many other facet fields.
I simplified my schema: in reality the facets are dynamic fields.
You could change the way you do indexing,
This is tricky. You could try doing something with the ShingleFilter
(http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.ShingleFilterFactory)
at _query time_ to turn the user's query:
"i have a swollen foot" into:
"i", "i have", "i have a", "i have a swollen", "have", "have a
I've read through the DataImportHandler page a few times, and still can't
figure out how to separate a large document into smaller documents. Any hints?
:-) Thanks!
-Peter
On Aug 2, 2010, at 9:01 PM, Lance Norskog wrote:
> Spanning won't work- you would have to make overlapping mini-document
Eloi Rocha wrote:
Hi everybody,
I would like to know if it makes sense to use Solr in the following
scenario:
- search for large amounts of data (like 1,000, 10,000, 100,000 records)
- each record contains four or five fields (strings and integers)
- every request will be for the entire res
Oh yes, replication will not work for shared files. It is about making
your own copy from another machine.
There is no read-only option, but there should be. The files and
directory can be read-only; I've done it. You could use the OS
permission system to enforce read-only. Then you can just do a
You can use an XInclude in solrconfig.xml. Your external query file
has to be in XML format.
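(A minimal sketch, assuming a file named external-queries.xml sitting next
to solrconfig.xml; the file name is made up:)

<config xmlns:xi="http://www.w3.org/2001/XInclude">
  ...
  <xi:include href="external-queries.xml" />
</config>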
Lance
On Wed, Aug 4, 2010 at 7:57 AM, Shalin Shekhar Mangar
wrote:
> On Wed, Aug 4, 2010 at 3:27 PM, Stanislaw
> wrote:
>
>> Hi all!
>> I can't load my custom queries from the external file, as writt
: Hello Mr. Horsetter,
Please, call me Hoss. "Mr. Horsetter" is ... well frankly i have no idea
who that is.
: I again tried the code from trunk '
: https://svn.apache.org/repos/asf/lucene/dev/trunk' on solr 1.4 index and it
Please note my previous comments...
This confuses lots of people. When you index a field, it's analyzed 10
ways from Sunday. Consider "The World is an unknown Entity". When
you INDEX it, many things happen, depending upon the analyzer.
Stopwords may be removed. Each token may be lowercased. Each token
may be stemmed. It all depends o
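(As a sketch, a schema.xml analyzer chain that does exactly those three
things; this is illustrative, not anyone's actual config:)

<analyzer>
  <tokenizer class="solr.WhitespaceTokenizerFactory" />
  <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true" />
  <filter class="solr.LowerCaseFilterFactory" />
  <filter class="solr.PorterStemFilterFactory" />
</analyzer>

Run "The World is an unknown Entity" through that and the index actually
stores something like world / unknown / entiti, which is why a raw,
unanalyzed query string often fails to match.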
On Thu, Aug 5, 2010 at 9:07 PM, Chris Hostetter wrote:
>
> That should still be true in the official 4.0 release (I really should
> have said "when 4.0 can no longer read Solr 1.4 indexes"), ...
> I haven't been following the details closely, but I suspect that tool
> hasn't been written yet be
Can somebody help me, please?
You may have to write your own javascript to read in the giant field
and split it up.
On Thu, Aug 5, 2010 at 5:27 PM, Peter Spam wrote:
> I've read through the DataImportHandler page a few times, and still can't
> figure out how to separate a large document into smaller documents. Any
> hints?
I can see how one document per model blows up when you have many
options. But how many models of the shoe do they actually make? They
can't possibly make 5000, one for every metadata combination.
If you go with one document per model, you have to do a second search
on that product ID to get all of
Hi everyone,
I run the query from the browser:
http://172.16.17.126:8983/search/select/?q=AUC_CAT:978
The query is based on cat_978.xml, which was produced by my PHP script,
and I got the correct result, like this:
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">4</int>
    <lst name="params">
      <str name="q.op">AND</str>
      <str name="fl">AUC_ID,AUC_CAT,AUC_DESCR_SHORT</str>
      <str name="start">0</str>
      <str name="q">AUC