Howdy,
I recently rolled a custom WordNet synonym filter that pulls synonyms
from WordNet during indexing. That is all fine and dandy; however, it
causes problems in sorting. Sometimes, the top match will come from
a synonym rather than the original word.
An example in our system is a se
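For context, the stock index-time synonym setup (which has the same relevance-skew issue) looks roughly like this in schema.xml; the fieldType name and synonym file name are illustrative:

```xml
<fieldType name="text_syn" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- format="wordnet" reads WordNet prolog synonym files -->
    <filter class="solr.SynonymFilterFactory" synonyms="wn_s.pl"
            format="wordnet" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

Because injected synonyms occupy the same token positions as the original word, they score identically, which is exactly the sorting symptom described above; the usual workaround is to apply synonyms only at query time or boost the original term.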
Caches are stored on the Java heap for each instance of a searcher. The
filter cache would be different per replica, same for the doc cache, and
query cache
On Fri, Feb 5, 2016 at 8:47 AM Tom Evans wrote:
> I have a small question about fq in cloud mode that I couldn't find an
> explanation for
Hi all,
Hoping someone else uses the maven capabilities and can help out here.
Solr: 4.10.4
Ant-Task: ant generate-maven-artifacts
Problem:
When trying to publish to an internal artifactory using our SNAPSHOTs,
where our user has update/delete permissions, everything builds ok.
When trying to bu
Using:
- JDK 1.8u40
- UseG1GC, ParallelRefProcEnabled, Xmx12g,Xms12g
- Solr 4.10.4
When using G1GC we are seeing very high processing times in the GC Remark
phase during reference processing. Originally we saw high times during
WeakReference processing, but adding the "-XX:+ParallelRefProcEnabled" flag
to add to Ericks point:
It's also highly dependent on the types of queries you expect (sorting,
faceting, fq, q, size of documents) and how many concurrent updates you
expect. If most queries are going to be similar and you are not going to be
updating very often, you can expect most of your index
Solr User Group,
I have a non-multivalued field which contains stored values similar to this:
US100AUS100BUS100CUS100-DUS100BBA
My assumption is - If I tokenized with the below fieldType definition,
specifically the WDF splitOnNumerics and the LowerCaseFilterFactory, would
have provided
From: Jack Krupansky
To: solr-user@lucene.apache.org; Mike L.
Sent: Sunday, April 5, 2015 8:23 AM
Subject: Re: WordDelimiterFilterFactory - tokenizer question
You have to tell the filter what types of tokens to generate - words, numbers.
You told it to generate... nothing. You did te
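A sketch of what Jack is describing, telling WordDelimiterFilterFactory which token types to emit; the attribute values are illustrative, not a recommendation:

```xml
<filter class="solr.WordDelimiterFilterFactory"
        generateWordParts="1" generateNumberParts="1"
        splitOnNumerics="1" preserveOriginal="1"/>
<filter class="solr.LowerCaseFilterFactory"/>
```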
Solr User Group -
I have a case where I need to be able to search against compound words, even
when the user delimits with a space. (e.g. baseball => base ball). I think
I've solved this by creating a compound-words dictionary file containing the
split words that I would want DictionaryCom
Typo: *even when the user delimits with a space. (e.g. base ball should find
baseball).
Thanks,
From: Mike L.
To: "solr-user@lucene.apache.org"
Sent: Tuesday, April 7, 2015 9:05 AM
Subject: DictionaryCompoundWordTokenFilterFactory - Dictionary/Compound-Words
File
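For reference, a minimal sketch of the filter wired to a dictionary file; the file name and parameter values are assumptions:

```xml
<!-- compounds.txt: one known word per line, e.g. "base" and "ball" -->
<filter class="solr.DictionaryCompoundWordTokenFilterFactory"
        dictionary="compounds.txt"
        minWordSize="5" minSubwordSize="3" maxSubwordSize="15"
        onlyLongestMatch="false"/>
```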
Hello -
I have qf boosting setup and that works well and balanced across different
fields.
However, I have a requirement that if a particular manufacturer is part of the
returned matched documents (say top 20 results) , all those matched docs from
that manufacturer should be bumped to the
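One hedged way to express that with the (e)dismax boost query; the field name, manufacturer value, and boost factor are made up:

```
q=user query&defType=edismax
&qf=name^3 description
&bq=manufacturer:"Acme"^100
```

Note that bq only boosts scores, it does not guarantee reordering; a very large boost approximates "bump to the top", but a strict guarantee would need sorting or result grouping instead.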
Thanks Jack. I'll give that a whirl.
From: Jack Krupansky
To: solr-user@lucene.apache.org; Mike L.
Sent: Saturday, April 11, 2015 12:04 PM
Subject: Re: Bq Question - Solr 4.10
It all depends on what you want your scores to look like. Or do you care at all
what the scores
Hi,
A few days ago I deployed a solr 4.9.0 cluster, which consists of 2
collections. Each collection has 1 shard with 3 replicates on 3 different
machines.
On the first day I noticed this error appear on the leader. Full Log -
http://pastebin.com/wcPMZb0s
4/23/2015, 2:34:37 PM SEVERE SolrCmdDist
Are there any known network issues?
> * Do you have any idea about the GC on those replicas?
>
>
> On Mon, Apr 27, 2015 at 1:25 PM, Amit L wrote:
>
> > Hi,
> >
> > A few days ago I deployed a solr 4.9.0 cluster, which consists of 2
> > collections. Each collection
Solr User Group -
Was wondering if anybody had any suggestions/best practices around a
requirement for storing a dynamic category structure that needs to have the
ability to facet on and maintain its hierarchy
Some context:
A product could belong to an undetermined amount of product categor
categories and can maintain the
hierarchy..
I'll take a look at it.
Thanks!
From: Erick Erickson
To: solr-user@lucene.apache.org; Mike L.
Sent: Monday, July 6, 2015 12:42 PM
Subject: Re: Category Hierarchy on Dynamic Fields - Solr 4.10
Hmmm, probably missing something her
Can someone point me to a tutorial or blog to setup SolrCloud on multiple
hosts? LucidWorks just has a trivial single-host example. I searched
around but only found some blogs for older versions (2014 or earlier).
thanks.
Additionally, it looks like the commits are public on github. Is this
backported to 5.5.x too? Users that are still on 5x might want to backport
some of the issues themselves since it is not officially supported anymore.
On Mon, Oct 16, 2017 at 10:11 AM Mike Drob wrote:
> Given that the already pub
solr user group -
I'm afraid I may have a scenario where I might need to define a few
thousand fields in Solr. The context here is, this type of data is extremely
granular and unfortunately cannot be grouped into logical groupings or
aggregate fields because there is a need to know which
al (95th
> percentile) query?
>
> -- Jack Krupansky
>
> -----Original Message- From: Mike L.
> Sent: Tuesday, February 4, 2014 10:00 PM
> To: solr-user@lucene.apache.org
> Subject: Max Limit to Schema Fields - Solr 4.X
>
>
> solr user group -
>
> I&
Thanks Shawn. This is good to know.
Sent from my iPhone
> On Feb 5, 2014, at 12:53 AM, Shawn Heisey wrote:
>
>> On 2/4/2014 8:00 PM, Mike L. wrote:
>> I'm just wondering here if there is any defined limit to how many fields can
>> be created within a schem
t; fielda_value, fieldb_value into a single field. Then do the right thing
> when searching. Watch tokenization though.
>
> Best
> Erick
>> On Feb 5, 2014 4:59 AM, "Mike L." wrote:
>>
>>
>> Thanks Shawn. This is good to know.
>>
>>
>>
Appreciate all the support and I'll give it a whirl. Cheers!
Sent from my iPhone
> On Feb 8, 2014, at 4:25 PM, Shawn Heisey wrote:
>
>> On 2/8/2014 12:12 PM, Mike L. wrote:
>> I'm going to try loading all 3000 fields in the schema and see how that goes.
>>
Hi All:
I am using solr 4.9.1. and trying to use PostingsSolrHighlighter. But I got
errors during indexing. I thought LUCENE-5111 has fixed issues with
WordDelimiterFilter. The error is as below:
Caused by: java.lang.IllegalArgumentException: startOffset must be
non-negative, and endOffset must b
ing this issue
>
> On Wed, Nov 5, 2014 at 4:51 PM, Alan Woodward wrote:
>
> > Hi Min,
> >
> > Do you have the specific bit of text that caused this exception to be
> > thrown?
> >
> > Alan Woodward
> > www.flax.co.uk
> >
> >
> > O
Hi all:
Has anyone made Solr MoreLikeThis work with result grouping?
Thanks in advance.
M
Hi all:
My code using solr spellchecker to suggest keywords worked fine locally,
however in qa solr env, it failed to build it with the following error in
solr log:
ERROR Suggester Store Lookup build from index on field: myfieldname failed
reader has: xxx docs
I checked the solr directory and th
G.info("Stored suggest data to: " + target.getAbsolutePath());
}
}
On Fri, Dec 5, 2014 at 12:59 PM, Erick Erickson
wrote:
> What's the rest of the stack trace? There should
> be a root cause somewhere.
>
> Best,
> Erick
>
> On Fri, Dec 5, 2014 at 11:07 AM
://www.solr-start.com/ and @solrstart
> Solr popularizers community: https://www.linkedin.com/groups?gid=6713853
>
>
> On 5 December 2014 at 14:07, Min L wrote:
> > Hi all:
> >
> > My code using solr spellchecker to suggest keywords worked fine locally,
> > howeve
I'm stumped. I've got some solrj 3.6.1 code that works fine against three of
my request handlers but not the fourth. The very odd thing is that I have no
trouble retrieving results with curl against all of the request handlers.
My solrj code sets some parameters:
ModifiableSolrParams param
It was pilot error. I just reviewed my servlet and noticed a parameter in
web.xml that was looking to find data for the new product in the production
index which doesn't have that data yet while my curl command was running
against the staging index. I rebuilt the servlet with the fixed parameter
an
Hi Dmitri,
I do have a question mark in my search. I see that I dropped that
accidentally when I was copying/pasting/formatting the details.
My curl command is curl "http://myserver/myapp/myproduct?fl=*,.";
And, it works fine whether I have .../myproduct/?fl=*, or if I leave out
the / b
Hello,
I'm trying to execute a parallel DIH process and running into heap
related issues, hoping somebody has experienced this and can recommend some
options..
Using Solr 3.5 on CentOS.
Currently have JVM heap 4GB min, 8GB max
When executing the entities in a se
have not completed the job quite yet with any config... I did get very close..
I'd hate to throw additional memory at the problem if there is something else I
can tweak..
Thanks!
Mike
From: Shawn Heisey
To: solr-user@lucene.apache.org
Sent: Wednesday, June 26, 2013 12:13
I've been working on improving index time with a JdbcDataSource DIH based
config and found it not to be as performant as I'd hoped for, for various
reasons, not specifically due to solr. With that said, I decided to switch
gears a bit and test out FileDataSource setup... I assumed by eliminiat
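A minimal FileDataSource/LineEntityProcessor sketch of that switch; the path, column names, and regex are placeholders:

```xml
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <entity name="rows" processor="LineEntityProcessor"
            url="/data/export.csv" rootEntity="true"
            transformer="RegexTransformer">
      <!-- LineEntityProcessor emits each line as "rawLine" -->
      <field column="rawLine" regex="^(.*?),(.*)$" groupNames="id,name"/>
    </entity>
  </document>
</dataConfig>
```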
s not doing what it's supposed to, or am I missing
something? I also tried passing a commit afterward like this:
http://server:port/appname/solrcore/update?stream.body=%3Ccommit/%3E ( didn't
seem to do anything either)
From: Ahmet Arslan
To: "solr-user@lucene.apache.org" ; Mike L
requests on
different data files containing different data?
Thanks in advance. This was very helpful.
Mike
From: Shawn Heisey
To: solr-user@lucene.apache.org
Sent: Monday, July 1, 2013 2:30 PM
Subject: Re: FileDataSource vs JdbcDataSouce (speed) Solr 3.5
On 7/1/
Solr User Group,
I would like to return a hierarchical data relationship when somebody
queries for a parent doc in solr. This sort of relationship doesn't currently
exist in our core as the use-case has been to search for a specific document
only. However, here's kind of an example
When I do this query:
q=catcode:CC001
I get a bunch of results. One of them looks like this:
CC001
Cooper, John
If I then do this query:
q=start_url_title:cooper
I also match the record above, as expected.
But, if I do this:
q=(catcode:CC001 AND start_u
Jack Krupansky-2 wrote
> What query parser and release of Solr are you using?
>
> There was a bug at one point where a fielded term immediately after a left
> parenthesis was not handled properly.
>
> If I recall, just insert a space after the left parenthesis.
>
> Also, the dismax query parser
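A sketch of the workaround Jack describes, using the query values from the earlier message:

```
# fails on affected versions: fielded term immediately after "("
q=(catcode:CC001 AND start_url_title:cooper)

# workaround: insert a space after the left parenthesis
q=( catcode:CC001 AND start_url_title:cooper)
```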
Erick Erickson wrote
> What do you get when you add &debugQuery=true? That should show you the
> results of the query parsing, which often adds clues.
>
> FWIW,
> Erick
When I was trying to debug this last night I noticed that when I added
"&debugQuery=true" to queries I would only get debug outp
Jack Krupansky-2 wrote
> Also, be aware that the spaces in your query need to be URL-encoded.
> Depending on how you are sending the command, you may have to do that
> encoding yourself.
>
> -- Jack Krupansky
It's a good possibility that that's the problem. I've been doing queries in
different
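If the client builds the request URL by hand, the query value has to be percent-encoded, as Jack notes. A minimal Java sketch; the class and method names are made up, and the query string is just the example from this thread:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeQuery {
    // Percent-encode a raw query string for use as the value of q=.
    public static String encode(String rawQuery) {
        try {
            // Spaces become '+', ':' becomes %3A, etc.
            return URLEncoder.encode(rawQuery, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(encode("catcode:CC001 AND start_url_title:cooper"));
        // catcode%3ACC001+AND+start_url_title%3Acooper
    }
}
```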
Solr Family,
I'm a Solr 3.6 user who just pulled down 4.4 yesterday and noticed
something a bit odd when importing into a multi-valued field. I wouldn't be
surprised if there's a user-error on my end but hopefully there isn't a bug.
Here's the situation.
I created some test data to
Nevermind, I figured it out. Excel was applying a hidden quote on the data.
Thanks anyway.
From: Mike L.
To: "solr-user@lucene.apache.org"
Sent: Wednesday, September 25, 2013 11:32 AM
Subject: Solr 4.4 Import from CSV to Multi-value field - Adds quote on last
value
S
Solr Admins,
I've been using Solr for the last couple years and would like to
contribute to this awesome project. Can I be added to the Contributors group,
with access also to update the Wiki?
Thanks in advance.
Mike L.
Mike L. wrote:
>
> Solr Admins,
>
> I've been using Solr for the last couple years and would like to
>contribute to this awesome project. Can I be added to the Contributors group
>with access also to update the Wiki?
>
> Thanks in advance.
>
> Mike L.
>
>
Sometimes when I use curl to query solr I get a slow real time response but a
short QTime.
Here's an example:
$ time curl "solrsandbox/testindex/select/?q=all:science,data&rows=500" >
foo
  % Total    % Received  % Xferd  Average Speed    Time    Time    Time
Current
Thanks everyone for the responses.
I did some more queries and watched disk activity with iostat. Sure enough,
during some of the slow queries the disk was pegged at 100% (or more.)
The requirement for the app I'm building is to be able to retrieve 500
results in ideally one second. The index has
I just did the experiment of retrieving only the metaDataUrl field. I still
sometimes get slow retrieval times. One query took 2.6 seconds of real time
to retrieve 80k of data. There were 500 results. QTime was 229. So, I do
need to track down where the extra 2+ seconds is going.
--
View this me
My virtual machine has 6GB of RAM. Tomcat is currently configured to use 4GB
of it. The size of the index is 5.4GB for 3 million records which averages
out to 1.8KB per record. I can look at trimming the data, having fewer
records in the index to make it smaller, or getting more memory for the VM.
p.s. Regarding streaming of the data, my Java servlet uses solrj and iterates
through the results. Right now I'm focused on getting rid of the delay that
causes some queries to take 6 or 8 seconds to complete, so I'm not even
looking at the performance of the streaming.
--
View this message in con
I'm just writing to close the loop on this issue.
I moved my servlet to a beefier server with lots of RAM. I also cleaned up
the data to make the index somewhat smaller. And, I turned off all the
caches since my application doesn't benefit very much from caching. My
application is now quite zippy,
>
> Justin, can you tell us which field in the query is your record id? What is
> the record id's type in database and in solr schema? What is your unique
> key and its type in solr schema?
>
>
> On Tue, Mar 19, 2013 at 5:19 AM, Justin L. wrote:
>
> > Every time I
- Forwarded Message -
From: "l blevins"
To: "solr user mail"
Sent: Wednesday, March 9, 2011 4:03:06 PM
Subject: some relational-type grouping with search
I have a large database for which we have some good search capabilties now, but
am interested to see if
It is not just one document that would be returned; it is one document per
person. That is a little trickier.
- Original Message -
From: "Michael Sokolov"
To: solr-user@lucene.apache.org
Cc: "l blevins"
Sent: Wednesday, March 9, 2011 7:46:10 PM
Subject: Re:
Hello All,
I have been using apache-solr-common-1.3.0.jar in my module.
I am planning to shift to the latest version, because of course it has more
flexibility. But it is really strange that I don't find any corresponding jar
of the latest version. I have searched in total apache sol
Hello all,
I am trying to use query parser plugin feature of solr.
But it's really strange that every time it behaves in a different way.
I have declared my custom query parser in solrconfig.xml as follows..
I have linked it to the default request handler as follows..
Hello All,
I have been trying to find out the right place to parse
the query submitted. To be brief, I need to expand the query. For
example.. let the query be
city:paris
then I would like to expand the query as .. follows
city:paris OR place:paris OR town:paris .
I gue
Hi,
I have explored "DisMaxRequestHandler". It could serve for some
of my purposes but not all.
1) It seems we have to decide that alternative field list beforehand
and declare them in the config.xml . But the field list for which
synonyms are to be considered is not definite ( at least in the
Hello,
Thanks. This would absolutely serve. I thought of doing it in
queryparser part which I mentioned in first mail. But if the query is
a complex one, then it would become a bit complicated. That's why I
wanted to know whether there is any other way which is similar to the
second point
the query looks like:
"name:(carsten) OR name:(carsten*) OR email:(carsten) OR
email:(carsten*) OR userid:(carsten) OR userid:(carsten*)"
Then it should match:
carsten l
carsten larsen
Carsten Larsen
Carsten
CARSTEN
etc.
And when the user enters the term: "carsten l"
(I think the
> defaults here are non-alphanumeric chars).
>
> Take a look at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
> for more info on tokenizers and filters.
>
> cheers,
> Aleks
>
> On Tue, 18 Nov 2008 08:35:31 +0100, Carsten L <[EMAIL PR
Hi,
Sorry that this is not strictly a solr specific question -
I wonder if anyone has a script to start solr on Linux when the
system boots up? Or better yet, one that supports shutdown and restart?
--
Thanks,
Jack
end of the line to just before the -jar.
> - Pete
> On 8/19/07, Jack L <[EMAIL PROTECTED]> wrote:
>> Hello Peter,
>>
>> Many thanks!
>>
>> solr.start works fine but I'm getting an error with solr.stop and solr is
>> not being stop
ete
> On 8/19/07, Jack L <[EMAIL PROTECTED]> wrote:
>> Hello Peter,
>>
>> Many thanks!
>>
>> solr.start works fine but I'm getting an error with solr.stop and solr is
>> not being stopped:
>> (I've replaced my app dir with /op
Actually it's --stop. Thanks!
> Interesting, it worked fine on the server. Try moving the -stop at
> the end of the line to just before the -jar.
> - Pete
urday night
with the trunk code ;-(
SolrQuery q = new SolrQuery();
q.setQuery( "id:11");
q.addFacetField("l");
q.setFacet(true);
q.setFacetMinCount(1);
q.setParam("mlt", true);
q.setParam("mlt.fl"
query field be stored?
It would be nice if http://wiki.apache.org/solr/FieldOptionsByUseCase were
updated.
--
Thanks
George L
On 9/9/07, Ryan McKinley <[EMAIL PROTECTED]> wrote:
>
> George L wrote:
> > I have been trying the MLT Query using EmbeddedSolr and SolrJ clients,
> whic
Hi,
I'm about to start a new solr installation. Given the good quality of
development builds in the past, should I use 1.2 or just grab a
nightly build?
--
Best regards,
Jack
I was going through some old emails on this topic. Rafael Rossini figured
out how to run multiple indices on single instance of jetty but it has to
be jetty plus. I guess jetty doesn't allow this? I suppose I can add
additional jars and make it work but I haven't tried that. It'll
always be much sa
Hello Mike,
> but it is still just an example application.
I think this is a very modest statement. I'd like to say both solr
(including the example) and jetty are production level software.
I suppose many users, like me, will just take it and make minimum
modification of the configs and use it o
s two Solr instances inside one
>> Jetty container. I'm using Solr 1.2.0 and Jetty 6.1.3 respectively.
>>
>>
> Hope this helps,
> --matt
> On Sep 11, 2007, at 11:52 AM, Jack L wrote:
>> I was going through some old emails on this topic. Rafael Rossini
>
because Lucene 2.3.0 was released today.
--
regards
j.L
2008/3/20 李银松 <[EMAIL PROTECTED]>:
> 1. When I set fl=score, Solr returns the same as fl=*,score, not just scores.
> Is it a bug, or is it done on purpose?
You can set fl=id,score; Solr does not support a style like fl=score alone.
> My customer wants to get the 1st-10010th added docs
> So I have to sort by t
You can try je-analyzer; I built a 17M-doc search site with Solr and
je-analyzer.
On Thu, May 15, 2008 at 6:44 AM, Walter Underwood <[EMAIL PROTECTED]>
wrote:
> N-gram works pretty well for Chinese, there are even studies to
> back that up.
>
> Do not use the N-gram matches for highlighting. They
For commercial analyzers, I recommend http://www.hylanda.com/ (it is the best
Chinese word analyzer).
On Thu, May 15, 2008 at 8:32 AM, j. L <[EMAIL PROTECTED]> wrote:
> You can try je-analyzer; I built a 17M-doc search site with Solr and
> je-analyzer.
>
>
> On Thu, Ma
If you can read Chinese and want to write your own Chinese analyzer, you can
look at http://www.googlechinablog.com/2006/04/blog-post_10.html
2008/5/15 j. L <[EMAIL PROTECTED]>:
> For commercial analyzers, I recommend
> http://www.hylanda.com/ (it is
I don't know the cost.
I know the bigger Chinese search engines use it.
More Chinese people who study and use full-text search think it is the best
Chinese analyzer you can buy.
Baidu (www.baidu.com) is the biggest Chinese search engine, and Google China
is No. 2.
Baidu does not use it (http://www.hylanda.c
Can you talk about it?
Maybe I will use Hadoop + Solr.
Thanks for your advice.
--
regards
j.L
On Thu, May 15, 2008 at 11:25 PM, Walter Underwood <[EMAIL PROTECTED]>
wrote:
> I've worked with the Basis products. Solid, good support.
> Last time I talked to them, they were working on hooking
> them into Lucene.
>
I don't know the Basis product, but I know Google uses it, and in China,
google.cn no
just rm -r SOLR_DIR/data/index.
2008/6/18 Mihails Agafonovs <[EMAIL PROTECTED]>:
> How can I clear the whole Solr index?
> Regards, Mihails
--
regards
j.L
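Deleting the files works when Solr is stopped; while it is running, a safer route is a delete-by-query through the update handler (a sketch):

```xml
<delete><query>*:*</query></delete>
```

POST that to the /update handler, followed by a `<commit/>`.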
Hello,
I have a question about solr's performance of accepting
inserts and indexing. If I have 10 million documents that
I'd like to index, I suppose it will take some time to
submit them to solr. Is there any faster way to do this
than through the web interface?
--
Best regards,
Jack
_
Thanks to all who replied. It's encouraging :)
The numbers vary quite a bit though, from 13 docs/s (Burkamp)
to 250 docs/s (Walter) to 1000 docs/s. I understand the results also depend
on the doc size and hardware.
I have a question for Erik: you mentioned "single threaded indexer"
(below). I'm no
Thanks for all who replied.
> my number 1000 was per minute, not second!
I can't read! :-p
> couple of times today at around 158 documents / sec.
This is not bad at all. How about search performance?
How many concurrent queries are people handling?
What does the response time look like?
>
I have played with the "example" directory for a while.
Everything seems to work well. Now I'd like to start my own
index and I have a few questions.
1. I suppose I can start from copying the whole example
directory and name it myindex. I understand that I need
to modify the solr/conf/schema.xml
Thanks Chris and Eric for the replies. Very helpful.
> no, each instance manages a single schema and a single data index -- but
> that schema can allow for various different types of documents that don't
> need to have anything in common.
Does this mean that as long as I have the schema for all do
Hello Erik,
> Wouldn't even matter if there were field name "conflicts". A field
> by any other name is just a field. All document types could have a
> "title" field, for example.
That makes sense.
I wonder what happens if I change the schema after some documents
have been inserted? Is this al
I am indexing some pages whose urls are unique.
I wanted to use url as the unique key but got a
problem that "=" is not allowed in unique keys.
I could use a MD5 hash of the url as the unique key
but I'm not sure if there is a better/simpler way.
I wonder what others use for the unique keys?
--
B
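A minimal sketch of the MD5 approach, assuming Java; the class and method names are made up:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class UrlKey {
    // Hash a URL into a fixed-width, schema-safe unique key.
    public static String md5Key(String url) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(url.getBytes(StandardCharsets.UTF_8));
            // 32 lowercase hex chars, zero-padded.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new RuntimeException(e); // MD5 is always available
        }
    }

    public static void main(String[] args) {
        System.out.println(md5Key("http://example.com/a?b=1&c=2"));
    }
}
```

The key stays stable for the same URL, contains no characters that need escaping, and the store keeps the original URL in its own field.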
Hello Yonik,
You are right. = is allowed. The problem was because & was
not properly escaped in xml.
--
Thanks,
Jack
Tuesday, February 27, 2007, 8:47:35 AM, you wrote:
> On 2/27/07, Jack L <[EMAIL PROTECTED]> wrote:
>> I am indexing some pages whose urls are unique.
>&g
Hello,
I guess this is more of a (naive) jetty question - I'm running
a modified instance of the "example" project with jetty
on a linux server. How should I stop this jetty instance?
Note that I may have multiple jetty instances and other
java processes running and I'd like to stop a particular
i
> Otis
> . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
> Simpy -- http://www.simpy.com/ - Tag - Search - Share
> - Original Message ----
> From: Jack L <[EMAIL PROTECTED]>
> To: solr-user@lucene.apache.org
> Sent: Monday, March 5, 2007 5:21:56 AM
> Subject: How to stop solr/jet
This is a very interesting discussion. I have a few questions while
reading Tim and Venkatesh's emails:
To Tim:
1. is there any reason you don't want to use HTTP? Since solr has
an HTTP interface already, I suppose using HTTP is the simplest
way to communicate with the solr servers from the merger/se
Selecting by type will do the job. But I suppose it sacrifices
performance, because having multiple document types in the same
index will render a larger index. Is it bad?
--
Best regards,
Jack
Wednesday, March 7, 2007, 2:15:14 PM, you wrote:
> As it is now... I don't think so. SolrCore is a sta
Hello Sachin,
I'm not understanding the second option. Could you explain a bit?
--
Best regards,
Jack
Friday, March 9, 2007, 2:35:36 AM, you wrote:
> Well,
> One way you can do this is:
> "Description:php AND title:php AND Subject:php AND notes:php"
> The other option is to create an extra
I understand that I'm supposed to delete the old record and
re-post in order to update a document. But in many cases,
it takes time to extract data (from a database, etc.) and all
I want to change is the document boost. I wonder if it's possible
to adjust the document boost without deleting and re-
so they can
> be extracted and re-indexed on the server side. This may not help
> your performance concern, but it may be easier to deal with.
> ryan
> On 3/31/07, Jack L <[EMAIL PROTECTED]> wrote:
>> I understand that I'm supposed to delete the old record and
>>
Hello solr-user,
Query result in JSON format is really convenient, especially for
Python clients. Is there any plan to allow posting in JSON format?
--
Best regards,
Jack
Doing queries is so easy with Python, thanks to solr's
Python format support. Is there any Python utility classes for
posting documents? Which I think, is essentially a Python
class to generate XML documents (before JSON support is available)
from Python objects. Then again, JSON for posting would
Mike and Erik, thanks for the reply. Excellent. I'll try it out.
> On 4/15/07, Jack L <[EMAIL PROTECTED]> wrote:
>> Doing queries is so easy with Python, thanks to solr's
>> Python format support. Is there any Python utility classes for
>> posting documents? W
Is the lucene query syntax available in solr? I saw this page
about lucene query syntax:
http://lucene.apache.org/java/docs/queryparsersyntax.html
I tried "width:[0 TO 500]" and got an exception:
java.lang.NumberFormatException: For input string: "TO500"
    at java.lang.NumberFormatExceptio
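The mangled "TO500" suggests the spaces in the range were lost before parsing; percent-encoding the query value avoids that. A sketch, with host and core as placeholders:

```
# raw query:  q=width:[0 TO 500]
# encoded:    q=width%3A%5B0%20TO%20500%5D
http://localhost:8983/solr/select?q=width%3A%5B0%20TO%20500%5D
```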
I'm running the default solr package with my data, about 10 million
small documents. I'm not sure if it's jetty or solr, but often
times solr stops functioning properly after running for a few days.
The symptom is search returning nothing, or when I go to /solr/admin/,
I get file browsing page sh