I have created my own field type. I have indexed "Stephen King" and
get no hit when searching
author:(stephen king)
I get a hit when searching like this
author:(stephen* AND *king)
I also get a hit when searching like this
author:"stephen king"
So it seems like when querying with (...) it actual
> I have a couple of string fields. For some of them I want from my
> application to be able to index a lowercased string but store the
> original value. Is there some way to do this? Or would I have to come
> up with a new field type and implement an analyzer?
I think I should be able to do what
>> I have a couple of string fields. For some of them I want
>> from my
>> application to be able to index a lowercased string but
>> store the
>> original value. Is there some way to do this? Or would I
>> have to come
>> up with a new field type and implement an analyzer?
>
> If you have stored="
Hi,
I have a couple of string fields. For some of them I want from my
application to be able to index a lowercased string but store the
original value. Is there some way to do this? Or would I have to come
up with a new field type and implement an analyzer?
/Tim
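[Editorial note: Solr stores the raw input value verbatim; only the indexed tokens pass through the analyzer chain. So a fieldType that lowercases at analysis time already gives "index lowercased, store original" without custom code. A minimal sketch (the type and field names here are illustrative, not from the thread):

```xml
<fieldType name="string_lc" class="solr.TextField" sortMissingLast="true">
  <analyzer>
    <!-- keep the whole value as one token, then lowercase it -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="author" type="string_lc" indexed="true" stored="true"/>
```

With stored="true", the value returned in results is the original mixed-case string, while matching happens against the lowercased token.]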
Hi Tod,
I had a similar issue with slf4j, but it was a NoClassDefFoundError. Do
you have some other dependencies in your application that use some
other version of slf4j? You can use mvn dependency:tree to list all
dependencies in your application. Or maybe there's some other version
already in your Tomcat
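[Editorial note: if mvn dependency:tree reveals a second slf4j version coming in transitively, one option is to exclude it from the offending dependency in the pom. A sketch, assuming the conflict comes in via solr-solrj (the coordinates to exclude depend on what the tree actually shows):

```xml
<dependency>
  <groupId>org.apache.solr</groupId>
  <artifactId>solr-solrj</artifactId>
  <version>1.4.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Then declare the single slf4j version the application should use as a direct dependency.]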
solrj has a dependency on wstx-asl. I've successfully used Solr 1.4
maven artifacts for a while and the wstx-asl dependency had the wrong
groupId so it's always been missing in my application, but it has
still worked fine. Is wstx-asl really needed? Is it only needed in
certain circumstances? Is it
Is there any chance that
https://issues.apache.org/jira/browse/LUCENE-996 will be backported to
the 3x branch? I see that it's fixed in trunk, but it will be a while
until it's in a release.
How do people generally search for documents from, let's say, the year 2009?
I thought it would be convenient to d
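[Editorial note: a common approach is a range query over the whole calendar year on a date field. A small sketch that just builds the query string (the class, method, and field name `timestamp` are assumptions for illustration, not from the thread):

```java
public class YearRange {
    // Build an inclusive Solr date-range query covering one calendar year,
    // e.g. timestamp:[2009-01-01T00:00:00Z TO 2009-12-31T23:59:59Z]
    static String yearRangeQuery(String field, int year) {
        String start = year + "-01-01T00:00:00Z";
        String end = year + "-12-31T23:59:59Z";
        return field + ":[" + start + " TO " + end + "]";
    }

    public static void main(String[] args) {
        System.out.println(yearRangeQuery("timestamp", 2009));
    }
}
```

Such a string can be passed as q or, often better for cache reuse, as an fq filter query.]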
In our application we use dynamic fields and there can be about 50 of
them and there can be up to 100 million documents.
Are there any disadvantages to having multiValued="true" on all fields
in the schema? An admin of the application can specify dynamic fields
and whether they should be indexed or stored.
Synonyms don't seem to work in EmbeddedSolrServer (Solr 1.4.0) when
mixing in multi-word synonyms. It works fine when I run Solr
standalone. Did anyone else experience this?
I have this in synonyms.txt:
word => some, other stuff
I index "some" and then search for "word". With a standalone solr
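[Editorial note: multi-word synonyms are generally applied at index time, because query parsers split on whitespace before analysis and can break multi-word rules. A sketch of the usual index-time wiring (this does not by itself explain the embedded-vs-standalone difference, which may simply be synonyms.txt not being found on the embedded server's config path):

```xml
<analyzer type="index">
  <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
          ignoreCase="true" expand="true"/>
  <filter class="solr.LowerCaseFilterFactory"/>
</analyzer>
```]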
> Hi all,
> I am indexing documents to Solr that are on my system. Now I need
> to index files that are on a remote system. I enabled remote streaming
> in solrconfig.xml, and when I use stream.url it shows the error
> "connection refused", and the detail of the error
StreamingUpdateSolrServer logs "starting runner: ...", sends a POST
with ... and I guess also opens a new HTTP connection
every time it has managed to empty its queue. In
StreamingUpdateSolrServer.java it says this:
// info is ok since this should only happen once for each thread
log.info(
It would be nice if the documentation mentioned this. :)
/Tim
2010/3/18 Erik Hatcher :
> The StreamingUpdateSolrServer does not support binary format, unfortunately.
>
> Erik
>
> On Mar 18, 2010, at 8:15 AM, Tim Terlegård wrote:
>
>> I'm using Streami
I'm using StreamingUpdateSolrServer to index a document.
StreamingUpdateSolrServer server = new
StreamingUpdateSolrServer("http://localhost:8983/solr/core0", 20, 4);
server.setRequestWriter(new BinaryRequestWriter());
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "12121212")
2010/2/25 Bradford Stephens :
> Thanks for coming, everyone! We had around 25 people. A *huge*
> success, for Seattle. And a big thanks to 10gen for sending Richard.
>
> Can't wait to see you all next month.
Did anyone record the event?
/Tim
2010/2/15 Toke Eskildsen :
> From: Tim Terlegård [tim.terleg...@gmail.com]
>> If the index size is more than you can have in RAM, do you recommend
>> to split the index to several servers so it can all be in RAM?
>>
>> I do expect phrase queries. Total index size is 107
Hi Tom,
1600 warming queries, that's quite a lot. Do you run them every time a
document is added to the index? Do you have any tips on warming?
If the index size is more than you can have in RAM, do you recommend
to split the index to several servers so it can all be in RAM?
I do expect phrase qu
2010/2/12 Shalin Shekhar Mangar :
> 2010/2/12 Tim Terlegård
>
>> Does Solr use some sort of a persistent cache?
>>
> Solr does not have a persistent cache. That is the operating system's file
> cache at work.
Aha, that's very interesting and seems to make sense.
Does Solr use some sort of a persistent cache?
I do this 10 times in a loop:
* start solr
* create a core
* execute warmup query
* execute query with sort fields
* stop solr
Executing the query with sort fields takes 5-20 times longer the first
iteration than the other 9 iterations. For
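[Editorial note: the slow first iteration is consistent with cold caches: the first sorted query has to populate the Lucene FieldCache for the sort field, and the index files may not yet be in the OS page cache. A static warming query can be registered in solrconfig.xml so this cost is paid before user queries arrive; a sketch (the sort field name is illustrative):

```xml
<listener event="firstSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst>
      <str name="q">*:*</str>
      <str name="sort">price asc</str>
    </lst>
  </arr>
</listener>
```]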
2010/2/10 Jan Simon Winkelmann :
> I am (still) trying to get JMX to work. I have finally managed to get a Jetty
> installation running with the right parameters to enable JMX. Now the next
> problem appeared. I need to get Solr to register its MBeans with the Jetty
> MBeanServer. Using service
> On Mon, Feb 8, 2010 at 8:44 AM, Simon Rosenthal
> wrote:
>> What Garbage Collection parameters is the JVM using ? the memory will not
>> always be freed immediately after an event like unloading a core or starting
>> a new searcher.
>>
>> 2010/2/8 Tim Terlegård
I don't use any garbage collection parameters.
/Tim
2010/2/8 Simon Rosenthal :
> What Garbage Collection parameters is the JVM using ? the memory will not
> always be freed immediately after an event like unloading a core or starting
> a new searcher.
>
> 2010/2/8 Tim Terl
To me it doesn't look like unloading a Solr Core frees the memory that
the core has used. Is this how it should be?
I have a big index with 50 million documents. After loading a core it
takes 300 MB RAM. After a query with a couple of sort fields Solr
takes about 8 GB RAM. Then I unload (CoreAdmin
I have 6 fields. The text field is the biggest, it contains almost all
of the 5000 chars.
/Tim
2010/1/27 Noble Paul നോബിള് नोब्ळ् :
> how many fields are there in each doc? the binary format just reduces
> overhead. it does not touch/compress the payload
>
> 2010/1/27 Tim Terlegård
2010/1/26 Erick Erickson :
> > My indexing script has been running all
> > night and has accomplished nothing. I see lots of disk activity
> > though, which is weird.
>
>
> One explanation would be that you're memory-starved and
> the disk activity you see is thrashing. How much memory
> do you all
2010/1/26 Jake Brownell :
> I swapped our indexing process over to the streaming update server, but now
> I'm seeing places where our indexing code adds several documents, but
> eventually hangs. It hangs just before the completion message, which comes
> directly after sending to solr. I found
Yes, it worked! Thank you very much. But do I need to use curl or can
I use CommonsHttpSolrServer or StreamingUpdateSolrServer? If I can't
use BinaryWriter then I don't know how to do this.
/Tim
2010/1/20 Noble Paul നോബിള് नोब्ळ् :
> 2010/1/20 Tim Terlegård :
>>>>>
2010/1/19 Noble Paul നോബിള് नोब्ळ् :
> 2010/1/19 Tim Terlegård :
>> server = new CommonsHttpSolrServer("http://localhost:8983/solr")
>> server.setRequestWriter(new BinaryRequestWriter())
>> request = new UpdateRequest()
>> request.setAction(Up
There are a few ways to use solrj. I just learned that I can use the
javabin format to get some performance gain. But when I try the binary
format nothing is added to the index. This is how I try to use this:
server = new CommonsHttpSolrServer("http://localhost:8983/solr")
server.setReque