Hi,
About a year ago, I took the then-existing SOLR 1.2 patches for Field
Collapsing, adapted them to my use, and have been using them
successfully in production for most of this year.
I'm now looking to upgrade my installation to SOLR 1.3. That leads to
several questions.
- As far as I
:
: Not sure how that would work (unless you didn't want responses), but
: I've thought about it from the SolrJ side - something you could
: quickly add documents to and it would manage a number of threads under
: the covers to maximize throughput. Not sure what would be the best
: for error handling.
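A rough sketch of what such a wrapper could look like on the SolrJ side,
assuming SolrJ's CommonsHttpSolrServer plus a plain ExecutorService (the
URL, pool size and field names below are illustrative, not from this
thread):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class ThreadedIndexer {
      public static void main(String[] args) throws Exception {
        // Illustrative URL and pool size.
        final SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
          final int id = i;
          pool.submit(new Runnable() {
            public void run() {
              try {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Integer.toString(id));
                doc.addField("title", "document " + id);
                server.add(doc); // CommonsHttpSolrServer can be shared across threads
              } catch (Exception e) {
                e.printStackTrace(); // error handling is the open question above
              }
            }
          });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        server.commit();
      }
    }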
: Query (can be quite complex, as it gets built from an advanced search form):
: term1^2.0 OR term2 OR "term3 term4"
...
: Any matches in the title or url fields should be weighed more. I can specify
if i'm understanding you correctly: the client app can provide any
arbitrary lucene syntax.
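For example, assuming the searched fields are title, url and body (the
field names here are just placeholders), the client's query could be
expanded into something like:

    (title:(term1^2.0 OR term2 OR "term3 term4"))^4
    OR (url:(term1^2.0 OR term2 OR "term3 term4"))^2
    OR (body:(term1^2.0 OR term2 OR "term3 term4"))

so that a match in title or url contributes more to the score than the
same match in the body field.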
Hi Grant,
Yeah, I noticed the commit yesterday. Great!!! Now I don't need to check
for updates to the patch anymore.
Now that it has been integrated, I suppose it will be a good time to
develop an API for sending Documents to Solr. Something similar to
sending a SolrInputDocument with doc.add(field
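For reference, the SolrInputDocument side of that already exists in SolrJ
and looks roughly like this (the server URL and field names are
placeholders):

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class AddDocumentExample {
      public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "1");              // unique key field
        doc.addField("title", "hello world"); // any other schema field
        server.add(doc);
        server.commit();
      }
    }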
For the nested entities there is a feature called
CachedSqlEntityProcessor to make things faster (if you have enough
RAM)
On Tue, Dec 9, 2008 at 10:36 AM, Noble Paul നോബിള് नोब्ळ्
<[EMAIL PROTECTED]> wrote:
> On Tue, Dec 9, 2008 at 10:26 AM, Chris Hostetter
> <[EMAIL PROTECTED]> wrote:
>>
>> : Fi
: If the way I am doing it (Query 1) is a fluke, what is the correct way of
: doing it? Seems like there is something fundamental that I am missing.
as i said: URL-encode the actual characters, not the Java escape sequence.
how exactly you URL-escape non-ASCII characters is somewhat tricky, and
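A plain-Java illustration of the difference, using the JDK's URLEncoder
(ö is just an example character):

    import java.net.URLEncoder;

    public class EncodeExample {
      public static void main(String[] args) throws Exception {
        // Encode the actual character's UTF-8 bytes, not the literal text "\u00f6".
        System.out.println(URLEncoder.encode("\u00f6", "UTF-8")); // prints %C3%B6
      }
    }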
On Tue, Dec 9, 2008 at 10:26 AM, Chris Hostetter
<[EMAIL PROTECTED]> wrote:
>
> : First, LuSql by default uses Lucene's StandardAnalyzer[4]. The Javadocs
> : indicate it uses StandardTokenizer[5], StandardFilter[6],
> : LowerCaseFilter[7], and StopFilter[8]. I have created a fieldType in
> : my Solr
: First, LuSql by default uses Lucene's StandardAnalyzer[4]. The Javadocs
: indicate it uses StandardTokenizer[5], StandardFilter[6],
: LowerCaseFilter[7], and StopFilter[8]. I have created a fieldType in
: my Solr configuration's schema.xml that I hope is the equivalent to
: this:
if you want to b
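For reference, the chain that StandardAnalyzer builds in Lucene 2.x looks
roughly like this (a sketch based on the javadocs; an equivalent schema.xml
fieldType needs the matching tokenizer and filter factories):

    import java.io.Reader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.LowerCaseFilter;
    import org.apache.lucene.analysis.StopAnalyzer;
    import org.apache.lucene.analysis.StopFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardFilter;
    import org.apache.lucene.analysis.standard.StandardTokenizer;

    public class StandardLikeAnalyzer extends Analyzer {
      // Same chain as StandardAnalyzer: tokenize, normalize acronyms/possessives,
      // lowercase, then drop the default English stop words.
      public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new StandardTokenizer(reader);
        result = new StandardFilter(result);
        result = new LowerCaseFilter(result);
        result = new StopFilter(result, StopAnalyzer.ENGLISH_STOP_WORDS);
        return result;
      }
    }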
Another thought I just had - do you have autocommit enabled?
A Lucene commit is now more expensive because it syncs the files for
safety. If you commit frequently, this could definitely cause a
slowdown.
-Yonik
On Wed, Nov 26, 2008 at 10:54 AM, Fergus McMenemie <[EMAIL PROTECTED]> wrote:
> Hell
Hello Joel,
Using MappingCharFilter with mapping-ISOLatin1Accent.txt on your sort
field can solve your problem:
mapping="mapping-ISOLatin1Accent.txt" />
CharFilter is in trunk (Solr 1.4); if you use Solr 1.3, though, you can
download a patch for it:
ht
> Also, is there any way to get Solr to sort, e.g., á, à or â together with
> the "regular" a's?
The ISOLatin1 filter "downconverts" these variants to the plain ASCII letter a.
It does this in the index, not in the stored data. This solves the
Bjork/Björk problem: you can type either and find records
Hi Joel,
On 12/08/2008 at 5:37 PM, Joel Karlsson wrote:
> Is there any way to get Solr to sort properly on a text field containing
> international, in my case Swedish, letters? It doesn't sort å, ä and ö
> in the proper order.
I wrote a Lucene patch that stores CollationKeys generated by a user-s
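A minimal illustration of what a locale-aware CollationKey buys you, using
the JDK's Collator with a Swedish locale (the words are made up):

    import java.text.CollationKey;
    import java.text.Collator;
    import java.util.Locale;

    public class SwedishSortExample {
      public static void main(String[] args) {
        // In Swedish, å, ä and ö sort after z; a byte-wise sort puts them elsewhere.
        Collator collator = Collator.getInstance(new Locale("sv", "SE"));
        CollationKey a = collator.getCollationKey("ärlig");
        CollationKey b = collator.getCollationKey("zebra");
        System.out.println(a.compareTo(b) > 0); // true: "ärlig" sorts after "zebra"
      }
    }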
One option is to add an additional field for sorting. Create a copy of the
field you want to sort on and modify the data you insert there so that it will
sort the way you want it to.
-ToddFeak
-Original Message-
From: Joel Karlsson [mailto:[EMAIL PROTECTED]
Sent: Monday, December 08, 2
Hello,
Is there any way to get Solr to sort properly on a text field containing
international, in my case Swedish, letters? It doesn't sort å, ä and ö in the
proper order. Also, is there any way to get Solr to sort, e.g., á, à or â
together with the "regular" a's?
Thanks in advance! // Joel
I have a cluster of Solr Master/Slaves. We write to the master and replicate
to the slaves via rsync.
Master:
1. Replication is every 5 minutes.
2. Inserting many hundreds of docs per minute
3. Index is: 23 million documents
4. Commits are every 30 seconds
Slave:
1. Pre-warmed after rsync snaps
Any inputs on this would be really helpful. Looking for
suggestions/viewpoints from you guys.
One area where you might have issues is with date range queries. If
you have many docs, then you can run into OOM errors. There was a
recent thread about this, where Yonik (and others) had some good
Hi,
Any inputs on this would be really helpful. Looking for suggestions/viewpoints
from you guys.
Regards,
Sourav
-Original Message-
From: souravm
Sent: Saturday, December 06, 2008 9:41 PM
To: solr-user@lucene.apache.org
Subject: Limitations of Distributed Search
Hi,
We are plan
We are not using JSPs anywhere. Everything else works because it doesn't need
to be compiled at runtime the way JSPs do.
Tomcat 5.5.9 uses the JDT compiler for compiling JSPs, which doesn't understand
Java 1.5.
ryantxu wrote:
>
> I think your best option is to edit the jsp and remove that syntax...
>
> So you are
On Dec 8, 2008, at 2:26 AM, Jana, Kumar Raja wrote:
Hi Grant,
Thanks for the help. It has solved my problem.
Cool. In case you didn't see, Solr Cell is now committed.
Is there any example Solrj code to send a document to Solr Cell using
the right ContentHandlers? I've tried to understand
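For what it's worth, one way this looks in SolrJ is roughly the following;
ContentStreamUpdateRequest and the literal.* parameter name come from later
releases, and the URL, file name and id are placeholders, so treat this as a
sketch rather than the canonical example:

    import java.io.File;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

    public class SolrCellExample {
      public static void main(String[] args) throws Exception {
        SolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
        // Post a raw file to the extracting handler; Tika picks the ContentHandler
        // for the file type, so none has to be chosen on the client side.
        ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
        req.addFile(new File("document.pdf"));
        req.setParam("literal.id", "doc1"); // parameter names vary between Solr Cell versions
        server.request(req);
        server.commit();
      }
    }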
I think your best option is to edit the jsp and remove that syntax...
So you are not running 1.5? How does anything else work?!
On Dec 8, 2008, at 10:12 AM, Sorbo wrote:
Yes, everything works except the JSPs. The errors are with the Java 1.5
syntax, viz. in index.jsp; it doesn't like
In case anyone has the same issue, it looks like switching Solr over
from Jetty to Tomcat fixed the problem.
I am using Tomcat v. 6.0.18.
Regards,
Anoop Bhatti
--
Committed to open source technology.
On Mon, Dec 1, 2008 at 10:51 AM, Anoop Bhatti <[EMAIL PROTECTED]> wrote:
> I increased this pa
Yes, everything works except the JSPs. The errors are with the Java 1.5 syntax
in index.jsp; it doesn't like this line:
for( org.apache.solr.core.SolrCore core : cores.getCores() ) {%>
This is the stack trace
org.apache.jasper.JasperException: Unable to compile class for JSP
An
Right now, you'd have to write an implementation of a
SolrSpellChecker. Seems like a reasonable thing to have, though. We
could have a "Chained" Spell Checker that combined the others, I think.
Another option that might work would be to define two separate search
components, 1 for the fi