Hello all,
I have Solr 1.2 installed and I was wondering how Solr 1.2 deals with checking
misspelled strings, and also how to configure it?
appreciate any docs on this topic ..
thanks a lot
ak
Take a look at http://wiki.apache.org/solr/SpellCheckerRequestHandler
If you can use a nightly build of Solr 1.3 then you can use the new and
better http://wiki.apache.org/solr/SpellCheckComponent
On Wed, Jun 25, 2008 at 2:36 PM, dudes dudes <[EMAIL PROTECTED]> wrote:
>
> Hello all,
>
> I have S
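For anyone reading this in the archive: a minimal SpellCheckComponent setup along the lines of that wiki page might look like the following solrconfig.xml sketch (the source field name "spell" and the index directory are assumptions, not taken from this thread):

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <!-- assumed field the spelling dictionary is built from -->
    <str name="field">spell</str>
    <str name="spellcheckIndexDir">./spellchecker</str>
  </lst>
</searchComponent>

<requestHandler name="/spellcheck" class="solr.SearchHandler">
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

A request such as /spellcheck?q=misspeled&spellcheck=true (adding spellcheck.build=true once to build the dictionary) should then return suggestions.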
I think it's a bit different. I ran into this exact problem about two
weeks ago on a 13 million record DB. MySQL doesn't honor the fetch
size for its v5 JDBC driver.
See http://www.databasesandlife.com/reading-row-by-row-into-java-from-mysql/
or do a search for MySQL fetch size.
You ac
I'm assuming, of course, that the DIH doesn't automatically modify the
SQL statement according to the batch size.
-Grant
On Jun 25, 2008, at 7:05 AM, Grant Ingersoll wrote:
I think it's a bit different. I ran into this exact problem about
two weeks ago on a 13 million record DB. MySQL doe
thanks for your kind reply
> Date: Wed, 25 Jun 2008 14:48:38 +0530
> From: [EMAIL PROTECTED]
> To: solr-user@lucene.apache.org
> Subject: Re: Solr spell-checker
>
> Take a look at http://wiki.apache.org/solr/SpellCheckerRequestHandler
>
> If you can use
The OP is actually using Sql Server (not MySql) as per his mail.
On Wed, Jun 25, 2008 at 4:40 PM, Grant Ingersoll <[EMAIL PROTECTED]>
wrote:
> I'm assuming, of course, that the DIH doesn't automatically modify the SQL
> statement according to the batch size.
>
> -Grant
>
>
> On Jun 25, 2008, at 7
DIH does not modify SQL. This value is used as a connection property
--Noble
On Wed, Jun 25, 2008 at 4:40 PM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> I'm assuming, of course, that the DIH doesn't automatically modify the SQL
> statement according to the batch size.
>
> -Grant
>
> On Jun 25, 2
The latest patch sets fetchSize as Integer.MIN_VALUE if -1 is passed.
It is added specifically for the MySQL driver.
--Noble
On Wed, Jun 25, 2008 at 4:35 PM, Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> I think it's a bit different. I ran into this exact problem about two weeks
> ago on a 13 million r
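The behaviour described above can be sketched in plain JDBC terms. This is an illustrative mock-up, not the actual DataImportHandler code; the class and method names are invented:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/** Sketch of the fetch-size handling discussed in this thread; names are hypothetical. */
public class FetchSizeSketch {

    /**
     * Map a DIH-style batchSize onto a JDBC fetch size: -1 becomes
     * Integer.MIN_VALUE, which MySQL Connector/J interprets as
     * "stream rows one at a time" instead of buffering the whole resultset.
     */
    public static int resolveFetchSize(int batchSize) {
        return batchSize == -1 ? Integer.MIN_VALUE : batchSize;
    }

    /**
     * Create a statement configured for row-by-row streaming on MySQL.
     * Connector/J only streams when the statement is forward-only and
     * read-only and the fetch size is Integer.MIN_VALUE.
     */
    public static Statement streamingStatement(Connection conn, int batchSize)
            throws SQLException {
        Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                              ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(resolveFetchSize(batchSize));
        return stmt;
    }

    public static void main(String[] args) {
        System.out.println(resolveFetchSize(-1) == Integer.MIN_VALUE); // true
        System.out.println(resolveFetchSize(500));                     // 500
    }
}
```

Other drivers generally treat setFetchSize() as a plain hint, which is why this special case only matters for MySQL.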
On Tue, 24 Jun 2008 19:17:58 -0700
Ryan McKinley <[EMAIL PROTECTED]> wrote:
> also, check the LukeRequestHandler
>
> if there is a document you think *should* match, you can see what
> tokens it has actually indexed...
>
hi Ryan,
I can't see the tokens generated using LukeRequestHandler.
I c
Hi,
where can I find these sources? I have the binary jars included with the
nightly builds, but I'd like to look at the code of some of the objects.
In particular,
http://svn.apache.org/viewvc/lucene/java/
doesn't have any reference to 2.4, and
http://svn.apache.org/viewvc/lucene/java/trunk/src/
trunk is the latest version (which is currently 2.4-dev).
http://svn.apache.org/viewvc/lucene/java/trunk/
There is a contrib directory with things not in lucene-core:
http://svn.apache.org/viewvc/lucene/java/trunk/contrib/
-Yonik
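For completeness, the browsable viewvc URLs above correspond to a plain SVN repository URL, so a checkout of trunk would look something like this (the target directory name is arbitrary):

```
svn checkout http://svn.apache.org/repos/asf/lucene/java/trunk lucene-trunk
```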
I'm trying with batchSize=-1 now. So far it seems to be working, but very
slowly. I will update when it completes or crashes.
Even with a batchSize of 100 I was running out of memory.
I'm running on a 32-bit Windows machine. I've set the -Xmx to 1.5 GB - I
believe that's the maximum for my envir
Hi,
I don't think the problem is within DataImportHandler, since it just streams
the resultset. The fetchSize is just passed as a parameter to
Statement#setFetchSize(), and the JDBC driver is supposed to honor it and
keep only that many rows in memory.
From what I could find about the Sql Server
: With the understanding that queries for newly indexed fields in this document
: will not return this newly added document, but a query for the document by its
: id will return any new stored fields. When the "real" commit (read: the commit
: that takes 10 minutes to complete) returns the newly i
: I'm curious, is there a spot / patch for the latest on Nutch / Solr
: integration. I've found a few pages (a few outdated, it seems); it would be nice
: (?) if it worked as a DataSource type to DataImportHandler, but not sure if
: that fits w/ how it works. Either way a nice contrib patch the way
Hi,
I've been trying to use the NGramTokenizer and I ran into a problem.
It seems like solr is trying to match documents with all the tokens that the
analyzer returns from the query term. So if I index a document with a title
field with the value "nice dog" and search for "dog" (where the
NGramtoke
On 24-Jun-08, at 4:26 PM, Chris Harris wrote:
I have an index that I eventually want to rebuild so I can set
compressed=true on a couple of fields. It's not really practical to
rebuild
the whole thing right now, though. If I change my schema.xml to set
compressed=true and then keep adding ne
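As a reference point, enabling compression on a stored field is just an attribute in schema.xml; a sketch (the field name "body" is made up):

```xml
<field name="body" type="text" indexed="true" stored="true" compressed="true"/>
```

Note that the attribute only affects documents written after the change, so already-indexed documents stay uncompressed until they are re-added.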
It looks like that was the problem. With responseBuffering=adaptive, I'm able
to load all my data using the sqljdbc driver.
--
View this message in context:
http://www.nabble.com/DataImportHandler-running-out-of-memory-tp18102644p18119732.html
Sent from the Solr - User mailing list archive at Nabble.com.
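For reference, the property goes into the JDBC URL of the DIH dataSource; a data-config.xml sketch (host, database name, and credentials are placeholders):

```xml
<dataSource type="JdbcDataSource"
            driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
            url="jdbc:sqlserver://localhost;databaseName=mydb;responseBuffering=adaptive"
            user="solr" password="secret"/>
```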
Hi,
I have the same issue as described in:
http://www.nabble.com/solr-sorting-question-td17498596.html. I am trying
to have some categories before others in search results for different
search terms. For example, for the search term "ABC", I want to show
Category "CCC" first, then Category "BBB",
Note, also, that the Manifest file in the JAR has information about
the exact SVN revision so that you can check it out from there.
On Jun 25, 2008, at 12:37 PM, Yonik Seeley wrote:
trunk is the latest version (which is currently 2.4-dev).
http://svn.apache.org/viewvc/lucene/java/trunk/
The
On Wed, 25 Jun 2008 20:22:06 -0400
Grant Ingersoll <[EMAIL PROTECTED]> wrote:
> Note, also, that the Manifest file in the JAR has information about
> the exact SVN revision so that you can check it out from there.
>
>
> On Jun 25, 2008, at 12:37 PM, Yonik Seeley wrote:
>
> > trunk is the late
On Wed, 25 Jun 2008 15:37:09 -0300
"Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
> I've been trying to use the NGramTokenizer and I ran into a problem.
> It seems like solr is trying to match documents with all the tokens that the
> analyzer returns from the query term. So if I index a document with
Hi everyone,
I'm having trouble deleting documents from my solr 1.3 index. To delete a
document, I post something like "<delete><id>12345</id></delete>" to the
solr server, then issue a commit. However, I can still find the document in
the index via the query "id:12345". The document remains visible even after
I restart
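(For readers of the archive: the delete command referred to here is an XML update message posted to the /update handler, assuming "id" is the schema's uniqueKey field, followed by a commit:)

```xml
<delete><id>12345</id></delete>
<commit/>
```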
On Wed, Jun 25, 2008 at 8:44 PM, Galen Pahlke <[EMAIL PROTECTED]> wrote:
> I'm having trouble deleting documents from my solr 1.3 index. To delete a
> document, I post something like "<delete><id>12345</id></delete>" to the
> solr server, then issue a commit. However, I can still find the document in
> the index via the q
It's not exactly what you want, but putting specific documents first
for certain queries has been done via
http://wiki.apache.org/solr/QueryElevationComponent
-Yonik
On Wed, Jun 25, 2008 at 6:58 PM, Yugang Hu <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have the same issue as described in:
> http://www
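A sketch of the elevate.xml file that QueryElevationComponent reads, with made-up document ids standing in for the categories mentioned above:

```xml
<elevate>
  <query text="ABC">
    <doc id="ccc-doc-1"/>
    <doc id="bbb-doc-1"/>
  </query>
</elevate>
```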
I originally tested with an index generated by solr 1.2, but when that
didn't work, I rebuilt the index from scratch.
From my schema.xml:
.
.
<uniqueKey>id</uniqueKey>
-Galen Pahlke
On Wed, Jun 25, 2008 at 7:00 PM, Yonik Seeley <[EMAIL PROTECTED]> wrote:
> On Wed, Jun 25, 2008 at 8:44 PM, Galen
On Wed, Jun 25, 2008 at 9:34 PM, Galen Pahlke <[EMAIL PROTECTED]> wrote:
> I originally tested with an index generated by solr 1.2, but when that
> didn't work, I rebuilt the index from scratch.
> From my schema.xml:
>
>
> .
>required="true"/>
> .
>
>
> id
I tried this as well...
Well, it is working if I search just two letters, but that just tells me
that something is wrong somewhere.
The Analysis tool is showing me how "dog" is being tokenized to "do og", so
if I'm using the same tokenizer/filters when indexing and querying (which is
my case) I should get results even wh
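One common workaround for this situation (an assumption about the setup here, not something confirmed in the thread) is to n-gram only at index time and leave the query term whole, so a query for "dog" matches the indexed grams directly; the gram sizes below are illustrative:

```xml
<fieldType name="text_ngram" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="2" maxGramSize="3"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```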
On Thu, 26 Jun 2008 10:44:32 +1000
Norberto Meijome <[EMAIL PROTECTED]> wrote:
> On Wed, 25 Jun 2008 15:37:09 -0300
> "Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
>
> > I've been trying to use the NGramTokenizer and I ran into a problem.
> > It seems like solr is trying to match documents with all
We must document this information in the wiki. We never had a chance
to play with MS SQL Server.
--Noble
On Thu, Jun 26, 2008 at 12:38 AM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
> It looks like that was the problem. With responseBuffering=adaptive, I'm able
> to load all my data using the sqljdbc dr
Ok. Played a bit more with that.
So I had a difference between my unit test and solr. In solr I'm actually
using a solr.RemoveDuplicatesTokenFilterFactory when querying. Tried to add
that to the test, and it fails.
So in my case I think the error is trying to use a
solr.RemoveDuplicatesTokenFilterF
On Thu, 26 Jun 2008 01:15:34 -0300
"Jonathan Ariel" <[EMAIL PROTECTED]> wrote:
> Ok. Played a bit more with that.
> So I had a difference between my unit test and solr. In solr I'm actually
> using a solr.RemoveDuplicatesTokenFilterFactory when querying. Tried to add
> that to the test, and it fai