Hi,
when I do a full import I get the following error:
"Caused by: java.sql.SQLException: Cannot convert value '0000-00-00
00:00:00' from column 10 to TIMESTAMP.
at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1055)
at com.mysql.jdbc.SQLError.createSQLException(SQLEr
you may need to change the MySQL connection parameters so that it does
not throw an error for null dates:
"jdbc:mysql://localhost/test?zeroDateTimeBehavior=convertToNull"
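For reference, a hedged sketch of where that URL would go in a DIH data-config.xml (the driver class and credentials here are placeholders):

```xml
<dataConfig>
  <!-- zeroDateTimeBehavior=convertToNull makes the MySQL driver return NULL
       for '0000-00-00 00:00:00' instead of throwing an SQLException -->
  <dataSource driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/test?zeroDateTimeBehavior=convertToNull"
              user="user" password="password"/>
  <!-- document/entity definitions go here -->
</dataConfig>
```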
On Thu, May 7, 2009 at 1:39 PM, gateway0 wrote:
>
> Hi,
>
> when I do a full import I get the following error :
>
> "Caused by: ja
Hi!
I agree that Solr is difficult to extend in many cases. We just patch Solr,
and I guess many other users patch it too. What I propose is to create some
Solr-community site (Solr incubator?) to publish patches there, and the Solr core
team could then look there and choose patches to apply to the Sol
Awesome, thanks!!! I first thought it could be "blob-field" related.
Have a nice day
Sebastian
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> you may need to change the mysql connection parameters so that it does
> not throw error for null date
>
> "jdbc:mysql://localhost/test?zeroDateTimeBehav
Hi
I have tried to run the following code
package org.apache.solr.spelling;

import org.apache.lucene.analysis.fr.FrenchAnalyzer;

public class Test {
    public static void main(String[] args) {
        SpellingQueryConverter sqc = new SpellingQueryConverter();
        sqc.analyzer = new FrenchAnalyzer(); // presumably a FrenchAnalyzer, given the import above
We have not pushed the fix into production yet. However, I am wondering two
things:
1. If the download takes more than 10 seconds (our replication can take up to 90 seconds), will that be an issue?
2. There are 3 patches; 2 have 2-line changes and 1 has a large amount. Do we need the latest 2 or just th
> On May 6, 2009, at 3:25 PM, Jeff Newburn wrote:
>
>> We are trying to implement a SearchComponent plugin. I have been
>> looking at
>> QueryElevateComponent trying to weed through what needs to be done.
>> My
>> basic desire is to get the results back and manipulate them either by
>> altering th
This isn't advice on how to upgrade, but if you/your-project have a
bit of time to wait, 1.4 sounds like it's getting close to an official
release, fyi.
cheers,
rob
On Tue, May 5, 2009 at 1:05 PM, Francis Yakin wrote:
>
> What's the best way to upgrade solr from 1.2.0 to 1.3.0 ?
>
> We have t
Thanks a lot for the information. But I am still a bit confused about the use
of TermsComponents, like where exactly we are going to put this code in
Solr. For example, I changed schema.xml to add the autocomplete feature. I read
your blog too, it's very helpful. But I'm still a little confused. :-((
Can yo
Great... thanks for the response!
2009/5/7 Noble Paul നോബിള് नोब्ळ्
> it is wise to optimize the index once in a while (daily may be). But
> it depends on how many commits you do in a day. Every commit causes
> fragmentation of index files and your search can become slow if you do
> not optimiz
Do you know if it's possible to write Solr results directly to a hard disk
on the server side, and not use an HTTP connection to transfer the results?
While the query time is very fast for Solr, I want to do that because of the
time taken during the transfer of the results between the client and
The patches have gone into the trunk. The latest patch should be the
one if you wish to run a patched Solr.
A 10-sec readTimeout means that if there is no data coming from the
other end for 10 secs, then the waiting thread returns, throwing an
exception. It is not the total time taken to read the en
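To make that semantics concrete, a small standalone sketch (the URL is hypothetical, and openConnection() does not touch the network, so nothing is actually fetched):

```java
import java.net.URL;
import java.net.URLConnection;

public class ReadTimeoutDemo {
    // Returns the read timeout (ms) configured on a connection to the
    // given URL. openConnection() only builds the object; no bytes are
    // transferred, so this runs offline.
    public static int configuredTimeout(String masterUrl) {
        try {
            URLConnection conn = new URL(masterUrl).openConnection();
            // 10s of *silence* on the socket, not a cap on total transfer time
            conn.setReadTimeout(10000);
            return conn.getReadTimeout();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(configuredTimeout("http://localhost:8983/solr/replication")); // prints 10000
    }
}
```

So a 90-second transfer is fine as long as the server keeps sending bytes; only a 10-second gap with no data at all trips the timeout.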
did you consider using an EmbeddedSolrServer?
On Thu, May 7, 2009 at 8:25 PM, arno13 wrote:
>
> Do you know if it's possible to writing solr results directly on a hard disk
> from server side and not to use an HTTP connection to transfer the results?
>
> While the query time is very fast for solr
Excellent! Thank you I am going to start testing that.
--
Jeff Newburn
Software Engineer, Zappos.com
jnewb...@zappos.com - 702-943-7562
> From: Noble Paul നോബിള് नोब्ळ्
> Reply-To:
> Date: Thu, 7 May 2009 20:26:02 +0530
> To:
> Subject: Re: aka Replication Stall
>
> the patches have gone
Hi, and sorry for slightly hijacking the thread,
On Mar 26, 2009, at 2:54 , Otis Gospodnetic wrote:
Hi,
Without knowing the details, I'd say keep it in the same index if
the additional information shares some/enough fields with the main
product data and separately if it's sufficiently dis
We do optimize once a day at 1am.
Ching-hsien Wang, Manager
Library and Archives System Support Branch
Office of Chief Information Officer
Smithsonian Institution
202-633-5581(office) 202-312-2874(fax)
wan...@si.edu
Visit us online: www.siris.si.edu
-Original Message-
From: Eric Sabouri
The string fieldtype is not tokenized, while the text fieldtype is.
So the stop word "for" is being removed by a stop word filter on the text
fieldtype, which doesn't happen with the string field type (no tokenizing).
Have a look at the schema.xml in the example dir and look at the default
configuration f
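To illustrate, a minimal sketch of the two field types as they appear in the example schema.xml (filter details vary by version; this is not a verbatim copy):

```xml
<!-- "string": the whole value is one token; no analyzers ever run -->
<fieldType name="string" class="solr.StrField" sortMissingLast="true"/>

<!-- "text": tokenized, so stop words like "for" are stripped at
     index and query time -->
<fieldType name="text" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```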
It seems to me that this is just the expected behavior of the FrenchAnalyzer
using the FrenchStemmer. I'm not familiar with the French language, but in
English words like running, runner, and runs are all stemmed down to "run"
as intended. I don't know what other words in French would stem down to
Hi KK,
On 5/7/2009 at 2:55 AM, KK wrote:
> In some of the pages I'm getting some \ufffd chars which I think is
> some sort of unmappable[by Java?] character, right?. Any idea on how
> to handle this? Just replacing with blank char will not do [this
> depends on the requirement, though].
From
I have an index of product names. I'd like to sort results so that entries
starting with the user query come first.
E.g.
q=kitchen
Results would sort something like:
1. kitchen appliance
2. kitchenaid dishwasher
3. fridge for kitchen
It looks like using a query Function Query comes close, but
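One approach (a sketch, not something from this thread; the field and type names are invented): copy the product name into an untokenized, lowercased field and boost prefix matches on it in schema.xml:

```xml
<fieldType name="prefix_t" class="solr.TextField">
  <analyzer>
    <!-- keep the whole name as a single lowercase token -->
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="name_prefix" type="prefix_t" indexed="true" stored="false"/>
<copyField source="name" dest="name_prefix"/>
```

A query like q=name:kitchen OR name_prefix:kitchen*^10 should then rank "kitchen appliance" and "kitchenaid dishwasher" (whose stored names start with "kitchen") above "fridge for kitchen".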
I have configured Solr using Tomcat. Everything works fine. I overrode
QParserPlugin and configured it. The overridden QParserPlugin has a dependency
on another project, say project1. So I made a jar of that project and copied
the jar to the solr/home lib dir.
The project1 project is using Spring. It has
This is resolved. I solved it by reading the SolrPlugins page on the Solr wiki.
Thanks,
Raju
Raju444us wrote:
>
> Hi Hoss,
>
> If i extend SolrQueryParser and override method getFieldQuery for some
> customization.Can I configure my new queryParser somthing like below
>
>
>
>
>
The manual merge suggested by Otis would happen inside of Solr. We use the
last-components hook to do the sub-query and then merge the results. Since it's a new
sub-query, the relevancy and sorting should be independent of the main query.
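The wiring for that lives in solrconfig.xml; a sketch with a hypothetical component name and class:

```xml
<searchComponent name="subQueryComponent" class="com.example.SubQueryComponent"/>

<requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
  <arr name="last-components">
    <str>subQueryComponent</str>
  </arr>
</requestHandler>
```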
Thanks,
Kalyan Manepalli
-Original Message-
From: Nicolas Past
This is probably because Solr loads its extensions from a custom class
loader, but if that class then needs to access things from the
classpath, it is only going to see the built-in WEB-INF/lib classes,
not solr/home lib JAR files. Maybe there is a Spring way to point it
at that lib direct
Thanks Otis.
I did set the maxMergeDocs to 10M, but I still see couple of index
files over 30G which do not match with max number of documents. Here
are some numbers,
1) My total index size = 66GB
2) Number of total documents = 200M
3) 1M doc = 300MB
4) 10M doc should be roughly around 3-4GB.
Un
On the page http://wiki.apache.org/solr/SolrReplication, it says the
following:
"Force a snapshot on master. This is useful to take periodic
backups. command: http://master_host:port/solr/replication?
command=snapshoot"
This then puts the snapshot under the data directory. Perfectly
reas
Hi
It does not seem to be related to FrenchStemmer; the stemmer does not split
a word into 2 words. I have checked with other words, and
SpellingQueryConverter always splits words with special characters.
I think that the issue is in the SpellingQueryConverter class:
Pattern.compile("(?:(?!(\\w+:|\\d+)))
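The splitting is easy to reproduce with plain java.util.regex (a standalone sketch, not the actual Solr code): by default Java's \w matches only [a-zA-Z_0-9], so any \w-based token pattern breaks accented French words apart at the accents:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TokenSplitDemo {
    // Collect every match of the pattern in the input string
    static List<String> tokens(Pattern p, String s) {
        List<String> out = new ArrayList<String>();
        Matcher m = p.matcher(s);
        while (m.find()) {
            out.add(m.group());
        }
        return out;
    }

    public static void main(String[] args) {
        // Default \w = [a-zA-Z_0-9]: accented letters act as separators
        Pattern ascii = Pattern.compile("\\w+");
        // With UNICODE_CHARACTER_CLASS, \w also matches accented letters
        Pattern unicode = Pattern.compile("\\w+", Pattern.UNICODE_CHARACTER_CLASS);

        System.out.println(tokens(ascii, "déjà"));   // [d, j] -- word split in two
        System.out.println(tokens(unicode, "déjà")); // [déjà] -- kept whole
    }
}
```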
Question 1: I see in DirectUpdateHandler2 that there is a read/write lock
used between addDoc and commit.
My mental model of the process was this: clients can add/update documents
until the auto commit threshold was hit. At that point the commit tracker
would schedule a background commit. The
On Thu, May 7, 2009 at 5:03 PM, Jim Murphy wrote:
> Question 1: I see in DirectUpdateHandler2 that there is a read/Write lock
> used between addDoc and commit.
>
> My mental model of the process was this: clients can add/update documents
> until the auto commit threshold was hit. At that point th
Hi,
I'm importing data using the DIH. I manage all my data updates outside of
Solr, so I use the full-import command to update my index (with
clean=false). Everything works fine, except that I can't delete documents
easily using the DIH. I noticed the preImportDeleteQuery attribute, but
doesn't se
First, your solrconfig.xml should have something similar to the
following:

<searchComponent name="termsComp"
    class="org.apache.solr.handler.component.TermsComponent"/>

<requestHandler name="/autoSuggest"
    class="org.apache.solr.handler.component.SearchHandler">
  <arr name="components">
    <str>termsComp</str>
  </arr>
</requestHandler>

This will give you a request handler called "/autoSuggest" that
On May 7, 2009, at 4:52 PM, wojtekpia wrote:
Hi,
I'm importing data using the DIH. I manage all my data updates
outside of
Solr, so I use the full-import command to update my index (with
clean=false). Everything works fine, except that I can't delete
documents
easily using the DIH. I notice
For the Drupal Apache Solr Integration module, we are exploring the
possibility of doing facet browsing - since we are using dismax as
the default handler, this would mean issuing a query with an empty q
and falling back to to q.alt='*:*' or some other q.alt that matches
all docs.
However, I noti
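The fallback described above can be sketched in solrconfig.xml (the handler name is hypothetical):

```xml
<requestHandler name="/browse" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <!-- when q is empty, fall back to a match-all query so faceting
         still has a full result set to count against -->
    <str name="q.alt">*:*</str>
    <str name="facet">true</str>
  </lst>
</requestHandler>
```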
Interesting. So is there a JIRA ticket open for this already? Any chance of
getting it into 1.4? It's seriously kicking our butts right now. We write
to our masters with ~50ms response times till we hit the autocommit; then
add/update response time is 10-30 seconds. Ouch.
I'd be willing to wo
Hi Jay
Thank you for your response.
The data relating to the string (s_title) defines *exactly* what was
fed into the SOLR indexing. The string is not otherwise relevant to
the question.
The essence of my question is why can the indexed text (t_title) not
be phrase matched by the query on the t
Foreword: I'm not a java developer :)
OSVDB.org and datalossdb.org make use of solr pretty extensively via
acts_as_solr.
I found myself with a real need for some of the StatsComponent stuff
(mainly the sum feature), so I pulled down a nightly build and played
with it. StatsComponent proved perf
On Thu, May 7, 2009 at 8:37 PM, Jim Murphy wrote:
> Interesting. So is there a JIRA ticket open for this already? Any chance of
> getting it into 1.4?
No ticket currently open, but IMO it could make it for 1.4.
> Its seriously kicking out butts right now. We write
> into our masters with ~50ms
Makes sense. I'll open an issue.
On Fri, May 8, 2009 at 1:53 AM, Grant Ingersoll wrote:
> On the page http://wiki.apache.org/solr/SolrReplication, it says the
> following:
> "Force a snapshot on master.This is useful to take periodic backups .command
> : http://master_host:port/solr/replication?c
are you doing a full-import or a delta-import?
for delta-import there is an option of deletedPkQuery which should
meet your needs
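A hedged data-config.xml sketch (table and column names invented) of how deletedPkQuery fits into a delta-import entity:

```xml
<entity name="item" pk="id"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE updated &gt; '${dataimporter.last_index_time}'"
        deletedPkQuery="SELECT id FROM item_deleted
                        WHERE deleted_at &gt; '${dataimporter.last_index_time}'"/>
```

Each id returned by deletedPkQuery is removed from the index during the delta-import.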
On Fri, May 8, 2009 at 5:22 AM, wojtekpia wrote:
>
> Hi,
> I'm importing data using the DIH. I manage all my data updates outside of
> Solr, so I use the full-import
a point to keep in mind is that all the plugin code and everything
else must be put into the solrhome/lib directory.
where have you placed the file com/mypackage/applicationContext.xml ?
On Fri, May 8, 2009 at 12:19 AM, Raju444us wrote:
>
> I have configured solr using tomcat.Everything works fi
I didn't notice that the mail was not sent to the list. Please send all
your communication to the mailing list.
-- Forwarded message --
From: Noble Paul നോബിള് नोब्ळ्
Date: 2009/5/8
Subject: Re: Solr MultiCore dataDir bug - a fix
To: pasi.j.matilai...@tieto.com
are you sure that
Hi,
I have tracked this problem to:
https://issues.apache.org/jira/browse/SOLR-879
Executive summary is that there are errors that relate to
text fields in both:
- src/java/org/apache/solr/search/SolrQueryParser.java
- example/solr/conf/schema.xml
It is fixed in 1.4.
Thank you Yonik See
From my understanding, re-indexing the documents is a different thing. If you
have the stop word filter for a field type, say "text", then after reloading the
core, if I type in a query which is stop words only, it would get parsed by the
stop word filter, which eventually will not search aga
Hi,
I am facing a weird issue while searching.
I am searching for the word *sytem*; it displays all the records which contain
system, systems, etc. But when I tried to search *systems*, it only returns
those records which have systems-, systems/, etc. It is considering the
wildcard as 1 or more