Can anybody suggest a way to load data from MySQL into Solr directly?
http://wiki.apache.org/solr/DataImportHandler
On Mon, Feb 7, 2011 at 11:16 AM, Bagesh Sharma wrote:
>
> Can anybody suggest a way to load data from MySQL into Solr directly?
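For the archives, a minimal sketch of the DIH data-config.xml that page describes,
pointed at MySQL (driver URL, credentials, table, and column names here are made
up for illustration):

    <dataConfig>
      <dataSource type="JdbcDataSource"
                  driver="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost/mydb"
                  user="user" password="pass"/>
      <document>
        <entity name="item" query="SELECT id, title FROM item">
          <field column="id" name="id"/>
          <field column="title" name="title"/>
        </entity>
      </document>
    </dataConfig>

Register it under a /dataimport request handler in solrconfig.xml, put the MySQL
JDBC driver jar on Solr's classpath, and trigger it with
/dataimport?command=full-import.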
Please share your insights on the error.
Regards,
Prasad
Exception in thread "Timer-1" java.lang.OutOfMemoryError: Java heap space
        at org.mortbay.util.URIUtil.decodePath(URIUtil.java:285)
        at org.mortbay.jetty.HttpURI.getDecodedPath(HttpURI.java:395)
        at org.mortbay.jetty.HttpConnectio
> Please share your insights on the error.
> java.lang.OutOfMemoryError: Java heap space
What happens if you increase the Java heap space?
java -Xmx1g -jar start.jar
I have already allocated about 2 GB: -Xmx2048m.
Regards,
Prasad
On 7 February 2011 18:17, Ahmet Arslan wrote:
>
> > Please share your insights on the error.
> > java.lang.OutOfMemoryError: Java heap space
>
> What happens if you increase the Java heap space?
> java -Xmx1g -jar start.jar
Heap usage can spike after a commit. Existing caches are still in use and new
caches are being generated and/or autowarmed. Can you confirm this is the
case?
On Friday 28 January 2011 00:34:42 Simon Wistow wrote:
> On Tue, Jan 25, 2011 at 01:28:16PM +0100, Markus Jelsma said:
> > Are you sure y
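If autowarming is indeed the cause, the knobs to turn are the autowarmCount
attributes on the caches in solrconfig.xml; a sketch (sizes are illustrative,
not recommendations):

    <filterCache class="solr.FastLRUCache"
                 size="512" initialSize="512"
                 autowarmCount="128"/>
    <queryResultCache class="solr.LRUCache"
                      size="512" initialSize="512"
                      autowarmCount="64"/>

Lowering autowarmCount shrinks the post-commit overlap between old and new
caches, at the cost of colder caches after each commit.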
What is your index size, and how much RAM do you have?
-
Thanx:
Grijesh
http://lucidimagination.com
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-Error-tp2442417p2443597.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Sat, Feb 5, 2011 at 2:06 PM, Darx Oman wrote:
> I indexed 1,000 PDF files with the same configuration; it completed in about
> 32 min.
So it seems like your indexing time scales linearly with the number
of PDF documents that you have.
While this might be good news in your case, it is diff
I'm receiving the following exception when trying to perform a
full-import (~30 hours). Any ideas on how I could fix this?
Is there an easy way to use DIH to break a full-import into
multiple pieces, i.e. 3 mini-imports instead of 1 large import?
Thanks.
Feb 7, 2011 5:52:33 AM org.apa
Typo in subject
On 2/7/11 7:59 AM, Mark wrote:
I'm receiving the following exception when trying to perform a
full-import (~30 hours). Any ideas on how I could fix this?
Is there an easy way to use DIH to break a full-import into
multiple pieces, i.e. 3 mini-imports instead of 1 large import?
On Mon, Feb 7, 2011 at 9:29 PM, Mark wrote:
> I'm receiving the following exception when trying to perform a full-import
> (~30 hours). Any ideas on how I could fix this?
>
> Is there an easy way to use DIH to break a full-import into multiple
> pieces, i.e. 3 mini-imports instead of 1 large import?
Full import is around 6M documents, which when completed totals around
30 GB in size.
I'm guessing it could be a database connectivity problem, because I also
see these types of errors on delta-imports, which could be anywhere from
20K to 300K records.
On 2/7/11 8:15 AM, Gora Mohanty wrote:
On M
On Mon, Feb 7, 2011 at 10:15 PM, Mark wrote:
> Full import is around 6M documents, which when completed totals around 30 GB
> in size.
>
> I'm guessing it could be a database connectivity problem, because I also see
> these types of errors on delta-imports, which could be anywhere from 20K to
> 300K records.
Hi Ravi Kiran,
I am using Solr version 1.4, and the solution you suggested seems to be
in solrconfig.xml already.
But after reading your message again, I checked the release
notes (CHANGES.TXT) of Solr 1.4.1 and found these two entries:
* SOLR-1711: SolrJ - StreamingUpdateSolrSe
hi all,
I'm trying to get a result like:
blabla keyword blabla ... blabla keyword blabla...
so I'd like to show 2 fragments. I've added these settings:
    hl.fragsize=20
    hl.snippets=3
but I get only 1 fragment: blabla keyword blabla.
Am I trying to do this the right way? Is it something that can be done via changes in
conf
Thanks Otis,
I'll give that a try.
Jed.
On 02/06/2011 08:06 PM, Otis Gospodnetic wrote:
Yup, here it is, warning about needing to reindex:
http://twitter.com/#!/lucene/status/28694113180192768
Otis
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
Lucene ecosystem search :: http
--- On Mon, 2/7/11, alex wrote:
> From: alex
> Subject: hl.snippets in solr 3.1
> To: solr-user@lucene.apache.org
> Date: Monday, February 7, 2011, 7:38 PM
> hi all,
> I'm trying to get a result like:
> blabla keyword blabla ...
> blabla keyword blabla...
>
> so I'd like to show 2 fragments. I've added these settings.
: The stack trace is attached. I also saw this warning in the logs not sure
From your attachment...
853 SEVERE: org.apache.solr.common.SolrException: undefined field: score
854     at org.apache.solr.handler.component.TermVectorComponent.process(TermVectorComponent.java:142)
855     at org.ap
Ahmet Arslan wrote:
--- On Mon, 2/7/11, alex wrote:
From: alex
Subject: hl.snippets in solr 3.1
To: solr-user@lucene.apache.org
Date: Monday, February 7, 2011, 7:38 PM
hi all,
I'm trying to get a result like:
blabla keyword blabla ...
blabla keyword blabla...
so I'd like to show 2 fragments
Hi! I want to search for special characters like mäcman by giving similarly
worded plain characters like maecman.
I used ASCIIFoldingFilterFactory, and I'm getting
mäcman from macman, but I'm not able to get mäcman from maecman.
Can this be done using any other filter?
Thanks,
Anithya
> I can see changes if I change fragsize, but no
> hl.snippets.
Maybe your text is too short to generate more than one snippet?
What happens when you increase the hl.maxAnalyzedChars parameter?
&hl.maxAnalyzedChars=2147483647
: While reloading a core I got this following error, when does this
: occur ? Prior to this exception I do not see anything wrong in the logs.
well, there are really two distinct types of "errors" in your log...
:
[#|2011-02-01T13:02:36.697-0500|SEVERE|sun-appserver2.1|org.apache.solr.
Hi,
I would like to know if the code below is correct, because the date is not
displayed correctly in Luke.
I have a POJO with a date defined as follow:
public class SolrPositionDTO {
@Field
private String address;
@Field
private Date beginDate;
And in the schema config file the field is d
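For comparison, a sketch of how that POJO and the schema usually line up in
Solr 1.4 (field names are taken from the question; the 'date' fieldType is the
stock one from the example schema):

    public class SolrPositionDTO {
        @Field  // org.apache.solr.client.solrj.beans.Field
        private String address;

        @Field  // SolrJ serializes java.util.Date into Solr's ISO 8601 date format
        private Date beginDate;
    }

    <!-- schema.xml -->
    <fieldType name="date" class="solr.DateField" sortMissingLast="true" omitNorms="true"/>
    <field name="beginDate" type="date" indexed="true" stored="true"/>

If the field were declared as a String in the POJO instead, Luke would show
whatever raw string was sent, which is one common cause of badly displayed dates.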
Ahmet Arslan wrote:
I can see changes if I change fragsize, but no
hl.snippets.
Maybe your text is too short to generate more than one snippet?
What happens when you increase the hl.maxAnalyzedChars parameter?
&hl.maxAnalyzedChars=2147483647
It's working now. I guess it was a problem
Hi Anithya,
There is a mapping file for MappingCharFilterFactory that behaves the same as
ASCIIFoldingFilterFactory: mapping-FoldToASCII.txt, located in Solr's example
conf/ directory in Solr 3.1+. You can rename and then edit this file to map
"ä" to "ae", "ü" to "ue", etc. (look for "WITH DI
Hi everyone,
I have been looking for a searching solution for spatial data and
since I have worked with Solr before, I wanted to give the spatial
features a try.
1. What is the default datum used for the LatLonType? Is it WGS 84?
2. What is the best way to represent a region (a bounding box to
Just so anyone else can know and save themselves half an hour if they spend 4
minutes searching:
When putting a dynamic field into a document in an index, the name of the
field RETAINS the 'constant' part of the dynamic field name.
Example
-
If a dynamic integer field is named '*_i' in
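The example got cut off above; a sketch of what it presumably looked like, using
the stock '*_i' dynamic field (the 'price_i' name is invented here):

    <!-- schema.xml -->
    <dynamicField name="*_i" type="int" indexed="true" stored="true"/>

A value added as "price_i" is stored under the full name "price_i" (the "_i"
suffix included), so you must query price_i:42, not price:42.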
It would be quite annoying if it behaved as you were hoping. This way it
is possible to use different field types (and analyzers) for the same field
value. In faceting, for example, this can be important because you should use
analyzed fields for q and fq but unanalyzed fields for facet.field.
I have a long way to go to understand all those implications. Mind you, I
never
-was- whining :-). Just ignorantly surprised.
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from others' mistakes.
It is not reasonable to expect a database session to work for over 30
hours, let alone a single app/database operation.
If you can mark a database record as successfully indexed, the
incremental feature can be used to index only non-marked records.
SOLR-1499 offers a way to check Solr with a sorted query o
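For the record, if the marking is done in the database, DIH's delta hooks can
pick it up; a sketch against a hypothetical `item` table with an `indexed_at`
flag column:

    <entity name="item" pk="id"
            query="SELECT id, title FROM item"
            deltaQuery="SELECT id FROM item WHERE indexed_at IS NULL"
            deltaImportQuery="SELECT id, title FROM item WHERE id='${dataimporter.delta.id}'"/>

Run /dataimport?command=delta-import, then set indexed_at from your own code;
DIH itself never writes back to the database.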
You're probably better off in this instance creating your own
process based on SolrJ and your jdbc-driver-of-choice. DIH
doesn't provide much in the way of fine-grained control over
all aspects of the process, and at 30+ hours I suspect you
want some better control.
FWIW, SolrJ is not very hard at
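To make the SolrJ route concrete, a rough sketch with Solr 1.4-era SolrJ (the
URL, table, and column names are placeholders; checkpointing and retries, the
parts that actually buy you the control, are left out):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class JdbcIndexer {
        public static void main(String[] args) throws Exception {
            SolrServer solr = new CommonsHttpSolrServer("http://localhost:8983/solr");
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/mydb", "user", "pass");
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT id, title FROM item");

            List<SolrInputDocument> batch = new ArrayList<SolrInputDocument>();
            while (rs.next()) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", rs.getString("id"));
                doc.addField("title", rs.getString("title"));
                batch.add(doc);
                if (batch.size() == 1000) {  // send in chunks instead of one 30-hour push
                    solr.add(batch);
                    batch.clear();
                }
            }
            if (!batch.isEmpty()) solr.add(batch);
            solr.commit();                   // one commit at the end
            rs.close(); stmt.close(); conn.close();
        }
    }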
Thanks Bill,
much simpler :-)
On Sat, Feb 5, 2011 at 3:56 AM, Bill Bell wrote:
> Why not just:
>
> q=*:*
> fq={!bbox}
> sfield=store
> pt=49.45031,11.077721
> d=40
> fl=store
> sort=geodist() asc
>
>
> http://localhost:8983/solr/select?q=*:*&sfield=store&pt=49.45031,11.077721&d=40&fq={!bb
You can change the match to be my* and then insert the name you want.
Bill Bell
Sent from mobile
On Feb 7, 2011, at 4:15 PM, gearond wrote:
>
> Just so anyone else can know and save themselves 1/2 hour if they spend 4
> minutes searching.
>
> When putting a dynamic field into a document int
Hi,
I use the dismax handler with Solr 1.4.
Sometimes my request comes with q and fq, and other times it comes without q
(only fq and q.alt=*:*). Is it OK if I send q.alt=*:* with every
request? Does it have side effects on performance?
--
Chhorn Chamnap
http://chamnapchhorn.blogspot.com/
On Mon, Feb 07, 2011 at 02:06:00PM +0100, Markus Jelsma said:
> Heap usage can spike after a commit. Existing caches are still in use and new
> caches are being generated and/or auto warmed. Can you confirm this is the
> case?
We see spikes after replication, which I suspect is, as you say, becau
On Fri, Jan 28, 2011 at 12:29:18PM -0500, Yonik Seeley said:
> That's odd - there should be nothing special about negative numbers.
> Here are a couple of ideas:
> - if you have a really big index and querying by a negative number
> is much more rare, it could just be that part of the index wasn'
There is no measurable performance penalty when setting the parameter, except
maybe the execution of the query with a high value for rows. To make things
easy, you can define q.alt=*:* as a default in your request handler, as in the
sketch below. No need to specify it in the URL.
> Hi,
>
> I use dismax handler with s
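A sketch of that defaults entry in solrconfig.xml (the handler name and qf
value are illustrative):

    <requestHandler name="dismax" class="solr.SearchHandler">
      <lst name="defaults">
        <str name="defType">dismax</str>
        <str name="q.alt">*:*</str>
        <str name="qf">title^2 body</str>
      </lst>
    </requestHandler>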
Do you have GC logging enabled? tail -f the log file and you'll see what CMS is
telling you. Tuning the occupancy fraction of the tenured generation to a
lower value than the default, and telling the JVM to only use your value to
initiate a collection, can help a lot. The same goes for sizing the you
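The flags in question, presumably (values are illustrative starting points for
a 2 GB heap, not recommendations):

    java -Xmx2048m \
         -XX:+UseConcMarkSweepGC \
         -XX:CMSInitiatingOccupancyFraction=60 \
         -XX:+UseCMSInitiatingOccupancyOnly \
         -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log \
         -jar start.jar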
I'd like to use the filter factories in the org.apache.solr.analysis package
for tokenizing text in a separate application. I need to chain a couple of
tokenizer/filter factories together, like Solr does on indexing and query
parsing. I have looked into the TokenizerChain class to do this. I have successfully
implemen
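For anyone trying the same thing, a rough sketch of standalone TokenizerChain
use against the Solr 1.4-era API (constructor signatures changed in later
versions, so treat this as an assumption to verify):

    import java.io.StringReader;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.TermAttribute;
    import org.apache.solr.analysis.LowerCaseFilterFactory;
    import org.apache.solr.analysis.TokenFilterFactory;
    import org.apache.solr.analysis.TokenizerChain;
    import org.apache.solr.analysis.WhitespaceTokenizerFactory;

    public class ChainDemo {
        public static void main(String[] args) throws Exception {
            Map<String, String> noArgs = new HashMap<String, String>();
            WhitespaceTokenizerFactory tok = new WhitespaceTokenizerFactory();
            tok.init(noArgs);
            LowerCaseFilterFactory lower = new LowerCaseFilterFactory();
            lower.init(noArgs);

            // TokenizerChain is itself an Analyzer, so it can be used anywhere
            // an Analyzer is expected.
            TokenizerChain chain = new TokenizerChain(
                    tok, new TokenFilterFactory[] { lower });

            TokenStream ts = chain.tokenStream("body", new StringReader("Hello Solr World"));
            TermAttribute term = ts.addAttribute(TermAttribute.class);
            while (ts.incrementToken()) {
                System.out.println(term.term());
            }
        }
    }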
Hi,
Yes, assuming you didn't change the index files, say by optimizing the index,
the hot portions of the index should remain in the OS cache unless something
else kicked them out.
Re other thread - I don't think I have those messages any more.
Otis
---
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
I think what you are trying to achieve is called a taxonomy facet.
There is a solution for that; check the slides on taxonomy faceting:
http://www.lucidimagination.com/solutions/webcasts/faceting
However, I don't know if you are able to render the whole hierarchy at once.
The solution I point
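The trick usually shown for this (I believe it is the one in those slides) is
to index depth-prefixed path tokens and drill down one level at a time with
facet.prefix; a sketch with invented category values:

    doc1: category_path = 0/Books, 1/Books/Solr
    doc2: category_path = 0/Books, 1/Books/Lucene

    top level:   facet.field=category_path&facet.prefix=0/
    under Books: facet.field=category_path&facet.prefix=1/Books/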
Hi all,
I've been a Solr user for a while now, and now I need to add some functionality
to Solr, for which I'm trying to write a custom QueryComponent. I couldn't get
much help from web searches, so I'm turning to solr-user for help.
I'm implementing search functionality for (micro)blog aggregation. We use
To be able to "see" this well, it would be lovely to have a switch that
activates logging of the query-expansion result. The dismax QParserPlugin is
particularly powerful there, so it'd be nice to see what's happening.
Is there a logging category I need to activate?
paul
On 8 Feb. 2011 at 0
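Not a logging category as such, but the debug component already exposes this:
append debugQuery=on and the response's debug section includes parsedquery and
parsedquery_toString entries with the full dismax expansion. For example:

    http://localhost:8983/solr/select?defType=dismax&qf=title^2+body&q=solr&debugQuery=on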