Try referencing the jar directly (by absolute path) with a <lib> directive
in solrconfig.xml (and reloading the core).
The DIH example shipped with Solr shows how it works.
This will help to see whether the problem is the jar not being found, or something else.
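For reference, a minimal sketch of such a directive (the jar path is a placeholder, not a real location):

```xml
<!-- In solrconfig.xml: load one jar by absolute path. -->
<lib path="/absolute/path/to/neo4j-java-driver.jar" />

<!-- Or load every jar in a directory that matches a regex. -->
<lib dir="/absolute/path/to/lib" regex=".*\.jar" />
```

After editing, reload the core and check the startup log to confirm the jar was picked up.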
Regards,
Alex.
On Wed, 9 Oct 2019 at 09:14
Try starting Solr with the “-v” option. That will echo all the jars that are
loaded and the paths.
Where _exactly_ is the jar file? You say “in the lib folder of my core”, but
that leaves a lot of room for interpretation.
Are you running stand-alone or SolrCloud? Exactly how do you start Solr?
Hi,
Kindly help me solve the issue I am facing when connecting Neo4j with Solr. I
am seeing the error below in my log file, even though I have the Neo4j driver
jar file in the lib folder of my core.
Full Import failed:java.lang.RuntimeException: java.lang.RuntimeException:
org.apache.solr.handler.dataimport.Data
working fine with
AWS hosting. Really baffled.
Thanks and Regards,
Srinivas Kashyap
-Original Message-
From: Erick Erickson
Sent: 31 July 2019 08:00 PM
To: solr-user@lucene.apache.org
Subject: Re: Dataimport problem
This code is a little old, but should give you a place to start:
https
p
>
> -Original Message-
> From: Alexandre Rafalovitch
> Sent: 31 July 2019 07:41 PM
> To: solr-user
> Subject: Re: Dataimport problem
A couple of things:
1) Solr on Tomcat has not been an option for quite a while. So, you
must be running an old version of Solr. Which one?
2) Compare that you have the same Solr config. In Admin UI, there will
be all O/S variables passed to the Java runtime, I would check them
side-by-side
3) You c
It is probably the autocommit setting in your solrconfig.xml.
But you may also want to consider indexing into a new core and then doing a
core swap at the end. Or re-aliasing if you are running a multiCore
collection.
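For reference, the autocommit knobs live in solrconfig.xml; a sketch with illustrative values only:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: flush to stable storage every 60s or 10,000 docs. -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <maxDocs>10000</maxDocs>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: controls when new documents become visible to searches. -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```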
Regards,
Alex
On Fri, Mar 29, 2019, 2:25 AM 黄云尧, wrote:
> when I do the ful
I have seen the same thing when the JDBC jar is not found: you cannot tell
from the UI, you have to go to the Solr logs. We should fix this!
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
> On 30 Nov 2018 at 00:46, Shawn Heisey wrote:
>
> I'm looking into a problem where the adm
On 8/20/2018 10:00 PM, Sushant Vengurlekar wrote:
I have a dataimport working on a standalone Solr instance but the same
doesn't work on SolrCloud. I keep hitting this error:
Full Import failed:java.lang.RuntimeException:
java.lang.RuntimeException:
org.apache.solr.handler.dataimport.DataImportH
On 6/7/2018 12:19 AM, kotekaman wrote:
sorry. may i know how to code it?
Code *what*?
Here's the same wiki page that I gave you for your last message:
https://wiki.apache.org/solr/UsingMailingLists
Even if I go to the Nabble website and discover that you've replied to a
topic that's SEVEN A
sorry. may i know how to code it?
From: Shawn Heisey
Reply-To: "solr-user@lucene.apache.org"
Date: Tuesday, December 5, 2017 at 1:31 PM
To: "solr-user@lucene.apache.org"
Subject: Re: Dataimport handler showing idle status with multiple shards
On 12/5/2017 10:47 AM, Sarah Weissman wrote:
I’ve rec
On 12/5/2017 10:47 AM, Sarah Weissman wrote:
I’ve recently been using the dataimport handler to import records from a
database into a Solr cloud collection with multiple shards. I have 6 dataimport
handlers configured on 6 different paths all running simultaneously against the
same DB. I’ve no
https://wiki.apache.org/solr/DataImportHandlerFaq#I.27m_using_DataImportHandler_with_a_MySQL_database._My_table_is_huge_and_DataImportHandler_is_going_out_of_memory._Why_does_DataImportHandler_bring_everything_to_memory.3F
-Original Message-
From: Deeksha Sharma [mailto:dsha...@flexera.co
Hello, Dean.
DIH is shard agnostic. How do you try to specify "a shard from the new
collection"?
On Tue, Mar 21, 2017 at 8:24 PM, deansg wrote:
> Hello,
> My team often uses the /dataimport & /dih handlers to move items from one
> Solr collection to another. However, all the times we did that,
On 8/18/2016 5:10 PM, Peri Subrahmanya wrote:
> Hi,
>
> I have a simple one-to-many relationship setup in the data-import.xml and
> when I try to index it using the dataImportHandler, Solr complains of “no
> unique id found”.
>
> managed-schema.xml
> id
> solrconfig,xml:
>
>
> id
>
>
Well, do both parent and child entity have a field called 'id'
containing their corresponding unique ids? That would be the first
step.
Regards,
Alex.
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On 19 August 2016 at 09:10, Peri Subrahmanya
w
Kishor,
Data Import Handler doesn't know how to randomly access rows from the CSV to
"JOIN" them to rows from the MySQL table at indexing time.
However, both MySQL and Solr know how to JOIN rows/documents from multiple
tables/collections/cores.
Data Import Handler could read the CSV first, and
What are the errors reported? Errors can be seen either on the admin page
logging tab or in the log file under solr_home.
If you follow the steps mentioned on the blog precisely, it should almost
certainly work:
http://solr.pl/en/2010/10/11/data-import-handler-%E2%80%93-how-to-import-data-from-sql-databases-part-1/
https://issues.apache.org/jira/browse/SOLR-7588
On Mon, Jun 8, 2015 at 2:11 AM, William Bell wrote:
> Uncaught ReferenceError: naturalSort is not defined
> $.ajax.success @ dataimport.js?_=5.2.0:48
> jQuery.Callbacks.fire @ require.js?_=5.2.0:3126
> jQuery.Callbacks.self.fireWith @ require.js?_=
Also getting:
Uncaught ReferenceError: naturalSort is not defined
On Mon, Jun 8, 2015 at 1:50 AM, William Bell wrote:
>
>1. When I click "DataImport" in the UI on any core.. The UI just spins:
>2.
>3.
>
> http://hgsolr2devmstr:8983/solr/autosuggest/admin/mbeans?cat=QUERYHANDLER
Uncaught ReferenceError: naturalSort is not defined
$.ajax.success @ dataimport.js?_=5.2.0:48
jQuery.Callbacks.fire @ require.js?_=5.2.0:3126
jQuery.Callbacks.self.fireWith @ require.js?_=5.2.0:3244
done @ require.js?_=5.2.0:9482
jQuery.ajaxTransport.send.callback @ require.js?_=5.2.0:10263
On Mon
Hello O,
It seems to me (but it's better to look at the heap histogram) that
buffering sub-entities in SortedMapBackedCache blows the heap.
I'm aware of two directions:
- use a file-based cache instead. I don't know exactly how it works; you can
start from https://issues.apache.org/jira/browse/SOL
On 5/9/2014 9:16 AM, O. Olson wrote:
> I have a Data Schema which is Hierarchical i.e. I have an Entity and a number
> of attributes. For a small subset of the Data - about 300 MB, I can do the
> import with 3 GB memory. Now with the entire 4 GB Dataset, I find I cannot
> do the import with 9 GB of
On 7 March 2014 08:50, Pritesh Patel wrote:
> I'm using the dataimporthandler to index data from a mysql DB. Been
> running it just fine. I've been using full-imports. I'm now trying
> implement the delta import functionality.
>
> To implement the delta query, you need to be reading the last_inde
I'm guessing that "id" in your schema.xml is also the unique key field.
If so, each document must have an id field or Solr will refuse to
index it.
DataImportHandler will map the id field in your table to Solr schema's
id field only if you have not specified a mapping.
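For context, the usual schema wiring looks roughly like this (these are the stock example names, not necessarily the poster's schema):

```xml
<!-- schema.xml: every document must carry this field or indexing fails. -->
<field name="id" type="string" indexed="true" stored="true" required="true" />
<uniqueKey>id</uniqueKey>
```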
On Thu, Jan 23, 2014 at 3:0
The best practice for upgrading is to take the distribution and expand it.
Then take your cores and replace them.
Then you are guaranteed to get the right jars and not have other WARs/JARs
hanging around.
On Sun, Dec 22, 2013 at 7:24 PM, Shawn Heisey wrote:
> On 12/22/2013 9:51 AM, William Pierce wrote:
On 12/22/2013 9:51 AM, William Pierce wrote:
> My configurations works nicely with solr 4.4. I am encountering a
> configuration error when I try to upgrade from 4.4 to 4.6. All I did was the
> following:
>
> a) Replace the 4.4 solr.war file with the 4.6 solr.war in the tomcat/lib
> folder. I
Hi!
Thanks for all the advice! I finally did it. The most annoying error,
which took me the better part of a day to figure out, was that the state
variable here had to be reset:
https://bitbucket.org/dermotte/liresolr/src/d27878a71c63842cb72b84162b599d99c4408965/src/main/java/net/semanticmetadata/lire/solr/
Hi Mathias,
I'd recommend testing one thing at a time. See if you can get it to work
for one image before you try a directory of images. Also try testing using
the solr-testframework in your IDE (I use Eclipse) to debug, rather than
your browser/print statements. Hopefully that will give you
Unfortunately it is the same in non-debug mode, just the first document. I
also output the params to sout, but it seems only the first one ever
arrives at my custom class. I have the feeling that I'm doing
something seriously wrong here, based on a complete misunderstanding
:) I basically assume that
The first thing I would suggest is to try and run it not in debug mode. DIH's
debug mode limits the number of documents it will take in, so that might be all
that is wrong here.
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: mathias@gmail.com [mailto:mathi
Hmm, I will fix.
https://issues.apache.org/jira/browse/SOLR-4788
On Thu, May 9, 2013 at 8:35 PM, William Bell wrote:
> It does not work anymore in 4.x.
>
> ${dih.last_index_time} does work, but the entity version does not.
>
> Bill
>
>
>
> On Tue, May 7, 2013 at 4:19 PM, Shalin Shekhar Mangar
It does not work anymore in 4.x.
${dih.last_index_time} does work, but the entity version does not.
Bill
On Tue, May 7, 2013 at 4:19 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Using ${dih.entityname.last_index_time} should work. Make sure you put
> it in quotes in your query.
>
>
> On
Using ${dih.entityname.last_index_time} should work. Make sure you put
it in quotes in your query.
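A sketch of a delta setup using that variable, quoted inside the SQL (the entity name "item" and the column names are invented for illustration):

```xml
<entity name="item"
        query="SELECT id, title FROM item"
        deltaQuery="SELECT id FROM item
                    WHERE last_modified &gt; '${dih.item.last_index_time}'"
        deltaImportQuery="SELECT id, title FROM item
                          WHERE id = '${dih.delta.id}'" />
```

Note that `>` has to be written as `&gt;` inside the XML attribute.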
On Tue, May 7, 2013 at 12:07 PM, Eric Myers wrote:
> In the data import handler I have multiple entities. Each one
> generates a date in the
> dataimport.properties i.e. entityname.last_index_time.
>
> H
I also get this. 4.2+
On Fri, Apr 19, 2013 at 10:43 PM, Eric Myers wrote:
> I have multiple parallel entities in my document and when I run an import
> there are times like
> xxx.last_index_time
> where xxx is the name of the entity.
>
> I tried accessing these using dih.xxx.last_index_time but
Hey
It will never turn "green" since we have no explicit status for the Importer
when it's done. But what did you see when you hit the "Refresh" button at the
bottom of the page? Are the numbers counting up?
Stefan
On Friday, March 29, 2013 at 5:38 PM, A. Lotfi wrote:
> Hi,
>
> When I hit E
Looks like it will be helpful. I'm going to give it a shot. Thanks, Otis.
Shikhar
From: Otis Gospodnetic [otis.gospodne...@gmail.com]
Sent: Friday, November 02, 2012 4:36 PM
To: solr-user@lucene.apache.org
Subject: Re: DataImport Handler : Transf
>
> Thanks,
> Shikhar
>
> -Original Message-
> From: Otis Gospodnetic [mailto:otis.gospodne...@gmail.com]
> Sent: Thursday, November 01, 2012 8:13 PM
> To: solr-user@lucene.apache.org
> Subject: Re: DataImport Handler : Transformer Function Eval Failed Error
Hi,
That looks a little painful... what are you trying to achieve by storing
JSON in there? Maybe there's a simpler way to get there...
Otis
--
Performance Monitoring - http://sematext.com/spm
On Nov 1, 2012 6:16 PM, "Mishra, Shikhar"
wrote:
> Hi,
>
> I'm trying to store a list of JSON objects
-Original Message-
From: mechravi25 [mailto:mechrav...@yahoo.co.in]
Sent: Tuesday, August 21, 2012 7:47 AM
To: solr-user@lucene.apache.org
Subject: RE: Dataimport Handler in solr 3.6.1
Hi James,
Thanks for the suggestions.
Actually it is cacheLookup="ent1.id"; I had misspelt it. Also, I will be
needing the transformers mentioned, as there are other columns as well.
I actually tried using the 3.5 DIH jars in 3.6.1 and indexed the same data,
and the indexing was successful. But I wanted
One thing I notice in your configuration...the child entity has this:
cacheLookup="ent1.uid"
but your parent entity doesn't have a "uid" field.
Also, you have these 3 transformers:
RegexTransformer,DateFormatTransformer,TemplateTransformer
but none of your columns seem to make use of these.
;Karsten
>
> View this message in context:
> http://lucene.472066.n3.nabble.com/DataImport-using-last-indexed-id-or-getting-max-id-quickly-tp3993763p3994560.html
>
>
> Original-Nachricht
>> Datum: Wed, 11 Jul 2012 20:59:10 -0700 (PDT)
>> Von: ave
using-last-indexed-id-or-getting-max-id-quickly-tp3993763p3994560.html
Original-Nachricht
> Datum: Wed, 11 Jul 2012 20:59:10 -0700 (PDT)
> Von: avenka
> An: solr-user@lucene.apache.org
> Betreff: Re: DataImport using last_indexed_id or getting max(id) quickly
> Thanks. Ca
Thanks. Can you explain more the first TermsComponent option to obtain
max(id)? Do I have to modify schema.xml to add a new field? How exactly do I
query for the lowest value of "1 - id"?
--
View this message in context:
http://lucene.472066.n3.nabble.com/DataImport-using-last-indexed-id-
Hi Avenka,
*DataImportHandler*
1.) there is no configuration to add the last uniqueKeyField-Values to
dataimport.properties
2.) you can use LogUpdateProcessor to log all "schema.printableUniqueKey(doc)"
to log.info( ""+toLog + " 0 " + (elapsed) )
3.) you can write your own LogUpdateProcessor to
On 1 May 2012 23:12, geeky2 wrote:
> Hello all,
>
> is there a notification / trigger / callback mechanism people use that
> allows them to know when a dataimport process has finished?
>
> we will be doing daily delta-imports and i need some way for an operations
> group to know when the DIH has f
On 10/19/2011 12:42 PM, Fred Zimmerman wrote:
dumb question ...
today I set up solr3.4/example, indexing to 8983 via post is working, so is
search, solr/dataimport reports
0
0
0
2011-10-19 18:13:57
Indexing failed. Rolled back all changes.
Google tells me to look at the exception logs to find
Brian,
I had the same problem a while back and set the JAVA_OPTS env variable
to something my machine could handle. That may also be an option for
you going forward.
Adam
On Wed, Mar 9, 2011 at 9:33 AM, Brian Lamb
wrote:
> This has since been fixed. The problem was that there was not enough mem
This has since been fixed. The problem was that there was not enough memory
on the machine. It works just fine now.
On Tue, Mar 8, 2011 at 6:22 PM, Chris Hostetter wrote:
>
> : INFO: Creating a connection for entity id with URL:
> :
> jdbc:mysql://localhost/researchsquare_beta_library?characterEn
: INFO: Creating a connection for entity id with URL:
:
jdbc:mysql://localhost/researchsquare_beta_library?characterEncoding=UTF8&zeroDateTimeBehavior=convertToNull
: Feb 24, 2011 8:58:25 PM org.apache.solr.handler.dataimport.JdbcDataSource$1
: call
: INFO: Time taken for getConnection(): 137
: K
On 19.12.2010, at 23:30, Alexey Serba wrote:
>
> Also Ephraim proposed a really neat solution with GROUP_CONCAT, but
> I'm not sure that all RDBMS-es support that.
That's MySQL-only syntax.
But if you google you can find similar solutions for other RDBMSes.
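As a sketch of the GROUP_CONCAT idea (MySQL-specific syntax; table and column names invented), the sub-entity is replaced by one aggregated query, and RegexTransformer splits the concatenated value back into a multi-valued field:

```xml
<entity name="track" transformer="RegexTransformer"
        query="SELECT t.id, t.title,
                      GROUP_CONCAT(a.name SEPARATOR '|') AS artist
               FROM track t
               LEFT JOIN track_artist ta ON ta.track_id = t.id
               LEFT JOIN artist a ON a.id = ta.artist_id
               GROUP BY t.id, t.title">
  <!-- splitBy turns 'name1|name2' into a multi-valued artist field. -->
  <field column="artist" splitBy="\|" />
</entity>
```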
regards,
Lukas Kahwe Smith
m...@pooteew
> With subquery and with left join: 320k in 6 Min 30
It's 820 records per second, which is _really_ impressive considering the
fact that DIH performs a separate SQL query for every record in your
case.
>> So there's one track entity with an artist sub-entity. My (admittedly
>> rather limited) experien
-Original Message-
From: Ephraim Ofir [mailto:ephra...@icq.com]
Sent: Thursday, December 16, 2010 3:04 AM
To: solr-user@lucene.apache.org
Subject: RE: Dataimport performance
Check out
http://mail-archives.apache.org/mod_mbox/lucene-solr-user/20
Sent: Wednesday, December 15, 2010 4:49 PM
To: solr-user@lucene.apache.org
Subject: Re: Dataimport performance
Can you do just one join in the top-level query? The DIH does not have
a batching mechanism for these joins, but your database does.
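A sketch of what folding the sub-entity into the top-level query can look like (names invented); the join then runs once in the database instead of DIH issuing one query per parent row:

```xml
<!-- Instead of a nested sub-entity, do the join in one top-level query. -->
<entity name="track"
        query="SELECT t.id, t.title, a.name AS artist
               FROM track t
               LEFT JOIN artist a ON a.id = t.artist_id" />
```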
On Wed, Dec 15, 2010 at 7:11 AM, Tim Heckman wrote:
> The custom import I wrote is a java application that uses the SolrJ
> library. Basically, where I had sub-enti
The custom import I wrote is a java application that uses the SolrJ
library. Basically, where I had sub-entities in the DIH config I did
the mappings inside my java code.
1. Identify a subset or "chunk" of the primary id's to work on (so I
don't have to load everything into memory at once) and put
i've benchmarked the import already with 500k records, one time without the
artists subquery, and one time without the join in the main query:
Without subquery: 500k in 3 min 30 sec
Without join and without subquery: 500k in 2 min 30.
With subquery and with left join: 320k in 6 Min 30
so t
2010/12/15 Robert Gründler :
> The data-config.xml looks like this (only 1 entity):
>
> name="sf_unique_id"/>
So there's one track entity with an artist sub-entity. My (admittedly
rather l
We are currently running Solr 4.x from trunk.
-d64 -Xms10240M -Xmx10240M
Total Rows Fetched: 24935988
Total Documents Skipped: 0
Total Documents Processed: 24568997
Time Taken: 5:55:19.104
24.5 million docs as XML from the filesystem in less than 6 hours.
Maybe your MySQL is the bottleneck?
Reg
> What version of Solr are you using?
Solr Specification Version: 1.4.1
Solr Implementation Version: 1.4.1 955763M - mark - 2010-06-17 18:06:42
Lucene Specification Version: 2.9.3
Lucene Implementation Version: 2.9.3 951790 - 2010-06-06 01:30:55
-robert
>
> Adam
>
> 2010/12/15 Robert Gründ
You're adding on the order of 750 rows (docs)/second, which isn't bad...
have you profiled the machine as this runs? Even just with top (assuming
unix)...
because the very first question is always "what takes the time, getting
the data from MySQL or indexing or I/O?".
If you aren't maxing out you
What version of Solr are you using?
Adam
2010/12/15 Robert Gründler
> Hi,
>
> we're looking for some comparison-benchmarks for importing large tables
> from a mysql database (full import).
>
> Currently, a full-import of ~ 8 Million rows from a MySQL database takes
> around 3 hours, on a QuadCo
maybe encoding !?
Hi Koji
I finally found the reason for this problem:
I downloaded the tar file of the driver and unzipped it on Windows, then put
the jar file onto the server. I don't know why, but that doesn't work. It
works when I put the tar file on the server and unzip it there.
Thanks a lot for your time!!!
Richard
And here are the logs:
Dec 5, 2010 2:00:23 AM org.apache.solr.handler.dataimport.DataImportHandler
processConfiguration
INFO: Processing configuration from solrconfig.xml:
{config=db-data-config.xml}
Dec 5, 2010 2:00:23 AM org.apache.solr.handler.dataimport.DataImporter
loadDataConfig
INFO: Data
Thanks Koji.
I just tried changing the permissions of the driver file to 777; it still
cannot find the driver.
I put the driver into the folder where the original driver was and deleted the
original one. I don't know why Solr can find the original one (if I don't
change anything), but not this one.
(10/12/05 18:38), Ruixiang Zhang wrote:
*I got the following error for dataimport:*
*Full Import failed
org.apache.solr.handler.dataimport.DataImportHandlerException: Could not
load driver: com.mysql.jdbc.Driver*
I have the following files:
\example-DIH\solr\db\conf\ solrconfig.xml, schema.x
That's the same series we use... we had problems when running other
disk-heavy operations like rsync and backup on them too..
But in our case we mostly had hangs or load > 180 :P... Can you
simulate very heavy random disk I/O? If so, then you could check whether
you still have the same problems...
Tha
On Dec 2, 2010, at 15:43 , Sven Almgren wrote:
> What Raid controller do you use, and what kernel version? (Assuming
> Linux). We had problems during high load with a 3Ware raid controller
> and the current kernel for Ubuntu 10.04; we had to downgrade the
> kernel...
>
> The problem was a bug i
What Raid controller do you use, and what kernel version? (Assuming
Linux). We had problems during high load with a 3Ware raid controller
and the current kernel for Ubuntu 10.04; we had to downgrade the
kernel...
The problem was a bug in the driver that only showed up with very high
disk load (a
> The very first thing I'd ask is "how much free space is on your disk
> when this occurs?" Is it possible that you're simply filling up your
> disk?
No, I've checked that already. All disks have plenty of space (they have
a capacity of 2TB and are currently only filled to about 20%).
>
> do note that a
The very first thing I'd ask is "how much free space is on your disk
when this occurs?" Is it possible that you're simply filling up your
disk?
do note that an optimize may require up to 2X the size of your index
if/when it occurs. Are you sure you aren't optimizing as you add
items to your index?
The RSS example does not do this. It declares only the source, and gives
all of the parameters in the entity.
You can have different entities with different uses of the datasource.
In general, the DIH is easier to use when starting with one of the
examples and slowly changing one thing at a time.
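A sketch of that layout (driver, URL, and names are placeholders): one named dataSource declared once, with the per-entity parameters kept on the entities:

```xml
<dataConfig>
  <dataSource name="db" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb"
              user="solr" password="secret" />
  <document>
    <!-- Two entities sharing one dataSource, each with its own query. -->
    <entity name="products" dataSource="db"
            query="SELECT id, name FROM product" />
    <entity name="vendors" dataSource="db"
            query="SELECT id, name FROM vendor" />
  </document>
</dataConfig>
```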
Hi Glen,
Thank you very much for the quick response. I would like to try increasing
netTimeoutForStreamingResults; is that something I can do on the
MySQL side, or on the Solr side?
Giri
On Tue, Jun 8, 2010 at 6:17 PM, Glen Newton wrote:
> As the index gets larger, the underlying housek
As the index gets larger, the underlying housekeeping of the Lucene
index sometimes causes pauses in the indexing. The JDBC connection
(and/or the underlying socket) to the MySql database can time out
during these pauses.
- If it is not set, you should add this to your JDBC url: autoReconnect=true
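For illustration, the parameter goes on the JDBC URL of the dataSource in data-config.xml (host, database, and credentials are placeholders; note that `&` must be escaped as `&amp;` in the XML):

```xml
<dataSource driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://localhost/mydb?autoReconnect=true&amp;netTimeoutForStreamingResults=3600"
            user="solr" password="secret" />
```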
ember-15-09 3:48 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Dataimport MySQLNonTransientConnectionException: No operations
> allowed after connection closed
>
> First of all let us confirm this issue is fixed in 1.4.
>
> 1.4 is stable and a lot of people are using it in production
First of all let us confirm this issue is fixed in 1.4.
1.4 is stable and a lot of people are using it in production and it is
going to be released pretty soon
On Mon, Sep 14, 2009 at 8:05 PM, palexv wrote:
>
> I am using 1.3
> Do you suggest 1.4 from developer trunk? I am concern if it stable.
I am using 1.3.
Do you suggest 1.4 from the developer trunk? I am concerned about whether it
is stable. Is it safe to use in a big commerce app?
Noble Paul നോബിള് नोब्ळ्-2 wrote:
>
> which version of Solr are you using. can you try with a recent one and
> confirm this?
>
> On Mon, Sep 14, 2009 at 7:45 PM, pale
Which version of Solr are you using? Can you try with a recent one and
confirm this?
On Mon, Sep 14, 2009 at 7:45 PM, palexv wrote:
>
> I know that my issue is related to
> http://www.nabble.com/dataimporthandler-and-multiple-delta-import-td19160129.html#a19160129
> and https://issues.apache.org/
I have now :-)
Thanks , missed that in the Wiki.
Ruben
On Apr 16, 2009, at 7:10 PM, Noble Paul നോബിള്
नोब्ळ् wrote:
did you try the deletedPkQuery?
On Thu, Apr 16, 2009 at 7:49 PM, Ruben Chadien > wrote:
Hi
I am new to Solr, but have been using Lucene for a while. I am
trying to
rewri
did you try the deletedPkQuery?
On Thu, Apr 16, 2009 at 7:49 PM, Ruben Chadien wrote:
> Hi
>
> I am new to Solr, but have been using Lucene for a while. I am trying to
> rewrite
> some old lucene indexing code using the Jdbc DataImport i Solr, my problem:
>
> I have Entities that can be marked in
an EntityProcessor looks right to me. It may help us add more
attributes if needed.
PlainTextEntityProcessor looks like a good name. It can also be used
to read html etc.
--Noble
On Sat, Jan 24, 2009 at 12:37 PM, Shalin Shekhar Mangar
wrote:
> On Sat, Jan 24, 2009 at 5:56 AM, Nathan Adams wrote
On Sat, Jan 24, 2009 at 5:56 AM, Nathan Adams wrote:
> Is there a way to use Data Import Handler to index non-XML (i.e. simple
> text) files (either via HTTP or FileSystem)? I need to put the entire
> contents of a text file into a single field of a document and the other
> fields are being pulle
Which approach worked? I suggested three:
- Jetty automatically loads jars in WEB-INF/lib.
- It is the responsibility of Solr to load jars from solr.home/lib.
- It is the responsibility of the JRE to load jars from JAVA_HOME/lib/ext.
On Tue, Jan 6, 2009 at 6:18 PM, Performance wrote:
>
> Paul,
>
> Thanks fo
Paul,
Thanks for the feedback and it does work. So if I understand this the app
server code (Jetty) is not reading in the environment variables for the
other libraries I need. How do I add the JDBC files to the path so that I
don't need to copy the files into the directory? Does jetty have a c
The driver can be put directly into the WEB-INF/lib of the solr web
app or it can be put into ${solr.home}/lib dir.
or if something is really screwed up you can try the old fashioned way
of putting your driver jar into JAVA_HOME/lib/ext
--Noble
On Tue, Jan 6, 2009 at 7:05 AM, Performance wrote
I have been following this tutorial but I can't seem to get past an error
related to not being able to load the DB2 driver. The user has all the
right config to load the JDBC driver and Squirrel works fine. Do I need to
update a path within Solr?
muxa wrote:
>
> Looked through the tutorial
Have you tried using the
options in the schema.xml? After the indexing, take a look at the
fields DIH has generated.
Bye,
L.M.
2008/12/15 jokkmokk :
>
> HI,
>
> I'm desperately trying to get the dataimport handler to work, however it
> seems that it just ignores the field name mapping.
> I
Sorry, I'm using the 1.3.0 release. I've now worked around that issue by
using aliases in the SQL statement so that no mapping is needed. This way it
works perfectly.
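The alias workaround looks roughly like this in the entity query (the Solr-side field names text_body/text_subject are hypothetical):

```xml
<!-- Aliasing the DB columns to the Solr field names makes explicit
     field column/name mappings unnecessary. -->
<entity name="message"
        query="SELECT body AS text_body, subject AS text_subject
               FROM messages" />
```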
best regards
Stefan
Shalin Shekhar Mangar wrote:
>
> Which solr version are you using?
>
Which solr version are you using?
On Mon, Dec 15, 2008 at 6:04 PM, jokkmokk wrote:
>
> HI,
>
> I'm desperately trying to get the dataimport handler to work, however it
> seems that it just ignores the field name mapping.
> I have the fields "body" and "subject" in the database and those are call
I actually found the problem. Oracle returns the field names capitalized.
On Tue, Dec 2, 2008 at 1:57 PM, Jae Joo <[EMAIL PROTECTED]> wrote:
> Hey,
>
> I am trying to connect the Oracle database and index the values into solr,
> but I am getting the
> "Document [null] missing required field: id".
Thanks David,
I have updated the wiki documentation
http://wiki.apache.org/solr/DataImportHandler#transformer
The default transformers do not have any special privilege; they are like
any normal user-provided transformer. We just identified some commonly
found use cases and added transformers for them.
The wiki didn't mention I can specify multiple transformers. BTW, it's
"transformer" (singular), not "transformers". I did mean both NFT and DFT
because I was speaking of the general case, not just mine in particular. I
thought that the built-in transformers were always in effect and so I
expec
Hi David,
I think you meant RegexTransformer instead of NumberFormatTransformer.
Anyhow, the order in which the transformers are applied is the same as the
order in which you specify them.
So make sure your entity has
transformers="RegexTransformer,DateFormatTransformer".
On Thu, Oct 16, 2008 at