Have you done what the message says and looked at your Solr log? If so,
what information is there?
> On Dec 23, 2020, at 5:13 AM, DINSD | SPAutores
> wrote:
>
> Hi,
>
I'm trying to install the "data-import-handler" package, since it was
dropped from the core Solr distribution.
>
> https://git
On 11/30/2020 7:50 AM, David Smiley wrote:
Yes, absolutely to what Eric said. We goofed on news / release highlights
on how to communicate what's happening in Solr. From a Solr insider point
of view, we are "deprecating" because strictly speaking, the code isn't in
our codebase any longer. From a user point of view (the audience of news
You don’t need to abandon DIH right now…. You can just use the GitHub-hosted
version…. The more people who use it, the better a community will form
around it! It’s a bit chicken-and-egg: since no one is actively discussing
it, submitting PRs, etc., it may languish. If you use it, and
On 11/29/2020 10:32 AM, Erick Erickson wrote:
And I absolutely agree with Walter that the DB is often where
the bottleneck lies. You might be able to
use multiple threads and/or processes to query the
DB if that’s the case and you can find some kind of partition
key.
IME the difficult part has
If you like Java instead of Python, here’s a skeletal program:
https://lucidworks.com/post/indexing-with-solrj/
It’s simple and single-threaded, but could serve as a basis for
something along the lines that Walter suggests.
I recommend building an outboard loader, like I did a dozen years ago for
Solr 1.3 (before DIH) and did again recently. I’m glad to send you my Python
program, though it reads from a JSONL file, not a database.
Run a loop fetching records from a database. Put each record into a synchronized
(threa
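[Editor's note: for a concrete starting point, here is a minimal sketch along
the lines Walter and Erick describe: one producer thread reading JDBC rows into
a thread-safe queue, and several workers draining it into Solr in batches via
SolrJ. The JDBC URL, query, field names, batch size, and thread count are all
placeholder assumptions, not anything from this thread.]

import java.sql.*;
import java.util.*;
import java.util.concurrent.*;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class OutboardLoader {
  public static void main(String[] args) throws Exception {
    BlockingQueue<SolrInputDocument> queue = new ArrayBlockingQueue<>(10000);
    SolrInputDocument POISON = new SolrInputDocument(); // end-of-stream marker
    int nWorkers = 4;
    ExecutorService workers = Executors.newFixedThreadPool(nWorkers);
    for (int i = 0; i < nWorkers; i++) {
      workers.submit(() -> {
        // Each worker gets its own client and sends documents in batches.
        try (HttpSolrClient solr = new HttpSolrClient.Builder(
                 "http://localhost:8983/solr/mycore").build()) {
          List<SolrInputDocument> batch = new ArrayList<>();
          while (true) {
            SolrInputDocument doc = queue.take();
            if (doc == POISON) break;
            batch.add(doc);
            if (batch.size() >= 1000) { solr.add(batch); batch.clear(); }
          }
          if (!batch.isEmpty()) solr.add(batch);
          solr.commit();
        } catch (Exception e) { e.printStackTrace(); }
        return null;
      });
    }
    // Producer: loop over the DB result set, queue one document per row.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:mysql://localhost/mydb", "user", "pass");
         Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("SELECT id, title FROM docs")) {
      while (rs.next()) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", rs.getString("id"));
        doc.addField("title", rs.getString("title"));
        queue.put(doc);
      }
    }
    for (int i = 0; i < nWorkers; i++) queue.put(POISON);
    workers.shutdown();
    workers.awaitTermination(1, TimeUnit.HOURS);
  }
}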
I went through the same stages of grief that you are about to start
but (luckily?) my core dataset grew some weird cousins and we ended up
writing our own indexer to join them all together/do partial
updates/other stuff beyond DIH. It's not difficult to upload docs but
is definitely slower so far.
On 11/28/2020 5:48 PM, matthew sporleder wrote:
... The bottom of
that github page isn't hopeful however :)
Yeah, "works with MariaDB" is a particularly bad way of saying "BYO JDBC
JAR" :)
It's a more general queston though, what is the path forward for users
who with data in two places?
https://solr.cool/#utilities -> https://github.com/rohitbemax/dataimporthandler
You can import it in the many new/novel ways to add things to a solr
install and it should work like always (apparently). The bottom of
that github page isn't hopeful however :)
On Sat, Nov 28, 2020 at 5:21 PM Dmitri
On Tue, May 5, 2020 at 1:58 PM Mikhail Khludnev wrote:
Hello, James.
DataImportHandler has a lock preventing concurrent execution. If you need
to run several imports in parallel at the same core, you need to duplicate
"/dataimport" handlers definition in solrconfig.xml. Thus, you can run them
in parallel. Regarding schema, I prefer the latter but mile
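[Editor's note: in solrconfig.xml that duplication looks roughly like this;
the second handler name and the config file names are made-up examples.]

  <requestHandler name="/dataimport"
                  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">data-config-a.xml</str>
    </lst>
  </requestHandler>

  <requestHandler name="/dataimport2"
                  class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">data-config-b.xml</str>
    </lst>
  </requestHandler>

Each handler instance keeps its own lock, so /dataimport and /dataimport2
can run imports concurrently on the same core.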
Data, IM & Analytics
>
>
>
> Lautrupparken 40-42, DK-2750 Ballerup
> E-mail m...@kmd.dk Web www.kmd.dk
> Mobil +4525571418
>
> -Original Message-
> From: Alexandre Rafalovitch
> Sent: October 2, 2018 18:18
> To: solr-user
> Subject: Re: data-imp
Admin UI for DIH will show you the config file read. So, if nothing is
there, the path is most likely the issue
You can also provide or update the configuration right in UI if you
enable debug.
Finally, the config file is reread on every invocation, so you don't
need to restart the core after cha
> url="C:/Users/z6mhq/Desktop/data_import/nh_test.xml"
Have you tried url="C:\\Users\\z6mhq\\Desktop\\data_import\\nh_test.xml" ?
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
> On Oct 2, 2018, at 17:15, Martin Frank Hansen (MHQ) wrote:
>
> Hi,
>
> I am having some pr
Hi Thomas,
Is this SolrCloud or Solr master-slave? Do you update the index while indexing?
Did you check that all your instances behind the LB are in sync, if you are
using master-slave?
My guess would be that DIH is using cursors to read data from another Solr. If
you are using multiple Solr instances beh
Also, upgrade to 6.4.2. There are serious performance problems in 6.4.0 and
6.4.1.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Mar 15, 2017, at 12:05 PM, Liu, Daphne
> wrote:
>
> For Solr 6.3, I have to move mine to
> ../solr-6.3.0/server/s
For Solr 6.3, I have to move mine to
../solr-6.3.0/server/solr-webapp/webapp/WEB-INF/lib. If you are using jetty.
Kind regards,
Daphne Liu
BI Architect - Matrix SCM
CEVA Logistics / 10751 Deerwood Park Blvd, Suite 200, Jacksonville, FL 32256
USA / www.cevalogistics.com T 904.564.1192 / F 904.
You could configure the data import handler to not delete at the start
(either do a delta import or set the preImportDeleteQuery), and set a
postImportDeleteQuery if required.
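[Editor's note: roughly, these go on the root entity of the DIH config; the
query and delete-query values below are only illustrative.]

  <entity name="item" pk="id"
          query="SELECT id, title FROM item"
          preImportDeleteQuery="source:item"
          postImportDeleteQuery="last_modified:[* TO NOW/DAY-30DAYS]">
    ...
  </entity>

preImportDeleteQuery replaces the default *:* delete that a full-import with
clean=true would otherwise issue.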
On Saturday, 4 March 2017, Alexandre Rafalovitch wrote:
> Commit is index global. So if you have overlapping timelines and commit
Commit is index global. So if you have overlapping timelines and commit is
issued, it will affect all changes done to that point.
So, the aliases may be better for you. You could potentially also reload a
core with changed solrconfig.xml settings, but that's heavy on caches.
Regards,
Alex
On
>
> You have indicated that you have a way to avoid doing updates during the
> full import. Because of this, you do have another option that is likely
> much easier for you to implement: Set the "commitWithin" parameter on
> each update request. This works almost identically to autoSoftCommit,
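[Editor's note: commitWithin is just a parameter on the update request. A
hedged example; the core name and document are made up.]

  curl 'http://localhost:8983/solr/mycore/update?commitWithin=60000' \
    -H 'Content-Type: application/json' \
    -d '[{"id":"1","title":"example"}]'

Solr then promises to make the document searchable within 60 seconds without
the client issuing an explicit commit.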
On 3/3/2017 10:17 AM, Sales wrote:
> I am not sure how best to handle this. We use the data import handle to
> re-sync all our data on a daily basis, takes 1-2 hours depending on system
> load. It is set up to commit at the end, so, the old index remains until it’s
> done, and, we lose no access
> On Mar 3, 2017, at 11:30 AM, Erick Erickson wrote:
One way to handle this (presuming SolrCloud) is collection aliasing.
You create two collections, c1 and c2. You then have two aliases: when
you start, "index" is aliased to c1 and "search" is aliased to c2. Now
do your full import to "index" (and, BTW, you'd be well advised to do
at least a hard co
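[Editor's note: concretely, the swap is a pair of CREATEALIAS calls against
the Collections API; re-issuing CREATEALIAS with an existing alias name just
re-points it. Host and collection names are examples.]

  # initial setup: index -> c1, search -> c2
  http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=index&collections=c1
  http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=search&collections=c2

  # after a successful full import into "index", swap the aliases
  http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=search&collections=c1
  http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=index&collections=c2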
On 3 March 2017 at 12:17, Sales wrote:
> When we enabled those, during the index, the data disappeared since it kept
> soft committing during the import process,
This part does not quite make sense. Could you expand on this "data
disappeared" part so we can understand what the issue is?
The main issue
The main issue
On 12/11/2016 8:00 PM, Brian Narsi wrote:
> We are using Solr 5.1.0 and DIH to build index.
>
> We are using DIH with clean=true and commit=true and optimize=true.
> Currently retrieving about 10.5 million records in about an hour.
>
> I would like to hear from other members' experiences as to how l
On 12.12.2016 at 04:00, Brian Narsi wrote:
> We are using Solr 5.1.0 and DIH to build index.
>
> We are using DIH with clean=true and commit=true and optimize=true.
> Currently retrieving about 10.5 million records in about an hour.
>
> I would like to hear from other members' experiences as to
Hello Jonas,
Did you figure this out?
Dr. Chuck Brooks
248-838-5070
-Original Message-
From: Jonas Vasiliauskas [mailto:jonas.vasiliaus...@yahoo.com.INVALID]
Sent: Saturday, July 02, 2016 11:37 AM
To: solr-user@lucene.apache.org
Subject: Data import handler in techproducts example
He
Hi Jonas,
Search for the solr-dataimporthandler-*.jar and place it under a lib directory
(at the same level as the solr.xml file), along with the MySQL JDBC driver
(mysql-connector-java-*.jar).
Please see:
https://cwiki.apache.org/confluence/display/solr/Lib+Directives+in+SolrConfig
On Saturday, July 2,
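[Editor's note: if you'd rather wire it up in solrconfig.xml, the equivalent
lib directives look roughly like this; the paths and regexes are examples.]

  <lib dir="${solr.install.dir:../../../..}/dist/"
       regex="solr-dataimporthandler-.*\.jar" />
  <lib dir="/opt/jdbc-drivers/"
       regex="mysql-connector-java-.*\.jar" />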
There's nothing saying you have
to highlight fields you search on. So you
can specify hl.fl to be the "normal" (perhaps
stored-only) fields and still search on the
uber-field.
Best,
Erick
On Thu, May 26, 2016 at 2:08 PM, kostali hassan
wrote:
> I did it , I copied all my dynamic field into text
I did it: I copied all my dynamic fields into the text field and it works
great. Just one question: even after copying text into content and the
inverse to get highlighting, it does not work. Is there another way to get
highlighting? Thank you, Erick
2016-05-26 18:28 GMT+01:00 Erick Erickson :
> And, you can c
And, you can copy all of the fields into an "uber field" using the
copyField directive and just search the "uber field".
Best,
Erick
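[Editor's note: in the schema that looks something like the following; the
field and type names are examples.]

  <field name="uber" type="text_general" indexed="true" stored="false"
         multiValued="true"/>
  <copyField source="*" dest="uber"/>

You would then search the "uber" field (e.g. via the df parameter) while
highlighting on the original stored fields, per Erick's note above.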
On Thu, May 26, 2016 at 7:35 AM, kostali hassan
wrote:
> Thank you, it makes sense.
> Have a good day.
>
> 2016-05-26 15:31 GMT+01:00 Siddhartha Singh Sandhu :
>
>>
Thank you, it makes sense.
Have a good day.
2016-05-26 15:31 GMT+01:00 Siddhartha Singh Sandhu :
The schema.xml/managed_schema defines the default search field as `text`.
You can make all fields that you want searchable type `text`.
On Thu, May 26, 2016 at 10:23 AM, kostali hassan
wrote:
> I import data from SQL databases with DIH. I am looking to search terms in
> all fields, not field by field.
It's resolved after changing my column name... it's all case-sensitive...
I am also having the same problem.
Have you resolved this issue?
"response": {
"numFound": 3,
"start": 0,
"docs": [
{
"genre": [
"Action|Adventure",
"Action",
"Adventure"
]
},
{
"genre": [
"Drama|Suspens
Hi
The Dataimport section in the web UI still shows me that no data import handler
is defined, and no data is being added to my new collection.
The "other" collection (destination of the import) is the collection where that
data import handler definition resides.
Erik
> On Feb 16, 2016, at 01:54, vidya wrote:
>
> Hi
>
> I have gone through the documents on defining a data import handler in Solr,
> but I could not implement it.
> I have cr
You can start with one of the suggestions from this link based on your
indexing and query load.
https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Thanks,
Susheel
On Mon, Feb 8, 2016 at 10:15 AM, Troy Edwards
wrote:
> We are running the
We have this for a collection which is updated every 3 minutes with a minimum
of 500 documents, plus around 10k documents once at the start of each day:
<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:30}</maxTime>
  <maxDocs>1</maxDocs>
  <openSearcher>true</openSearcher>
</autoCommit>
<commitWithin>
  <softCommit>true</softCommit>
</commitWithin>
<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:6000}</maxTime>
</autoSoftCommit>
As per the Solr documentation, if
While researching the space on the servers, I found that log files from
Sept 2015 are still there. These are solr_gc_log_datetime and
solr_log_datetime.
Is the default logging for Solr ok for production systems or does it need
to be changed/tuned?
Thanks,
On Tue, Feb 2, 2016 at 2:04 PM, Troy Edw
That helps!
Thank you for the thoughts.
On Tue, Feb 2, 2016 at 12:17 PM, Erick Erickson
wrote:
> Scratch that installation and start over?
>
> Really, it sounds like something is fundamentally messed up with the
> Linux install. Perhaps something as simple as file paths, or you have
> old ja
Scratch that installation and start over?
Really, it sounds like something is fundamentally messed up with the
Linux install. Perhaps something as simple as file paths, or you have
old jars hanging around that are mis-matched. Or someone manually
deleted files from the Solr install. Or your disk f
Rerunning the Data Import Handler on the Linux machine has
started producing some errors and warnings:
On the node on which DIH was started:
WARN SolrWriter Error creating document : SolrInputDocument
org.apache.solr.common.SolrException: No registered leader was found
after waiting fo
The first thing I'd be looking at is how the JDBC batch size compares
between the two machines.
AFAIK, Solr shouldn't notice the difference, and since a large majority
of the development is done on Linux-based systems, I'd be surprised if
this was worse than Windows, which would lead me to t
Sorry, I should explain further. The Data Import Handler had been running
for a while retrieving only about 15 records from the database. Both in
development env (windows) and linux machine it took about 3 mins.
The query has been changed and we are now trying to retrieve about 10
million reco
What happens if you run just the SQL query from the
windows box and from the linux box? Is there any chance
that somehow the connection from the linux box is
just slower?
Best,
Erick
On Mon, Feb 1, 2016 at 6:36 PM, Alexandre Rafalovitch
wrote:
> What are you importing from? Is the source and Sol
What are you importing from? Is the source and Solr machine collocated
in the same fashion on dev and prod?
Have you tried running this on a Linux dev machine? Perhaps your prod
machine is loaded much more than a dev.
Regards,
Alex.
Newsletter and resources for Solr beginners and intermed
That was it! Thank you!
On Fri, Dec 4, 2015 at 3:13 PM, Dyer, James
wrote:
> Brian,
>
> Be sure to have...
>
> transformer="RegexTransformer"
>
> ...in your <entity /> tag. It’s the RegexTransformer class that looks
> for "splitBy".
>
> See https://wiki.apache.org/solr/DataImportHandler#RegexTransformer
Brian,
Be sure to have...
transformer="RegexTransformer"
...in your <entity /> tag. It’s the RegexTransformer class that looks for
"splitBy".
See https://wiki.apache.org/solr/DataImportHandler#RegexTransformer for more
information.
James Dyer
Ingram Content Group
-Original Message-
From: Br
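[Editor's note: put together, and reusing the genre example from elsewhere in
this thread, the entity would look something like this; the table and column
names are assumptions. Note that splitBy is a regex, so the literal pipe must
be escaped.]

  <entity name="movie" transformer="RegexTransformer"
          query="SELECT id, genre FROM movie">
    <field column="genre" splitBy="\|"/>
  </entity>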
The backup/restore approach in SOLR-5750 and in solrcloud_manager is
really just that - copying the index files.
On backup, it saves your index directories, and on restore, it puts them
in the data dir, moves a pointer for the current index dir, and opens a
new searcher. Both are mostly just wrapp
These are just Lucene indexes. There's the Cloud backup and restore
that is being worked on.
But if the index is static (i.e. not being indexed to), simply copying
the data/index (well, actually the whole data directory and subdirs)
directory will back up and restore it. Copying the index directory bac
What are the caveats regarding the copy of a collection?
At this time DIH takes only about 10 minutes. So in case of accidental
delete we can just re-run the DIH. The reason I am thinking about backup is
just in case records are deleted accidentally and the DIH cannot be run
because the database i
https://github.com/whitepages/solrcloud_manager supports 5.x, and I added
some backup/restore functionality similar to SOLR-5750 in the last
release.
Like SOLR-5750, this backup strategy requires a shared filesystem, but
note that unlike SOLR-5750, I haven’t yet added any backup functionality
for
Sorry I forgot to mention that we are using SolrCloud 5.1.0.
On Tue, Nov 17, 2015 at 12:09 PM, KNitin wrote:
> afaik Data import handler does not offer backups. You can try using the
> replication handler to backup data as you wish to any custom end point.
>
> You can also try out : https://gi
AFAIK the data import handler does not offer backups. You can try using the
replication handler to back up data as you wish to any custom endpoint.
You can also try out https://github.com/bloomreach/solrcloud-haft. This
helps back up Solr indices across clusters.
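[Editor's note: the replication-handler backup mentioned here is a single
HTTP call; the core name, location, and snapshot name are examples.]

  http://localhost:8983/solr/mycore/replication?command=backup&location=/backups&name=nightly

Querying the same handler with command=details afterwards reports the status
of the backup.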
On Tue, Nov 17, 2015 at 7:08 AM, Br
Yes the id is unique. If I only select distinct id,count(id) I get the same
results. However I found this is more likely a MySQL issue. I created a new
table called director1 and ran query "insert into director1 select * from
director" I got only 287041 results inserted, which was the same as Solr.
That's not quite the question I asked. Do a distinct on 'id' only in
the database itself. If your ids are NOT unique, you need to create a
composite or a virtual id for Solr. Because whatever your
solrconfig.xml say is uniqueKey will be used to deduplicate the
documents. If you have 10 documents wi
Hi thanks for the continued support. I'm really worried as my project
deadline is near. It was 1636549 in MySQL vs 287041 in Solr. I put select
distinct in the beginning of the query because IMDB doesn't have a table
for cast & crew. It puts movie and person and their roles into one huge
table 'cas
Just to get the paranoid option out of the way, is 'id' actually the
column that has unique ids in your database? If you do "select
distinct id from imdb.director" - how many items do you get?
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-s
ll the contents of those zips and concatenate the
extracted text into one string.
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Tuesday, July 21, 2015 10:41 AM
To: solr-user@lucene.apache.org
Subject: Re: Data Import Handler Stays Idle
On 7/21/2015 8:17 AM, Pade
Hey Shawn, when I use the -m 2g command in my script I get the error 'cannot
open [path]/server/logs/solr.log for reading: No such file or directory'. I
do not see how this would affect that.
Okay. I'm going to run the index again with the specifications that you
recommended. This could take a few hours, but I will post the entire trace of
that error when it pops up again, and I will let you guys know the results of
increasing the heap size.
On 7/21/2015 8:17 AM, Paden wrote:
> There are some zip files inside the directory that are referenced in
> the database. I'm thinking those are the ones it's jumping right over.
> They are not the issue; at least I'm 95% sure. And Shawn, if you're still
> watching, I'm sorry: I'm using solr-5
There are some zip files inside the directory that are referenced in the
database. I'm thinking those are the ones it's jumping right over. They
are not the issue; at least I'm 95% sure. And Shawn, if you're still watching,
I'm sorry: I'm using solr-5.1.0.
>Yes the number of unimported matches (with IOExceptions)
What is the IOException about?
On 7/20/15, 5:10 PM, "Paden" wrote:
Yes, the number of unimported matches. No, I did not specify commit=false
on any of my data import handler requests. Since it defaults to true, I really
didn't take it into account though.
The number of IOExceptions, is it equal to the un-imported/unprocessed
documents?
Is commit by any chance set to false in the import request?
Example:
http://localhost:8983/solr/db/dataimport?command=full-import&commit=false
Thanks
Raja
On 7/20/15, 4:51 PM, "Paden" wrote:
I was consistently checking the logs to see if there were any errors that
would give me any clue about the idling. There were no errors except for a few
skipped documents due to some Illegal IOExceptions from Tika, but none of those
occurred around the time that Solr began idling. A lot of font warnings. But
again
On 7/20/2015 3:03 PM, Paden wrote:
> I'm currently trying to index about 54,000 files with the Solr Data Import
> Handler and I've got a small problem. It fetches about half (28,289) of the
> 54,000 files and it process about 14,146 documents before it stops and just
> stands idle. Here's the statu
Have you tried it as ${dih.request.foo}?
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
http://www.solr-start.com/
On 16 March 2015 at 14:51, Kiran J wrote:
> Hi,
>
> In data import handler, I can read the "clean" query parameter using
> ${dih.request.clea
On 12/4/2014 9:18 AM, dhwani2388 wrote:
> In Solr I am fetching the DIH status of the core using
> /dataimport?command=status. Now the data import is running, though the status
> URL is giving me idle status. Sometimes it gives me idle status at the right
> time, once the data import is completed, but sometimes
early.
Can anyone help on this?
To: solr-user@lucene.apache.org; Ahmet Arslan
Subject: Re: Data Import Handler for CSV file
Hi Ahmet,
Thank you for this reply. I agree with you that the CSV update handler is fast,
but we always need to specify the columns in the HTTP request. In addition, I
can't find documentation on how to use CSV update from SolrJ.
Could y
Hi,
I think you can define the field names in the first line of the CSV. Why don't
you use curl to index the CSV?
I don't have a full working example with DIH, but I have the following example
that indexes every line as a separate Solr document.
You need to add a transformer that splits each line according to co
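[Editor's note: a rough sketch of this suggestion, with LineEntityProcessor
feeding each raw line through a RegexTransformer. The file path, column
layout, and regex are assumptions.]

  <dataConfig>
    <dataSource type="FileDataSource" encoding="UTF-8"/>
    <document>
      <entity name="line"
              processor="LineEntityProcessor"
              url="/path/to/data.csv"
              rootEntity="true"
              transformer="RegexTransformer">
        <!-- LineEntityProcessor exposes each line as "rawLine";
             split it into fields with capture groups -->
        <field column="rawLine" regex="^(.*?),(.*?),(.*)$"
               groupNames="id,name,price"/>
      </entity>
    </document>
  </dataConfig>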
You could always define the parameters in solrconfig.xml on a custom
handler. Then you don't have to pass the same values over and over again.
Regards,
Alex
On 09/10/2014 5:26 pm, "nabil Kouici" wrote:
> Hi Ahmet,
>
> Thank you for this reply. I agree with you that the CSV update handler is fast
> bu
Hi Ahmet,
Thank you for this reply. I agree with you that the CSV update handler is fast,
but we always need to specify the columns in the HTTP request. In addition, I
can't find documentation on how to use CSV update from SolrJ.
Could you please send me an example of using DIH to load a CSV file?
Regards,
Nabil.
Hi Nabil,
What's wrong with the CSV update handler? It is quite fast.
By the way, DIH has a LineEntityProcessor, so yes, this is doable with
existing DIH components.
Ahmet
On Thursday, October 9, 2014 9:58 PM, nabil Kouici wrote:
Hi All,
Is it possible to have in Solr a DIH config that loads from a CSV file?
On 8 October 2014 01:00, Ahmet Arslan wrote:
>
>
>
> Hi Durga,
>
> That wiki talks about an uncommitted code. So it is not built in.
Maybe it is just me, but given that there are existing scheduling
solutions in most operating systems, I fail to understand why
people expect Solr to expand to incl
Hi Durga,
That wiki talks about an uncommitted code. So it is not built in.
Ahmet
On Tuesday, October 7, 2014 7:17 PM, Durga Palamakula
wrote:
There is built-in scheduling @
http://wiki.apache.org/solr/DataImportHandler#Scheduling
But as others have mentioned, cron is the simplest.
On Mon, Oct 6, 2014 at 8:56 PM, Karunakar Reddy
wrote:
> Thanks Shawn and Gora for your suggestions.
> @Gora sounds good. I am just getting clarity over
Thanks Shawn and Gora for your suggestions.
@Gora sounds good. I am just getting clarity over it.
Regards,
Karunakar.
On Tue, Oct 7, 2014 at 8:27 AM, Gora Mohanty wrote:
> On 6 October 2014 18:40, Karunakar Reddy wrote:
> >
> > Hey Alex,
> > Thanks for your reply.
> > Is delta-import handler
On 6 October 2014 18:40, Karunakar Reddy wrote:
>
> Hey Alex,
> Thanks for your reply.
> Is the delta-import handler configurable? Say I want to update documents
> every 20 mins; is that possible through any configuration/setting, like
> autocommit?
As a delta-import involves loading a URL, you can d
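[Editor's note: presumably this continues with the cron suggestion made
elsewhere in the thread. For example, a crontab entry hitting the
delta-import URL every 20 minutes; the URL and schedule are examples.]

  */20 * * * * curl -s 'http://localhost:8983/solr/mycore/dataimport?command=delta-import' > /dev/null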
Hey Alex,
Thanks for your reply.
Is the delta-import handler configurable? Say I want to update documents
every 20 mins; is that possible through any configuration/setting, like
autocommit?
Regards,
Karunakar.
On Mon, Oct 6, 2014 at 6:24 PM, Alexandre Rafalovitch
wrote:
> 1) DIH looks like a match
On 6 October 2014 08:56, Shawn Heisey wrote:
> 2) As a group, the developers are resistant to features that would cause
> Solr to make changes in the index without being *told* to do it by an
> outside force. There is already an issue in Jira for a DIH scheduler,
> but the patch hasn't been commi
1) DIH looks like a match for your needs, yes. You just trigger it from
your script and then it does the rest of the work asynchronously. But
you'll have to poll later for the status if you want to report on
success/failure.
2) Yes, you can, just by defining several entities next to each other.
You can r
On 10/6/2014 5:09 AM, Karunakar Reddy wrote:
> Please suggest an effective way of using the data import handler.
>
> Here is my use case.
>
> I have different kinds of items which need to be indexed in Solr, e.g.
> books, shoes, electronics, etc.; each one is in a different relational
> table.
> I ha
First of all, thank you very much for the answer, James. It is very complete
and it gives us several alternatives :)
I think we will try the cache approach first since, after solving
https://issues.apache.org/jira/browse/SOLR-5954, the performance has
improved, so along with the cach
Alejandro,
You can use a sub-entity with a cache using DIH. This will solve the
"n+1 selects" problem and make it run quickly. Unfortunately, the only built-in
cache implementation is in-memory, so it doesn't scale. There is a fast,
disk-backed cache using bdb-je, which I use in production. S
On 7/25/2014 1:06 AM, Yavar Husain wrote:
> Have most of experience working on Solr with Tomcat. However I recently
> started with Jetty. I am using Solr 4.7.0 on Windows 7. I have configured
> solr properly and am able to see the admin UI as well as velocity browse.
> Dataimporthandler screen is a
It needs to be kept outside of Solr, like a customized
Mysolr_core.properties file.
How do I access it?
-Original Message-
From: Dyer, James [mailto:james.d...@ingramcontent.com]
Sent: Wednesday, November 13, 2013 8:50 PM
To: solr-user@lucene.apache.org
Subject: RE: Data Import Handler
In
:
Hope this works for you.
James Dyer
Ingram Content Group
(615) 213-4311
-Original Message-
From: Ramesh [mailto:ramesh.po...@vensaiinc.com]
Sent: Wednesday, November 13, 2013 9:00 AM
To: solr-user@lucene.apache.org
Subject: RE: Data Import Handler
James, can you elaborate on how to process
From: Dyer, James [mailto:james.d...@ingramcontent.com]
Sent: Wednesday, November 06, 2013 7:42 PM
To: solr-user@lucene.apache.org
Subject: RE: Data Import Handler
If you prepend the variable name with "dataimporter.request", you can
include variables like these as request parameters:
/dih?driver=some.driver.cl
If you prepend the variable name with "dataimporter.request", you can include
variables like these as request parameters:
/dih?driver=some.driver.class&url=jdbc:url:something
If you want to include these in solrcore.properties, you can additionally add
each property to solrconfig.xml like thi
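[Editor's note: in the data-config those request variables slot straight into
the dataSource definition; the parameter names are whatever you choose to
pass.]

  <dataSource driver="${dataimporter.request.driver}"
              url="${dataimporter.request.url}"
              user="${dataimporter.request.user}"
              password="${dataimporter.request.password}"/>

invoked as, for example:

  /dataimport?command=full-import&driver=com.mysql.jdbc.Driver&url=jdbc:mysql://localhost/mydb&user=solr&password=secret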
I configured a data source in Tomcat and referenced it by its JDBC name.
So the dev and production sites share the same config file but use different
DBs. I hope this helps.
> On Nov 6, 2013, at 13:25, "Ramesh"
> wrote:
>
> Hi Folks,
>
>
>
> Can anyone suggest how I can cu
I've done this by adding an attribute to the entity element (e.g.
myconfig="myconfig.xml"), and reading it in the 'init' method with
context.getResolvedEntityAttribute("myconfig").
Peter
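[Editor's note: a minimal sketch of this approach, assuming a custom entity
processor subclassing one of the stock DIH processors; the class and
attribute names are examples.]

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.SqlEntityProcessor;

// Reads a custom attribute declared on the entity element, e.g.
//   <entity name="x" processor="MyEntityProcessor" myconfig="myconfig.xml" ...>
public class MyEntityProcessor extends SqlEntityProcessor {
  private String myConfig;

  @Override
  public void init(Context context) {
    super.init(context);
    // Also resolves any ${...} placeholders in the attribute value
    myConfig = context.getResolvedEntityAttribute("myconfig");
  }
}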
On Wed, Nov 6, 2013 at 8:25 AM, Ramesh wrote:
> Hi Folks,
>
>
>
> Can anyone suggest how I can customize d
Yes, I've just used concat(id, '_', tableName) instead of using a compound key.
I think this is an easy way.
Thanks.
-
Phat T. Dong
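[Editor's note: for reference, that compound-key trick lives in the entity's
SQL; the table and column names are examples.]

  <entity name="director"
          query="SELECT CONCAT(id, '_', 'director') AS id, name FROM director"/>

This way documents coming from each table get a distinct uniqueKey and do not
overwrite one another.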