On 2/21/2021 3:07 PM, cratervoid wrote:
Thanks Shawn, I copied the solrconfig.xml file from the gettingstarted
example on 7.7.3 installation to the 8.8.0 installation, restarted the
server and it now works. Comparing the two files it looks like as you said
this section was left out of the _default/solrconfig.xml file in version
8.8.0:
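For anyone else hitting the 404 on /update/extract with the 8.x _default configset, the missing section looks roughly like the following. This is a sketch based on the 7.x sample configs, so the lib paths and defaults may need adjusting to your install layout:

<!-- load the Solr Cell (extraction) and Tika jars -->
<lib dir="${solr.install.dir:../../../..}/contrib/extraction/lib" regex=".*\.jar" />
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-cell-\d.*\.jar" />

<!-- the handler SimplePostTool posts rich documents (HTML, PDF, ...) to -->
<requestHandler name="/update/extract" class="solr.extraction.ExtractingRequestHandler">
  <lst name="defaults">
    <str name="lowernames">true</str>
    <str name="fmap.content">_text_</str>
  </lst>
</requestHandler>

With that in place, posting sample.html to /update/extract should work again.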
Thanks Alex. I copied the solrconfig.xml over from 7.7.3 to the 8.8.0 conf
folder and restarted the server. Now indexing works without erroring on
sample.html. There is 1K difference between the 2 files so I'll diff them
to see what was left out of the 8.8 version.
On Sat, Feb 20, 2021 at
Regards,
Alex.
On Sat, 20 Feb 2021 at 17:59, cratervoid wrote:
>
> I am trying out indexing the exampledocs in the examples folder with the
> SimplePostTool on windows 10 using solr 8.8. All the documents index
> except sample.html. For that file I get the errors below. I then
> downloaded
On 2/20/2021 3:58 PM, cratervoid wrote:
SimplePostTool: WARNING: Solr returned an error #404 (Not Found) for url:
http://localhost:8983/solr/gettingstarted/update/extract?resource.name=C%3A%5Csolr-8.8.0%5Cexample%5Cexampledocs%5Csample.html&literal.id=C%3A%5Csolr-8.8.0%5Cexample%5Cexampledocs%5Cs
I am trying out indexing the exampledocs in the examples folder with the
SimplePostTool on windows 10 using solr 8.8. All the documents index
except sample.html. For that file I get the errors below. I then
downloaded solr 7.7.3 and indexed the exampledocs folder with no errors,
including
Hi All
Looking for some help on document indexing frequency. I am using Apache Solr
7.7 and the SolrNet library to commit documents to Solr. The summary for this
function is:
// Summary:
//     Commits posted documents, blocking until index changes are flushed to
//     disk and blocking until a new
Hi,
The issue was buildOnCommit=true on a SuggestComponent.
Dominique
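For reference, that flag lives on the suggester definition in solrconfig.xml. A minimal sketch (suggester name, field, and lookup implementation here are illustrative) with the rebuild-on-commit behaviour turned off:

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <!-- rebuilding the suggester on every commit is expensive with NRT commits -->
    <str name="buildOnCommit">false</str>
  </lst>
</searchComponent>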
On Tue, Feb 2, 2021 at 00:54, Shawn Heisey wrote:
> On 2/1/2021 12:08 AM, haris.k...@vnc.biz wrote:
> > Hope you're doing good. I am trying to configure NRT - Indexing in my
> > project. For
On 2/1/2021 12:08 AM, haris.k...@vnc.biz wrote:
Hope you're doing good. I am trying to configure NRT - Indexing in my
project. For this reason, I have configured *autoSoftCommit* to execute
every second and *autoCommit* to execute every 5 minutes. Everything
works as expected on the de
Could you grep your solr logs with the "commit" pattern in order to see
hard and soft commit occurrences?
How are you pushing new docs or updates in the collection?
Regards.
Dominique
On Mon, Feb 1, 2021 at 0
I'm running into the same issue. I've set autoSoftCommit and autoCommit but
the speed at which docs are indexed seems to be inconsistent with the
settings. I have lowered the autoCommit to a minute but it still takes a
few minutes for docs to show after indexing. Soft commit settings al
Hello,
Hope you're doing good. I am trying to configure NRT - Indexing in my project.
For this reason, I have configured autoSoftCommit to execute every second and
autoCommit to execute every 5 minutes. Everything works as expected on the dev
and test server. But on the production s
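For readers following along, the configuration being described corresponds to something like this in solrconfig.xml (a sketch of the stated intent, not the poster's actual file):

<autoCommit>
  <!-- hard commit every 5 minutes: flushes to disk, no new searcher -->
  <maxTime>300000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit every second: makes new documents visible to searches -->
  <maxTime>1000</maxTime>
</autoSoftCommit>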
I agree, documents may be gigantic or very small, with heavy text analysis
or simple strings ...
so it's not possible to give an evaluation here.
But you could make use of the nightly benchmark to give you an idea of
Lucene indexing speed (the engine inside Apache Solr):
http://home.apach
Perfect! Thanks!
-----Original Message-----
From: xiefengchang [mailto:fengchang_fi...@163.com]
Sent: Sunday, January 10, 2021 04:50
To: solr-user@lucene.apache.org
Subject: Re:[Solr8.7] Indexing only some language ?
Take a look at the document here:
https://lucene.apache.org/solr/guid
it's hard to answer your question without your solrconfig.xml and
managed-schema (or schema.xml); it would be good to have some log snippet as well~
At 2021-01-07 21:28:00, "ufuk yılmaz" wrote:
>Hello all,
>
>I have been looking at our SolrCloud indexing performance
Take a look at the document here:
https://lucene.apache.org/solr/guide/8_7/dynamic-fields.html#dynamic-fields
here's the point: "a field that does not match any explicitly defined fields
can be matched with a dynamic field."
so I guess the priority is quite clear~
At 2021-01-
Hello,
I would like to define in my schema.xml some text_xx fields.
I have patent titles in several languages.
Only 6 of them (EN, IT, FR, PT, ES, DE) interest me.
I know how to define these 6 fields, I use text_en, text_it etc.
i.e. for English language:
But I have more than 6 lang
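A sketch of how the two ideas in this thread combine in schema.xml (field names here are hypothetical): explicit fields for the six languages of interest, plus a dynamicField as a catch-all for the remaining languages, since a field that matches no explicit definition can still match a dynamic one:

<field name="title_en" type="text_en" indexed="true" stored="true"/>
<field name="title_it" type="text_it" indexed="true" stored="true"/>
<field name="title_fr" type="text_fr" indexed="true" stored="true"/>
<field name="title_pt" type="text_pt" indexed="true" stored="true"/>
<field name="title_es" type="text_es" indexed="true" stored="true"/>
<field name="title_de" type="text_de" indexed="true" stored="true"/>
<!-- any other title_xx falls through to a generic analyzer -->
<dynamicField name="title_*" type="text_general" indexed="true" stored="true"/>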
Hello all,
I have been looking at our SolrCloud indexing performance statistics and trying
to make sense of the numbers. We are using a custom Flume sink and sending
updates to Solr (8.4) using SolrJ.
I know these things depend on a lot of factors, but can you tell me if these
statistics are
On 23/12/2020 16:00, Ron Buchanan wrote:
> - both run Java 1.8, but 7.3 is running HotSpot and 8.7 is running
> OpenJDK (and a bit newer)
If you're using G1GC, you probably want to give Java 11 a go. It's an
easy thing to test, and it's had a positive impact for us. Your mileage
may vary.
(this is long, just trying to be thorough)
I'm working on upgrading from Solr 7.3 to Solr 8.7 and I am seeing a
significant drop in indexing throughput during a full index reload - from
~1300 documents per second to ~450 documents/sec
Background:
VM hosts (these are configured identi
utilities that allow you to do this
transformation easily.
> On 20.11.2020 at 21:50, Fiz N wrote:
>
> Hello Experts,
>
> I am having issues with indexing Date field in SOLR 8.6.0. I am indexing
> from MongoDB. In MongoDB the Format is as follows
Hello Experts,
I am having issues with indexing Date field in SOLR 8.6.0. I am indexing
from MongoDB. In MongoDB the Format is as follows
* "R_CREATION_DATE" : "12-Jul-18", "R_MODIFY_DATE" : "30-Apr-19", *
In my Managed Schema I have the following e
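One of the transformation utilities alluded to above is Solr's ParseDate update processor; a hedged sketch of a chain in solrconfig.xml (the chain name is made up) that would parse that MongoDB date format at index time, provided the target fields are date-typed in the schema:

<updateRequestProcessorChain name="parse-mongo-dates" default="true">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <arr name="format">
      <!-- matches "12-Jul-18" / "30-Apr-19" -->
      <str>dd-MMM-yy</str>
    </arr>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>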
Hello all
We are using Apache Solr 7.7 on Windows platform. The data is synced to Solr
using Solr.Net commit. The data is being synced to SOLR in batches. The
document size is very large (~0.5GB average) and Solr indexing is taking a long
time. Total document size is ~200GB. As the solr commit is
) and filterbyname().
> Thus you may wish to consider them or equivalents for inclusion in your
> system, whatever that may be.
> Thanks,
> Joe D.
>
> On 27/08/2020 20:32, Alexandre Rafalovitch wrote:
>> If you are indexing from Drupal into Solr, that's the question
If you are indexing from Drupal into Solr, that's the question for
Drupal's solr module. If you are doing it some other way, which way
are you doing it? bin/post command?
Most likely this is not the Solr question, but whatever you have
feeding data into Solr.
Regards,
Alex.
On T
How do you exclude a specific folder/directory from indexing in Solr
version 7.x or 8.x? Also, our CMS is Drupal 8.
Thanks,
Phil Staley
DCF Webmaster
608 422-6569
phil.sta...@wisconsin.gov
's single threaded and deprecated anyway.
3. Minor point - consider whether you need to index everything every
time or just the deltas.
4. Upgrade Solr anyway, not for speed reasons but because that's a very
old version you're running.
HTH
Charlie
On 17/08/2020 19:22, Abhijit Pawar wrote:
Hello,
We are indexing some 200K plus documents in SOLR 5.4.1 with no shards /
Adding on to what others have said, indexing speed in general is largely
affected by the parallelism and isolation you can give to each node.
Is there a reason why you cannot have more than 1 shard?
If you have a 5-node cluster, why not have 5 shards, with maxShardsPerNode=1 and
replicationFactor=1? You should
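A sketch of that with the Collections API (collection and configset names here are hypothetical):

http://localhost:8983/solr/admin/collections?action=CREATE&name=mycollection&numShards=5&replicationFactor=1&maxShardsPerNode=1&collection.configName=myconf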
On 8/17/2020 12:22 PM, Abhijit Pawar wrote:
We are indexing some 200K plus documents in SOLR 5.4.1 with no shards /
replicas and just single core.
It takes almost 3.5 hours to index that data.
I am using a data import handler to import data from the mongo database.
Is there something we can do
while you are indexing. If it is under
50%, the bottleneck is MongoDB and single-threaded indexing.
For another check, run that same query in a regular database client and time it.
The Solr indexing will never be faster than that.
wunder
Walter Underwood
wun...@wunderwood.org
http
Can you share the dih configuration you are using for same?
On Mon, 17 Aug, 2020, 23:52 Abhijit Pawar, wrote:
> Hello,
>
> We are indexing some 200K plus documents in SOLR 5.4.1 with no shards /
> replicas and just single core.
> It takes almost 3.5 hours to index that data.
>
Hello,
We are indexing some 200K plus documents in SOLR 5.4.1 with no shards /
replicas and just single core.
It takes almost 3.5 hours to index that data.
I am using a data import handler to import data from the mongo database.
Is there something we can do to reduce the time taken to index
Hi Eric, Toke,
Can you please look at the details shared in the email trail below & respond
with your suggestions/feedback?
Thanks & Regards,
Vinodh
From: Kommu, Vinodh K.
Sent: Monday, July 6, 2020 4:58 PM
To: solr-user@lucene.apache.org
Subject: RE: Time-out errors while indexing (So
de1/solr/Collection1_shard1_replica_n20
209G node2/solr/Collection1_shard4_replica_n16
1.3T total
Thanks & Regards,
Vinodh
-Original Message-
From: Erick Erickson
Sent: Saturday, July 4, 2020 7:07 PM
To: solr-user@lucene.apache.org
Subject: Re: Time-out errors whi
you’re running at all unless that 13B is a round number. If you keep adding
documents, your installation will shortly, at best, stop accepting new
documents for indexing. At worst you’ll start seeing weird errors and
possibly corrupt indexes and have to re-index everything from scratch.
You’ve backed yourself into a pretty tight corner here. You either have to
re-index to a
the indexing becomes slow, and I also
have the same impression that the size of the collection is creating this issue.
Appreciate it if you can suggest any solution for this.
Regards,
Madhava
Sent from my iPhone
> On 3 Jul 2020, at 23:30, Erick Erickson wrote:
>
> Oops, I transposed that
Hi Sunil,
Your shape is at a pole, and I'm aware of a bug causing an exponential
explosion of needed grid squares when you have polygons super-close to the
pole. Might you try S2PrefixTree instead? I forget if this would fix it
or not by itself. For indexing non-point data, I recommend
class="solr.RptWithGeometrySpatialField", which internally is based off a
combination of a coarse grid and storing the original vector geomet
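A hedged sketch of the field type being recommended here, with the S2 prefix tree swapped in (attribute values are illustrative, not tuned):

<fieldType name="geom_rpt" class="solr.RptWithGeometrySpatialField"
           spatialContextFactory="Geo3D"
           prefixTree="s2"
           geo="true"
           distanceUnits="kilometers"/>

The stored geometry keeps shapes accurate for the second-pass check, while the coarse S2 grid is intended to behave better near the poles, though as noted above it may not fix this particular case by itself.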
wapping, how much of your I/O is just because Lucene can’t
hold all the parts of the index it needs in memory at once? Lucene
uses MMapDirectory to hold the index and you may well be
swapping, see:
https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
But my guess is that you’ve just reached a tipping point. You say:
"From last 2-3 weeks we have been noticing either slow indexing or timeout
errors while indexing”
So have you been continually adding more docume
We are seeing OOM errors when trying to index some spatial data. I believe
the data itself might not be valid but it shouldn't cause the Server to
crash. We see this on both Solr 7.6 and Solr 8. Below is the input that is
causing the error.
{
"id": "bad_data_1",
"spatialwkt_srpt": "LINESTRING (-1
e than write operations like 100:1 ratio, is this expected during
> indexing or solr nodes are doing any other operations like syncing?
Are you saying that there are 100 times more read operations when you
are indexing? That does not sound too unrealistic as the disk cache
might be filled with th
Does anyone have any thoughts or suggestions on this issue?
Thanks & Regards,
Vinodh
From: Kommu, Vinodh K.
Sent: Thursday, July 2, 2020 4:46 PM
To: solr-user@lucene.apache.org
Subject: Time-out errors while indexing (Solr 7.7.1)
Hi,
We are performing QA performance testing on couple of collect
It seems that the issue is not with reference_url field itself. There is
one copy field which has the reference_url field as source and another
field called url_path as destination.
This destination field url_path has the following field type definition.
Hi,
We are performing QA performance testing on a couple of collections which hold 2
billion and 3.5 billion docs respectively. Indexing happens from a separate
client using SolrJ which uses 10 threads and a batch size of 1000. For the last 2-3
weeks we have been noticing either slow indexing or timeout
How are you sending this to Solr? I just tried 8.5, submitting that doc through
the admin UI and it works fine.
I defined “asset_id” as the same type as your reference_url field.
And does the log on the Solr node that tries to index this give any more info?
Best,
Erick
> On Jun 27, 2020,
Hi,
I have the following document which fails to get indexed.
{
"asset_id":"add-ons:576deefef7453a9189aa039b66500eb2",
"reference_url":"modeling-a-high-speed-backplane-part-3-4-port-s-parameters-to-differential-tdr-and-tdt.html"}
I am not sure what is so special about the content in the
all the fields, it deletes the document and re-indexes it. But
if I just "set" the "LASTUPDATETIME" field (non-indexed, non-stored,
docValues field), it does an in-place update without deletion. But the
problem is I don't know if the document is present or I'm indexing it the
first time.
Is there a way to prevent re-indexing if other fields are the same?
*P.S. I'm looking for a solution that doesn't require looking up if the doc is
present in the Collection or not.*
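For context, the "set" being discussed is an atomic update; a minimal SolrJ sketch (the collection name and URL are placeholders, the field name is the poster's), which Solr can execute in-place when the target field is docValues-only:

import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class InPlaceUpdate {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      // "set" on a non-indexed, non-stored docValues field -> in-place update
      doc.addField("LASTUPDATETIME",
          Collections.singletonMap("set", System.currentTimeMillis()));
      client.add(doc);
      client.commit();
    }
  }
}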
Hi all
1. Set up a simple one-node SolrCloud test environment using docker-compose,
solr:8.5.2, zookeeper:3.5.8.
2. Upload a configset
3. Create two collections, one standard collection, one CRA, both
using the same configset
legacy:
action=CREATE&name=products_old&collection.configName=products&autoAddRepl
Thanks Erick...
On Sun, Jun 7, 2020 at 1:50 PM Erick Erickson
wrote:
> https://lucidworks.com/post/indexing-with-solrj/
>
>
> > On Jun 7, 2020, at 3:22 PM, Fiz N wrote:
> >
> > Thanks Jorn and Erick.
> >
> > Hi Erick, looks like the skeletal SOLRJ progra
https://lucidworks.com/post/indexing-with-solrj/
> On Jun 7, 2020, at 3:22 PM, Fiz N wrote:
>
> Thanks Jorn and Erick.
>
> Hi Erick, looks like the skeletal SOLRJ program attachment is missing.
>
> Thanks
> Fiz
>
> On Sun, Jun 7, 2020 at 12:20 PM Erick Eri
Thanks Jorn and Erick.
Hi Erick, looks like the skeletal SOLRJ program attachment is missing.
Thanks
Fiz
On Sun, Jun 7, 2020 at 12:20 PM Erick Erickson
wrote:
> Here’s a skeletal SolrJ program using Tika as another alternative.
>
> Best,
> Erick
>
> > On Jun 7, 2020, at 2:06 PM, Jörn Franke w
Here’s a skeletal SolrJ program using Tika as another alternative.
Best,
Erick
> On Jun 7, 2020, at 2:06 PM, Jörn Franke wrote:
>
> You have to write an external application that creates multiple threads,
> parses the PDFs and index them in Solr. Ideally you parse the PDFs once and
> store th
You have to write an external application that creates multiple threads, parses
the PDFs, and indexes them in Solr. Ideally you parse the PDFs once, store the
resulting text on some file system, and then index it. The reason is that if you
upgrade across two major versions of Solr you might need to reindex
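A minimal sketch of that pattern, assuming SolrJ and Tika are on the classpath; the URL, collection, folder, and field names are placeholders:

import java.io.InputStream;
import java.nio.file.*;
import java.util.concurrent.*;
import java.util.stream.Stream;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.sax.BodyContentHandler;

public class PdfIndexer {
  public static void main(String[] args) throws Exception {
    SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/pdfs").build();
    ExecutorService pool = Executors.newFixedThreadPool(8); // "multiple threads"
    AutoDetectParser parser = new AutoDetectParser();       // Tika parsers are thread-safe
    try (Stream<Path> paths = Files.walk(Paths.get("/data/fileshare"))) {
      paths.filter(p -> p.toString().toLowerCase().endsWith(".pdf"))
           .forEach(p -> pool.submit(() -> {
             try (InputStream in = Files.newInputStream(p)) {
               BodyContentHandler text = new BodyContentHandler(-1); // no length limit
               parser.parse(in, text, new Metadata());
               SolrInputDocument doc = new SolrInputDocument();
               doc.addField("id", p.toString());
               doc.addField("content_txt", text.toString());
               solr.add(doc);
             } catch (Exception e) {
               e.printStackTrace(); // in real code: log and keep a failure list
             }
           }));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    solr.commit();
    solr.close();
  }
}

Per the advice above, writing text.toString() to a file before (or instead of) posting it makes a later full re-index much cheaper.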
Hello SOLR Experts,
I am working on a POC to Index millions of PDF documents present in
Multiple Folder in fileshare.
Could you please let me know the best practices and steps to implement it?
Thanks
Fiz Nadiyal.
I think the OP is indexing flat files, not web pages (but otherwise, I
agree with you that Scrapy is great - I know some of the people behind
it too and they're a good bunch).
Charlie
On 02/06/2020 16:41, Walter Underwood wrote:
On Jun 2, 2020, at 7:40 AM, Charlie Hull wrote:
If it w
> On Jun 2, 2020, at 7:40 AM, Charlie Hull wrote:
>
> If it was me I'd probably build a standalone indexer script in Python that
> did the file handling, called out to a separate Tika service for extraction,
> posted to Solr.
I would do the same thing, and I would base that script on Scrapy
t was me I'd probably build a standalone indexer script in Python
that did the file handling, called out to a separate Tika service for
extraction, posted to Solr.
Cheers
Charlie
On 02/06/2020 14:48, Zheng Lin Edwin Yeo wrote:
Hi Charlie,
The main code that is doing the indexing is from
Hi Charlie,
The main code that is doing the indexing is from the Solr's
SimplePostTools, but we have done some modification to it.
The walking through a folder is done by PowerShell script, the extracting
of the content from .eml file is from Tika that comes with Solr, and the
images in the
Hi Edwin,
What code is actually doing the indexing? AFAIK Solr doesn't include any
code for actually walking a folder, extracting the content from .eml
files and pushing this data into its index, so I'm guessing you've built
something external?
Charlie
On 01/06/2020 02:13,
Hi,
I am running this on Solr 7.6.0
Currently I have a situation whereby there are more than 2 million EML files
in a folder, and the folder is constantly updating the EML files with the
latest information and adding new EML files.
When I do the indexing, it is supposed to index the new EML
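One common way to handle a constantly-updated folder like that is to select only files changed since the last run and feed just those to the indexer; a small sketch of the selection step (paths here are placeholders):

import java.nio.file.*;
import java.util.stream.Stream;

public class NewEmlFinder {
  public static void main(String[] args) throws Exception {
    // timestamp persisted at the end of the previous indexing run
    long lastRunMillis = Long.parseLong(args[0]);
    try (Stream<Path> files = Files.walk(Paths.get("/data/eml"))) {
      files.filter(p -> p.toString().endsWith(".eml"))
           .filter(p -> p.toFile().lastModified() > lastRunMillis)
           .forEach(System.out::println); // pipe these into the posting tool
    }
  }
}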
table 2)
(table 3 join table 4)
(table 5 join table 6)
(table 7 join table 8)
Do you have any recommendations for running multiple SQL queries and merging them
into a single Solr document that can be sent over SolrJ for indexing?
Say the parent entity has 100
You have a lot more control over the speed and form of importing data if
you just do the initial load in SolrJ. Here’s an example, taking the Tika
parts out is easy:
https://lucidworks.com/post/indexing-with-solrj/
It’s especially instructive to comment out just the call to
CloudSolrClient.add
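The experiment Erick describes looks roughly like this (table and field names are placeholders); commenting out the add() call shows how fast the JDBC side alone runs:

import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BulkLoad {
  // stream rows from a JDBC ResultSet into Solr in batches of 1000
  static void load(ResultSet rs, SolrClient client) throws Exception {
    List<SolrInputDocument> batch = new ArrayList<>();
    while (rs.next()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", rs.getString("id"));
      doc.addField("name_s", rs.getString("name"));
      batch.add(doc);
      if (batch.size() >= 1000) {
        client.add(batch); // comment this out to time the DB/ETL side alone
        batch.clear();
      }
    }
    if (!batch.isEmpty()) {
      client.add(batch);
    }
    client.commit();
  }
}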
Hi All,
We are running Solr 8.4.1. We have a database table which has more than 100
million records. Till now we were using DIH to do a full-import on the tables.
But for this table, when we do a full-import via DIH it takes more than 3-4
days to complete and also consumes a fair bit of JVM
issues at
significantly fewer than 2B. Note that when segments are merged, the internal IDs get
reassigned...
Indexing scales pretty linearly with the number of shards, _assuming_ you’re
adding more hardware. To really answer the question you need to look at what
the bottleneck is on your current
Hi,
Recently we noticed that one of the largest collections (shards = 6;
replication factor = 3), which holds up to 1TB of data & nearly 3.2 billion
docs, is taking longer to index than it used to. To see the indexing
time difference, we created another collection using lar
On 5/14/2020 3:14 PM, matthew sporleder wrote:
> Can a non-nested entity write into existing docs, or do they always
> have to produce document-per-entity?
This is the only thing I found on this topic, and it is on a third-party
website, so I can't say much about how accurate it is:
https://stac
It appears that adding entities to my entities in my data import
config is slowing down my import process by a lot. Is there a good
way to speed this up? I see the IDs are individually queried instead
of using IN() or similar normal techniques to make things faster.
Just looking for some tips.
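For anyone wondering why sub-entities are slow: DIH runs the child query once per parent row, a classic N+1 pattern. A sketch of the shape in data-config.xml (table and column names here are hypothetical):

<document>
  <entity name="item" query="SELECT id, name FROM item">
    <!-- this query runs once for EVERY row returned by the parent query -->
    <entity name="tag" query="SELECT tag FROM item_tag WHERE item_id = '${item.id}'"/>
  </entity>
</document>

Flattening the child data into the parent query with a JOIN, or using DIH's CachedSqlEntityProcessor on the child entity, avoids the per-row round trip.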
Regards,
Markus
-Original message-
> From: Audrey Lorberfeld - audrey.lorberf...@ibm.com
> Sent: Friday 1st May 2020 17:34
> To: solr-user@lucene.apache.org
> Subject: Indexing Korean
>
> Hi All,
>
> My team would like to index Korean, but it looks like Solr OOTB does not have
> explicit support for Korean. If any of you have schema pipel
Hi All,
My team would like to index Korean, but it looks like Solr OOTB does not have
explicit support for Korean. If any of you have schema pipelines you could
share for your Korean documents, I would love to see them! I'm assuming I would
just use some combination of the OOTB CJK factories..
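For what it's worth, Lucene/Solr 7.4 and later ship a dedicated Korean morphological analyzer (Nori) in the analysis-nori module; a sketch of a field type using it (assumes the nori jars are on the classpath):

<fieldType name="text_ko" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <!-- decompoundMode: none | discard | mixed -->
    <tokenizer class="solr.KoreanTokenizerFactory" decompoundMode="discard"/>
    <filter class="solr.KoreanPartOfSpeechStopFilterFactory"/>
    <filter class="solr.KoreanReadingFormFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>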
If users can upload any PDF, including broken or huge ones, and some
cause a Tika error, you should decouple Tika from Solr and run it as a
separate process to extract text before indexing with Solr. Otherwise
some of what is uploaded *will* break Solr.
https://lucidworks.com/post/indexing
Hi,
I am also facing the same issue. Does anyone have any update/solution on how to
fix this issue as part of DIH?
Thanks.
Regards,
Ravi kumar
--
Sent from: https://lucene.472066.n3.nabble.com/Solr-User-f472068.html
with this link
https://sematext.com/opensee/m/Solr/eHNlswSd1vD6AF?subj=RE+Indexing+data+from+multiple+data+sources
As it is open to the world, what we are requesting here is: could you please
remove that post as soon as possible before it creates any security issues for
us.
Your help is very much appreciated!!!
FYI, here I'm attaching the screenshot below.
Hi,
I am working on indexing data from multiple data sources using a single
collection. I specified data sources information in the data-config file and
also updated managed schema.xml by adding the fields from all the data sources
by specifying the common unique key across all the sources
What does your solr.log say? Any error?
> On 17.04.2020 at 20:22, RaviKiran Moola wrote:
>
> Hi,
>
> Greetings!!!
>
> We are working on indexing data from multiple data sources (MySQL & MSSQL) in
> a single collection. We specified data sour
Hi,
Greetings!!!
We are working on indexing data from multiple data sources (MySQL & MSSQL) in a
single collection. We specified data source details like connection details
along with the required fields for both data sources in a single data config
file, along with specified required fi
: Is the documentation wrong or have I misunderstood it?
The documentation is definitely wrong, thanks for pointing this out...
https://issues.apache.org/jira/browse/SOLR-14383
-Hoss
http://www.lucidworks.com/
Hi,
The page "Indexing Nested Documents" has an XML example showing two
different ways of adding nested documents:
https://lucene.apache.org/solr/guide/8_5/indexing-nested-documents.html#xml-examples
The text says:
"It illustrates two styles of adding child documents: the firs
Hi all,
A couple of months ago, I migrated my solr deployment off of some legacy
hardware (old spinning disks), and onto much newer hardware (SSD's, newer
processors). While I am seeing much improved search performance since this
move, I am also seeing intermittent indexing timeouts for