Hi all
For a given Solr host and shard, is there any way to get a breakdown of
QTime to see where the time is being spent?
Thanks
Nawab
Hi Rick
My software is not very sophisticated. I have picked some queries from
production logs, which I am replaying against this Solr installation. It is
not a SolrCloud setup, but I specify "shards=" in the query to gather results
from all shards.
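In case it helps to be concrete, this kind of replay is only a few lines of
SolrJ; a minimal sketch, with host and core names made up:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ReplayQuery {
  public static void main(String[] args) throws Exception {
    // Host and core names below are placeholders, not real ones.
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://solr1:8983/solr/core1").build();
    SolrQuery q = new SolrQuery("title:test");
    // Same effect as adding shards=... to the query URL: fan out to fixed shards.
    q.set("shards", "solr1:8983/solr/core1,solr2:8983/solr/core1");
    QueryResponse rsp = client.query(q);
    System.out.println("QTime: " + rsp.getQTime()
        + " ms, hits: " + rsp.getResults().getNumFound());
    client.close();
  }
}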
I found some values to tweak, e.g.:
1500
15
Perhaps there is potential to optimize with some PL/SQL functions on the Oracle
side to do as much work within the database as possible and have the text
indexers only access a view referencing that function. Also, the obvious
optimization is a record-updated timestamp so that every time the indexer runs, on
I start Solr in my Eclipse for small tests.
I have made some changes to the ant build script to copy the webapp to the
required location and also added Eclipse launchers in this commit (
https://github.com/mrkarthik/lucene-solr/commit/d793a9b8ac0b1b4969aace4329ea5a6ddc22de16
)
Run "ant eclipse" from shell
Personally I don't start Solr inside my IDE. I use IntelliJ, but I'm
sure Eclipse has the same capability. Have you seen:
https://wiki.apache.org/solr/HowToConfigureEclipse?
I create a "remote" configuration and fill in the blanks. Then
starting the project like this:
bin/solr start -c -z localho
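(For completeness, the sort of remote-debug start I mean looks roughly like
this; the -a flag passes extra JVM options, and the port is arbitrary:

bin/solr start -c -z localhost:2181 -a "-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=18983"

Then the IDE's "remote" configuration just attaches to port 18983.)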
Hello,
The comments in the example schemas for Solr 6.6 state that the
StrField type must be single-valued to support doc values.
For example,
Solr-6.6.0/server/solr/configsets/basic_configs/conf/managed-schema:216
However, on line 221 a StrField is declared with docValues that is
multiValued.
This might be a hack, but the CSV importer is really fast. Run the query in
your favorite command-line client, export to CSV, then load it.
You can even make batches. Maybe use ranges of the ID, then delete by query for
that range.
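A hedged sketch of the load-one-batch step in SolrJ (core name, file name,
and id range are placeholders):

import java.io.File;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class CsvBatchLoad {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/core1").build();
    // If re-running a batch, first drop whatever that id range loaded last time.
    client.deleteByQuery("id:[0 TO 499999]");
    // Then post the exported CSV file for that range and commit.
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update");
    req.addFile(new File("batch-0.csv"), "text/csv");
    req.setParam("commit", "true");
    client.request(req);
    client.close();
  }
}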
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunde
Hello Giovanni,
I could not resolve the issue on my workplace computer.
On my laptop, following exactly your steps, it works perfectly without warnings
or anything else.
But how do I start Solr correctly within Eclipse (Oxygen)?
I am navigating to the subdirectory "solr/webapp/web/" and with a right-click o
On 8/15/2017 8:09 AM, Mannott, Birgit wrote:
> I'm using Solr 6.6.0 and I have to do a complex data import from an Oracle DB
> involving 3,500,000 data rows.
> For each row I have 15 additional entities. That means that more than 52
> million selects are sent to the database.
> For every select
Hi Bernd,
In LUCENE-3758, a new member field was added to the ComplexPhraseQuery class,
but we didn't change its hashCode method accordingly. This caused anomalies in
Solr; Yonik found the bug and fixed hashCode. Your e-mail somehow reminded me
of this.
Could it be the QueryCache and hashCode method
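As a generic illustration of the failure mode (invented code, not the actual
Lucene classes): a query type whose hashCode/equals ignore a field will collide
in any hash-based cache with a query that differs only in that field:

import java.util.HashMap;
import java.util.Map;

// Invented stand-in for a query class; NOT the real ComplexPhraseQuery.
class BuggyQuery {
  final String text;
  final boolean inOrder; // field added later, forgotten in hashCode/equals

  BuggyQuery(String text, boolean inOrder) {
    this.text = text;
    this.inOrder = inOrder;
  }

  @Override
  public boolean equals(Object o) {
    return o instanceof BuggyQuery && ((BuggyQuery) o).text.equals(text); // ignores inOrder
  }

  @Override
  public int hashCode() {
    return text.hashCode(); // ignores inOrder
  }

  public static void main(String[] args) {
    Map<BuggyQuery, String> cache = new HashMap<>();
    cache.put(new BuggyQuery("\"a b\"", true), "results for the ordered query");
    // Semantically different query, but equals/hashCode say "same": stale cache hit.
    System.out.println(cache.get(new BuggyQuery("\"a b\"", false)));
  }
}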
Birgit,
any chance to utilise one of the caching strategies that DIH offers?
Like building a complete map for one of the subentities? That would mean
reading the whole table at the beginning and then only doing lookups by key.
Or getting data from subentities with joins in your main entity?
Hea
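In plain Java/JDBC terms, the map-building idea amounts to something like this
(connection string, table, and column names are invented for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SubentityCache {
  public static void main(String[] args) throws Exception {
    Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@dbhost:1521/ORCL", "user", "pw");
    // Read the whole subentity table once...
    Map<Long, List<String>> byParent = new HashMap<>();
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery("SELECT parent_id, value FROM subentity")) {
      while (rs.next()) {
        byParent.computeIfAbsent(rs.getLong(1), k -> new ArrayList<>())
                .add(rs.getString(2));
      }
    }
    // ...then, per parent row, this in-memory lookup replaces one SELECT per row:
    List<String> values = byParent.get(42L);
    System.out.println(values);
    con.close();
  }
}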
If you don't want to use your own SolrJ code, why not try many concurrent
indexers that index different data sets? So run seven indexers, each
getting 500,000 rows, at the exact same time, perhaps. It's a hack, but if it
works and you have the machinery to do it, why not? Or use the
deltaQuery, but I h
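A minimal sketch of that fan-out, with the actual select-and-index work stubbed
out (the thread count and ranges mirror the numbers above; everything else is
invented):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SevenIndexers {
  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(7);
    final long batch = 500_000L;
    for (int i = 0; i < 7; i++) {
      final long from = i * batch;
      final long to = from + batch - 1;
      // Each worker would SELECT its own id range and post it to Solr.
      pool.submit(() -> indexRange(from, to));
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.DAYS);
  }

  static void indexRange(long from, long to) {
    // Placeholder for the real JDBC-select-and-index work.
    System.out.println("indexing ids " + from + " to " + to);
  }
}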
Hi Everyone,
A quick question: does the SolrTextTagger plugin
(https://github.com/OpenSextant/SolrTextTagger) work in SolrCloud mode? It
works in standalone mode, but in SolrCloud, while I do get a response, it's
empty:
{
  "responseHeader": {
    "status": 0,
    "QTime": 36,
    "params": {
Hey guys. I have sent email to solr-user-unsubscr...@lucene.apache.org
several times, but it doesn't seem to have any effect.
Regards
Yes, I'm using the Data Import Handler, and I would prefer a solution for this
way of importing because it's already tested, the imported data is OK, and
everything is fine.
I just have to speed it up a little...
But thanks for your info. Next time I'll try indexing with solrj.
Regards,
Birgit
---
I presume you're using the Data Import Handler? An alternative when you
get into complex imports is to use a SolrJ client; here's a sample.
That way you can use whatever tools the particular JDBC connector
allows, and it can be much faster.
https://lucidworks.com/2012/02/14/indexing-with-solrj/
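The rough shape of such a SolrJ + JDBC client, sketched with placeholder
connection details and field names (not taken from the article):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class JdbcIndexer {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr =
        new HttpSolrClient.Builder("http://localhost:8983/solr/core1").build();
    Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@dbhost:1521/ORCL", "user", "pw");
    List<SolrInputDocument> buffer = new ArrayList<>();
    try (Statement st = con.createStatement();
         ResultSet rs = st.executeQuery("SELECT id, title FROM docs")) {
      while (rs.next()) {
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", rs.getString("id"));
        doc.addField("title", rs.getString("title"));
        buffer.add(doc);
        if (buffer.size() == 1000) { // send in batches, not one doc at a time
          solr.add(buffer);
          buffer.clear();
        }
      }
    }
    if (!buffer.isEmpty()) solr.add(buffer);
    solr.commit();
    con.close();
    solr.close();
  }
}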
Best,
Hi,
I'm using Solr 6.6.0 and I have to do a complex data import from an Oracle DB
involving 3,500,000 data rows.
For each row I have 15 additional entities. That means that more than 52
million selects are sent to the database.
For every select that is done I optimized the Oracle execution path