5.5/zookeeperReconfig.html#ch_reconfig_rebalancing>
Or, could we leave it as is, as long as the ZK Ensemble has the same
IPs?
Thanks!
Joe
We finally got this fixed by temporarily disabling any updates to the SOLR
index.
systemctl enable solr to let systemd take charge.
Thus some busy work to check on things, and then making a choice
of which flavour will be in charge.
Thanks,
Joe D.
On 15/10/2020 21:03, Ryan W wrote:
I didn't realize that to start a systemd service, I need to do...
systemctl start solr.service.
Thanks,
Joe D.
On 15/10/2020 16:01, Ryan W wrote:
I have been starting solr like so...
service solr start
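A minimal sketch of checking which flavour is in charge and handing Solr to systemd; the unit name "solr" is an assumption from a typical install:
# does systemd know about the service at all?
sudo systemctl status solr
# have systemd manage it at boot, then start it under systemd
sudo systemctl enable solr
sudo systemctl start solr.service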
On Thu, Oct 15, 2020 at 10:31 AM Joe Doupnik wrote:
Alex has it right. In my environment I created user "solr" in group
"users". Then I
bellish) in which there is
control line RUNAS="solr". The RUNAS variable is used to properly start
Solr.
Thanks,
Joe D.
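A sketch of that setup; the file path /etc/default/solr.in.sh and the useradd flags are assumptions from a typical service install:
# create the service account in group "users", then confirm the RUNAS line
sudo useradd -g users -m solr
grep '^RUNAS' /etc/default/solr.in.sh    # expect: RUNAS="solr"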
On 15/10/2020 15:02, Alexandre Rafalovitch wrote:
It sounds like maybe you have started the Solr in a different way than
you are restarting it. E.g. maybe you
Anyone use Solr with Erasure Coding on HDFS? Is that supported?
Thank you
-Joe
rally. It is important to
use proven robust tools when we deal with the bad guys.
Thanks,
Joe D.
On 04/09/2020 08:43, Aroop Ganguly wrote:
Try looking at a simple ldap authentication suggested here:
https://github.com/itzmestar/ldap_solr
You
every 100 and 1000 submissions. Also the crawler is set to run at
a lower priority than Solr, thus giving preference to Solr.
In the end we ought to run experiments to find and verify working
values.
Thanks,
Joe D.
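A rough sketch of that arrangement; the crawler script name, host, and collection are placeholders, and the commit cadence is exactly what the experiments should tune:
# run the feeder below Solr's CPU priority; it commits every 100/1000 docs internally
nice -n 10 ./crawler.sh    # hypothetical crawler script
# a manual commit, for when the cadence needs a nudge:
curl "http://localhost:8983/solr/docs/update?commit=true"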
On 02/09/2020 03:40, yaswanth kumar wrote:
I got some understandin
system, whatever that may be.
Thanks,
Joe D.
On 27/08/2020 20:32, Alexandre Rafalovitch wrote:
If you are indexing from Drupal into Solr, that's the question for
Drupal's solr module. If you are doing it some other way, which way
are you doing it? bin/post command?
Most likely t
More properly, it would be best to fix Tika and thus not push extra
complexity upon many many users. Error handling is one thing, crashes
though ought to be designed out.
Thanks,
Joe D.
On 25/08/2020 10:54, Charlie Hull wrote:
On 25/08/2020 06:04, Srinivas Kashyap wrote:
Hi
hdfs://nameservice1:8020/solr8.2.0
/etc/hadoop/conf.cloudera.hdfs1
-Joe
On 8/20/2020 2:30 AM, Prashant Jyoti wrote:
Hi Joe,
These are the errors I am running into:
org.apache.solr.common.SolrException: Error CREATEing SolrCore
'newcollsolr2_shard1_replica_n1'
r/data"
SOLR_LOGS_DIR="/home/search/solr/logs"
SOLR_PORT="8983"
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=3000"
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoCommit.maxTime=6"
SOLR_OPTS="$SOLR_OPTS -Djava.io.tmpdir=/home/search/tmp"
Thanks,
Joe D.
Your exception didn't come across - can you paste it in?
-Joe
On 8/19/2020 10:50 AM, Prashant Jyoti wrote:
You're right Andrew. Even I read about that. But there's a use case for
which we want to configure the said case.
Are you also aware of what feature we are moving towards
Could you use a multi-valued field for user in each of your products?
So productA and a field User that is a list of all the users that have
productA. Then you could do a search like:
user:User1 AND Product_A_cost:[5 TO 10]
user:(User1 User5...) AND Product_B_cost:[0 TO 40]
-Joe
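For illustration, the same searches over HTTP; host and collection name "products" are hypothetical:
curl "http://localhost:8983/solr/products/select" \
     --data-urlencode 'q=user:User1 AND Product_A_cost:[5 TO 10]'
curl "http://localhost:8983/solr/products/select" \
     --data-urlencode 'q=user:(User1 OR User5) AND Product_B_cost:[0 TO 40]'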
On 5/11
Hi All - while I'm still getting the error, it does appear to work
(still gives the error - but a search of the data then shows less
results - so the delete is working). In some cases, it may be necessary
to run the query several times.
-Joe
On 4/29/2020 9:03 AM, Joe Obernberger wrot
(SolrSearcher.java:60)
On 4/28/2020 11:50 AM, Joe Obernberger wrote:
Hi all - I'm running this query on solr cloud 8.5.1 with the index on
HDFS:
curl http://enceladus:9100/solr/PROCESSOR_LOGS/update?commit=true -H
"Connect-Type: text/xml" --data-binary
'StartTime:[2020-01-01T0
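The payload above is cut off; a complete delete-by-query of that shape might look like this (the date-range endpoint is an assumed example):
curl "http://enceladus:9100/solr/PROCESSOR_LOGS/update?commit=true" \
     -H "Content-Type: text/xml" \
     --data-binary '<delete><query>StartTime:[2020-01-01T00:00:00Z TO 2020-02-01T00:00:00Z]</query></delete>'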
r$ReservedThread.run(ReservedThreadExecutor.java:388)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
at java.lang.Thread.run(Thread.java:748)
500
Is there a way to process this asynchronously?
-Joe
[80 TO 100]))
AND (person:[80 TO 100])
Thank you!
-Joe
Nevermind - I see that I need to specify an existing collection not a
schema. There is no collection called UNCLASS - only a schema.
-Joe
On 4/1/2020 4:52 PM, Joe Obernberger wrote:
Hi All - I'm trying this:
curl -X POST -H 'Content-type:application/json' --data-binary
'{"add-field":{"name":"Para450","type":"text_general","stored":"false","indexed":"true","docValues":"false","multiValued":"false"}}'
http://ursula.querymasters.com:9100/api/c/UNCLASS/schema
results in:
{
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"no such collection or alias",
"code":400}}
What am I doing wrong? The schema UNCLASS does exist in Zookeeper.
Thanks!
-Joe
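As the follow-up notes, the /api/c/ path must name a live collection (or alias), not a configset. A sketch of the fix, assuming the configset stored in ZooKeeper is named UNCLASS and one shard suffices:
# create a collection backed by the UNCLASS configset, then the schema call works
curl "http://ursula.querymasters.com:9100/solr/admin/collections?action=CREATE&name=UNCLASS&numShards=1&collection.configName=UNCLASS"
curl -X POST -H 'Content-type:application/json' --data-binary \
     '{"add-field":{"name":"Para450","type":"text_general","stored":"false","indexed":"true","docValues":"false","multiValued":"false"}}' \
     http://ursula.querymasters.com:9100/api/c/UNCLASS/schema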
Erick - tried this, had to run it async, but it's been running for over
24 hours on one collection with:
{
"responseHeader":{
"status":0,
"QTime":18326},
"status":{
"state":"submitted",
"msg":"
Thank you Erick - I have no record of that, but will absolutely give the
API RELOAD a shot! Thank you!
-Joe
On 3/6/2020 10:26 AM, Erick Erickson wrote:
Didn’t we talk about reloading the collections that share the schema after the
schema change via the collections API RELOAD command?
Best
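A sketch of that RELOAD, looped over the collections that share the schema (host and collection names are placeholders):
for c in coll1 coll2 coll3; do
  curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=$c"
done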
Hi All - any ideas on this? Anything I can try?
Thank you!
-Joe
On 2/26/2020 9:01 AM, Joe Obernberger wrote:
Hi All - I have several solr collections all with the same schema. If
I add a field to the schema and index it into the collection on which
I added the field, it works fine
schema), I get an error that the field doesn't exist.
If I restart the cluster, this problem goes away and I can add a
document with the new field to any solr collection that has the schema.
Any work-arounds that don't involve a restart?
Thank you!
-Joe Obernberger
1755371094",
"exception": {
"msg": "not enough free disk space to perform index split on
node sys-hadoop-1:9100_solr, required: 306.76734546013176, available:
16.772361755371094",
"rspCode": 500
},
"status": {
"state": "failed",
"msg": "found [] in failed tasks"
}
}
-Joe Obernberger
!
-Joe Obernberger
er (it was
65,530) and I increased it to 262144. For the solr user, we're using
102,400 for open files and for max user processes, we use 65,000.
-Joe
On 12/10/2019 7:46 AM, Erick Erickson wrote:
One other red flag is you’re apparently running in “schemaless”
-Xms20g -Xmx25g -Xss256k
-verbose:gc
Any ideas?
Thanks.
-Joe
Thank you Shawn.
What I'm trying to get for my application is the commitTimeMSec. I use
that value to build up an alias of solr collections. Is there a better way?
-Joe
On 11/1/2019 10:17 AM, Shawn Heisey wrote:
On 11/1/2019 7:20 AM, Joe Obernberger wrote:
Hi All - getting this error
0190610]
at
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)
~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
-Joe
I don't believe I am getting expected results when using a streaming
expression that simply uses innerJoin.
Here's the example:
innerJoin(
search(illuminate, q=(mrn:123) (*:*), fl="key,mrn", sort="mrn asc"),
search(illuminate, q=(foo*), fl="key,mrn,*", sort="mrn asc"),
on="mrn"
)
All
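For reference, one way to submit that expression over HTTP (host per a default install); note that innerJoin expects both inner streams sorted on the "on" field:
curl --data-urlencode 'expr=innerJoin(
    search(illuminate, q="(mrn:123) (*:*)", fl="key,mrn", sort="mrn asc"),
    search(illuminate, q="foo*", fl="key,mrn,*", sort="mrn asc"),
    on="mrn")' \
  "http://localhost:8983/solr/illuminate/stream"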
dded a while back and show both properties and schema.
While I can facet on this field using an alias, I get 'Error from server
at null: undefined field: FaceCluster'. If I search an individual solr
collection, I can facet on it.
Any ideas?
-Joe
hard had two replicas. Now some shards
have 4, and some 3. In addition the auto scaling policy of:
cluster-policy":[{
"replica":"<2",
"shard":"#EACH",
"node":"#ANY"}],
seems to be ignored as many collections have the same node hosting multiple
replicas. Is this related to JIRA:
https://issues.apache.org/jira/browse/SOLR-13586
?
Thank you!
-Joe
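For reference, that policy would normally be installed with the autoscaling API (Solr 7.x/8.x); a sketch, with host a placeholder:
curl -X POST -H 'Content-type:application/json' \
     "http://localhost:8983/solr/admin/autoscaling" \
     --data-binary '{"set-cluster-policy":[{"replica":"<2","shard":"#EACH","node":"#ANY"}]}'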
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:555)
... 6 more
At this point, no nodes are hosting one of the collections.
-Joe
On 9/26/2019 1:32 PM, Joe Obernberger wrote:
Hi all - I have a 4 node cluster for test, and created several solr
collections with 2 shards and 2 replicas each
->shard2->replica1,replica2
all of those replicas above are hosted by the same node. What am I
doing wrong here? Thank you!
-Joe
caused
exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
not enough free disk space to perform index split on node
-Joe
es^1.0 name^1.2 sku^1.5 id^10.0 manu^1.1
cat^1.4
What's odd is that this doesn't cause an issue with 7.x, but does with
8.2. Removed the fields that my schema doesn't have and clustering
works on the fields I have defined for carrot2.
-Joe
On 8/29/2019 10:39 AM, Jörn Franke w
I'm missing a field called 'features', but it's not
defined in the prior schema either. Thanks again!
-Joe
On 8/28/2019 6:19 PM, Erick Erickson wrote:
What it says ;)
My guess is that your configuration mentions the field “features” in, perhaps
carrot.snippet or carrot.title.
olrClient$RemoteSolrException",
"root-error-class","org.apache.solr.client.solrj.impl.BaseHttpSolrClient$RemoteSolrException"],
"msg":"Error from server at null: org.apache.solr.search.SyntaxError: Query
Field 'features' is not a valid field name",
"code":400}}
-Joe
ove only having one large file system to manage
instead of lots of individual file systems across many machines. HDFS
makes this easy.
-Joe
On 8/2/2019 9:10 AM, lstusr 5u93n4 wrote:
Hi Joe,
We fought with Solr on HDFS for quite some time, and faced similar issues
as you're seeing. (
cally. Is that possible with HDFS?
While adding an alias to other collections would be an option, if that
collection is the only collection, or one that is currently needed, in a
live system, we can't bring it down, re-create it, and re-index when
that process may take weeks to do.
Any id
oming and going?
Thank you!
-Joe
Hi All - I've created an alias, but when I try to index to the alias
using CloudSolrClient, I get 'Collection not Found: TestAlias'. Can you
not use an alias name to index to with CloudSolrClient? This is with
SolrCloud 8.1.
Thanks!
-Joe
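Whether CloudSolrClient resolves an alias for updates depends on the version; as a workaround sketch, the alias can be created and written to directly over HTTP (host, target collection, and document are placeholders):
curl "http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=TestAlias&collections=myCollection"
curl "http://localhost:8983/solr/TestAlias/update?commit=true" \
     -H 'Content-Type: application/json' --data-binary '[{"id":"doc1"}]'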
I'm running SolrCloud version 7.6.0 on 4
nodes. Thank you!
-Joe Obernberger
Ooohh...interesting. Then, presumably there is some way to have what was the
cross-data-center replica become the new "primary"?
It's getting too easy!
Joe
at got re-indexed after Friday at noon
Is there a cleaner/simpler/more official way of moving an index from one
place to another? Export/import, or something like that?
Thanks for any help!
Joe
One day I will learn to type. In the meanwhile the command, as
root, is chown -R solr:users solr. That means creating that username if
it is not present.
Thanks,
Joe D.
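As a sketch, run as root from the directory holding the install:
id solr >/dev/null 2>&1 || useradd -g users solr    # create the user if absent
chown -R solr:users solr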
On 30/05/2019 20:12, Joe Doupnik wrote:
On 30/05/2019 20:04, Bernard T. Higonnet wrote:
Hello,
I have
that the Solr material expects to be owned by user solr, and
group users on Linux. Thus a chmod -R solr:users solr command would
take care of the problem.
Thanks,
Joe D.
nodes several times. I then updated the zookeeper node and put
the necessary information into it with a leader selected. Then I
restarted the nodes again - no luck.
-Joe
On 5/30/2019 10:42 AM, Walter Underwood wrote:
We had a 6.6.2 prod cluster get into a state like this. It did not have an
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1328)
... 9 more
Can I manually enter information for the leader? How would I get that?
-Joe
On 5/30/2019 8:39 AM, Joe Obernberger wrote:
Hi All - I have a 40 node cluster that has been running great for a
long while, but it all came down due to OOM. I adjusted the
der registration.
I've tried FORCELEADER, but it had no effect. I also tried adding a
shard, but that one didn't come up either. The index is on HDFS.
Help!
-Joe
answering queries rather than indexing
files. If the openjdk folks get their reduction work (below) into our
hands then idle memory may shrink further.
In closing, Solr v8.1 has one very nice advantage over its
predecessors: indexing speed, about double that of v8.0.
Thanks,
Joe D.
On
Please read the full web page to have a rounded view of that
discussion.
Thanks,
Joe D.
On 27/05/2019 18:17, Joe Doupnik wrote:
My comments are inserted in-line this time. Thanks for the
amplifications Shawn.
On 27/05/2019 17:39, Shawn Heisey wrote:
On 5/27/2019 9:49 AM,
My comments are inserted in-line this time. Thanks for the
amplifications Shawn.
On 27/05/2019 17:39, Shawn Heisey wrote:
On 5/27/2019 9:49 AM, Joe Doupnik wrote:
A few more numbers to contemplate. An experiment here, adding 80
PDF and PPTX files into an empty index.
Solr v8.0
ving a few sets of them for different operating situations and
the customer chooses appropriately.
Thanks,
Joe D.
On 27/05/2019 11:05, Joe Doupnik wrote:
You are certainly correct about using external load balancers when
appropriate. However, a basic problem with servers, that of
server is currently not
accepting new requests. Establishing limits does take some creative
thinking about how the system as a whole is constructed.
I brought up the overload case because it pertains to this main
memory management thread.
Thanks,
Joe D.
On 27/05/2019 10:21, Bernd
27/05/2019 08:52, Joe Doupnik wrote:
Generalizations tend to fail when confronted with conflicting
evidence. The simple evidence is asking how much real memory the Solr
owned process has been allocated (top, or ps aux or similar) and that
yields two very different values (the ~1.6GB of Solr v8.0
because perfection is not possible.
Thanks,
Joe D.
On 26/05/2019 20:30, Shawn Heisey wrote:
On 5/26/2019 12:52 PM, Joe Doupnik wrote:
I do queries while indexing, have done so for a long time,
without difficulty nor memory usage spikes from dual use. The system
has been designed to
experience here. For
reference, the only memory adjustables set in my configuration is in the
Solr startup script solr.in.sh saying add "-Xss1024k" in the SOLR_OPTS
list and setting SOLR_HEAP="4024m".
Thanks,
Joe D.
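Those two adjustables, as they would appear in solr.in.sh:
SOLR_HEAP="4024m"
SOLR_OPTS="$SOLR_OPTS -Xss1024k"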
On 26/05/2019 19:43, Jörn Franke wrote:
I think th
On 26/05/2019 19:38, Jörn Franke wrote:
Different garbage collector configuration? It does not mean that Solr uses more
memory if it is occupied - it could also mean that the JVM just kept it
reserved for future memory needs.
Am 25.05.2019 um 17:40 schrieb Joe Doupnik :
Comparing
On 26/05/2019 19:15, Joe Doupnik wrote:
On 26/05/2019 19:08, Shawn Heisey wrote:
On 5/25/2019 9:40 AM, Joe Doupnik wrote:
Comparing memory consumption (real, not virtual) of quiescent
Solr v8.0 and prior with Solr v8.1.0 reveals the older versions use
about 1.6GB on my systems but v8.1.0
On 26/05/2019 19:08, Shawn Heisey wrote:
On 5/25/2019 9:40 AM, Joe Doupnik wrote:
Comparing memory consumption (real, not virtual) of quiescent
Solr v8.0 and prior with Solr v8.1.0 reveals the older versions use
about 1.6GB on my systems but v8.1.0 uses 4.5 to 5+GB. Systems used
are SUSE
issue. I have seen no mention of it in the docs nor forums.
Thanks,
Joe D.
correct - true? Thank you!
-Joe Obernberger
stored on HDFS.
-Joe
On 2/27/2019 5:04 AM, Lukas Weiss wrote:
Hello,
we recently updated our Solr server from 6.6.5 to 7.7.0. Since then, we
have problems with the server's CPU usage.
We have two Solr cores configured, but even if we clear all indexes and do
not start the index process, we se
Reverted back to 7.6.0 - same settings, but now I do not encounter the
large CPU usage.
-Joe
On 2/12/2019 12:37 PM, Joe Obernberger wrote:
Thank you Shawn. Yes, I used the settings off of your site. I've
restarted the cluster and the CPU usage is back up again. Looking at
it now, it do
ent
Thank you For the gceasy.io site - that is very slick! I'll use that in
the future. I can try using the standard settings, but again - at this
point it doesn't look GC related to me?
-Joe
On 2/12/2019 11:35 AM, Shawn Heisey wrote:
On 2/12/2019 7:35 AM, Joe Obernberger
-XX:+PerfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=16m \
-XX:MaxGCPauseMillis=300 \
-XX:InitiatingHeapOccupancyPercent=75 \
-XX:+UseLargePages \
-XX:ParallelGCThreads=16 \
-XX:-ResizePLAB \
-XX:+AggressiveOpts"
Anything I can try / change?
Thank you!
-Joe
Our application runs on Tomcat. We found that when we deploy to Tomcat using
Jenkins or Ansible--a "hot" deployment--the ZK log problem starts. The only
solution we've been able to find was to bounce Tomcat.
eploy our
application to production, and we use Jenkins to continuously deploy in
development. But, it is what it is, and at least our logs are readable now.
Joe
rs are
not experiencing any problems--they are searching SOLR like crazy.
Any suggestions?
Thanks!
Joe
, and I assume
indicate
that something is wrong.
Thanks for any help!
Joe Lerner
2
My plan would be to do the following, while users are still online (it's a
big [bad] deal if we need to take search offline):
1. Take zk #3 down.
2. Fix zk #3 by deleting the contents of the zk data directory and assign it
myid#3
3. Bring zk#3 back up
4. Do a full re-build of all collections
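A sketch of steps 1-3 on the zk #3 host; the service name and data directory are assumptions:
sudo systemctl stop zookeeper
rm -rf /var/lib/zookeeper/version-2     # clear the data dir contents
echo 3 > /var/lib/zookeeper/myid        # (re)assign the server id
sudo systemctl start zookeeper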
:2181
server.1=host1:2190:2195
server.2=host2:2191:2196
server.3=host3:2192:2197
Notice the port range, and overlap...
Is that... copacetic?
Thanks!
Joe
from another server we get this:
These are making our logs impossible to read, but worse, I assume indicate
that something is wrong.
Thanks for any help!
Joe Lerner
g for 4000ms , collection:
UNCLASS slice: shard23 saw
state=DocCollection(UNCLASS//collections/UNCLASS/state.json/3828)={
any ideas on what to try? I've been trying to figure this out for a
couple days now, but it's very intermittent.
Thank you!
-Joe
OK--yes, I can see how that would work. But it would require some quick
infrastructure flexibility that, at least to this point, we don't really
have.
Joe
collection that depends on this config-set.
2. Reload the config-set
3. Recreate the dependent collection
It seems to me that between steps #1 and #3, users will not be able to
search, which is not cool.
Can I avoid the outage to my search capability?
Thanks!
Joe
Shawn - thank you! That works great. Stupid huge searches here I come!
-Joe
On 7/12/2018 4:46 PM, Shawn Heisey wrote:
On 7/12/2018 12:48 PM, Joe Obernberger wrote:
Hi - I'm using SolrCloud 7.3.1 and calling a search from Java using:
org.apache.solr.client.solrj.response.QueryRes
umber of terms set to 1024 (default), and I'm using
about 500 terms. Is there a way around this? The total query length is
10,131 bytes.
Thank you!
-Joe
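Shawn's fix is not quoted above; one common route for queries this long is to send them as a POST body rather than in the URL, e.g. (host, collection, and terms are placeholders):
# --data-urlencode makes curl POST the query, sidestepping URI-length limits
curl "http://localhost:8983/solr/mycoll/select" \
     --data-urlencode 'q=id:(term1 OR term2 OR term3)'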
a:624)
at java.lang.Thread.run(Thread.java:748)
Thank you very much for the help!
-Joe
On 7/2/2018 8:32 PM, Shawn Heisey wrote:
On 7/2/2018 1:40 PM, Joe Obernberger wrote:
Hi All - having this same problem again with a large index in HDFS. A
replica needs to recover, and it just spins retrying
Just to add to this - looks like the only valid replica that is
remaining is a TLOG type, and I suspect that is why it no longer has a
leader. Poop.
-Joe
On 7/2/2018 7:54 PM, Joe Obernberger wrote:
Hi - On startup, I'm getting the following error. The shard had 3
replicas, but non
nt.getData(SolrZkClient.java:340)
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1248)
... 9 more
-Joe
Hi All - having this same problem again with a large index in HDFS. A
replica needs to recover, and it just spins retrying over and over
again. Any ideas? Is there an adjustable timeout?
Screenshot:
http://lovehorsepower.com/images/SolrShot1.jpg
Thank you!
-Joe Obernberger
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
-Joe
quot;,
"_4joy.si",
"_4joy_6.liv",
"_4jpi.cfe",
"_4jpi.cfs",
"_4jpi.si",
"_4jpi_4.liv",
"_4jq2.cfe",
"_4jq2.cfs",
"_4jq2.si",
&q
comes back up. Usually the error is a timeout.
Has anyone seen this? We've tried adjust the /replication
requestHandler and setting:
75
but it appears to have no effect. Any ideas?
Thank you!
-Joe
On 22/04/2018 19:26, Joe Doupnik wrote:
On 22/04/2018 19:04, Nicolas Paris wrote:
Hello
I wonder if there is a plain text query syntax to say:
give me all document that match:
wonderful pizza NOT peperoni
all those in a 5 distance word bag
then
pizza are wonderful -> would match
I mad
ly, in
a search request. Thus regular facilities can do most of this work. What
this example does not address is your distance 5 critera. However, the
NOT facility may do the trick for you, though a minus sign is taken as a
literal minus sign or word separator if located within a quoted string.
Thanks, Joe D.
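As the reply notes, standard syntax covers the proximity part while the NOT applies document-wide rather than within the 5-word window; a sketch, with host and collection placeholders:
curl "http://localhost:8983/solr/mycoll/select" \
     --data-urlencode 'q="wonderful pizza"~5 AND NOT peperoni'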
f the pipeline.
That's a classical problem with known solutions.
Thanks,
Joe D.
On 21/04/2018 19:16, Erick Erickson wrote:
Yeah, trying to have something that satisfies all use cases is a bear.
I know of one installation where the indexing rate was so huge that
they couldn't aff
lets the system
run in the background all day if necessary without disturbing main
activities. My longest run was over a full day, 660+K documents which
worked just fine and did not upset other activities in the machine.
Thanks,
Joe D.
On 21/04/2018 17:54, Erick Erickson wrote:
that
brings down a Solr cluster that I think I agree with the decision to remove
such an inviting button.
Doug
On Sat, Apr 21, 2018 at 8:08 AM Joe Doupnik wrote:
-
Doug,
Thanks for that feedback. Here are my thoughts on the matter.
Removing deleted docs is often an irregular
be returned to use, as it was until Solr v7.3.0.
Thanks,
Joe D.
Just as a side note, when Solr goes OOM and kills itself, and if you're
running HDFS, you are guaranteed to have write.lock files left over. If
you're running lots of shards/replicas, you may have many files that you
need to go into HDFS and delete before restarting.
-Joe
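A sketch of that cleanup, assuming the index root quoted elsewhere in this archive (hdfs://nameservice1:8020/solr8.2.0); the per-core path is a placeholder:
# list leftover lock files, then remove each before restarting Solr
hdfs dfs -ls -R /solr8.2.0 | grep write.lock
hdfs dfs -rm "/solr8.2.0/path/to/core/data/index/write.lock"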
On 4/
files stored in /etc/solr.
Once those are also removed on all the noeds, then I can re-create the
collection.
-Joe
pache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpCli
I tried to build a large model based on about 1.2 million documents.
One of the nodes ran out of memory and killed itself. Is this much data
not reasonable to use? The nodes have 16g of heap. Happy to increase
it, but not sure if this is possible?
Thank you!
-Joe
On 4/5/2018 10:24 AM
solr cloud. Most we can do is around 57 million
per day; usually limited by pulling data out of HBase not Solr.
-Joe
On 4/4/2018 10:57 PM, 苗海泉 wrote:
When we have 49 shards per collection, there are more than 600 collections.
Solr will have serious performance problems. I don't know how to
MainClientExec.java:272)
at
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at
org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at
org.apache.http.impl.client.InternalHttpClient.doExecute(
would stop, and I've not seen that happen.
Interestingly, despite the error, the model is still built at least up
to some number of iterations. In other words, many iterations complete OK.
-Joe
On 4/2/2018 6:54 PM, Joel Bernstein wrote:
It looks like it accessing a replica that's dow