Extend the Similarity class, compile it against the jars in lib, put it in
a path Solr can find, and set your schema to use it:
http://wiki.apache.org/solr/SolrPlugins#Similarity
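For example, something along these lines (a rough sketch against the Lucene/Solr
APIs of that era; the package and class names are just placeholders). Drop the
compiled jar into <solr.home>/lib and point schema.xml at it with
<similarity class="com.example.MySimilarity"/>:

package com.example;

import org.apache.lucene.search.DefaultSimilarity;

public class MySimilarity extends DefaultSimilarity {

    // example tweak: flatten term-frequency weighting
    @Override
    public float tf(float freq) {
        return freq > 0 ? 1.0f : 0.0f;
    }

    // example tweak: ignore document length normalization
    @Override
    public float lengthNorm(String fieldName, int numTerms) {
        return 1.0f;
    }
}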
On 02/25/2010 10:09 PM, Pooja Verlani wrote:
Hi,
I want to modify Similarity class for my app like the following-
Right no
You can set a default shards parameter on the request handler doing
distributed search: set up two different request handlers, one
with a shards default and one without.
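For example, in solrconfig.xml (shard host names are placeholders); clients use
qt=distrib for distributed search, while the shards themselves are queried with
qt=local:

<requestHandler name="distrib" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="shards">host1:8983/solr,host2:8983/solr</str>
  </lst>
</requestHandler>

<requestHandler name="local" class="solr.SearchHandler"/>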
On Thu, Feb 25, 2010 at 1:35 PM, Jeffrey Zhao
wrote:
> Now I got it, just forgot put qt=search in query.
>
> By the way, in
What are you using for the mm parameter? If you set it to 1, only one
word has to match,
On 03/01/2010 05:07 PM, Steve Reichgut wrote:
***Sorry if this was sent twice. I had connection problems here and it
didn't look like the first time it went out
I have been testing out results for some
Or you can try the CommonGrams filter, which combines tokens adjacent to a stopword.
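A sketch of what that can look like in schema.xml (the field type name and
stopword file are placeholders); CommonGramsQueryFilterFactory goes on the
query side:

<fieldType name="text_cg" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.CommonGramsFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.CommonGramsQueryFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>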
On Tue, Mar 2, 2010 at 6:56 AM, Walter Underwood wrote:
> Don't remove stopwords if you want to search on them. --wunder
>
> On Mar 2, 2010, at 5:43 AM, Erick Erickson wrote:
>
>> This is a classic problem with Stopword
I've found the CSV update handler to be exceptionally fast, though others prefer
the flexibility of the DataImportHandler.
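For example (URL and file name are placeholders; assumes the /update/csv handler
is enabled in solrconfig.xml):

curl 'http://localhost:8983/solr/update/csv?commit=true' \
     -H 'Content-type: text/csv' --data-binary @data.csv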
On Fri, Mar 5, 2010 at 10:21 AM, Mark N wrote:
> what should be the fastest way to index a documents , I am indexing huge
> collection of data after extracting certain meta - data inf
Did you enable the highlighting component in solrconfig.xml? Try setting
debugQuery=true to see if the highlighting component is even being
called...
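Something like this (host, handler and field are placeholders; note the hl.fl
fields have to be stored):

http://localhost:8983/solr/select?q=attr_content:andrew&hl=true&hl.fl=attr_content&debugQuery=true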
On Tue, Mar 9, 2010 at 12:23 PM, Lee Smith wrote:
> Hey All
>
> I have indexed a whole bunch of documents and now I want to search against
> them.
>
Just to make sure we're on the same page, you're saying that the
highlighting section of the response is empty, right? The results section
is never highlighted; a separate section contains the highlighted
fields specified in hl.fl=
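Roughly, the XML response looks like this (illustrative sketch; DOC_ID stands for
your uniqueKey value, and the snippet markup is escaped in the real response):

<response>
  <result name="response" numFound="1" start="0">
    <doc> ...plain stored fields, never highlighted... </doc>
  </result>
  <lst name="highlighting">
    <lst name="DOC_ID">
      <arr name="attr_content">
        <str>...snippet with <em>Andrew</em> wrapped by the highlighter...</str>
      </arr>
    </lst>
  </lst>
</response>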
On Wed, Mar 10, 2010 at 5:23 AM, Ahmet Arslan wrote:
>
>
>> Yes Cont
no problem with the query.
>
> But from what I believe it should wrap around the text in the result.
>
> So if I search ie Andrew within the return content Ie would have the
> contents with the word Andrew
>
> and hl.fl=attr_content
>
> Thank you for you help
>
>
--joe
On 03/12/2010 07:34 PM, JavaGuy84 wrote:
Hi,
I had made some changes to SolrQueryParser.java using Eclipse and I am able
to do a leading wildcard search using the Jetty plugin (downloaded this plugin
for Eclipse). Now I am not sure how I can package this code and redeploy it.
Can someone help me
Hello *, I've been using the highlighter and been pretty happy with
its results; however, there's an edge case I'm not sure how to fix.
For query: amazing grace
the record matched and highlighted is:
amazing rendition of amazing grace
Is there any way to only highlight amazing grace without using phr
ual text and offsets ?
basically what's happening now is if I search
'the e', I get:
'SeinfeldThe EEx-Girlfriend'
for 'the ex', I get:
'SeinfeldThe ExEx-Girlfriend'
and so on
thx much
--joe
ing the RemoveDuplicatesTokenFilter(Factory) do the trick here?
>
> Erik
>
> On Apr 2, 2010, at 4:13 PM, Joe Calderon wrote:
>
>> hello *, i have a field that is indexing the string "the
>> ex-girlfriend" as these tokens: [the, exgirlfriend, ex, gi
t part
of the text is highlighted, how can I fix my filter chain?
thx much
--joe
Don't know if it's the best solution, but I have a field I facet on
called type (it's either 0 or 1); combined with collapse.facet=before I just
sum all the values of the facet field to get the total number found.
If you don't have such a field you can always add a field with a single value.
--joe
On Wed
You've created an infinite loop: the shard you query calls all other
shards and itself, and so on. Create a separate requestHandler and
query that, e.g.
localhost:7500/solr,localhost:7501/solr,localhost:7502/solr,localhost:7503/solr,localhost:7504/solr,localhost:7505/solr,localhost:7506/
from another server we get this:
These are making our logs impossible to read, but worse, I assume indicate
that something is wrong.
Thanks for any help!
Joe Lerner
:2181
server.1=host1:2190:2195
server.2=host2:2191:2196
server.3=host3:2192:2197
Notice the port range, and overlap...
Is that... copacetic?
Thanks!
Joe
2
My plan would be to do the following, while users are still online (it's a
big [bad] deal if we need to take search offline):
1. Take zk #3 down.
2. Fix zk #3 by deleting the contents of the zk data directory and assigning it
myid #3 (rough commands for steps 1-3 are sketched below)
3. Bring zk#3 back up
4. Do a full re-build of all collections
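Roughly, for steps 1-3 (service name, data directory and myid value are
placeholders for our setup):

# 1. stop zk #3
service zookeeper stop
# 2. wipe the data dir but keep/recreate the myid file
rm -rf /var/lib/zookeeper/version-2
echo 3 > /var/lib/zookeeper/myid
# 3. bring zk #3 back up; it should resync a snapshot from the current leader
service zookeeper start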
, and I assume
indicate
that something is wrong.
Thanks for any help!
Joe Lerner
rs are
not experiencing any problems--they are searching SOLR like crazy.
Any suggestions?
Thanks!
Joe
eploy our
application to production, and we use Jenkins to continuously deploy in
development. But, it is what it is, and at least our logs are readable now.
Joe
Our application runs on Tomcat. We found that when we deploy to Tomcat using
Jenkins or Ansible--a "hot" deployment--the ZK log problem starts. The only
solution we've been able to find was to bounce Tomcat.
rfDisableSharedMem \
-XX:+ParallelRefProcEnabled \
-XX:G1HeapRegionSize=16m \
-XX:MaxGCPauseMillis=300 \
-XX:InitiatingHeapOccupancyPercent=75 \
-XX:+UseLargePages \
-XX:ParallelGCThreads=16 \
-XX:-ResizePLAB \
-XX:+AggressiveOpts"
Anything I can try / change?
Thank you!
-Joe
ent
Thank you for the gceasy.io site - that is very slick! I'll use that in
the future. I can try using the standard settings, but again - at this
point it doesn't look GC related to me?
-Joe
On 2/12/2019 11:35 AM, Shawn Heisey wrote:
On 2/12/2019 7:35 AM, Joe Obernberger
Reverted back to 7.6.0 - same settings, but now I do not encounter the
large CPU usage.
-Joe
On 2/12/2019 12:37 PM, Joe Obernberger wrote:
Thank you Shawn. Yes, I used the settings off of your site. I've
restarted the cluster and the CPU usage is back up again. Looking at
it now, it do
stored on HDFS.
-Joe
On 2/27/2019 5:04 AM, Lukas Weiss wrote:
Hello,
we recently updated our Solr server from 6.6.5 to 7.7.0. Since then, we
have problems with the server's CPU usage.
We have two Solr cores configured, but even if we clear all indexes and do
not start the index process, we se
justice than I could hope to):
http://blog.innoventsolutions.com/innovent-solutions-blog/2017/02/solr-edismax-boolean-query.html
If you're just starting out and you're starting with a recent version, this
won't matter. But if you're upgrading, it's critical to be aware
concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Last Check: 4/2/2018, 3:47:15 PM
Thank you!
-Joe Obernberger
would stop, and I've not seen that happen.
Interestingly, despite the error, the model is still built at least up
to some number of iterations. In other words, many iterations complete OK.
-Joe
On 4/2/2018 6:54 PM, Joel Bernstein wrote:
It looks like it accessing a replica that's dow
MainClientExec.java:272)
at
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at
org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at
org.apache.http.impl.client.InternalHttpClient.doExecute(
solr cloud. Most we can do is around 57 million
per day; usually limited by pulling data out of HBase not Solr.
-Joe
On 4/4/2018 10:57 PM, 苗海泉 wrote:
When we have 49 shards per collection, there are more than 600 collections.
Solr will have serious performance problems. I don't know how to
I tried to build a large model based on about 1.2 million documents.
One of the nodes ran out of memory and killed itself. Is this much data
not reasonable to use? The nodes have 16g of heap. Happy to increase
it, but not sure if this is possible?
Thank you!
-Joe
On 4/5/2018 10:24 AM
pache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpCli
files stored in /etc/solr.
Once those are also removed on all the nodes, then I can re-create the
collection.
-Joe
Just as a side note, when Solr goes OOM and kills itself, and if you're
running HDFS, you are guaranteed to have write.lock files left over. If
you're running lots of shards/replicas, you may have many files that you
need to go into HDFS and delete before restarting.
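Something along these lines finds and clears them, assuming the default index
layout under the Solr home in HDFS (paths are placeholders):

# find leftover lock files
hdfs dfs -ls -R /solr | grep write.lock
# remove each one before restarting the affected nodes
hdfs dfs -rm /solr/MYCOLLECTION_shard1_replica_n1/data/index/write.lock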
-Joe
On 4/
be returned to use, as it was until Solr v7.3.0.
Thanks,
Joe D.
that
brings down a Solr cluster that I think I agree with the decision to remove
such an inviting button.
Doug
On Sat, Apr 21, 2018 at 8:08 AM Joe Doupnik wrote:
-
Doug,
Thanks for that feedback. Here are my thoughts on the matter.
Removing deleted docs is often an irregular
lets the system
run in the background all day if necessary without disturbing main
activities. My longest run was over a full day, 660+K documents which
worked just fine and did not upset other activities in the machine.
Thanks,
Joe D.
On 21/04/2018 17:54, Erick Erickson wrote:
f the pipeline.
That's a classical problem with known solutions.
Thanks,
Joe D.
On 21/04/2018 19:16, Erick Erickson wrote:
Yeah, trying to have something that satisfies all use cases is a bear.
I know of one installation where the indexing rate was so huge that
they couldn't aff
ly, in
a search request. Thus regular facilities can do most of this work. What
this example does not address is your distance 5 criteria. However, the
NOT facility may do the trick for you, though a minus sign is taken as a
literal minus sign or word separator if located within a quoted string.
Thanks, Joe D.
On 22/04/2018 19:26, Joe Doupnik wrote:
On 22/04/2018 19:04, Nicolas Paris wrote:
Hello
I wonder if there is a plain text query syntax to say:
give me all document that match:
wonderful pizza NOT peperoni
all those in a 5 distance word bag
then
pizza are wonderful -> would match
I mad
comes back up. Usually the error is a timeout.
Has anyone seen this? We've tried adjusting the /replication
requestHandler and setting:
75
but it appears to have no effect. Any ideas?
Thank you!
-Joe
"_4joy.si",
"_4joy_6.liv",
"_4jpi.cfe",
"_4jpi.cfs",
"_4jpi.si",
"_4jpi_4.liv",
"_4jq2.cfe",
"_4jq2.cfs",
"_4jq2.si",
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2275)
-Joe
Hi All - having this same problem again with a large index in HDFS. A
replica needs to recover, and it just spins retrying over and over
again. Any ideas? Is there an adjustable timeout?
Screenshot:
http://lovehorsepower.com/images/SolrShot1.jpg
Thank you!
-Joe Obernberger
nt.getData(SolrZkClient.java:340)
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1248)
... 9 more
-Joe
Just to add to this - looks like the only valid replica that is
remaining is a TLOG type, and I suspect that is why it no longer has a
leader. Poop.
-Joe
On 7/2/2018 7:54 PM, Joe Obernberger wrote:
Hi - On startup, I'm getting the following error. The shard had 3
replicas, but non
a:624)
at java.lang.Thread.run(Thread.java:748)
Thank you very much for the help!
-Joe
On 7/2/2018 8:32 PM, Shawn Heisey wrote:
On 7/2/2018 1:40 PM, Joe Obernberger wrote:
Hi All - having this same problem again with a large index in HDFS. A
replica needs to recover, and it just spins retrying
umber of terms set to 1024 (default), and I'm using
about 500 terms. Is there a way around this? The total query length is
10,131 bytes.
Thank you!
-Joe
Shawn - thank you! That works great. Stupid huge searches here I come!
-Joe
On 7/12/2018 4:46 PM, Shawn Heisey wrote:
On 7/12/2018 12:48 PM, Joe Obernberger wrote:
Hi - I'm using SolrCloud 7.3.1 and calling a search from Java using:
org.apache.solr.client.solrj.response.QueryRes
collection that depends on this config-set.
2. Reload the config-set
3. Recreate the dependent collection
It seems to me that between steps #1 and #3, users will not be able to
search, which is not cool.
Can I avoid the outage to my search capability?
Thanks!
Joe
OK--yes, I can see how that would work. But it would require some quick
infrastructure flexibility that, at least to this point, we don't really
have.
Joe
g for 4000ms , collection:
UNCLASS slice: shard23 saw
state=DocCollection(UNCLASS//collections/UNCLASS/state.json/3828)={
any ideas on what to try? I've been trying to figure this out for a
couple days now, but it's very intermittent.
Thank you!
-Joe
!
-Joe Obernberger
1755371094",
"exception": {
"msg": "not enough free disk space to perform index split on
node sys-hadoop-1:9100_solr, required: 306.76734546013176, available:
16.772361755371094",
"rspCode": 500
},
"status": {
"state": "failed",
"msg": "found [] in failed tasks"
}
}
-Joe Obernberger
schema), I get an error that the field doesn't exist.
If I restart the cluster, this problem goes away and I can add a
document with the new field to any solr collection that has the schema.
Any work-arounds that don't involve a restart?
Thank you!
-Joe Obernberger
Hi All - any ideas on this? Anything I can try?
Thank you!
-Joe
On 2/26/2020 9:01 AM, Joe Obernberger wrote:
Hi All - I have several solr collections all with the same schema. If
I add a field to the schema and index it into the collection on which
I added the field, it works fine
Thank you Erick - I have no record of that, but will absolutely give the
API RELOAD a shot! Thank you!
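For reference, the reload is just (host and collection name are placeholders):

curl 'http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection'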
-Joe
On 3/6/2020 10:26 AM, Erick Erickson wrote:
Didn’t we talk about reloading the collections that share the schema after the
schema change via the collections API RELOAD command?
Best
Erick - tried this, had to run it async, but it's been running for over
24 hours on one collection with:
{
"responseHeader":{
"status":0,
"QTime":18326},
"status":{
"state":"submitted",
"msg":"
ary
'{"add-field":{"name":"Para450","type":"text_general","stored":"false","indexed":"true","docValues":"false","multiValued":"false"}}'
http://ursula.querymasters.com:9100/api/c/UNCLASS/schema
results in:
{
"error":{
"metadata":[
"error-class","org.apache.solr.common.SolrException",
"root-error-class","org.apache.solr.common.SolrException"],
"msg":"no such collection or alias",
"code":400}}
What am I doing wrong? The schema UNCLASS does exist in Zookeeper.
Thanks!
-Joe
Nevermind - I see that I need to specify an existing collection not a
schema. There is no collection called UNCLASS - only a schema.
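So presumably the same call pointed at an actual collection (name below is a
placeholder) is what's needed:

curl -X POST -H 'Content-type:application/json' --data-binary
'{"add-field":{"name":"Para450","type":"text_general","stored":"false","indexed":"true","docValues":"false","multiValued":"false"}}'
http://ursula.querymasters.com:9100/api/c/MYCOLLECTION/schema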
-Joe
On 4/1/2020 4:52 PM, Joe Obernberger wrote:
Hi All - I'm trying this:
curl -X POST -H 'Content-type:application/json' --data-binary
'
[80 TO 100]))
AND (person:[80 TO 100])
Thank you!
-Joe
r$ReservedThread.run(ReservedThreadExecutor.java:388)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:806)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:938)
at java.lang.Thread.run(Thread.java:748)
500
Is there a way to process this asynchronously?
-Joe
(SolrSearcher.java:60)
On 4/28/2020 11:50 AM, Joe Obernberger wrote:
Hi all - I'm running this query on solr cloud 8.5.1 with the index on
HDFS:
curl http://enceladus:9100/solr/PROCESSOR_LOGS/update?commit=true -H
"Connect-Type: text/xml" --data-binary
'StartTime:[2020-01-01T0
Hi All - while I'm still getting the error, it does appear to work
(still gives the error - but a search of the data then shows fewer
results - so the delete is working). In some cases, it may be necessary
to run the query several times.
-Joe
On 4/29/2020 9:03 AM, Joe Obernberger wrot
Could you use a multi-valued field for user in each of your products?
So productA and a field User that is a list of all the users that have
productA. Then you could do a search like:
user:User1 AND Product_A_cost:[5 TO 10]
user:(User1 User5...) AND Product_B_cost:[0 TO 40]
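The user field just needs to be multiValued in the schema, e.g. (field type is
a guess):

<field name="user" type="string" indexed="true" stored="true" multiValued="true"/>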
-Joe
On 5/11
Your exception didn't come across - can you paste it in?
-Joe
On 8/19/2020 10:50 AM, Prashant Jyoti wrote:
You're right Andrew. Even I read about that. But there's a use case for
which we want to configure the said case.
Are you also aware of what feature we are moving towards
r/data"
SOLR_LOGS_DIR="/home/search/solr/logs"
SOLR_PORT="8983"
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoSoftCommit.maxTime=3000"
SOLR_OPTS="$SOLR_OPTS -Dsolr.autoCommit.maxTime=6"
SOLR_OPTS="$SOLR_OPTS -Djava.io.tmpdir=/home/search/tmp"
Thanks,
Joe D.
hdfs://nameservice1:8020/solr8.2.0
/etc/hadoop/conf.cloudera.hdfs1
-Joe
On 8/20/2020 2:30 AM, Prashant Jyoti wrote:
Hi Joe,
These are the errors I am running into:
org.apache.solr.common.SolrException: Error CREATEing SolrCore
'newcollsolr2_shard1_replica_n1'
More properly, it would be best to fix Tika and thus not push extra
complexity upon many many users. Error handling is one thing, crashes
though ought to be designed out.
Thanks,
Joe D.
On 25/08/2020 10:54, Charlie Hull wrote:
On 25/08/2020 06:04, Srinivas Kashyap wrote:
Hi
system, whatever that may be.
Thanks,
Joe D.
On 27/08/2020 20:32, Alexandre Rafalovitch wrote:
If you are indexing from Drupal into Solr, that's the question for
Drupal's solr module. If you are doing it some other way, which way
are you doing it? bin/post command?
Most likely t
every 100 and 1000 submissions. Also the crawler is set to run at
a lower priority than Solr, thus giving preference to Solr.
In the end we ought to run experiments to find and verify working
values.
Thanks,
Joe D.
On 02/09/2020 03:40, yaswanth kumar wrote:
I got some understandin
rally. It is important to
use proven robust tools when we deal with the bad guys.
Thanks,
Joe D.
On 04/09/2020 08:43, Aroop Ganguly wrote:
Try looking at a simple ldap authentication suggested here:
https://github.com/itzmestar/ldap_solr
You
Anyone use Solr with Erasure Coding on HDFS? Is that supported?
Thank you
-Joe
bellish) in which there is
control line RUNAS="solr". The RUNAS variable is used to properly start
Solr.
Thanks,
Joe D.
On 15/10/2020 15:02, Alexandre Rafalovitch wrote:
It sounds like maybe you have started the Solr in a different way than
you are restarting it. E.g. maybe you
mctl start
solr.service.
Thanks,
Joe D.
On 15/10/2020 16:01, Ryan W wrote:
I have been starting solr like so...
service solr start
On Thu, Oct 15, 2020 at 10:31 AM Joe Doupnik wrote:
Alex has it right. In my environment I created user "solr" in group
"users". Then I
stemctl
enable solr to let systemd take charge.
Thus some busy work to check on things, and then making a choice
of which flavour will be in charge.
Thanks,
Joe D.
On 15/10/2020 21:03, Ryan W wrote:
I didn't realize that to start a systemd service, I need to do...
systemc
correct - true? Thank you!
-Joe Obernberger
issue. I have seen no mention of it in the docs nor forums.
Thanks,
Joe D.
On 26/05/2019 19:08, Shawn Heisey wrote:
On 5/25/2019 9:40 AM, Joe Doupnik wrote:
Comparing memory consumption (real, not virtual) of quiescent
Solr v8.0 and prior with Solr v8.1.0 reveals the older versions use
about 1.6GB on my systems but v8.1.0 uses 4.5 to 5+GB. Systems used
are SUSE
On 26/05/2019 19:15, Joe Doupnik wrote:
On 26/05/2019 19:08, Shawn Heisey wrote:
On 5/25/2019 9:40 AM, Joe Doupnik wrote:
Comparing memory consumption (real, not virtual) of quiescent
Solr v8.0 and prior with Solr v8.1.0 reveals the older versions use
about 1.6GB on my systems but v8.1.0
On 26/05/2019 19:38, Jörn Franke wrote:
Different garbage collector configuration? It does not mean that Solr uses more
memory if it is occupied - it could also mean that the JVM just kept it
reserved for future memory needs.
Am 25.05.2019 um 17:40 schrieb Joe Doupnik :
Comparing
experience here. For
reference, the only memory adjustables set in my configuration are in the
Solr startup script solr.in.sh: adding "-Xss1024k" to the SOLR_OPTS
list and setting SOLR_HEAP="4024m".
Thanks,
Joe D.
On 26/05/2019 19:43, Jörn Franke wrote:
I think th
because perfection is not possible.
Thanks,
Joe D.
On 26/05/2019 20:30, Shawn Heisey wrote:
On 5/26/2019 12:52 PM, Joe Doupnik wrote:
I do queries while indexing, have done so for a long time,
without difficulty nor memory usage spikes from dual use. The system
has been designed to
27/05/2019 08:52, Joe Doupnik wrote:
Generalizations tend to fail when confronted with conflicting
evidence. The simple evidence is asking how much real memory the Solr
owned process has been allocated (top, or ps aux or similar) and that
yields two very different values (the ~1.6GB of Solr v8.0
server is currently not
accepting new requests. Establishing limits does take some creative
thinking about how the system as a whole is constructed.
I brought up the overload case because it pertains to this main
memory management thread.
Thanks,
Joe D.
On 27/05/2019 10:21, Bernd
ving a few sets of them for different operating situations and
the customer chooses appropriately.
Thanks,
Joe D.
On 27/05/2019 11:05, Joe Doupnik wrote:
You are certainly correct about using external load balancers when
appropriate. However, a basic problem with servers, that of
My comments are inserted in-line this time. Thanks for the
amplifications Shawn.
On 27/05/2019 17:39, Shawn Heisey wrote:
On 5/27/2019 9:49 AM, Joe Doupnik wrote:
A few more numbers to contemplate. An experiment here, adding 80
PDF and PPTX files into an empty index.
Solr v8.0
ease read the full web page to have a rounded view of that
discussion.
Thanks,
Joe D.
On 27/05/2019 18:17, Joe Doupnik wrote:
My comments are inserted in-line this time. Thanks for the
amplifications Shawn.
On 27/05/2019 17:39, Shawn Heisey wrote:
On 5/27/2019 9:49 AM,
answering queries rather than indexing
files. If the openjdk folks get their reduction work (below) into our
hands then idle memory may shrink further.
In closing, Solr v8.1 has one very nice advantage over its
predecessors: indexing speed, about double that of v8.0.
Thanks,
Joe D.
On
der registration.
I've tried FORCELEADER, but it had no effect. I also tried adding a
shard, but that one didn't come up either. The index is on HDFS.
Help!
-Joe
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1328)
... 9 more
Can I manually enter information for the leader? How would I get that?
-Joe
On 5/30/2019 8:39 AM, Joe Obernberger wrote:
Hi All - I have a 40 node cluster that has been running great for a
long while, but it all came down due to OOM. I adjusted the
nodes several times. I then updated the zookeeper node and put
the necessary information into it with a leader selected. Then I
restarted the nodes again - no luck.
-Joe
On 5/30/2019 10:42 AM, Walter Underwood wrote:
We had a 6.6.2 prod cluster get into a state like this. It did not have an
that the Solr material expects to be owned by user solr, and
group users on Linux. Thus a chmod -R solr:users solr command would
take care of the problem.
Thanks,
Joe D.
One day I will learn to type. In the meanwhile the command, as
root, is chown -R solr:users solr. That means creating that username if
it is not present.
Thanks,
Joe D.
On 30/05/2019 20:12, Joe Doupnik wrote:
On 30/05/2019 20:04, Bernard T. Higonnet wrote:
Hello,
I have
at got re-indexed after Friday at noon
Is there a cleaner/simpler/more official way of moving an index from one
place to another? Export/import, or something like that?
Thanks for any help!
Joe
Ooohh...interesting. Then, presumably there is some way to have what was the
cross-data-center replica become the new "primary"?
It's getting too easy!
Joe
;m running SolrCloud version 7.6.0 on 4
nodes. Thank you!
-Joe Obernberger
Hi All - I've created an alias, but when I try to index to the alias
using CloudSolrClient, I get 'Collection not Found: TestAlias'. Can you
not use an alias name to index to with CloudSolrClient? This is with
SolrCloud 8.1.
Thanks!
-Joe
oming and going?
Thank you!
-Joe