please unsubscribe me
50-100k, but couldn't find how to set it up in Solr. Any
ideas?
Is there anything else I can do?
Thanks,
John
most relevant? There's no way I know to determine that
> without examining all the docs.
>
> Best
> Erick
>
> On Fri, Sep 2, 2011 at 11:20 AM, John wrote:
> > Hi,
> >
> > In my search application, I sometimes get more than 200k matches for a
> > specific query
Please forgive my lack of knowledge; I'm posting for the first time!
I'm using solrindex to index and it appears all is going OK in that I'm
receiving the following for each segment:
2011-10-30 20:18:06,870 INFO solr.SolrIndexer - SolrIndexer: starting
2011-10-30 20:18:06,993 INFO indexer.Indexe
solr.data.dir is set, but the files aren't in that location. I've checked
the logs, and I don't see any errors. Obviously something is wrong, but I
can't find any indications as to what. Anyone have suggestions?
I failed to mention that the segments* files were indeed created; it is the
other files that are missing.
Yes, it's a Nutch class that provides integration with Solr and, in doing so,
should place the files where Solr expects them based on the Solr config
file. Since the question concerned Solr and its configuration, I posted
here. The issue is resolved.
Hi,
I am using a function query that, based on the user's query, assigns a
score to the results I am presenting.
Some of the results are receiving score=0 in my function and I would like
them not to appear in the search results.
How can I achieve that?
Thanks in advance.
5:04 PM, Andre Bois-Crettez
wrote:
> John wrote:
>
>> Some of the results are receiving score=0 in my function and I would like
>> them not to appear in the search results.
>>
>>
> you can use frange, and filter by score:
>
> q=ipod&fq={!frange
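For reference, a sketch of how that frange filter is usually completed (the
qq parameter name here is an assumption, not from the original mail):

q=ipod&fq={!frange l=0 incl=false}query($qq)&qq={!edismax qf='title body'}ipod

With l=0 and incl=false, the filter keeps only documents whose score from
the qq query is strictly above zero.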
7;})
With the above query, I am getting only the results that I want, the ones
whose score after my FunctionQuery is above 0, but the problem now is that
the final score for all results is changed to 1, which affects the sorting.
How can I keep the original score that is calculated by the edismax query?
Can this be fixed somehow? I also need the real score.
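The usual fix (a sketch, with the parameter layout assumed) is to keep the
edismax query in q, so its scores survive, and do the frange exclusion in a
filter query instead:

q={!edismax qf='abstract title'}xyz&fq={!frange l=0 incl=false}query($q)

Filter queries never contribute to scoring, so documents keep the score
computed by q while the zero-scored ones are dropped.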
On Sun, Nov 20, 2011 at 10:44 AM, John wrote:
> After playing some more with this I managed to get what I want, almost.
>
> My query now looks like:
>
> q={!frange l=0 incl=false}query({!type=edismax qf="abstra
Hi Hoss,
Thanks for the detailed response.
My XY problem is:
1) I am trying to search for a complex query:
q={!type=edismax qf="abstract^0.02 title^0.08 categorysearch^0.05"
boost='eqsim(alltokens,"xyz")' v='+tokens5:"xyz" '}
Which answers my query needs. BUT, my boost function actually changes
Thanks Hoss,
I will give those a try and let you know.
Cheers.
On Wed, Nov 23, 2011 at 8:35 PM, Chris Hostetter
wrote:
>
> : Which answers my query needs. BUT, my boost function actually changes
> some
> : of the results to be of score 0, which I want to be excluded from the
> : result set.
>
>
I have a complex edismax query:
facet=true&facet.mincount=0&qf=title^0.08+categorysearch^0.05+abstract^0.03+body^0.1&wt=javabin&rows=25&defType=edismax&version=2&omitHeader=true&fl=*,score&bq=eqid:(3yp^1.57+OR+5fi^1.55+OR+c1s^1.55+OR+3ym^1.55+OR+gjz^1.55...)&start=0&q=*:*&facet.field=category&face
eaker.29
>
> Marc.
>
> On Wed, Dec 7, 2011 at 5:48 PM, John wrote:
>
> > I have a complex edismax query:
> >
> >
> >
> facet=true&facet.mincount=0&qf=title^0.08+categorysearch^0.05+abstract^0.03+body^0.1&wt=javabin&rows=25&defType=edismax&am
Hi all,
Some help with function queries.
I am trying to use a custom function query where the field is declared as:
In the code, I am trying to retrieve the string value of this field by:
DocValues token = vs.getValues(context, reader);
...
String str = token.strVal(docNum);
But getting th
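For what it's worth, on the Lucene/Solr 3.x API strVal() reads its values
out of the FieldCache, so the field has to be indexed and single-valued. A
hypothetical sketch of the direct route (the alltokens field name is reused
from the earlier mail, not confirmed):

// Lucene 3.x (org.apache.lucene.search.FieldCache): pull the per-document
// string values out of the FieldCache; this only works when the field is
// indexed with one token per document
String[] values = FieldCache.DEFAULT.getStrings(reader, "alltokens");
String str = values[docNum];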
score mechanism you need.
Are there any special configurations I can use that make
FunctionQueries faster?
Cheers,
John
way to access document info?
Cheers,
John
On Thu, Jan 17, 2013 at 6:40 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello John,
>
> > getting all the documents and analyzing their result fields?
>
> is almost never possible. Lucene stored fields
Using Solr 3.6, I am trying to get suggestions for phrases.
I managed getting prefixed suggestions, but not suggestions for middle of
phrase.
Can this be achieved with built in Solr suggest, or do I need to create a
special core for this purpose?
Thanks in advance.
works similarly to custom scoring? which one is faster?
3. is there a better solution for my problem?
Thanks in advance,
John
My issue is with the use of WordDelimiterFilter and how the QueryParser
(Dismax) converts the query into a MultiPhraseQuery.
This is on solr 1.3 / lucene 2.4.1.
For example:
1. yuma -> 3:10 to Yuma
2. yUma -> no results
For #2 it gets split into y + uma and becomes a MultiPhraseQuery requiring
If you have several tokens, for example after a WordDelimiterFilter, there
is almost no way NOT to trigger a MultiPhraseQuery when you have
catenateWords="1" or catenateAll="1".
For example the title: Jokers Wild
In the index it is: jokers wild, jokers, wild, jokerswild.
When you query "jOkerswi
I've misunderstood WordDelimiterFilter. You might think that
catenateAll="1" would append the full phrase (sans delimiters) as an OR
against the query.
So "jOkersWild" would produce:
"j (okers wild)" OR "jokerswild"
But you'd be wrong. It's actually:
"j (okers wild jokerswild)"
Which is co
This is mostly my misunderstanding of catenateAll="1", as I thought it
would break the query down into an OR with the full concatenated word.
Thus:
Jokers Wild -> { jokers, wild } OR { jokerswild }
But really it becomes: { jokers, {wild, jokerswild}} which will not match.
And if you have a mistyped camel
Thanks Bill!!
Here is the content of the log file (I restarted Solr so we have a clean log):
127.0.0.1 - - [20/03/2008:13:38:09 +0000] "GET
/solr/select/?q=*%3A*&version=2.2&start=0&rows=10&indent=on HTTP/1.1" 200 2538
127.0.0.1 - - [20/03/2008:13:38:31 +0000] "GET /solr/admin/logging.jsp
Thanks Yonik!!
Yep, I'm on Windows ... so if it can't delete the old files, shouldn't a
restart of Solr do the trick? i.e. the files are no longer locked by Windows
... so they can now be deleted when Solr exits ... I tried it and didn't see
any change.
Who is keeping those files around / loc
;"
On Thu, Mar 20, 2008 at 10:55 AM, John <[EMAIL PROTECTED]> wrote:
> Yep, I'm on Windows ... so if it can't delete the old files, shouldn't a
restart of Solr do the trick? i.e. the files are no longer locked by Windows
... so they can now be deleted when Sol
I should start? I've checked disk space, memory
usage, max number of open files, everything seems fine there. My guess
is that the configuration is rather unaltered from the defaults. I've
extended timeouts in Zookeeper already.
Thanks,
John
Thanks, I'll have a try. Can the load on the Solr servers impair the zk
response time in the current situation, which would cause the desync? Is
this the reason for the change?
John.
On 21/12/15 16:45, Erik Hatcher wrote:
> John - the first recommendation that pops out is to run
lot of "Caused by:
java.net.SocketException: Connection reset" lines, but this isn't very
explicit. I suppose I'll have to cross-check on the concerned server(s).
Anyway, I'll have a try at the updated setting and I'll get back to the
list.
Thanks,
John.
On 21/
any other suggestion?
Thanks,
John
On 21/12/15 17:39, Erick Erickson wrote:
> right, do note that when you _do_ hit an OOM, you really
> should restart the JVM as nothing is _really_ certain after
> that.
>
> You're right, just bumping the memory is a band-aid, but
> what
Hi,
This morning one of the 2 nodes of our SolrCloud went down. I've tried
many ways to recover it but to no avail. I've tried to unload all cores
on the failed node and reload it after emptying the data directory,
hoping it would sync from scratch. The core is still marked as down and
no data is
hi all,
i'm having trouble with what would seem to be a pretty straightforward
filter.
i'm trying to 'tag' documents based off of a list of relevant words that a
description field may contain. if the data contains any of the words then
this field is populated with it and acts as a quick reference
Hi there
I have a catch-all field called 'text' that I copy my item description,
manufacturer name, and the item's catalog number into. I'm having an issue
with keeping the broadness of the tokenizers in place whilst still allowing
some good precision in the case of very specific queries.
The r
i immediately realized after sending that i'd had stored="true" in the
field's config and that it was storing the original data, not the processed
data. silly me, thanks anyway!
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.cur
nice tip. i appreciate it!
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Mon, Feb 1, 2016 at 4:55 PM, Erik Hatcher wrote:
> And if you want to have the “kept” words stored, consider the tri
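A sketch of the keep-words tagging approach under discussion (the type name
and words file are assumptions):

<fieldType name="tags" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.KeepWordFilterFactory" words="keepwords.txt"/>
  </analyzer>
</fieldType>

Every token not listed in keepwords.txt is discarded at index time, so the
field ends up holding only the 'tag' words that appeared in the copied
description.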
the 1234-L and 1234-LT example.
Thanks for any insight-
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Mon, Feb 1, 2016 at 3:30 PM, Erick Erickson
wrote:
> Likely you also have WordDelimiterFilterFac
ots I
need to. In this case, maybe I need not break the tokens down so much
before WDF starts operating.
susheel: thanks, i'll continue sharing as i explore and run into various
walls.
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolab
hi all,
i'm trying to find more information online about how to implement something
similar to the 'signals' feature found in Fusion. so far i've found one
decent article that isn't discussing the Fusion feature specifically. does
any of you happen to have some solid resources to point me in the d
hey erick,
thanks for this. if it's not against the newsletters policy, and is alright
w you in general, i'd love to have a side discussion about LW/Fusion.
j...@curvolabs.com
best,
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs
That's why i provide my email :) already spoke w several people at fusion and
was hoping to pick your brain in particular. Thanks anyway tho, the info was
helpful!
best,
--
John Blythe
On Feb 5, 2016, 8:44 AM -0500, Erik Hatcher, wrote:
> John - best to not have non-Solr discussion
hi all,
last year i had gotten a site recommended to me on this forum. it helped
you break down the results/score you were getting from your queries. it
isn't explain.solr.pl, but another one that seemed a bit more robust if my
memory serves me correctly. i want to say a member of the thread not o
that's it!
and doug is the one from back in the day :)
thanks guys
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Mon, Feb 8, 2016 at 3:08 PM, Toke Eskildsen
wrote:
> John Blythe wro
amazing, thanks!
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Tue, Feb 9, 2016 at 6:04 AM, Vincenzo D'Amore wrote:
> Hi,
>
> I did a chrome extension:
>
>
> https:/
hi all,
i'm currently populating my documents via a mysql query. it occurred to me
that i have another source of similar data that would be helpful to use
that resides in the same database, but in another table. the two tables
share nothing relationally so there's no joining that can occur that i
not that i'm aware of. i think you could also simply have q=field1:value
field2:value in which the OR is implied
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Fri, Feb 26, 2016 at 8:31 AM,
what does your current analyzer look like?
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Tue, Mar 8, 2016 at 6:42 AM, Mugeesh Husain wrote:
> Hello,
>
> I am implementing simple sea
hey all,
i'm tossing a lot of mud against the wall and am wanting to see what
sticks. part of that includes throwing item descriptions against some
fields i've set up as doubles. the imported data is a double and some of
the descriptions will have the related data within it (product sizes, e.g.
"S
makes sense. could i set up a simple regex filter in a placeholder field of
sorts and then copy that field into my tdouble field?
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Fri, Mar 11, 2016 a
gotcha. thanks for the tips guys
best,
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Fri, Mar 11, 2016 at 11:25 AM, Alessandro Benedetti <
abenede...@apache.org> wrote:
> Copyfield
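One caveat worth sketching: copyField copies the raw source value, not the
analyzed output, so a regex in the placeholder field's analyzer never
reaches the tdouble copy. An update processor chain can run the regex
before the value lands in the field; a sketch with assumed names
(extract-size, description, size_td):

<updateRequestProcessorChain name="extract-size">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">description</str>
    <str name="dest">size_td</str>
  </processor>
  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">size_td</str>
    <str name="pattern">.*?(\d+(\.\d+)?).*</str>
    <str name="replacement">$1</str>
    <bool name="literalReplacement">false</bool>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>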
e if I can
commit Apache Nutch crawled data into Solr.
I tried the tutorial Integrate Solr with Nutchat
https://wiki.apache.org/nutch/NutchTutorial#Integrate_Solr_with_Nutch but
the location and files referred to don't match my Solr 5.3.0 setup.
Thanks,
John Mitchell
hey all,
is there any out of the box way to use your stop words to completely skip a
document? if something has X in its description when being indexed i just
want to ignore it altogether / when something is searched with X then go
ahead and automatically return 0 results. quick context: using so
e offered as additional filters using
facets. Note that you'd have to re-index them as plain strings.
It's more difficult to achieve but popularity boost can also be useful:
you can measure it by sales or by number of clicks. I use a combination
of both, and store those values using partia
"?
I have pasted below my shell script which starts with an empty Solr, then
adds to the Schema via "curl -X POST -H 'Content-type:application/json'
--data-binary '
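For reference, the body of such a Schema API call typically looks like this
sketch (the field name, type, and core name are assumptions):

curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field": {"name":"title", "type":"text_general", "indexed":true, "stored":true}
}' http://localhost:8983/solr/mycore/schema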
e (and feasible) to convert the PostFilter into a plain filter query
such as "*:* NOT (id:1 OR id:2)" or something similar? How could I
implement this, and how do I estimate the filter cost so that Solr
executes it at the right position?
- Maybe I took the wrong path altogether?
Thanks in advance
John
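For what it's worth, once the exclusion is a plain query it can be handed
to Solr as a filter, with the cache/cost local params controlling when it
runs (the values here are assumptions):

fq={!cache=false cost=50}*:* NOT (id:1 OR id:2)

cost orders this filter relative to the other filters (higher runs later);
true post-filtering (cost >= 100 with cache=false) only kicks in for query
types that implement the PostFilter interface, such as frange.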
I believe I want to set up a search handler with a function query to avoid
needing to code it.
The function query does some weighting by checking the "title" field for
whatever the user entered as their search term (named myCurrentSearchTerm
below)
To test this out in the Admin UI, I have the fol
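A sketch of the kind of weighting that can be tried straight from the Admin
UI's query form (all names here are placeholders):

q=myCurrentSearchTerm&defType=edismax&qf=text&bf=mul(termfreq(title,'myCurrentSearchTerm'),5)

termfreq(field,term) returns the raw term frequency, so the bf adds five
points per occurrence of the term in title, with no custom code needed.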
Specifically, what drives the position in the list? Is it arbitrary or is
it driven by some piece of data?
If data-driven - code could do the sorting based on that data... separate
from SOLR...
Alternatively, if the data point exists in SOLR, a "sub-query" might be
used to get the right sort or
e fine to solve it in Solr because Solr does the work of
> filtering and pagination. If sorting were done outside then I would have to
> read every document from Solr to sort them. It is not an option; I have to
> query only one page.
>
> I don't understand how to solve it using
r intent is for this search.
On Fri, Apr 1, 2016 at 11:15 AM, John Bickerstaff
wrote:
> Just to be clear - I don't mean who requests the list (application or
> user) I mean what "rule" determines the ordering of the list?
>
> Or, is there even a rule of any kind?
>
s with some criteria (status, amount, ..) from offset and 50
> rows then it would be perfect and fast. If ordering would be outside of
> solr then i have to retrieve almost every document from solr (a bit
> less if filtered) to order them and display the page of 50 products.
>
that match the search terms and are on List X from Solr -
and then sort them by ID based on the data associated with the User (a list
of ID's, in order)
There is even a way to write a plugin that will go after external data to
help sort Solr documents, although I'm guessing you'd
ose as
well...
http://stackoverflow.com/questions/3931827/solr-merging-results-of-2-cores-into-only-those-results-that-have-a-matching-fie
On Fri, Apr 1, 2016 at 12:40 PM, John Bickerstaff
wrote:
> Tamas,
>
> I'm brainstorming here - not being careful, just throwing out ideas..
list
> via:
>
> fq=listid_s:378
> sort=listpos(listpos_s,378) asc
>
> Regards,
> Tamas
>
> On Fri, Apr 1, 2016 at 8:55 PM, John Bickerstaff >
> wrote:
>
> > Tamas,
> >
> > This feels a bit like a "user favorites" problem.
> >
>
You can sort like this (I believe that _version_ is the internal id/index
number for the document, but you might want to verify)
In the Admin UI, enter the following in the sort field:
_version_ asc
You could also put an entry in the default searchHandler in solrconfig.xml
to do this to every incoming query.
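Putting it in solrconfig.xml would look something like this sketch (the
handler name is assumed):

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="sort">_version_ asc</str>
  </lst>
</requestHandler>

Since these are defaults rather than invariants, clients can still override
them by sending their own sort parameter.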
Will the processes be Solr processes? Or do you mean multiple threads
hitting the same Solr server(s)?
There will be a natural bottleneck at one Solr server if you are hitting it
with a lot of threads - since that one server will have to do all the
indexing.
I don't know if this idea is helpful,
Does SOLR cloud push indexing across all nodes? I've been planning 4 SOLR
boxes with only 3 exposed via the load balancer, leaving the 4th available
internally for my microservices to hit with indexing work.
I was assuming that if I hit my "solr4" IP address, only "solr4" will do
the indexing...
The first question is whether you have duplicate ID's in your data set.
I had the same kind of thing a few months back, freaked out, and spent a
few hours trying to figure it out by coding extra logging etc... to keep
track of every single count at every stage of the process.. All the
numbers mat
Sweet - that's a good point - I ran into that too - I had not run the
commit for the last "batch" (I was using SolrJ) and so numbers didn't match
until I did.
On Mon, Apr 4, 2016 at 9:50 PM, Binoy Dalal wrote:
> 1) Are you sure you don't have duplicates?
> 2) All of your records might have been
the duplicate IDs are on documents that are actually unique.
On Mon, Apr 4, 2016 at 9:51 PM, John Bickerstaff
wrote:
> Sweet - that's a good point - I ran into that too - I had not run the
> commit for the last "batch" (I was using SolrJ) and so numbers didn't mat
x bq function may be
similar, but 1) i'd like to be certain before proceeding, 2) i'd prefer
even more to stick w my vanilla query processing instead of migrating to
dismax, at least for the near term.
thanks for any pointers
best,
--
John Blythe
My own choices were driven mostly by the usage of the data - from a more
architectural perspective.
I have "appDocuments" and "appImages" for one of the applications I'm
supporting. Because they are so closely connected (an appDocument can
have N number of appImages and appImages can belong to m
check the status of the new collection.
/opt/solr/bin/solr healthcheck -z 192.168.56.5,192.168.56.6,192.168.56.7/solr5_4 -c statdx
You should see something like this. NOTE: There are two JSON objects - one
for each SOLR VM
(And there was much rejoicing!!!)
john@solr6:/opt/solr$ ./bin/solr he
In terms of #2, this might be of use...
https://wiki.apache.org/solr/HowToReindex
On Tue, Apr 5, 2016 at 3:08 PM, Anuj Lal wrote:
> I am new to solr. Need some advice from more experienced solr team
> members
>
> I am upgrading 4.4 solr cluster to 5.5
>
>
> One of the steps I am doing for upgr
A few thoughts...
From a black-box testing perspective, you might try changing that
softCommit time frame to something longer and see if it makes a difference.
The size of your documents will make a difference too - so the comparison
to 300 - 500 on other cloud setups may or may not be compari
I recently upgraded from 4.x to 5.5 -- it was a pain to figure it out, but
it turns out to be fairly straightforward...
Caveat: Because I run all my data into Kafka first, I was able to easily
re-create my collections by running a microservice that pulls from Kafka
and dumps into Solr.
I have a r
https://cwiki.apache.org/confluence/display/solr/Upgrading+Solr
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+4.x+Cluster+to+Solr+5.0
https://cwiki.apache.org/confluence/display/solr/Major+Changes+from+Solr+4+to+Solr+5
On Wed, Apr 6, 2016 at 8:58 AM, John Bickerstaff
wrote:
> I recently upgrade
problem...
On Wed, Apr 6, 2016 at 10:28 AM, Anuj Lal wrote:
> Hi John , Shawn
>
> Thanks for replying to my query . Really appreciate your responses
>
> Ideally I’d like to do node by node rolling upgrade from 4.4 to 5.5
>
> But gave up on the rolling-upgrade approach because I f
Right... You can store that anywhere - but at least consider not storing
it in your existing SOLR collection just because it's there... It's not
really the same kind of data -- it's application meta-data and/or
user-specific data...
Getting it out later will be more difficult than if you store i
llection command and referenced the config in
Zookeeper with the -n(?) flag...
sudo /opt/solr/server/scripts/cloud-scripts/zkcli.sh -cmd upconfig -confdir
/home/john/conf/ -confname statdx -z 192.168.56.5/solr5_4
On Wed, Apr 6, 2016 at 3:26 PM, Don Bosco Durai wrote:
> I have SolrCloud pre-
Yup - just tested - that command runs fine with Solr NOT running...
On Wed, Apr 6, 2016 at 3:41 PM, John Bickerstaff
wrote:
> If you can get to the IP addresses from your application, then there's
> probably a way... Do you mean you're firewalled off or in some other way
> u
Therefore, this becomes possible:
http://stackoverflow.com/questions/525212/how-to-run-unix-shell-script-from-java-code
Hackish, but certainly doable... Given there's no API...
On Wed, Apr 6, 2016 at 3:44 PM, John Bickerstaff
wrote:
> Yup - just tested - that command runs fine with
;
>
>
>
> Right now I am asking users to install (just unzip) solr on any server and
> I give them a shell script to run the script from command line before
> starting my application. It is inconvenient, so I was seeing if anyone was
> able to automate it.
>
> Thanks
gotcha, thanks for the response. will check {!boost} out for now and start
working on moving our current query builders to a dismax/edismax
configuration.
best,
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville
Hello all,
I'm wondering if anyone can comment on arguments for and against putting
solr.xml into Zookeeper?
I assume one argument for doing so is that I would then have all
configuration in one place.
I also assume that if it doesn't get included as part of the upconfig
command, there is likely
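For reference, solr.xml does not ride along with upconfig; it lives at the
ZooKeeper root and is uploaded separately, roughly like this sketch (the
zkhost and local path are assumptions):

/opt/solr/server/scripts/cloud-scripts/zkcli.sh -zkhost 192.168.56.5:2181 -cmd putfile /solr.xml /home/john/solr.xml

On startup in cloud mode, Solr looks for /solr.xml in ZooKeeper before
falling back to the local file.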
If you delete a lot of documents over time, or if you add updated documents
of the same I'd over time, optimizing your collection(s) may help.
On Apr 14, 2016 3:52 AM, "Emir Arnautovic"
wrote:
> Hi Edwin,
> Indexing speed depends on multiple factors: HW, Solr configurations and
> load, documents,
Stupid phone autocorrect...
If you add updated documents of the same ID over time, optimizing your
collection(s) may help.
On Thu, Apr 14, 2016 at 7:50 AM, John Bickerstaff
wrote:
> If you delete a lot of documents over time, or if you add updated
> documents of the same I'
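(A hedged aside: the optimize itself is just an update command, e.g.

curl 'http://localhost:8983/solr/collection1/update?optimize=true'

assuming the default port and collection name.)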
I have the following (essentially hard-coded) line in the Solr Admin Query
UI
=
bq: contentType:(searchTerm1 searchTerm2 searchTerm2)^1000
=
The "searchTerm" entries represent whatever the user typed into the search
box. This can be one or more words. Usually less than 5.
I want to put
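One hedged way to avoid hand-editing that line per search is local-param
dereferencing, sketched here with an assumed parameter name:

bq={!lucene v=$userterms}&userterms=contentType:(searchTerm1 searchTerm2)^1000

The application then only has to fill in userterms with whatever the user
typed.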
ts the way.
Thanks.
On Thu, Apr 14, 2016 at 12:34 PM, John Bickerstaff wrote:
> I have the following (essentially hard-coded) line in the Solr Admin Query
> UI
>
> =
> bq: contentType:(searchTerm1 searchTerm2 searchTerm2)^1000
> =
>
> The "searchTerm" en
and accessing the
> terms in solrconfig.xml. You've already found the ability
> to configure edismax as your defType and apply boosts
> to particular fields...
>
> Best,
> Erick
>
> On Thu, Apr 14, 2016 at 11:53 AM, John Bickerstaff
> wrote:
> > May
not definitive. By that I mean that
> boosting just influences the score; it does _not_ explicitly order the
> results. So the docs with "figo" in the contentType field will tend to
> the top, but won't be absolutely guaranteed to be there.
>
>
>
> Best,
> Erick
I had a hard time getting replicas made via the API, once I had created the
collection for the first time although that may have been ignorance on
my part.
I was able to get it done fairly easily on the Linux command line. If
that's an option and you're interested, let me know - I have a roug
su - solr -c "/opt/solr/bin/solr create -c statdx -d /home/john/conf
-shards 1 -replicationFactor 2"
However, this won't work by itself. There is some preparation
necessary... I'll send you the doc.
On Thu, Apr 14, 2016 at 4:55 PM, Jay Potharaju
wrote:
> Curious w
5.4
This problem drove me insane for about a month...
I'll send you the doc.
On Thu, Apr 14, 2016 at 5:02 PM, Jay Potharaju
wrote:
> Thanks John, which version of solr are you using?
>
> On Thu, Apr 14, 2016 at 3:59 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
&
ion, and then
> inspecting the live_nodes list in Zookeeper to confirm that the (live) node
> list is actually what you think it is.
>
>
>
>
>
> On 4/14/16, 4:04 PM, "John Bickerstaff" wrote:
>
> >5.4
> >
> >This problem drove me insane for ab
stance must be up and running for the replica to
> be added, but that's not onerous
>
>
> The bin/solr script is a "work in progress", and doesn't have direct
> support
> for "addreplica", but it could be added.
>
> Best,
> Erick
>
> O
I note that you're using ports different from the default 8983 for your
Solr instances...
You probably checked already, but I thought I'd mention it.
On Thu, Apr 14, 2016 at 8:30 PM, John Bickerstaff
wrote:
> Thanks Eric!
>
> I'll look into that immediately - yes, I think
<http://x.x.x.x:8984/solr/admin/collections?action=ADDREPLICA&collection=test2&shard=shard1&node=x.x.x.x:9001_solr>
(Note the / instead of _ )
On Thu, Apr 14, 2016 at 10:45 PM, John Bickerstaff wrote:
> Jay - it's probably too simple, but the error says "not currentl
i
> wrote:
> > Hi,
> >
> > Does the `&name=...` actually work for you? When attempting similar with
> > Solr 5.3.1, despite what documentation said, I had to use
> > `node_name=...`.
> >
> >
> > Thanks,
> > Jarek
> >
> > On Fri,
Oh, and what, if any, directories need to exist for the ADDREPLICA
On Fri, Apr 15, 2016 at 11:09 AM, John Bickerstaff wrote:
> Thanks again Eric - I'm going to be trying the ADDREPLICA again today or
> Monday. I much prefer that to hand-edit hackery...
>
> Thanks also for point
Oh, and what, if any, directories need to exist for the ADDREPLICA command
to work?
Hopefully nothing past the already existing /var/solr/data created by the
Solr install script?
On Fri, Apr 15, 2016 at 11:18 AM, John Bickerstaff wrote:
> Oh, and what, if any directories need to exist for