Hello all,
I downloaded 5.4 and started doing a rolling upgrade from a 5.0
SolrCloud cluster, and discovered what appears to be a compatibility
issue: a rolling upgrade from pre-5.4 causes the 5.4 nodes to fail with
"unable to determine leader" errors.
Is there a workaround for this?
ok,
I just found the 5.4.1 RC2 download, it seems to work ok for a rolling
upgrade.
I will see about downgrading back to 5.4.0 afterwards to be on an
official release ...
5.4.1 is a bug fix release. It should be out around the weekend.
On Tue, Jan 19, 2016 at 1:48 PM, Michael Joyner wrote:
ok,
I just found the 5.4.1 RC2 download, it seems to work ok for a rolling
upgrade.
I will see about downgrading back to 5.4.0 afterwards to be on an
official release ...
On 01/19/2016 04:27 PM, Michael Joyner wrote:
Hello all,
I downloaded 5.4 and started doing a rolling upgrade from a ...
On 01/21/2016 01:22 PM, Ishan Chattopadhyaya wrote:
Perhaps you could stay on 5.4.1 RC2, since that is what 5.4.1 will be
(unless there are last moment issues).
On Wed, Jan 20, 2016 at 7:50 PM, Michael Joyner wrote:
Unfortunately, it really couldn't wait.
I did a rolling upgrade to the 5.4.1 RC2 ...
Help!
What is the best way to recover from:
Can't load schema managed-schema: unknown field 'id'
I was managing the schema on a test collection and fat-fingered it, but now
I find that the schema ops seem to be altering all collections on the core?
SolrCloud 5.5.1
-Mike
id" field using Solr's UI or the schema API since you are using the
managed-schema.
Alexandre Drouin
-Original Message-
From: Michael Joyner [mailto:mich...@newsrx.com]
Sent: July 26, 2016 2:34 PM
To: solr-user@lucene.apache.org
Subject: Can't load schema managed-sch
r the schema API since you are using the
managed-schema.
Alexandre Drouin
-----Original Message-
From: Michael Joyner [mailto:mich...@newsrx.com]
Sent: July 26, 2016 2:34 PM
To: solr-user@lucene.apache.org
Subject: Can't load schema managed-schema: unknown field 'id'
|Help!
field that do not exists anymore. You can try
adding an "id" field using Solr's UI or the schema API since you are using the
managed-schema.
Alexandre Drouin
-Original Message-
From: Michael Joyner [mailto:mich...@newsrx.com]
Sent: July 26, 2016 2:34 PM
To: solr-user@
You can use the zkcli script (ZooKeeper command line utilities)
to download and upload the file from ZooKeeper.
Alexandre Drouin
-----Original Message-----
From: Michael Joyner [mailto:mich...@newsrx.com]
Sent: July 26, 2016 3:48 PM
To: solr-user@lucene.apache.org
Subject: Re: Can't load schema managed-schema: unknown field 'id'
... the file ...
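For concreteness, the download/edit/upload round trip with zkcli looks
roughly like this (ZooKeeper address and config name are placeholders):
sh server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
  -cmd getfile /configs/myconf/managed-schema /tmp/managed-schema
# edit /tmp/managed-schema locally, then push it back:
sh server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 \
  -cmd putfile /configs/myconf/managed-schema /tmp/managed-schema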
On Tue, Jul 26, 2016 at 2:17 PM, John Bickerstaff <
j...@johnbickerstaff.com> wrote:
I don't see a managed schema file. As far as I understand it, id is set
as a "uniqueKey" in the schema.xml file...
On Tue, Jul 26, 2016 at 2:11 PM, Michael Joyner wrote:
ok, I think I need to do a manual edit on the managed-schema file
but I get "NoNode" for /managed-schema when trying t...
Hello all,
We've been indexing documents with empty strings for some fields.
After our latest round of Solr/SolrJ updates to 6.3.0, we have discovered
that fields with empty strings are no longer being stored, effectively
storing documents with those fields as NULL/NOT-PRESENT instead of as
empty strings.
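A client-side workaround sketch (not from the thread; assumes a SolrJ
SolrInputDocument pipeline) is to strip empty-string values explicitly
before the add call, so behavior is the same on every version:
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.common.SolrInputDocument;

static void stripEmptyStrings(SolrInputDocument doc) {
    // collect names first to avoid modifying the field map while iterating
    List<String> toRemove = new ArrayList<>();
    for (String name : doc.getFieldNames()) {
        Object value = doc.getFieldValue(name);
        if (value instanceof String && ((String) value).isEmpty()) {
            toRemove.add(name);
        }
    }
    for (String name : toRemove) {
        doc.removeField(name);
    }
}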
Hello all,
How does setting mm = 1 for edismax impact multi-field searches?
We set mm to 1 and get zero results back when specifying multiple fields
to search across.
Is there a way to set mm = 1 for each field, but to OR the individual
field searches together?
-Mike/NewsRx
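One way to get per-field mm=1 with the individual field clauses OR'd
together (a sketch, not from the thread: the field names are
assumptions) is the lucene parser's nested-query syntax, with one
edismax subquery per field:
q=_query_:"{!edismax qf=title_en mm=1 v=$qq}" OR
  _query_:"{!edismax qf=body_en mm=1 v=$qq}"&qq=the search terms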
keywords_text_general^0.5 keywords_en^0.1
On 07/21/2017 01:46 PM, Susheel Kumar wrote:
Interesting. If it's working for you then it's good, but to your original
question, qf seems to be working.
Adding to mailing list for the benefit of others.
On Fri, Jul 21, 2017 at 9:41 AM, Michael Joyner wrote:
Thanks,
We ...
Hello,
We are using highlighting and are looking for the exact phrase "HIV
Prevention", but are receiving back highlighted snippets like the
following, where non-phrase-matching portions are being highlighted. Is
there a setting to highlight the entire phrase instead of any partial
token match?
Hey all!
Is there a way to determine fields available for faceting (those with
data) for a search without actually doing the faceting for the fields?
-Mike/NewsRx
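Not per-search, but for the index as a whole a SolrJ LukeRequest will
list which fields actually contain data (a sketch; solrClient is an
assumed SolrClient instance pointed at the core):
import org.apache.solr.client.solrj.request.LukeRequest;
import org.apache.solr.client.solrj.response.LukeResponse;

LukeRequest luke = new LukeRequest();
luke.setNumTerms(0); // field names and doc counts only, no term lists
LukeResponse rsp = luke.process(solrClient);
rsp.getFieldInfo().forEach((name, info) ->
        System.out.println(name + " docs=" + info.getDocs()));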
Hello all,
What is the difference between the following two queries that causes
them to give different results? Is there a parsing issue with "OR NOT"
or is something else going on?
a) ("batman" AND "indiana jones") OR NOT ("cancer") /*only seems to
match the and clause*/
parsedquery=Boost ...
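A likely explanation (a sketch, not confirmed in the thread): Lucene's
NOT is not a true unary operator, so a purely negative clause matches
nothing on its own. Anchoring the negation to the full document set
usually behaves as intended:
a) ("batman" AND "indiana jones") OR (*:* NOT "cancer")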
Hey all,
I'm wanting to update our managed-schemas to include the latest options
available in the 6.6.2 branch (point types, for one).
I would like to be able to sort them and diff them (production vs. dist
supplied) to create a simple patch that can be reviewed, edited if
necessary, and then applied.
Thanks!
On 12/20/2017 11:37 AM, Erick Erickson wrote:
The schema is not order dependent, I freely mix-n-match the fieldType,
copyField and field definitions for instance.
On Wed, Dec 20, 2017 at 8:29 AM, Michael Joyner wrote:
Hey all,
I'm wanting to update our managed-schemas to in...
Have a "store only" text field that contains a serialized (json?) of the
master object for deserilization as part of the results parsing if you
are wanting to save a DB lookup.
I would still store everything in a DB though to have a "master" copy of
everthing.
On 11/18/2016 04:45 AM, Dorian
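A "store only" field of this sort is just a stored, unindexed field in
the schema, e.g. (the field name here is illustrative):
<field name="master_json_s" type="string" indexed="false" stored="true"/>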
Help,
(Solr 6.3)
Trying to do a "sub-facet" using the new JSON faceting API, but I can't
seem to figure out how to get the "max" date in the subfacet.
I've tried a couple of different ways:
== query ==
json.facet={
  code_s:{
    type:terms,
    field:code_s,
    limit:-1,
    facet:{
      idate:"max(issuedate_tdt)"
    }
  }
}
Argh! I'm trying to run some test queries using the web UI, but it keeps
aborting the connection at 10 seconds. Is there any way to easily change
this?
(We currently have heavy indexing going on and the cache keeps getting
"un-warmed".)
idate:"max(issuedate_tdt)"
}
}
}
}
}
On 11/21/2016 03:42 PM, Michael Joyner wrote:
Help,
(Solr 6.3)
Trying to do a "sub-facet" using the new json faceting API, but can't
seem to figure out how to get the "max" d
Hello all,
It seems I can't find a "getFacets" method for SolrJ when handling a
query response from a json.facet call.
I see that I can get a top level opaque object via "Object obj =
response.getResponse().get("facets");"
Is there any code in SolrJ to parse this out as an easy-to-use navigable
structure?
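In the meantime the opaque object can be walked as nested NamedLists and
bucket lists (a sketch; the key names follow the json.facet query shown
earlier in this digest):
import java.util.List;
import org.apache.solr.common.util.NamedList;

// unchecked casts: json.facet results come back as NamedList/List trees
NamedList<Object> facets = (NamedList<Object>) response.getResponse().get("facets");
NamedList<Object> codes = (NamedList<Object>) facets.get("code_s");
List<NamedList<Object>> buckets = (List<NamedList<Object>>) codes.get("buckets");
for (NamedList<Object> bucket : buckets) {
    System.out.println(bucket.get("val") + " max date: " + bucket.get("idate"));
}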
Hello all,
I'm running out of space when trying to restart nodes to get a cluster
back up fully operational after a node ran out of space during an optimize.
It appears to be trying to do a full sync from another node, but doesn't
take care to check available space before starting downloads.
... band-aids. Although I
suppose refusing to even start if there wasn't enough free disk space
isn't a bad idea, it's not foolproof though.
Best,
Erick
On Mon, Nov 28, 2016 at 8:39 AM, Michael Joyner wrote:
Hello all,
I'm running out of space when trying to restart nodes to ge...
On 11/28/2016 12:26 PM, Erick Erickson wrote:
Well, such checks could be put in, but they don't get past the basic problem.
And all this masks your real problem; you didn't have enough disk
space to optimize in the first place. Even during regular indexing w/o
optimizing, Lucene segment merging can temporarily need significant
extra free space ...
Halp!
I need to reindex over 43 million documents; when optimized, the
collection currently uses < 30% of disk space, and it still ran out of
space during the reindexing when we tried it over this weekend.
I'm thinking the best solution for what we are trying to do is to
call commit/optimize every 10...
... a Bad Thing.
Best,
Erick
On Mon, Dec 12, 2016 at 8:36 AM, Michael Joyner wrote:
Halp!
I need to reindex over 43 million documents; when optimized, the collection
currently uses < 30% of disk space, and it still ran out of space during
the reindexing when we tried it over this weekend.
I'm thinking for t...
Huh? What does this even mean? If the schema is updated already, how can
we be out of time to update it?
"Not enough time left to update replicas. However, the schema is updated
already."
You can solve the disk space and time issues by specifying multiple
segments to optimize down to, instead of a single segment.
When we reindex we have to optimize, or we end up with hundreds of
segments and very horrible performance.
We optimize down to like 16 segments or so and it doesn't do ...
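In SolrJ the segment target is just the third argument to optimize()
(solrClient here is an assumed SolrClient instance):
// waitFlush, waitSearcher, maxSegments
solrClient.optimize(true, true, 16);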
Try increasing the number of connections your ZooKeeper allows to a very
large number.
On 04/04/2017 09:02 AM, Salih Sen wrote:
Hi,
One of the replicas went down again today, somehow disabling all
updates to the cluster with the error message "Cannot talk to ZooKeeper -
Updates are disabled." half a ...
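For reference, the relevant ZooKeeper setting is maxClientCnxns in
zoo.cfg, the per-host limit on concurrent client connections (the value
below is only an example):
# zoo.cfg
maxClientCnxns=500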
Isn't the unified highlighter's HTML escaper rather extreme in its
escaping? It makes the output hard to deal with for simple post-processing.
The original HTML escaper seems to do minimal escaping, rather than
escaping every non-alphabetical character it can find.
Also, is there a way to control how much text is returned ...
I am in a situation where I need to access a solrcloud behind a firewall.
I have a tunnel enabled to one of the ZooKeepers as a starting point and
the following test code:
CloudSolrServer server = new CloudSolrServer("localhost:2181");
server.setDefaultCollection("test");
SolrPingResponse p = server.ping();
On 09/16/2014 04:03 PM, Doug Balog wrote:
Not sure if this will work, but try to use ssh to setup a SOCKS proxy via
the -D command option.
Then use the socksProxyHost and socksProxyPort via the java command line
(i.e. java -DsocksProxyHost="localhost") or
System.setProperty("socksProxyHost", "localhost").
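Put together, the suggestion looks roughly like this (a sketch: the
tunnel port is a placeholder, and as noted above it is not certain the
ZooKeeper client's sockets honor the SOCKS properties):
// after starting the tunnel with: ssh -D 1080 user@gateway
System.setProperty("socksProxyHost", "localhost");
System.setProperty("socksProxyPort", "1080");
CloudSolrServer server = new CloudSolrServer("localhost:2181");
server.setDefaultCollection("test");
SolrPingResponse p = server.ping();
System.out.println("ping status: " + p.getStatus());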
I have a SolrCloud setup with two shards.
When I use "query.set("fq","{!collapse field=title_s}");" the results
show duplicates because of the sharding.
EX:
{status=0,QTime=1141,params={fl=id,code_s,issuedate_tdt,pageno_i,subhead_s,title_s,type_s,citation_articleTitle_s,citation_articlePageNo...
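One workaround sketch (the id scheme is an assumption, not from the
thread): the collapse qparser only collapses within a shard, so routing
every document that shares a title to the same shard via the compositeId
router keeps each group on one shard:
SolrInputDocument doc = new SolrInputDocument();
// the compositeId router hashes the part before "!" to pick the shard
doc.addField("id", title + "!" + documentId);
doc.addField("title_s", title);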
Try escaping special chars with a "\"
On 10/08/2014 01:39 AM, Lanke,Aniruddha wrote:
We are using an eDisMax parser in our configuration. When we search using a
query term that has a '-' we don't get any results back.
Search term: red - yellow
This doesn't return any data back, but ...
Is there an easy way to compare schemas?
When upgrading nodes, we are wanting to compare the "core" and
"automatically mapped" data types between our existing schema and the
new managed-schema available as part of the upgraded distribution.
Would you supply the snippet for the custom HttpClient to get it to
honor/use proxy?
Thanks!
On 10/10/2018 10:50 AM, Andreas Hubold wrote:
Thank you, Shawn. I'm now using a custom HttpClient that I create in a
similar manner as SolrJ, and it works quite well.
Of course, a fix in a future release ...
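Not the original poster's snippet, but the general shape with Apache
HttpClient's builder looks like this (proxy host/port and the Solr URL
are placeholders):
import org.apache.http.HttpHost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

CloseableHttpClient http = HttpClients.custom()
        .setProxy(new HttpHost("proxy.example.com", 3128))
        .build();
HttpSolrClient solr = new HttpSolrClient.Builder("http://solrhost:8983/solr/mycollection")
        .withHttpClient(http)
        .build();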
Based on experience, 2x head room is not always enough, sometimes not
even 3x, if you are optimizing from many segments down to 1 segment in a
single go.
We have, however, figured out a way that can work with as little as 51%
free space via the following iteration cycle:
public void solrOptimize() ... (the full method appears further below)
... optimizes suffer from the problem
that they create massive segments that then stick around for a very
long time, see:
https://lucidworks.com/2017/10/13/segment-merging-deleted-documents-optimize-may-bad/
Best,
Erick
On Mon, Apr 30, 2018 at 1:56 PM, Michael Joyner wrote:
Based on experience, 2x head room is n...
... optimize with no parameters respects the maximum segment size, which is a
change from now.
Finally, expungeDeletes may be useful as that too will respect max
segment size, again after LUCENE-7976 is committed.
Best,
Erick
On Wed, May 2, 2018 at 9:22 AM, Michael Joyner wrote:
The main reason we go this route is th...
... installed
and enabled on nodes; only the NFS host needs any real configuration
this way)
On 05/31/2018 05:28 PM, Greg Roodt wrote:
Thanks! I wasn't aware this existed.
Have you used it with Solr backups?
On Fri, 1 Jun 2018 at 00:07, Michael Joyner <mich...@newsrx.com> wrote:
That is the way we do it here - also helps a lot with not needing x2 or
x3 disk space to handle the merge:
public void solrOptimize() throws SolrServerException, IOException {
    int initialMaxSegments = 256;
    int finalMaxSegments = 4;
    if (isShowSegmentCounter()) {
        log.info("Optimizing ...");
    }
    // step down one segment at a time (solrClient: the collection's SolrClient)
    while (initialMaxSegments > finalMaxSegments) {
        initialMaxSegments--;
        solrClient.optimize(true, true, initialMaxSegments);
    }
}
Ref:
https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data+in+SolrCloud
If an update specifies only the non-routed id, will SolrCloud select the
correct shard for updating?
If an update specifies a different route, will SolrCloud delete the
previous document with the same id?
Hello all,
Can the Solr indexes be safely stored and used via mounted NFS shares?
-Mike