st by changing the command line to
this one (called from the */danny* directory):
The Solr cloud starts without problems.
Then I go to the admin interface, to the *Dataimport* page of the core, to
execute a data import. But now the *Entity* dropdown list is empty, and I
get this error in th
ements.
So I'm wondering if there's a way to store data in the index with arrays of
arrays, which would yield something like this:
or something else, as long as the structure keeps the relations between
array levels.
Thanks,
Danny.
--
View this message in context:
http://lucene.
links between "two" elements
and "one" elements.
So I'm wondering if there's a way to store data in the index with arrays of
arrays, which would yield something like this:
http://pastebin.com/j3eY1eVv
or something else, as long as the structure keeps the relations b
Erick: you're right, allowing nested arrays would be like opening Pandora's
Box :)
Michaël: having parallel arrays and losing their relations is what I want
to avoid, actually :)
I guess I'll have to find another way.
Thanks,
Danny.
From my short experience, it indicates that the particular node lost
connection with ZooKeeper.
Like Binoy said, it may be because the process/host is down, but it could
also be the result of a network problem.
On Tue, Jan 19, 2016 at 12:20 PM, Binoy Dalal
wrote:
> In my experience 'Gone' indicates
Hi,
I would like to describe a process we use for overcoming problems in
cluster state when we have networking issues. I would appreciate it if
anyone could point out the flaws in this solution and the best practice
for recovery in case of network problems involving ZooKeeper.
I'm wo
discovery?
I would like to be able to specify collection.configName in
core.properties so that, when starting the server, the collection is
created and linked to the specified config name.
On Mon, Feb 29, 2016 at 4:01 PM, danny teichthal
wrote:
> Hi,
>
>
> I would like to describe a pr
ough I clearly
> haven’t touched it lately. Feel free to ask if you have issues:
> https://github.com/randomstatistic/git_zk_monitor
>
>
>
>
> On 3/1/16, 12:09 PM, "danny teichthal" wrote:
>
> >Hi,
> >Just summarizing my questions if the long mail is a little
disk for
> non-SolrCloud, and ZK for SolrCloud.
>
>
>
>
>
> On 3/2/16, 12:13 AM, "danny teichthal" wrote:
>
> >Thanks Jeff,
> >I understand your philosophy and it sounds correct.
> >Since we had many problems with zookeeper when switching to Solr
Hi Li,
If you could supply some more info from your logs, it would help.
We also had a similar issue. There were some bugs related to SolrCloud
that were fixed in Solr 4.10.4 and further in Solr 5.x.
I would suggest you compare your logs with the defects in the 4.10.4
release notes to see if they are the s
Hi,
We are using SolrCloud with Solr 4.10.4.
Over the past week we encountered a problem where all of our servers
disconnected from the ZooKeeper cluster.
This might be OK; the problem is that after reconnecting to ZooKeeper it
looks like, for every collection, both replicas do not have a leader and are
ou cannot migrate, but would at least
> give a confirmation and maybe workaround on what you are facing.
>
> Regards,
>Alex.
>
> Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
> http://www.solr-start.com/
>
>
> On 10 August 2015 at 11:37, dan
ircumstances.
>
> Of course if your ZK was down for minutes this wouldn't help.
>
> Best,
> Erick
>
> On Mon, Aug 10, 2015 at 1:06 PM, danny teichthal
> wrote:
> > Hi Alexander ,
> > Thanks for your reply, I looked at the release notes.
> > The
t 3 minutes or so for action to be
> taken is the
> fallback.
>
> Best,
> Erick
>
> On Mon, Aug 10, 2015 at 1:34 PM, danny teichthal
> wrote:
> > Erick, I assume you are referring to zkClientTimeout, it is set to 30
> > seconds. I also see these messages on Solr side:
Hi,
We are experiencing some intermittent slowness on updates for one of our
collections.
We see user operations hanging on updates to SOLR via SolrJ client.
Every time in the period of the slowness we see something like this in the
log of the replica:
[org.apache.solr.update.UpdateHandler] R
lection,
> you may well be hitting long GC pauses.
>
> Also note that there was a bottleneck in Solr prior to 5.2
> when replicas were present, see:
> http://lucidworks.com/blog/indexing-performance-solr-5-2-now-twice-fast/
>
> Best,
> Erick
>
> On Sun, Jun 21, 2015 at 7:
If you are running on Tomcat you will probably have a deployment problem.
On version 5.2.1 it worked fine for me; I manually packaged solr.war at
build time.
But when trying to upgrade to Solr 5.5.1, I had problems with an
incompatibility between the servlet-api of Solr's Jetty version and my
Tomcat's servlet-api.
S
Hi,
SOLR-7036 introduced a new faster method for group.facet, which uses
UnInvertedField.
It was patched for version 4.x.
Over the last week, my colleague uploaded a new patch that works against
trunk.
We would really appreciate if anyone could take a look at it and give us
some feedback about
Hi Bharath,
I'm no expert, but we had some major problems because of deleteByQuery
(DBQ for short).
We ended up replacing all of our DBQs with delete-by-ids.
My suggestion is that if you don't really need it, don't use it.
Especially in your case, since you already know the population of ids, it
is r
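Since the ids are already known, the delete-by-ids approach can be sketched like this: a plain-Java helper that splits a large id list into batches, with the actual SolrJ `deleteById` call shown only as a comment (an assumption, since it depends on your client setup):

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteByIdBatcher {
    // Split a large id list into fixed-size batches so each
    // deleteById request stays reasonably small.
    static List<List<String>> partition(List<String> ids, int batchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            batches.add(ids.subList(i, Math.min(i + batchSize, ids.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add("doc" + i);
        List<List<String>> batches = partition(ids, 1000);
        // For each batch you would then call (SolrJ, assumed):
        //   solrClient.deleteById(collection, batch);
        System.out.println(batches.size());        // batch count
        System.out.println(batches.get(2).size()); // size of last batch
    }
}
```

Batching keeps each request bounded while still avoiding DBQ's reordering and blocking problems.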
ts on JSON API.
Please take a look at https://issues.apache.org/jira/browse/SOLR-7036
Comments and votes are welcome.
On Wed, Jul 27, 2016 at 11:31 AM, danny teichthal
wrote:
> Hi,
> SOLR-7036 introduced a new faster method for group.facet, which uses
> UnInvertedField.
> It was pa
Hi,
Not sure if it is related, but it could be. I see that you do this:
CloudSolrClient solrClient =
    new CloudSolrClient.Builder().withZkHost(zkHosts).build();
Are you creating a new client on each update?
If yes, pay attention that the Solr Client should be a singleton.
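The singleton advice can be sketched with the lazy-holder idiom below; `java.net.http.HttpClient` stands in for `CloudSolrClient` here (an assumption, so the sketch stays runnable without SolrJ on the classpath):

```java
import java.net.http.HttpClient;

public class SolrClientHolder {
    // Lazy-holder idiom: the JVM initializes the nested class at most
    // once, so the client is created exactly once and shared by all
    // threads - never rebuilt per update.
    private static class Holder {
        // Stand-in for: new CloudSolrClient.Builder().withZkHost(zkHosts).build()
        static final HttpClient CLIENT = HttpClient.newHttpClient();
    }

    public static HttpClient getClient() {
        return Holder.CLIENT;
    }

    public static void main(String[] args) {
        // Every caller gets the same instance.
        System.out.println(getClient() == getClient());
    }
}
```

Creating a new `CloudSolrClient` per update opens a fresh ZooKeeper connection each time, which is the kind of pattern that produces exactly these session problems.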
Regarding session timeout,
Hi,
I have an "id" field that is defined on schema.xml with type long.
For some use cases the id that is indexed exceeds the max long limit.
I thought about solving it by changing the id to type string.
To my surprise, by only changing the definition in schema.xml and
restarting Solr, I was able
hange the uniqueKey from my "id" to "id_str".
Might this work, assuming that all ids are unique?
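For reference, the schema.xml change in question might look roughly like this (field and uniqueKey names are illustrative assumptions based on the mail, not a verified config):

```xml
<!-- string-typed id: no 64-bit numeric limit, but sorts/compares as text -->
<field name="id_str" type="string" indexed="true" stored="true" required="true"/>
<uniqueKey>id_str</uniqueKey>
```

Note that a string id compares lexicographically, so range queries and sorting on it behave differently than on a long field.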
On Thu, Mar 9, 2017 at 5:14 PM, Shawn Heisey wrote:
> On 3/9/2017 4:20 AM, danny teichthal wrote:
> > I have an "id" field that is defined on schema.xml with type
Hi,
I create a collection of 2 shards with a replication factor of 1 and enable
autoAddReplicas. Then I kill shard2 with 'kill -9'. The overseer asked the
other Solr node to create a new Solr core pointing to the dataDir of shard2.
Unfortunately, the new core failed to come up because of pre-exist
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.LockObtainFailedException: Index dir
'hdfs://psvrlxcdh5mmdev1.somewhere.com:8020/Test/LDM/psvrlxbdecdh1Cluster/solr/collection1/core_node2/data/index/'
of core 'collection1_shard2_replica1' is a
Hi All,
To make things short, I would like to use block joins, but be able to
index each document in the block separately.
Is it possible?
In more details:
We have some nested parent-child structure where:
1. Parent may have a single level of children
2. Parent and child
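For context, nested documents are indexed as a single block in Solr's XML update format; a minimal sketch (field names are illustrative):

```xml
<add>
  <doc>
    <field name="id">parent-1</field>
    <field name="type">parent</field>
    <!-- child documents are nested inside the parent doc
         and stored contiguously with it in the index -->
    <doc>
      <field name="id">child-1</field>
      <field name="type">child</field>
    </doc>
  </doc>
</add>
```

Because the whole block is stored contiguously, updating any one document normally means re-sending the entire block, which is exactly the limitation being asked about.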
> -- Jack Krupansky
>
> From: danny teichthal
> Sent: Sunday, March 16, 2014 6:47 AM
> To: solr-user@lucene.apache.org
> Subject: Nested documents, block join - re-indexing a single document upon
> update
>
>
>
>
> Hi All,
>
>
>
>
> To make things
he entire block, and maybe some way to delete individual child documents
> as well.
>
> -- Jack Krupansky
>
> -Original Message- From: danny teichthal
> Sent: Tuesday, March 18, 2014 3:58 AM
> To: solr-user@lucene.apache.org
> Subject: Re: Nested documents, block join
I wonder about the performance difference between two indexing options:
(1) a multivalued field, or (2) separate fields.
The case is as follows: each document has 100 “properties”: prop1..prop100.
The values are strings and there is no relation between the different
properties. I would like to search by exact match on se
Hi,
On our system we currently initiate a soft commit to Solr after each
business transaction that initiates an update. Hard commits are automatic,
every 2 minutes.
We want to limit the explicit commits and move to autoSoftCommit.
Because of business restrictions:
Online request should be available fo
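A solrconfig.xml sketch of the autoSoftCommit setup described above might look like this (the interval values are illustrative assumptions, not recommendations):

```xml
<autoCommit>
  <maxTime>120000</maxTime> <!-- hard commit every 2 minutes, as today -->
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>5000</maxTime> <!-- soft commit (visibility) every 5 seconds -->
</autoSoftCommit>
```

With this in place the application stops calling commit explicitly; the soft-commit interval then bounds how stale online reads can be.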
ommits with openSearcher=true, but
> they aren't free either. At that fast a commit rate you probably won't
> get much benefit out of the top-level caches, and you'll be warming an
> awful lot.
>
> FWIW,
> Erick
>
> On Sun, Nov 30, 2014 at 12:32 PM, danny
Thanks for the clarification, I indeed mixed it with UpdateRequestHandler.
On Mon, Dec 1, 2014 at 11:24 PM, Chris Hostetter
wrote:
>
> : I thought that the auto commit is per update handler because they are
> : configured within the update handler tag.
>
> is not the same thing as a that does
Hi,
Is there a way to exclude some patterns from the source of a copyField?
We are using globs to copy all our text fields to a single target field.
It looks something like this:
I would like a subset of the fields, those starting with "prefix_", to be
excluded and not copied to the destination. (e.g.
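For context, a glob-based copyField looks roughly like this (field names are assumptions); copyField itself has no exclusion syntax, which is why the question arises:

```xml
<copyField source="*_txt" dest="all_text"/>
<!-- there is no built-in way to say: every *_txt field except prefix_* -->
```

A common workaround is naming the fields so the globs don't overlap (e.g. a different suffix for the excluded group), rather than trying to subtract patterns.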
on patterns.
>
> Regards,
>Alex.
>
> Sign up for my Solr resources newsletter at http://www.solr-start.com/
>
>
> On 2 February 2015 at 02:53, danny teichthal wrote:
> > Hi,
> > Is there a way to make some patterns to be excluded on the source of a
> > co
Hi, I have a field that is defined to be of type "text_en". Occasionally, I
notice that lengthy strings are converted to hash symbols. Here is a
snippet of my field type:
Here is an example of the field's value:
###
Yes... that is what I see in the admin console when I perform a search
for the document. Currently, I am using SolrJ and the addBean() method to
update the core. What's strange is that in our QA env the document indexed
correctly, but in prod I see hash symbols, and thus any user search
against that
I looked at the text via the admin analysis tool. The text appeared to be
ok! Unfortunately, the description is client data... so I can't post it
here, but I do not see any issues when running the analysis tool.
Here is a query that should return 2 documents... but it only returns 1.
/solr/m7779912/select?indent=on&version=2.2&q=description%3Agateway&fq=&start=0&rows=10&fl=description&qt=&wt=&explainOther=&hl.fl=
Oddly enough, the descriptions of the two documents are exactly the same.
Except one is inde
Hello,
Wondering if anyone could point me to the right way of streaming a .zip file:
my goal is to stream a zipped version of the index. I zip up the index files I
get from calling IndexCommit#getFileNames, and then attempt to stream using a
custom handler with the following in handleRequestBod
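The zipping step can be sketched in plain Java like this (file names and contents are illustrative; only java.util.zip is used, no Solr APIs):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class IndexZipper {
    // Zip a set of (name, bytes) pairs - e.g. the files named by
    // IndexCommit#getFileNames - into one in-memory archive that can
    // then be written to the response's OutputStream.
    static byte[] zip(Map<String, byte[]> files) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            for (Map.Entry<String, byte[]> e : files.entrySet()) {
                zos.putNextEntry(new ZipEntry(e.getKey()));
                zos.write(e.getValue());
                zos.closeEntry();
            }
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] archive = zip(Map.of(
                "segments_1", "segment data".getBytes(StandardCharsets.UTF_8),
                "_0.cfs", "compound file".getBytes(StandardCharsets.UTF_8)));
        // Every zip archive starts with the "PK" local-file-header magic.
        System.out.println(archive[0] == 'P' && archive[1] == 'K');
    }
}
```

For a large index you would stream each entry directly to the servlet OutputStream instead of buffering the whole archive in memory, and set the Content-Type to application/zip.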
not configurable via zkClientTimeout (solr.xml) or SOLR_WAIT_FOR_ZK
(solr.in.sh).
Is there a way to configure this, and if not, should I open a bug?
Thanks,
Danny
Are there any significant (or not so significant) changes? I have browsed the
release notes and searched JIRA, but the latest news seems to be in 7.3 (where
the old Leader-In-Recovery logic was replaced).
Context:
We are currently running Solr 7.4 in production. In the past year, we’ve seen
t