Any update?
From: jatin roy
Sent: Tuesday, April 3, 2018 12:37 PM
To: solr-user@lucene.apache.org
Subject: Copy field on dynamic fields?
Hi,
Can we create a copyField on dynamic fields? If yes, how does it decide which
field should be copied to which one?
For exam
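(For reference, a minimal schema.xml sketch of copyField rules on dynamic
fields; the field and type names below are placeholders, not from this
thread. When both source and dest contain a wildcard, the part of the field
name matched by the source glob replaces the asterisk in dest, so a field
named title_en below would be copied to title_txt.)

<!-- placeholder dynamic fields, for illustration only -->
<dynamicField name="*_en"  type="text_en"      indexed="true" stored="true"/>
<dynamicField name="*_txt" type="text_general" indexed="true" stored="true"/>

<!-- every incoming field matching *_en is copied to the catch-all "alltext" -->
<copyField source="*_en" dest="alltext"/>

<!-- glob-to-glob: the matched part carries over, so title_en -> title_txt -->
<copyField source="*_en" dest="*_txt"/>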
@wunder
Are you sending updates in batches? Are you doing a commit after every
update?
>> We want the system to be near real time, so we are not doing updates in
>> batches and also we are not doing commit after every update.
>> autoSoftCommit once in every minute, and autoCommit once in every
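(For reference, a minimal solrconfig.xml sketch of that commit setup; the
hard-commit interval is an assumed example value, since the message above is
cut off.)

<!-- soft commit every minute: makes updates searchable without fsync cost -->
<autoSoftCommit>
  <maxTime>60000</maxTime>
</autoSoftCommit>

<!-- hard commit for durability; openSearcher=false avoids opening a new
     searcher on every hard commit. 300000 ms is only an example value. -->
<autoCommit>
  <maxTime>300000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>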
I am facing an issue where the LTR query is not supported with grouping.
I see that a patch for this has been raised here:
https://issues.apache.org/jira/browse/SOLR-8776
Is it available in solr/master (7.2.2) now?
Looks like this patch is not merged yet.
-
--Ilay
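(For context, a sketch of the request shape under discussion: grouping
combined with an LTR rerank query. Collection, field, and model names are
placeholders.)

http://localhost:8983/solr/mycollection/query?q=test
  &group=true&group.field=category
  &rq={!ltr model=myModel reRankDocs=100}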
Are you sending updates in batches? Are you doing a commit after every update?
You should use batches and you should not commit after every update.
wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/ (my blog)
> On Apr 4, 2018, at 8:01 PM, 苗海泉 wrote:
>
> A lot of col
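(For illustration, a minimal SolrJ sketch of the batched, commit-free
indexing pattern recommended above; the URL, collection name, and batch size
are placeholders, and visibility is left to the server-side
autoCommit/autoSoftCommit settings.)

import java.util.ArrayList;
import java.util.List;

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class BatchIndexer {
    public static void main(String[] args) throws Exception {
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            List<SolrInputDocument> batch = new ArrayList<>();
            for (int i = 0; i < 100_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Integer.toString(i));
                batch.add(doc);
                if (batch.size() == 1000) {        // send a full batch per request
                    client.add("mycollection", batch);
                    batch.clear();                 // note: no client.commit() here
                }
            }
            if (!batch.isEmpty()) {
                client.add("mycollection", batch); // flush the final partial batch
            }
            // Visibility is handled by autoSoftCommit on the server side.
        }
    }
}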
With a lot of collections, we also found that there are a lot of TIME_WAIT
threads, mainly commit threads and search threads, which caused Solr's speed
to decline rapidly; the number of these threads reached more than 2,000.
I didn't have a solution to this problem, but I found that relo
I have the data ready to index now; it is a JSON file:
{"122": "20180320-08:08:35.038", "49": "VIPER", "382": "0", "151": "1.0",
"9": "653", "10071": "20180320-08:08:35.088", "15": "JPY", "56": "XSVC",
"54": "1", "10202": "APMKTMAKING", "10537": "XOSE", "10217": "Y", "48":
"179492540", "201": "1"
When we have 49 shards per collection and more than 600 collections, Solr
has serious performance problems. I don't know how to deal with them. My
advice to you is to minimize the number of collections.
Our environment is 49 Solr server nodes, each with 32 CPUs / 128 GB RAM, and the data
volume
4th April 2018, Apache Solr™ 7.3.0 available
The Lucene PMC is pleased to announce the release of Apache Solr 7.3.0
Solr is the popular, blazing fast, open source NoSQL search platform from
the Apache Lucene project. Its major features include powerful full-text
search, hit highlighting, faceted
On 4/4/2018 12:13 PM, Doug Turnbull wrote:
> Thanks for the responses. Yeah I thought they were weird errors too... :)
>
> Below are the logs from zookeeper running in foreground after a connection
> attempt. But this Exception looks suspicious to me:
>
> [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:N
Hi Doug, are you able to connect to ZooKeeper through ZooKeeper's zkCli.sh,
or does zookeeper.out show anything useful?
Thnx
On Wed, Apr 4, 2018 at 2:13 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Thanks for the responses. Yeah I thought they were weird errors too... :)
>
> Below are the logs from zookeeper running in foreground after a connection attempt.
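(For reference, an example of testing the connection with the ZooKeeper CLI;
host and port are placeholders.)

# from the ZooKeeper installation directory
bin/zkCli.sh -server localhost:2181

# at the zk prompt, list the root znodes to confirm the connection works
ls /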
Thanks for the responses. Yeah I thought they were weird errors too... :)
Below are the logs from zookeeper running in foreground after a connection
attempt. But this Exception looks suspicious to me:
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@383] - Exception
causing close of session
>
> What's happening under the hood of
> solr in answering query [1] from [2]?
https://github.com/apache/lucene-solr/blob/master/lucene/join/src/java/org/apache/lucene/search/join/ToParentBlockJoinQuery.java#L178
On Wed, Apr 4, 2018 at 3:39 PM, Arturas Mazeika wrote:
> Hi Mikhail et al,
>
> Thanks a lot for a very thorough answer.
On 4/4/2018 4:31 AM, Mugdha Varadkar wrote:
The hash ranges are available in each collection's state.json file. Here is
the data for the collections:
I thought you said these indexes were sharded? That only shows one
shard. It has the full hash range. If they're all like that, then
don't worry.
On 4/4/2018 7:14 AM, Doug Turnbull wrote:
I've been struggling to do a basic upconfig both with embedded and actual
Zookeeper in Solr 7.2.1 using the zkcli script on OSX.
One variable: I recently upgraded to Java 9. I get slightly different
errors on Java 8 vs 9
Java 9:
doug@wiz$~/ws/foo(m
I haven't seen those errors before, so it's puzzling. Is there
any chance there are conflicting _zookeeper_ jars somewhere
in your classpath? This looks like a problem with ZK talking
to itself.
You may find it easier just to use the bin/solr script; we tried
to put useful ZK commands there. Wheth
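(For reference, an example of the bin/solr equivalent of a zkcli upconfig;
the ZooKeeper address, config name, and path are placeholders.)

bin/solr zk upconfig -z localhost:2181 -n myconfig -d /path/to/configset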
With the property legacyCloud=true, coreNodeNames are correctly written by Solr
in the core.properties file.
We are wondering if the problem comes from our configuration or the
bugfix https://issues.apache.org/jira/browse/SOLR-11503 ?
Without legacyCloud=true:
> Our configuration before Solr start:
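(For reference, an example of setting that property cluster-wide through the
Collections API; host and port are placeholders.)

curl 'http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=true'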
Many thanks Erick.
I think that we found the issue regarding schemaless. The source file has
to follow a specific format, and we were trying to index a file that did not
follow the standard Solr XML format.
Also, thanks for the advice about the field type. These schemaless collections
will all come from the same source, hence n
I've been struggling to do a basic upconfig both with embedded and actual
Zookeeper in Solr 7.2.1 using the zkcli script on OSX.
One variable: I recently upgraded to Java 9. I get slightly different
errors on Java 8 vs 9
This is probably me being dumb, but googling / searching Jira hasn't really
Hi Mikhail et al,
Thanks a lot for a very thorough answer. This is an impressive piece of
knowledge you just shared.
Not surprisingly, I was caught unprepared by the 'v=...' part of the
answer. This brought me to the links you posted (starting with http). From
those links I went to the more updated
q=+{!parent which=ntype:p v='+msg:Hello +person:Arturas'} +{!parent which=ntype:p v='+msg:ciao +person:Vai'}
On Wed, Apr 4, 2018 at 12:19 PM, Arturas Mazeika wrote:
> Hi Mikhail et al,
>
> It seems to me that the nested documents must include nodes that encode the
> level of nodes (within the document).
Hi Shawn,
The hash ranges are available in each collection's state.json file. Here is
the data for the collections:
*Solr 5.5.5 collection*
{"ranger_audits":{
"replicationFactor":"1",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"shards":
Hi Mikhail et al,
It seems to me that the nested documents must include nodes that encode the
level of nodes (within the document). Therefore, the minimal example must
include the node type. Is the following structure sufficient?
{
"id":1,
"ntype":"p",
"_childDocuments_":
[
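(For illustration, one possible completion of that structure; the child field
values are borrowed from the queries elsewhere in this thread, and the child
ids and the ntype value "c" are assumptions.)

{
  "id": "1",
  "ntype": "p",
  "_childDocuments_": [
    { "id": "1.1", "ntype": "c", "msg": "Hello", "person": "Arturas" },
    { "id": "1.2", "ntype": "c", "msg": "ciao",  "person": "Vai" }
  ]
}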
Hello,
We intend to move to the PreAnalyzed URP for analysis offloading. Browsing the
Javadocs I came across the SchemaRequest API, looking for a way to get a Field
object remotely, which I seem to need for
JsonPreAnalyzedParser.toFormattedString(Field f). But all I can get from the
SchemaRequest API i
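(For illustration, a minimal SolrJ sketch of fetching a field definition
remotely with the SchemaRequest API. Note it returns the definition as a Map,
not a Lucene Field instance. URL, collection, and field name are
placeholders.)

import java.util.Map;

import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.schema.SchemaRequest;
import org.apache.solr.client.solrj.response.schema.FieldResponse;

public class FetchFieldDefinition {
    public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
            FieldResponse response =
                new SchemaRequest.Field("title").process(client, "mycollection");
            Map<String, Object> fieldDef = response.getField();
            System.out.println(fieldDef);  // e.g. {name=title, type=text_general, ...}
        }
    }
}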
Hello,
I use SolrCloud and am testing the DIH system in cloud mode, but I get this error:
Full Import
failed:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable
to PropertyWriter implementation:ZKPropertiesWriter
at
org.apache.solr.handler.dataimport.DataImporter.createPropertyWriter(DataI