Thanks so much! Are you also contributing to Solr development?
On Thu, Apr 21, 2016 at 3:33 PM, Alisa Z. wrote:
> Hi Yangrui,
>
> I have summarized some experiments about Solr nesting capabilities
> (however, it does not cover pivoting exactly, but rather faceting up to
> parents and down t
From: "Boman [via Lucene]" <ml-node+s472066n4272073...@n3.nabble.com>
Date: Thursday, April 21, 2016 at 9:52 PM
To: Boman Irani <bir...@apttus.com>
Subject: Re: Making managed schema unmutable correctly?
Thanks @Erick. You are right. That collection is not using a managed-schema.
Works now!
--
View this message in context:
http://lucene.472066.n3.nabble.com/Making-managed-schema-unmutable-correctly-tp4264051p4272073.html
Sent from the Solr - User mailing list archive at Nabble.com.
We feel the issue is in RealTimeGetComponent.getInputDocument(SolrCore core,
BytesRef idBytes), where Solr calls getNonStoredDVs and adds the fields to the
original document without excluding the copyFields.
We made changes to send the filteredList to searcher.decorateDocValueFields and
it star
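A generic sketch of the change being described, with hypothetical names (`filterCopyFieldTargets` and the field lists are illustrative, not the actual Solr patch): exclude copyField destinations from the non-stored docValues fields before they are merged back into the document.

```java
import java.util.Collection;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class FilterCopyFields {
    // Hypothetical helper illustrating the fix: drop any non-stored
    // docValues field that is the destination of a copyField directive,
    // so re-indexing can regenerate it instead of duplicating it.
    static List<String> filterCopyFieldTargets(Collection<String> nonStoredDvFields,
                                               Set<String> copyFieldDests) {
        return nonStoredDvFields.stream()
                .filter(f -> !copyFieldDests.contains(f))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // FieldB is a copyField target, so it must not be re-added
        System.out.println(filterCopyFieldTargets(
                List.of("FieldB", "price_dv"), Set.of("FieldB"))); // [price_dv]
    }
}
```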
We are trying to update Field A.
-Karthik
On Thu, Apr 21, 2016 at 10:36 PM, John Bickerstaff wrote:
> Which field are you trying to update atomically? A, B, or some other?
> On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee" <
> tchatter...@commvault.com>
> wrote:
>
> > Hi,
> > Here is the scenari
(1) So, displaying the content (traversal of documents) depends on my
pagination?
If I specify all 500 documents to be displayed, with the first 10 on the first
page and the rest on the others, does that imply that all the documents
traverse the network?
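For reference, Solr only returns `rows` documents per request, so only the requested page crosses the network; a sketch (collection name and query are placeholders):

```
# Page 1: only the first 10 documents cross the network
/solr/collection1/select?q=*:*&start=0&rows=10

# Page 2: the next 10
/solr/collection1/select?q=*:*&start=10&rows=10
```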
(2) In my application, front end of UI is developed
You're mixing managed and non-managed schema terms here.
"schema.xml" is the old default and is (usually) _not_
editable by the managed schema stuff.
Managed schemas are usually named just that, "managed-schema".
You can hand edit this if you want, but when you do I'd recommend that all
the
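For reference, the managed schema is configured in solrconfig.xml via the schemaFactory element; setting mutable to false is the usual way to lock it down against Schema API edits (a typical snippet, adjust names to your setup):

```xml
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">false</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>
```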
Which field are you trying to update atomically? A, B, or some other?
On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee"
wrote:
> Hi,
> Here is the scenario for SOLR5.5:
>
> FieldA type= stored=true indexed=true
>
> FieldB type= stored=false indexed=true docValues=true
> useDocValuesAsStored=false
>
>
Hi,
Here is the scenario for SOLR5.5:
FieldA type= stored=true indexed=true
FieldB type= stored=false indexed=true docValues=true
useDocValuesAsStored=false
FieldA copyTo FieldB
When we try an atomic update, we get this error:
possible analysis error: DocValuesField "mtmround" appears more t
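For context, the atomic update being attempted would be posted to the /update handler with a body like this (the id and value are illustrative):

```json
[ { "id": "doc1", "FieldA": { "set": "new value" } } ]
```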
Yes, that works as well.
Thank you!
Regards,
Edwin
On 21 April 2016 at 19:00, Bram Van Dam wrote:
> On 21/04/16 03:56, Zheng Lin Edwin Yeo wrote:
> > This is the working one:
> > dataDir=D:/collection1/data
>
> Ah yes. Backslashes are escape characters in properties files.
> C:\\collection1
I have done it by extending the Solr join plugin. I needed to override two
methods from the join plugin, and it works.
Thanks,
Susmit
On Thu, Apr 21, 2016 at 12:01 PM, Mikhail Khludnev <
mkhlud...@griddynamics.com> wrote:
> Hello,
>
> There has not been much progress on
> https://issues.apache.org/jira/brow
I'm afraid that if the queries are given in such a loose natural language
form, the only way to handle it is to introduce some natural language
processing stage that would form the right query (which is actually a working
strategy, IBM does so).
If your document structure is fixed (i.e., you
Hi Yangrui,
I have summarized some experiments about Solr's nesting capabilities (however,
it does not cover pivoting exactly, but rather faceting up to parents and down
to children with some statistics), so maybe you could find an idea there:
https://medium.com/@alisazhila/solr-s-nesting-o
Well, it took me 7 milliseconds to index a 100 MB dataset on a local Solr, so
you could assume that for 1 GB it would take 70 ms = 0.07 s, which is still
pretty fast.
Yet dealing with network delays is a separate issue.
100 wikipedia article-size documents shouldn't be a big problem.
> Thursday,
Where do I find the schema.xml to hand edit? I can't find it on my node
running ZK.
I'm not sure what's happening, but when I try to add a field to the schema
for one of the collections (I am running in SolrCloud mode), I get:
curl -X POST -H 'Content-type:application/json' --data-binary '{ "add-f
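For comparison, a complete add-field call via the Schema API looks roughly like this (collection name, field name, and type are placeholders):

```shell
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field": {
    "name": "new_field",
    "type": "string",
    "stored": true
  }
}' http://localhost:8983/solr/mycollection/schema
```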
I guess errors like "fsync-ing the write ahead log in SyncThread:5 took
7268ms which will adversely effect operation latency."
and: "likely client has closed socket"
make me wonder if something went wrong in terms of running out of disk
space for logs (thus giving your OS no space for necessary f
Hi.
I am seeing a lot of these errors in my current 5.5.0 dev install. Would it
make sense to use 5.5 in production, or is a different version recommended?
I am using DIH; not sure if that matters in this case.
Thanks
On Fri, Mar 11, 2016 at 3:57 AM, Shai Erera wrote:
> Hey Shawn,
>
> I added se
Hello,
There has not been much progress on https://issues.apache.org/jira/browse/SOLR-8297,
although it's quite achievable.
On Thu, Apr 21, 2016 at 7:52 PM, Shikha Somani wrote:
> Greetings,
>
>
> Background: Our application is using Solr 4.10 and has multiple
> collections all of them sharded equally
Hi Ahmet,
Yes, I have also come to that conclusion, that I need to do one of those things
if I want this function, since Solr/Lucene is lacking in this area. Although
after some discussion with my coworkers, we decided to simply disable norms for
the title field, and not do anything more, for n
Hi Jimi,
Please do one of the following:
1) write your own similarity that saves document length (docValues) in a
lossless way and implement whatever punishment/algorithm you want, or
2) disable norms altogether, add an integer field (title_length), and populate
it (outside of Solr) with the number of word
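Option 2 in schema.xml terms might look like this (the field type names are assumptions for illustration):

```xml
<!-- disable norms on the title field -->
<field name="title" type="text_general" indexed="true" stored="true" omitNorms="true"/>
<!-- integer field populated outside Solr with the title's word count -->
<field name="title_length" type="int" indexed="true" stored="true"/>
```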
Greetings,
Background: Our application is using Solr 4.10 and has multiple collections,
all of them sharded equally on Solr. These collections were joined to support
complex queries.
Problem: We are trying to upgrade to Solr 5.x. However, from Solr 5.2 onward,
to join two collections it is a re
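For context, a cross-collection join is expressed with the join query parser; from Solr 5.2 onward the "from" collection must be a single shard with a replica co-located on every node serving the "to" collection (field and collection names below are placeholders):

```
q={!join from=inner_id to=outer_id fromIndex=otherCollection}category:books
```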
On 4/20/2016 10:06 PM, Zap Org wrote:
> I have 5 ZooKeeper and 2 Solr machines, and after a month or two the whole
> cluster shuts down; I don't know why. The logs I get in ZooKeeper are attached
> below. Otherwise I don't get any errors. All this is based on Linux VMs.
>
> 2016-03-11 16:50:18,159 [myid:5] - W
On 4/21/2016 5:25 AM, Mahmoud Almokadem wrote:
> We have a cluster of solr 4.8.1 installed on tomcat servlet container and
> we’re able to use DIH Schedule by adding these lines to web.xml of the
> installation directory:
>
>
>
> org.apache.solr.handler.dataimport.scheduler.ApplicationListe
Hello, what should I do about this question?
http://stackoverflow.com/questions/33073960/solr-no-active-slice-servicing-hash-code
Thanks.
_
Li Hongwei | KaolaFM Technical Department
Mobile: 15801483916
QQ: 153563985
Email: l...@kaolafm.com
Website: www.autoradio.cn
Hi Li,
Do you see timeouts like "CLUSTERSTATUS the collection time out:180s"?
If that's the case, this may be related to
https://issues.apache.org/jira/browse/SOLR-7940,
and I would say either use the patch file or upgrade.
*Thanks,*
*Rajesh,*
*8328789519,*
*If I don't answer your call please leave
Hello,
We have a cluster of Solr 4.8.1 installed on a Tomcat servlet container, and we're
able to use DIH Schedule by adding these lines to web.xml of the installation
directory:
org.apache.solr.handler.dataimport.scheduler.ApplicationListener
Now we are planning to migrate to Solr 6 an
On 21/04/16 03:56, Zheng Lin Edwin Yeo wrote:
> This is the working one:
> dataDir=D:/collection1/data
Ah yes. Backslashes are escape characters in properties files.
C:\\collection1\\data would probably work as well.
- bram
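The escaping behavior can be demonstrated directly with java.util.Properties (a standalone sketch, not Solr code):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class PropsDemo {
    // Parse a single dataDir=... line the way java.util.Properties does
    static String parse(String value) {
        Properties p = new Properties();
        try {
            p.load(new StringReader("dataDir=" + value));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return p.getProperty("dataDir");
    }

    public static void main(String[] args) {
        // A lone backslash is an escape character and is silently dropped
        System.out.println(parse("D:\\collection1\\data"));     // D:collection1data
        // Forward slashes pass through untouched
        System.out.println(parse("D:/collection1/data"));       // D:/collection1/data
        // Doubled backslashes survive as single backslashes
        System.out.println(parse("D:\\\\collection1\\\\data")); // D:\collection1\data
    }
}
```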
Hi all,
Here's something we've just released as open source to help cope with
running out of disk space on a Solr node or cluster. It's pretty early,
so we'd welcome contributions and feedback. Although conceived
originally for an Elasticsearch project, it's also targeted at Solr:
https://gith
Hi
We have used Solr 4.6 for 2 years. If you post more logs, maybe we can
fix it.
2016-04-21 6:50 GMT+08:00 Li Ding :
> Hi All,
>
> We are using SolrCloud 4.6.1. We have observed the following behaviors
> recently: a Solr node in a SolrCloud cluster is up but some of the cores
> on the nodes are
to concatenating two fields to use as one field, from
http://grokbase.com/t/lucene/solr-user/138vr75hvj/concat-2-fields-in-another-field,
but the solution given there is not working when I try it. Please help
me with this.
I am trying to concat latitude and longitude fields to make it as
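One commonly suggested approach (a sketch; the chain name, field names, and delimiter are assumptions) is an update processor chain that clones both fields into one multivalued field and concatenates its values at index time:

```xml
<updateRequestProcessorChain name="concat-latlon">
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">latitude</str>
    <str name="dest">latlon</str>
  </processor>
  <processor class="solr.CloneFieldUpdateProcessorFactory">
    <str name="source">longitude</str>
    <str name="dest">latlon</str>
  </processor>
  <processor class="solr.ConcatFieldUpdateProcessorFactory">
    <str name="fieldName">latlon</str>
    <str name="delimiter">,</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain must be referenced from the update handler (or via the update.chain request parameter) to take effect.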