All,
I see that Solr returns a status value of 0 for successful searches:
org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1] webapp=/solr
path=/user/ping params={} status=0 QTime=0
I do see that the status comes back as 400 whenever the search is invalid
(invoking a query with parameters
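The same status is also available programmatically. A minimal SolrJ sketch, assuming a 5.x/6.x client, a ping handler configured on the core, and a placeholder URL/core name:

    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.SolrPingResponse;

    public class PingStatus {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and core name; point this at your own instance.
            try (HttpSolrClient client =
                     new HttpSolrClient("http://localhost:8983/solr/users")) {
                SolrPingResponse ping = client.ping();
                // Mirrors the responseHeader in the log line: 0 means success.
                // A malformed request typically surfaces as a thrown
                // RemoteSolrException carrying the HTTP code (e.g. 400),
                // rather than as a non-zero return value here.
                System.out.println("status=" + ping.getStatus()
                        + " QTime=" + ping.getQTime());
            }
        }
    }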
On 7/21/2016 11:25 PM, Rallavagu wrote:
> There is no other software running on the system and it is completely
> dedicated to Solr. It is running on Linux. Here is the full version.
>
> Linux version 3.8.13-55.1.6.el7uek.x86_64
> (mockbu...@ca-build56.us.oracle.com) (gcc version 4.8.3 20140911 (Re
On 7/21/16 9:16 PM, Shawn Heisey wrote:
On 7/21/2016 9:37 AM, Rallavagu wrote:
I suspect swapping as well. But, for my understanding - are the index
files from disk memory mapped automatically at the startup time?
They are *mapped* at startup time, but they are not *read* at startup.
The map
I met a similar issue before; I suggest using double as the field type for this case.
Ray
On Thursday, July 21, 2016, Nick Vasilyev wrote:
> Thanks Chris.
>
> Searching for both values and retrieving the documents would be alright as
> long as the data was correct. In this case, the data that I am indexing
> into Solr is
On 7/21/2016 9:37 AM, Rallavagu wrote:
> I suspect swapping as well. But, for my understanding - are the index
> files from disk memory mapped automatically at the startup time?
They are *mapped* at startup time, but they are not *read* at startup.
The mapping just sets up a virtual address space
Thanks Chris.
Searching for both values and retrieving the documents would be alright as
long as the data was correct. In this case, the data that I am indexing
into Solr is not the same data that I am pulling out at query time. That is
the real impact here.
On Thu, Jul 21, 2016 at 6:12 PM, Chris
I don't think there is a flag.
But the bigger question is whether you are exposing Solr directly to
the client? You should not be. You should have a middleware client
that talks to Solr and then generates web UI or whatever.
If you give untrusted access to Solr, there are too many things that
can
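As a toy illustration of that middleware layer (JDK-only, placeholder URLs and ports, no auth or error handling; a sketch, not a production design): the proxy accepts only the user's query text and pins every other parameter itself, so clients never talk to Solr directly.

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.InetSocketAddress;
    import java.net.URL;
    import java.net.URLEncoder;

    public class SolrProxy {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/search", exchange -> {
                // Only the user's query text passes through; handler, rows,
                // and everything else stay under the middleware's control.
                String q = exchange.getRequestURI().getQuery(); // e.g. "q=foo"
                String userQuery =
                        (q != null && q.startsWith("q=")) ? q.substring(2) : "*:*";
                URL solr = new URL(
                        "http://localhost:8983/solr/collection1/select?wt=json&rows=10&q="
                        + URLEncoder.encode(userQuery, "UTF-8"));
                HttpURLConnection conn = (HttpURLConnection) solr.openConnection();
                byte[] body = readAll(conn.getInputStream());
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });
            server.start();
        }

        private static byte[] readAll(InputStream in) throws IOException {
            java.io.ByteArrayOutputStream out = new java.io.ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) > 0; ) out.write(buf, 0, n);
            return out.toByteArray();
        }
    }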
: Hi, I am running into a weird rounding issue on Solr 5.2.1. I have a float
: field (also tried tfloat), I am indexing 154035.26 into it (confirmed in
: the data), but at query time, I get back 154035.27 (.01 more).
: Additionally when I query for the document and include this number in the q
:
The primary use case seems to require a SortStream. Ignoring the large join
for now...
1. search main collection with stream (a)
2. search other collection with stream (b)
3. hash join a & b (c)
4. full sort on c
5. aggregate c with reducer
6. apply user sort criteria with top
It's very likely tha
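One way those six steps might compose as a single expression; collection, field, and size values below are placeholders, not from this thread. Reading inside out: the two search() calls are steps 1 and 2, hashJoin is step 3, sort is step 4, reduce with group() is step 5, and the outer top applies the user's sort criteria (step 6).

    top(n=100,
        reduce(
            sort(
                hashJoin(
                    search(main, q="*:*", fl="id,userId,amount", sort="id asc", qt="/export"),
                    hashed=search(other, q="*:*", fl="userId,region", sort="userId asc", qt="/export"),
                    on="userId"),
                by="region asc"),
            by="region",
            group(sort="amount desc", n="10")),
        sort="amount desc")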
Thanks!
-----Original Message-----
From: Joel Bernstein [mailto:joels...@gmail.com]
Sent: 21 July 2016 19:51
To: solr-user@lucene.apache.org
Subject: Re: Reference to SolrCore from SearchComponent
There is a SolrCoreAware interface
There is a SolrCoreAware interface you can implement, which will provide
access to the SolrCore. From there you can add a closeHook to the core.
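A bare-bones sketch of that pattern; the class name and thread details are illustrative, not from the original component:

    import org.apache.solr.core.CloseHook;
    import org.apache.solr.core.SolrCore;
    import org.apache.solr.handler.component.ResponseBuilder;
    import org.apache.solr.handler.component.SearchComponent;
    import org.apache.solr.util.plugin.SolrCoreAware;

    public class ListUpdatingComponent extends SearchComponent implements SolrCoreAware {

        private volatile Thread updater; // illustrative long-running refresher

        @Override
        public void inform(SolrCore core) {
            updater = new Thread(() -> {
                try {
                    while (!Thread.currentThread().isInterrupted()) {
                        // refresh the list here, then wait for the next cycle
                        Thread.sleep(60_000L);
                    }
                } catch (InterruptedException e) {
                    // interrupted by the close hook below; fall through and exit
                }
            });
            updater.setDaemon(true);
            updater.start();

            // Runs when the core is shut down or reloaded.
            core.addCloseHook(new CloseHook() {
                @Override
                public void preClose(SolrCore c) {
                    updater.interrupt();
                }
                @Override
                public void postClose(SolrCore c) { }
            });
        }

        @Override public void prepare(ResponseBuilder rb) { }
        @Override public void process(ResponseBuilder rb) { }
        @Override public String getDescription() { return "list-updating component"; }
        @Override public String getSource() { return null; }
    }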
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jul 21, 2016 at 2:34 PM, Ellis, Tom (Financial Markets IT) <
tom.el...@lloydsbanking.com.invalid> w
Hi There,
I'm in the process of creating a custom SearchComponent. This component will
have a long running thread performing an action to keep a list updated. As
SearchComponents do not seem to have a destroy/close hook, I was wondering if
there is a way of getting a reference to the SolrCore t
I did a bit more investigating; here is something that may help with
troubleshooting:
- It seems that numbers above 131071 are impacted: 131071.26 is fine,
but 131072.26 is not. 131071 is a large prime and also a Mersenne prime.
- 131072.24 gets rounded down to 131072.23, while 131072.26 gets rounded up.
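The boundary is actually 131072 = 2^17 (that it sits next to a Mersenne prime is a coincidence): from 2^17 upward, a 32-bit float's spacing is 2^-6 = 0.015625, which is wider than 0.01, so most two-decimal values can no longer be stored exactly. A small Java demonstration:

    import java.math.BigDecimal;

    public class FloatRounding {
        public static void main(String[] args) {
            System.out.println(131071.26f);                 // 131071.26 -- still round-trips
            System.out.println(131072.26f);                 // 131072.27 -- nearest float wins
            System.out.println(154035.26f);                 // 154035.27
            // The value a float actually stores for 154035.26:
            System.out.println(new BigDecimal(154035.26f)); // 154035.265625
            // double carries 53 significand bits, which is why it fixes this case:
            System.out.println(154035.26d);                 // 154035.26
        }
    }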
Hi, I am running into a weird rounding issue on Solr 5.2.1. I have a float
field (also tried tfloat), I am indexing 154035.26 into it (confirmed in
the data), but at query time, I get back 154035.27 (.01 more).
Additionally when I query for the document and include this number in the q
parameter,
Hi Ahmet!
Thank you for that information. I was wondering whether dismax is kind of
"deprecated" or - if not - when would I use dismax in preference to edismax.
The documentation sounds to me like "edismax is dismax+: it does everything
dismax does, and more".
Chantal
On 21.07.2016 at 14:43, s
Hi all,
Got a Solr 5.2.1 installation. I am getting the following error response when
calling the TERMS component. Now the error is not the point; I know what is
going on in this instance. However, to address security concerns, I am trying
to have Solr truncate the stack trace in the response. Of
Are you getting this error from a test case you've set up or from a manual
call to the /stream handler?
Joel Bernstein
http://joelsolr.blogspot.com/
On Thu, Jul 21, 2016 at 12:28 PM, Timothy Potter
wrote:
> I'm working with 6.1.0 release and I have a single SolrCloud instance
> with 1 shard / 1
The tricky thing you have is a large join coupled with a reduce() group,
which have different sorts.
If you have a big enough cluster with enough workers, shards and replicas
you can make this work.
For example if you partition the large join across 30 workers, the hash
join would fit in the avai
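For illustration only, the partitioned version might look like the sketch below; the worker collection, zkHost, and key names are placeholders. The partitionKeys must match the join key so that matching tuples land on the same worker.

    parallel(workerCollection,
        hashJoin(
            search(main, q="*:*", fl="id,userId,amount", sort="userId asc",
                   partitionKeys="userId", qt="/export"),
            hashed=search(other, q="*:*", fl="userId,region", sort="userId asc",
                   partitionKeys="userId", qt="/export"),
            on="userId"),
        workers="30",
        zkHost="zk1:2181",
        sort="userId asc")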
I'm working with 6.1.0 release and I have a single SolrCloud instance
with 1 shard / 1 replica. Somehow I'm triggering this, which from what
I can see, means workers == 0, but how? Shouldn't workers default to 1?
I should mention that my streaming expression doesn't include any
workers, i.e. it is
Thanks Erick.
On 7/21/16 8:25 AM, Erick Erickson wrote:
bq: map index files so "reading from disk" will be as simple and quick
as reading from memory hence would not incur any significant
performance degradation.
Well, if
1> the read has already been done. First time a page of the file is
acces
Easy to do.
Glad it's working for you.
Erick
On Wed, Jul 20, 2016 at 6:02 PM, Joe Obernberger
wrote:
> Thank you Erick! I misread the webpage.
>
> -Joe
>
>
> On 7/20/2016 7:57 PM, Erick Erickson wrote:
>>
>> autoAddReplicas is _not_ specified in solr.xml. The things you can
>> change in
bq: map index files so "reading from disk" will be as simple and quick
as reading from memory hence would not incur any significant
performance degradation.
Well, if
1> the read has already been done. First time a page of the file is
accessed, it must be read from disk.
2> You have enough physical
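To illustrate the map-vs-read distinction in JDK terms (the file path is a placeholder): mapping is a cheap address-space operation, and physical reads happen lazily when a page is first touched.

    import java.io.RandomAccessFile;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MmapDemo {
        public static void main(String[] args) throws Exception {
            try (RandomAccessFile raf =
                     new RandomAccessFile("/path/to/index/segment.file", "r");
                 FileChannel ch = raf.getChannel()) {
                // "Mapped": reserves virtual address space only; no disk I/O yet.
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                // "Read": the first access to a page triggers a page fault and
                // the OS pulls that page from disk into the page cache.
                byte first = buf.get(0);
                System.out.println("first byte: " + first);
            }
        }
    }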
Solr 5.4.1 with embedded jetty with cloud enabled
We have a Solr deployment (approximately 3 million documents) with both
write and search operations happening. We have a requirement to have
updates available immediately (NRT). Configured with default
"solr.NRTCachingDirectoryFactory" for dire
Not that I know of, but it is an open source project so it's easy to extend.
On Jul 21, 2016 11:01 AM, "Darshan Pandya" wrote:
> Thanks Nick, once again.
> I was able to use Facet panel.
>
> I also wanted to ask the group if there is a repository of custom panels
> for Banana which we can benefit
I can see I may need to rethink some things. I have two joins: one is 1 to 1
(very large) and one is 1 to 0.03. A HashJoin may work on the smaller one.
The large join looks like it may not be possible. I could get away with
treating it as a filter somehow - I don't need the fields from the
documents
Thanks Nick, once again.
I was able to use Facet panel.
I also wanted to ask the group if there is a repository of custom panels
for Banana which we can benefit from?
Sincerely,
Darshan
On Wed, Jul 20, 2016 at 11:55 AM, Darshan Pandya
wrote:
> Nick, Thanks for your help. I'll test it out and
Hi,
I am getting the below error while converting JSON to my object. I am using the Gson
class (gson-2.2.4.jar) to generate JSON from an object and an object from JSON. The
Gson fromJson() method throws the below error.
Note: This was working fine with solr-solrj-5.2.0.jar, but it causes an issue when
I use solr-solrj-
A few other things for you to consider:
1) How big are the joins?
2) How fast do they need to go?
3) How many queries need to run concurrently?
#1 and #2 will dictate how many shards, replicas and parallel workers are
needed to perform the join. #3 needs to be carefully considered because
MapRedu
Hi,
If you want to disable operators altogether please use dismax instead of
edismax.
In dismax, only the + and - unary operators are supported, if I am not wrong.
I don't remember off-hand how quoted phrase queries are handled.
Ahmet
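Illustratively (handler defaults and field names below are placeholders):

    /select?defType=edismax&qf=title&q=red AND chillies
        edismax honors the full Lucene syntax, so AND acts as a boolean operator

    /select?defType=dismax&qf=title&q=red AND chillies
        dismax escapes AND into an ordinary search term

    /select?defType=dismax&qf=title&q=+red -hot "red chillies"
        only quotes and the unary + / - survive as syntax in dismax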
On Tuesday, July 19, 2016 8:29 PM, CA wrote:
Just for the r
Hi All -
Could you please help me with spellcheck on a multi-word phrase as a whole...
Scenario -
I have a problem with Solr spellcheck suggestions for multi-word phrases.
With the query for 'red chillies'
q=red+chillies&wt=xml&indent=true&spellcheck=true&spellcheck.extendedResults=true&spellcheck.c
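For whole-phrase suggestions, the collation parameters are the usual lever; a sketch against a default spellcheck setup:

    q=red+chillies&wt=xml&spellcheck=true
      &spellcheck.collate=true                  (request a rewritten whole query)
      &spellcheck.maxCollations=5               (how many rewrites to return)
      &spellcheck.collateExtendedResults=true   (include per-collation hit counts)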
Vasu -
Try q=field1:value1 OR (*:* -field1:[* TO *])
A pure negative clause only works at the top level / main query; otherwise you
need to start from *:* and subtract clauses.
Erik
> On Jul 21, 2016, at 6:44 AM, Vasu Y wrote:
>
> Hi,
> I want to query for all documents where field "field1" cont
Hi,
I want to query for all documents where field "field1" contains a value
"value1" or the document doesn't contain the field "field1" at all.
I tried the query: "field1:value1 OR -field1:[* TO *]" and it didn't return
anything.
When I try them separately they work fine; "field1:value1" or th
Hi.
I am installing a new SolrCloud cluster with Solr 6.1.0 on Debian 8 Jessie.
I am using Zookeeper from the official Debian repository (Zookeeperd
3.4.5+dfsg-2).
I configured and bootstrapped the new cluster, but I'm still getting the
"org.apache.solr.common.SolrException: Error processing the reques
Hi,
Sorry for the late reply. Yes, the above issue is related to
https://issues.apache.org/jira/browse/SOLR-9188. Could you please let me
know the workaround for this?
Thanks,
Shankar.
On Fri, Jul 8, 2016 at 7:13 PM, Aleš Gregor wrote:
> Hello,
>
> could this be related to https://issues.apache.or