Hi there
I ran into the same problem.
Would you please explain how you solved it?
Thanks,
Darx
On Fri, Aug 29, 2014 at 11:26 PM, Tommaso Teofili wrote:
> Hi,
>
> it'd be good if you could open a Jira issue (preferably with a patch)
> describing your findings.
>
> Thanks,
> Tommaso
>
>
I kind of think this might be "working as designed", but I'll be happy to
be corrected by others :)
We had a similar issue, which we discovered by accident: we had 2 or 3
collections spread across some machines, and we accidentally tried to send
an indexing request to a node in the cloud that didn'
Hi all,
with Solr 3.3.0 when indexing I get the following errors (sometimes):
org.apache.solr.common.SolrException log
org.apache.solr.common.SolrException: Invalid chunk header
Caused by: com.ctc.wstx.exc.WstxIOException: Invalid chunk header
Caused by: java.io.IOException: Invalid chunk header
Hi, we have a use case where we are trying to create multiple facet ranges
based on a single field.
I have successfully aliased the field by using the fl parameter e.g.
fl=date_decade:date,date_year:date,date_month:date,date_day:date where date is
the original field and the date_decade etc. are
I noticed that when you include a function as a result field, the
corresponding key in the result markup includes trailing whitespace,
which seems like a bug. I wonder if anyone knows if there is a ticket
for this already?
Example:
fl="id field(units_used) archive_id"
ends up returning resu
I have a situation where I need to get responses from two cores for a
single request.
I need to use a custom request handler to get the responses.
For example:
core1 and core2 both have a request handler named "/abc".
If I use them individually in the following way, I get proper results.
http:
> On Oct 28, 2014, at 9:31 AM, Shawn Heisey wrote:
>
> exceed a 15 second zkClientTimeout
Which is too low even with good GC settings. Anyone whose config still uses 15
or 10 seconds should move it to at least 30.
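For reference, one place a 30-second timeout can be set is the <solrcloud> section of the newer-style solr.xml (value in milliseconds); this is a sketch, and your solr.xml layout may differ, or the value may instead come from a -DzkClientTimeout system property:

```xml
<solr>
  <solrcloud>
    <!-- ZooKeeper session timeout in ms; 30s is a safer floor than 10-15s -->
    <int name="zkClientTimeout">30000</int>
  </solrcloud>
</solr>
```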
- Mark
http://about.me/markrmiller
It is indeed possible. Just need to use a different syntax. As far as I know,
the facet parameters need to be local parameters, like this...
&facet.range={!key=date_decade facet.range.start=1600-01-01T00:00:00Z
facet.range.end=2000-01-01T00:00:00Z
facet.range.gap=%2B10YEARS}date&facet.range={!k
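To make the pattern explicit (the second range is cut off above), a request with two keyed ranges over the same field might look like the sketch below; the key name and bounds of the second range are illustrative, not taken from the original mail, and each parameter would be a single unbroken line in a real URL:

```
&facet=true
&facet.range={!key=date_decade facet.range.start=1600-01-01T00:00:00Z
    facet.range.end=2000-01-01T00:00:00Z facet.range.gap=%2B10YEARS}date
&facet.range={!key=date_year facet.range.start=1990-01-01T00:00:00Z
    facet.range.end=2000-01-01T00:00:00Z facet.range.gap=%2B1YEAR}date
```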
On 10/29/2014 3:09 AM, Diego Marconato wrote:
> with Solr 3.3.0 when indexing I get the following errors (sometimes):
>
> org.apache.solr.common.SolrException log
> org.apache.solr.common.SolrException: Invalid chunk header
> Caused by: com.ctc.wstx.exc.WstxIOException: Invalid chunk header
> Caus
You can't AFAIK. Solr treats these cores as completely
separate entities. Plus, scores across the separate
cores cannot be assumed to be comparable. In fact,
there's not even any guarantee that the two
cores have _any_ fields in common, so this isn't
something that can be solved generally.
The app
Hi,
Is there a way to clear the solr admin interface logging page's logs?
I understand that we can change the logging level, but what if I just want
to clear the logs, then, say, reload the collection and expect to see only
the latest entries and not the past ones?
Is there a manual way, or anywhere that I should clear so th
: fl="id field(units_used) archive_id"
I didn't even realize until today that fl was documented to support space
separated fields. I've only ever used commas...
fl="id,field(units_used),archive_id"
Please go ahead and file a bug in Jira for this, and note in the summary
that using commas i
So I have a few titles like so:
1. When a dog bites fight back : what you need to know, what to do, what not
to do / [prepared by the law firm] Slater & Zurz LLP.
2. First things first [book on cd] : [the rules of being a Warner-- what
works, what doesn't and what really matters most] / Kurt & Bre
I am observing some weird behavior with how Solr is using memory. We are
running both Solr and zookeeper on the same node. We tested memory
settings on Solr Cloud Setup of 1 shard with 146GB index size, and 2 Shard
Solr setup with 44GB index size. Both are running on similar beefy
machines.
Af
The first thing is to add &debug=query to the URL and see what the parsed
form of the query is, to be sure the stop words issue is resolved.
Once that's determined, add the phrase with a high boost, something like
q=title:(what if) OR title:"what if"^10
where the boost factor is TBD.
Or add the title fi
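Putting the two suggestions above together, the full request might look something like this sketch (host, port, and collection name are placeholders, and the query string would need URL-encoding in practice):

```
http://localhost:8983/solr/collection1/select?q=title:(what if) OR title:"what if"^10&debug=query
```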
On 10/29/2014 11:43 AM, Vijay Kokatnur wrote:
> I am observing some weird behavior with how Solr is using memory. We are
> running both Solr and zookeeper on the same node. We tested memory
> settings on Solr Cloud Setup of 1 shard with 146GB index size, and 2 Shard
> Solr setup with 44GB index s
Vijay Kokatnur [kokatnur.vi...@gmail.com] wrote:
> For the Solr Cloud setup, we are running a cron job with following command
> to clear out the inactive memory. It is working as expected. Even though
> the index size of Cloud is 146GB, the used memory is always below 55GB.
> Our response times
Yes, sure. If you use the Jetty container to run Solr, you can remove the
solr.log file from
$SOLR_HOME/example/logs
with this command on Linux/Unix:
rm -f $SOLR_HOME/example/logs/solr.log
On Windows:
DEL %SOLR_HOME%\example\logs\solr.log
After that, you can check the logging interface.
--
This does look like it would be a nice & simple addition to the web
interface, though.
- Original Message -
From: "Ramzi Alqrainy"
To: solr-user@lucene.apache.org
Sent: Wednesday, October 29, 2014 3:18:26 PM
Subject: Re: Clear Solr Admin Interface Logging page's logs
Yes sure, if you use jetty containe
What exactly does this API do?
--Pritesh
Check this out:
http://www.slideshare.net/cloudera/solrhadoopbigdatasearch
On 10/29/14 16:31, Pritesh Patel wrote:
What exactly does this API do?
--Pritesh
Hi Solr User List,
I have started using Solrj (Solr and Solrj 4.1.0, and also 4.10.1) for sending
indexing/update requests to Solr server that is being hosted inside Tomcat, and
the security authentication HTTP BASIC auth is enabled in this Solr server
web.xml.
(1) The client code looks like b
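For context, enabling HTTP BASIC auth in a Solr web.xml typically involves a fragment along these lines; the web-resource-name, role-name, and realm-name below are illustrative, not taken from the poster's actual config:

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Solr</web-resource-name>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <!-- role name is container-specific; illustrative only -->
    <role-name>solr-user</role-name>
  </auth-constraint>
</security-constraint>
<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Solr</realm-name>
</login-config>
```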
I've inherited some code that filters requests for ACL by implementing a
servlet Filter and wrapping the request to add parameters (user/groups) to
the request as fq, and also handling getParameter()/getParameterMap() that
Solr invokes to get those values from the request.
Solrconfig.xml has place
Hmmm, my first question is whether you really mean 4.1.0 or 4.10?
Because if it's the former, I really have to ask why you'd use such
an old version. I'm assuming that's a typo.
BTW, 4.10.2 is being released as we speak, so you'll really want to
consider using that version assuming you meant 4
No, I mean 4.1.0, not 4.10, although my ultimate goal is to get to 4.10. (And
now 4.10.2 as you suggest!)
I tried 4.0->4.10 first, ran into this issue, and decided to go one step at
a time and try going from 4.0->4.1.
--
View this message in context:
http://lucene.472066.n3.nabble.com/v4-0-up
This command only touches OS level caches that hold pages destined for (or
not) the swap cache. Its use means that disk will be hit on future requests,
but in many instances the pages were headed for ejection anyway.
It does not have anything whatsoever to do with Solr caches. It also is not
frag
Hi - I'm trying to use 4.10.1 with /export. I've defined a field as
follows:
I then call:
http://server:port/solr/COLLECT1/export?q=Collection:COLLECT2000&sort=DocumentId
desc&fl=DocumentId
The error I receive is:
java.io.IOException: DocumentId must have DocValues to use this feature.
at
org.a
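The /export handler in 4.10 requires docValues on every field used in sort or fl, so the schema.xml field definition would need something like the sketch below (the field type here is an assumption, since the actual definition isn't visible in the mail, and the collection must be reindexed after the change for docValues to take effect):

```xml
<field name="DocumentId" type="string" indexed="true" stored="true" docValues="true"/>
```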
Oops. My wording was poor. My reference to those who don't research the
matter was pointing at a large number of engineers I have worked with; not
this list.
-Original Message-
From: Will Martin [mailto:wmartin...@gmail.com]
Sent: Wednesday, October 29, 2014 6:38 PM
To: 'solr-user@lucene.
Hi Erick,
Thanks for your kind reply.
In order to deal with more documents in SolrCloud, we are thinking of using
many collections, each of which will also have several shards.
The basic idea for dealing with so many documents is that when a collection
is filled with data, we will create a new col
On 10/29/2014 1:05 PM, Toke Eskildsen wrote:
> We did have some problems on a 256GB machine churning terabytes of data
> through 40 concurrent Tika processes and into Solr. After some days,
> performance got really bad. When we did a top, we noticed that most of the
> time was used in the kernel
OK, I opened SOLR-6672; not sure how I stumbled into using white space;
I would ordinarily use commas too, I think.
-Mike
On 10/29/14 1:23 PM, Chris Hostetter wrote:
: fl="id field(units_used) archive_id"
I didn't even realize until today that fl was documented to support space
separated fiel
Hi guys
I was wondering: is there some smart way to migrate a Solr cloud from one
set of machines to another?
Specifically, I have 2 cores, each of them with 2 replicas and 2 shards,
spread across 4 machines.
We bought new HW and are in a process of moving to new 4 machines.
What are my option
Hi/Bok Jakov,
2) sounds good to me. It means no down-time. 1) means stoppage. If
stoppage is not OK, but falling behind with indexing new content is OK, you
could:
* add a new cluster
* start reading from old index and indexing into the new index
* stop old cluster when done
* index new content