ZOMG thanks! This makes the mailing list almost tolerable, in an
only-15-years-behind kind of way instead of a 25-years-behind kind of way.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Mailing-list-subscriptions-tp4294026p4294098.html
Sent from the Solr - User mailing list archive at Nabble.com.
g of documents?
Thanks for the reply.
-Brent
Is there a way to tell whether or not a node at a specific address is up
using a SolrJ API?
explicit
3
750
text
The Solr queries have the following params:
rows=30&qt=search&dismax=true&mm=&fl=&fq=:()&q=
There are no explicit commits happening.
Any
Thanks for the reply.
The overhead you describe is what I suspected; I was just surprised that,
if DSE is able to keep that overhead small enough that the overall result
is faster with the extra hardware, Solr doesn't also benefit.
I did try with RF=2 and shards=1, and yep, it's way fast. Really ni
I'm getting periodic errors when adding documents from a Java client app.
It's always a sequence of an error message logged in CloudSolrClient, then
an exception thrown from
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143):
Error message:
[ERROR][i
Whenever I create a collection that's replicated, I get these warnings in the
follower Solr log:
WARN [c:test_collection s:shard1 r:core_node1
x:test_collection_shard1_replica1] o.a.s.u.PeerSync no frame of reference to
tell if we've missed updates
Are these harmless?
Okay, I'll try to figure out how to make using the latest version feasible... my
situation is that I need to be able to talk to both DSE and Solr from the
same application, and 5.4.1 is the latest version that works with DSE.
But this issue appears to be stemming from the Solr side. The client logs
In my solrconfig.xml, I have:
30
expire_at
and in my schema, I have:
If I change it to indexed="false", will it still work? If so, is there any
benefit to having the field indexed if I'm not using it in any way except to
allow the expiration processo
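For reference, the stray "30" and "expire_at" values quoted above look like the stripped body of the expiration chain; a minimal sketch, assuming those values map to autoDeletePeriodSeconds and expirationFieldName (the chain name and the surrounding processors here are illustrative, not taken from the original config):

```xml
<updateRequestProcessorChain name="expire" default="true">
  <processor class="solr.processor.DocExpirationUpdateProcessorFactory">
    <!-- assumed mapping of the "30" value quoted above -->
    <int name="autoDeletePeriodSeconds">30</int>
    <!-- assumed mapping of the "expire_at" value quoted above -->
    <str name="expirationFieldName">expire_at</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```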
Thanks for the reply.
Follow up:
Do I need to have the field stored? While I don't need to ever look at the
field's original contents, I'm guessing that the
DocExpirationUpdateProcessorFactory does, so that would mean I need to have
stored=true as well, correct?
I've got a DocExpirationUpdateProcessorFactory configured to periodically
remove expired documents from the Solr index, which is working in that the
documents no longer show up in queries once they've reached expiration date.
But the index size isn't reduced when they expire, and I'm wondering if i
I know that in the sample config sets, the _version_ field is indexed and not
stored, like so:
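(The field definition itself was stripped by the archive; per this message's description it would read roughly:)

```xml
<field name="_version_" type="long" indexed="true" stored="false"/>
```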
Is there any reason it needs to be indexed? I'm able to create collections
and use them with it not indexed, but I wonder if it negatively impacts
performance.
Is there a difference between an "on deck" searcher and a warming searcher?
From what I've read, they sound like the same thing.
Hmmm, conflicting answers. Given the infamous "PERFORMANCE WARNING:
Overlapping onDeckSearchers" log message, it seems like the "they're the
same" answer is probably correct, because shouldn't there only be one active
searcher at a time?
Although it makes me curious, if there's a warning about hav
I'm using Solr Cloud 6.1.0, and my client application is using SolrJ 6.1.0.
Using this Solr config, I get none of the dreaded "PERFORMANCE WARNING:
Overlapping onDeckSearchers=2" log messages:
https://dl.dropboxusercontent.com/u/49733981/solrconfig-no_warnings.xml
However, I start getting them fr
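For context, the solrconfig.xml settings that interact with this warning are the warming-searcher cap and the commit policy; a hedged sketch with illustrative values (not taken from the linked configs):

```xml
<!-- how many warming searchers may exist at once before the warning fires -->
<maxWarmingSearchers>2</maxWarmingSearchers>
<autoCommit>
  <maxTime>15000</maxTime>
  <!-- opening a searcher on every commit is what causes the overlap -->
  <openSearcher>false</openSearcher>
</autoCommit>
```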
Okay, I created a JIRA ticket
(https://issues.apache.org/jira/servicedesk/agent/SOLR/issue/SOLR-9841) and
will work on a patch.
Chris Hostetter wrote
> If you are seeing an increase in "Overlapping onDeckSearchers" when using
> DocExpirationUpdateProcessorFactory, it's because you actually have docs
> expiring quite frequently relative to the autoDeletePeriodSeconds and
> the amount of time needed to warm each of the n
each collection
(like jetty configs, logging properties, etc)?
How much impact do the directives in the solrconfig have on
performance? Do they only take effect if I have something configured that
requires them, and therefore if I'm missing one that I need, I'd get an
error if it's not defined?
Any help will be greatly appreciated. Thanks!
-Brent
Is there any way to set a timeout with a CloudSolrClient?
Is there a way to subscribe to just responses to a question I ask on the
mailing list, without getting emails for all activity on the mailing list?
You'd think it'd be designed in a way that when someone submits a question,
they automatically get emails for any responses.
Using SolrJ, I'm trying to figure out how to include request parameters
when adding a document with CloudSolrClient.add().
Here is what I was doing when using HttpSolrClient instead of
CloudSolrClient:
HttpSolrClient client = new HttpSolrClient.Builder("http://hostname.com:8983/solr/corename")
I'm running Solr Cloud 6.1.0, with a Java client using SolrJ 5.4.1.
Every once in a while, during a query, I get a pair of messages logged in
the client from CloudSolrClient -- an error about a request failing, then a
warning saying that it's retrying after a stale state error.
For this test, the
ce it...
Thanks,
Brent
"set" option for atomic
> update is only used when you wish to selectively update only some of the
> fields for a document, and that does require that the update log be enabled
> using .
>
> -- Jack Krupansky
>
> -----Original Message----- From: Brent Ryan
> Sent: F
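The element name in Jack's reply was stripped by the archive; enabling the update log in solrconfig.xml looks roughly like this (per the sample configs):

```xml
<updateLog>
  <str name="dir">${solr.ulog.dir:}</str>
</updateLog>
```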
My schema is pretty simple and has a string field called solr_id as my
unique key. Once I get back to my computer I'll send some more details.
Brent
On Friday, October 18, 2013, Shawn Heisey wrote:
> On 10/18/2013 2:59 PM, Brent Ryan wrote:
>
>> How do I replace a document in
rj ... Anyways,
I've contacted support so let's see what they say.
On Fri, Oct 18, 2013 at 5:51 PM, Shawn Heisey wrote:
> On 10/18/2013 3:36 PM, Brent Ryan wrote:
>
>> My schema is pretty simple and has a string field called solr_id as my
>> unique key. Once I get back to m
DataStax, either as an
> official support ticket or as a question on StackOverflow.
>
> But, I do think the previous answer of avoiding the use of a Map object in
> your document is likely to be the solution.
>
>
> -- Jack Krupansky
>
> -----Original Message----- From: Brent R
I'm prototyping a search product for us and I was trying to use the
"commitWithin" parameter for posting updated JSON documents like so:
curl -v
'http://localhost:8983/solr/proposal.solr/update/json?commitWithin=1'
--data-binary @rfp.json -H 'Content-type:application/json'
However, the com
orks fine.
>
>Can you reproduce your problem using the standard Solr example in Solr
>4.4?
>
>-- Jack Krupansky
>
>From: Ryan, Brent
>Sent: Thursday, September 05, 2013 10:39 AM
>To: solr-user@lucene.apache.org
>Subject: JSON update request handler & commitWithin
>
>
Has anyone ever hit this when adding documents to SOLR? What does it mean?
ERROR [http-8983-6] 2013-09-06 10:09:32,700 SolrException.java (line 108)
org.apache.solr.common.SolrException: Invalid CRLF
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:175)
at
org.apache.solr.handle
Thanks. I realized there's an error in the ChunkedInputFilter...
I'm not sure if this means there's a bug in the client library I'm using
(solrj 4.3) or a bug in the server (SOLR 4.3)? Or is there something in
my data that's causing the issue?
On Fri, Sep 6, 2013 at 1:02 PM, Chris Hostetter w
For what it's worth... I just updated to solrj 4.4 (even though my server
is solr 4.3) and it seems to have fixed the issue.
Thanks for the help!
On Fri, Sep 6, 2013 at 1:41 PM, Chris Hostetter wrote:
>
> : I'm not sure if this means there's a bug in the client library I'm using
> : (solrj 4.3)
is and if you look at debug output you see http header
of Connection: Close
Keep-Alive requests:0
Any ideas? I'm seeing the same behavior when using http client.
Thanks,
Brent
thanks guys. I saw this other post with curl and verified it working.
I've also used apache bench for a bunch of stuff and keep-alive works fine
with things like Java netty.io servers ... strange that tomcat isn't
respecting the http protocol or headers
There must be a bug in this version o
We ran into 1 snag during development with SOLR and I thought I'd run it by
anyone to see if they had any slick ways to solve this issue.
Basically, we're performing a SOLR query with grouping and want to be able
to sort by the number of documents found within each group.
Our query response from
ya, that's the problem... you can't sort by "numFound" and it's not
feasible to do the sort on the client because the grouped result set is too
large.
Brent
On Wed, Sep 25, 2013 at 6:09 AM, Erick Erickson wrote:
> Hmmm, just specifying &sort= is _almost_ what yo
I'm not really sure how to title this but here's what I'm trying to do.
I have a query that creates a rather large dictionary of codes that are shared
across multiple fields of a base entity. I'm using the
cachedsqlentityprocessor but I was curious if there was a way to join this
multiple time
SOLR-2382 you can specify your own SortedMapBackedCache
subclass which is able to share your Dictionary.
Regards
On Tue, Dec 6, 2011 at 12:26 AM, Brent Mills wrote:
> I'm not really sure how to title this but here's what I'm trying to do.
>
> I have a query that cr
used to create the EgranaryIndexReader.
So the second questions is: Does anybody have other ideas about how we
might solve this problem? Is distributed search still our best bet?
Thanks for your thoughts!
Brent
,
Brent
On 9/28/2010 5:40 PM, Jonathan Rochkind wrote:
Honestly, I think just putting everything in the same index is your best bet. Are you sure your
"particular needs of your project" can't be served by one combined index? You can certainly still
query on just a portion of
This is an issue we've only been running into lately so I'm not sure what to
make of it. We have 2 cores on a solr machine right now, one of them is about
10k documents, the other is about 1.5mil. None of the documents are very
large, only about 30 short attributes. We also have about 10 requ
I've read some things in jira on the new functionality that was put into
caching in the DIH but I wouldn't think it should break the old behavior. It
doesn't look as though any errors are being thrown, it's just ignoring the
caching part and opening a ton of connections. Also I cannot find any
previous build of 4.0 so it
looks like you fixed it with one of those patches. Thanks for all your work on
the DIH, the caching improvements are a big help with some of the things we
will be rolling out in production soon.
-Brent
-----Original Message-----
From: Dyer, James [mailto:james.d
We're having an issue when we add or change a field in the db-data-config.xml
and schema.xml files in solr. Basically whenever I add something new to index
I add it to the database, then the data config, then add the field to the
schema to index, reload the core, and do a full import. This has
Thanks for the explanation and bug report Robert!
-----Original Message-----
From: Robert Muir [mailto:rcm...@gmail.com]
Sent: Monday, July 09, 2012 3:18 PM
To: solr-user@lucene.apache.org
Subject: Re: problem adding new fields in DIH
Thanks again for reporting this Brent. I opened a JIRA issue
should be noted that we would only be adding documents to one of the
indexes. I can give more info about the context of this application if
necessary.
Thank you for any suggestions!
--
Brent Palmer
Widernet.org
University of Iowa
319-335-2200
asons that make this a bad idea.
Brent
Chris Hostetter wrote:
If you've been using a MultiSearcher to query multiple *remote* searchers,
then Distributed searching in solr should be appropriate.
if you're used to using MultiSearcher as a way of aggregating from
multiple
Thanks Hoss. I haven't had time to try it yet, but that is exactly the
kind of help I was looking for.
Brent
Chris Hostetter wrote:
: As for the second part, I was thinking of trying to replace the standard
: SolrIndexSearcher with one that employs a MultiSearcher. But I'
s, that it will do what I want...
Many thanks!
Brent
project, but I suspect I'll take advantage of some of the other
bits in the future!
cheers!
Brent
have other hints, I'm eager to
know what they are.
I've recently used solr's FunctionQuery to do just this with great
success. Take a look at FunctionQuery and ReciprocalFloatFunction.
If I have some time later today, I'll put together a simple example.
cheers!
Brent