I'm using SOLR 4 for an application where I need to search the index soon
after inserting records.
I'm using the solrj code below to get the last ID in the index. However, I
noticed that the last id I see when I execute a query through the solr web
admin is often lagging behind this. And that my
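The code itself got cut off above; it is essentially the following sketch
(the "id" field, URL, and query details are placeholder assumptions):

  HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
  SolrQuery query = new SolrQuery("*:*");
  query.setRows(1);
  query.setSort("id", SolrQuery.ORDER.desc); // highest id first
  QueryResponse rsp = server.query(query);
  String lastId = (String) rsp.getResults().get(0).getFieldValue("id");

Note that newly added documents only become visible to searches after a
commit (or soft commit) opens a new searcher, which is the usual cause of
one view lagging behind another.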
I have a multi-threaded application using solrj 4. There are at most 25
threads. Each thread creates a connection using HttpSolrServer, and runs one
query. Most of the time this works just fine. But occasionally I get the
following exception:
Jan 10, 2013 9:29:07 AM
org.apache.http.impl.
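For reference, the fix usually suggested for this pattern is to share one
client rather than creating one per thread, since HttpSolrServer is
thread-safe - a sketch (URL and query are placeholders):

  // One client instance shared by all 25 threads, instead of one per thread.
  static final HttpSolrServer SHARED =
      new HttpSolrServer("http://localhost:8983/solr/collection1");

  // Inside each worker thread:
  QueryResponse rsp = SHARED.query(new SolrQuery("field:value"));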
I have a multi-threaded application in solrj 4. The threads (max 25) share
one connection to HttpSolrServer. Each thread is running one query. This
worked fine for a while, until it finally crashed with the following
messages:
Jan 12, 2013 12:52:15 PM org.apache.http.impl.client.DefaultRequestDir
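When many threads share one HttpSolrServer, the knobs commonly raised are
the underlying HttpClient pool limits - a sketch (the numbers are guesses,
not verified values):

  HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
  // The pooled connection manager defaults are low for 25 concurrent threads.
  server.setMaxTotalConnections(100);          // across all hosts
  server.setDefaultMaxConnectionsPerHost(50);  // per Solr host
  server.setConnectionTimeout(5000);           // ms to establish a connection
  server.setSoTimeout(30000);                  // ms socket read timeout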
Hi,
I'm using the solrj API to query my SOLR 3.6 index. I have multiple text
fields, which I would like to weight differently. From what I've read, I
should be able to do this using the dismax or edismax query types. I've
tried the following:
SolrQuery query = new SolrQuery();
query.setQuery( "ti
Thanks Kuli. I tried this, but then it only returns hits for the query in the
title field. I managed to get this to work by making edismax the default query
type in the request handler in solrconfig.xml. This is still a bit of a
hack, since I can't select different query types from solrj. If I add a
Thanks Ryan. I created a second requestHandler, which works fine in the
browser.
In solrj, how do I tell the SolrQuery which request handler to use? It
always seems to default to another requestHandler.
I finally figured this out. The answer is here (see my comment to the
answer):
http://stackoverflow.com/questions/10324969/boosting-fields-in-solr-using-solrj
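To spare others the click, the working combination was roughly this sketch
(handler name and field names are placeholders):

  SolrQuery query = new SolrQuery();
  query.setQuery("ipod");
  query.set("defType", "edismax");
  query.set("qf", "title^10.0 description^2.0");
  // Point SolrJ at a specific request handler (sets the qt parameter):
  query.setRequestHandler("/myHandler");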
I have an instance of SOLR 3.6 running, with JSON as the default
updateHandler.
I am able to delete individual documents with the following:
curl "http://myURL/update?commit=true"; -H
'Content-type:application/json' -d '{"delete": {"id":"1730887464"}}'
What is the right way to delete a rang
Thanks for the response Jack! That wasn't exactly right, but the following
modification does work:
curl "http://myURL/update?commit=true"; -H 'Content-type:application/json' -d
'{"delete": {"query":"id:[0 TO 1730887464]"}}'
tory class
[org.apache.solr.handler.component.HttpShardHandlerFactory]: Unable to
build KeyStore from file: null"
I don't really see any changes from 5 to 6 that would cause this. Any clues? Here
is the code:
https://github.com/healthonnet/hon-lucene-synonyms/tree/solr-6.0.0
Thanks for the help,
Joe Lawson
Check for example tests here too:
https://github.com/apache/lucene-solr/tree/master/solr/core/src/test/org/apache/solr
On Mon, Apr 11, 2016 at 12:24 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Please use MiniSolrCloudCluster instead of EmbeddedSolrServer for
> unit/integration te
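A minimal sketch of that pattern via SolrCloudTestCase, which manages a
MiniSolrCloudCluster for you (configset name and path are placeholders):

  import java.nio.file.Paths;
  import org.apache.solr.cloud.SolrCloudTestCase;
  import org.junit.BeforeClass;
  import org.junit.Test;

  public class MyCloudTest extends SolrCloudTestCase {

    @BeforeClass
    public static void setupCluster() throws Exception {
      // Starts a 2-node embedded SolrCloud with its own ZooKeeper.
      configureCluster(2)
          .addConfig("conf", Paths.get("src/test/resources/configsets/conf"))
          .configure();
    }

    @Test
    public void clusterStarts() throws Exception {
      // "cluster" is the MiniSolrCloudCluster started above.
      assertEquals(2, cluster.getJettySolrRunners().size());
    }
  }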
Thanks for the insight. I figured that it was something like that and
perhaps I had thread contention on a resource that wasn't really thread
safe.
I'll give your suggestions a shot tomorrow.
Regards,
Joe Lawson
On Apr 11, 2016 8:24 PM, "Chris Hostetter" wrote:
>
>
, 2016 at 8:45 PM, Chris Hostetter
wrote:
>
> https://issues.apache.org/jira/browse/SOLR-8970
> https://issues.apache.org/jira/browse/SOLR-8971
>
> : Date: Mon, 11 Apr 2016 20:35:22 -0400
> : From: Joe Lawson
> : Reply-To: solr-user@lucene.apache.org
> : To: solr-user@lucen
t;
> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> ~[na:1.8.0_92]
>
> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> ~[na:1.8.0_92]
>
> at java.lang.Thread.run(Thread.java:745) ~[na:1.8.0_92]
>
>
Any help or suggestions would be appreciated.
Cheers,
Joe Lawson
This appears to be a bug that'll be fixed in 6.1:
https://issues.apache.org/jira/browse/SOLR-7729
On Fri, Apr 22, 2016 at 8:07 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Joe this might be _version_ as in Solr's optimistic concurrency used in
> atomic
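(For context, a sketch of how _version_ drives optimistic concurrency in an
atomic update - field names, version value, and zkHost are illustrative:)

  import java.util.Collections;
  import org.apache.solr.client.solrj.impl.CloudSolrClient;
  import org.apache.solr.common.SolrInputDocument;

  CloudSolrClient client = new CloudSolrClient("zkhost:2181"); // 6.0-era constructor
  client.setDefaultCollection("myCollection");

  SolrInputDocument doc = new SolrInputDocument();
  doc.addField("id", "doc1");
  doc.addField("_version_", 1234567890L); // rejected if the stored version differs
  doc.addField("myField", Collections.singletonMap("set", "new value")); // atomic set
  client.add(doc);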
Yes, they are both 6.0.
On Apr 25, 2016 1:07 PM, "Anshum Gupta" wrote:
> Hi Joe,
>
> Can you confirm if the version of Solr and SolrJ are in sync ?
>
> On Mon, Apr 25, 2016 at 10:05 AM, Joe Lawson <
> jlaw...@opensourceconnections.com> wrote:
>
> > This
bc/def/123 and /abc/def/456 but not index
/abc/def/789 and /abc/def/xyz etc.
We can currently index all the files under /abc/def. That works fine, but
I can't figure out how to exclude entire subdirectories that have the
same file types in them as the directories that we do want to index.
Hi Epo,
We aren't using ZooKeeper or the SolrCloud stuff on Docker yet, but it looks
like Vincenzo was using three ZK containers, each with a different port.
Sincerely,
Joe Lawson
On Wed, Sep 23, 2015 at 1:28 PM, Epo Jemba wrote:
> Hi Doug,
>
> thank you for your git repo. I
We get to run commands like "docker run solr" and have Solr working!
Containers make new application deployments a breeze.
On Wed, Sep 23, 2015 at 4:35 PM, Ugo Matrangolo
wrote:
> Hi,
>
> just curious: what do you get by running Solr in a Docker container?
>
> Best
> Ugo
>
> On Wed, Sep 23, 2015
Formation template.
>
> Best
> Ugo
>
>
> On Wed, Sep 23, 2015 at 10:01 PM, Joe Lawson <
> jlaw...@opensourceconnections.com> wrote:
>
> > We get to run commands like "docker run solr" and have Solr working!
> >
> > Containers make new application de
The docs are out of date for the synonym_edismax but it does work. Check
out the tests for working examples. I'll try to update it soon. I've run
the plugin on Solr 5 and 6, SolrCloud and standalone. For running in
SolrCloud, make sure you follow
https://cwiki.apache.org/confluence/display/solr/Addi
"John Bickerstaff" wrote:
> @Joe:
>
> Is it possible that the jar's package name does not match the entry in the
> sample solrconfig.xml file?
>
> The solrconfig.xml example file in the test directory contains the
>
I mean the 5.0 namespace is different from 2.0, not 3.0.
On Jun 1, 2016 5:43 PM, "Joe Lawson"
wrote:
2.0 is different from 3.0, so check the test config that is associated with
the 2.0 release, i.e.
https://github.com/healthonnet/hon-lucene-syn
fig to use esp
when I linked the latest 5.0.4 test config prior.
You can get the older jars from the links off the readme.md.
On Jun 1, 2016 6:14 PM, "Shawn Heisey" wrote:
On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> @Joe:
>
> Is it possible that the jar's package name
com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin
The features are the same for all versions.
Hope this clears things up.
-Joe
On Jun 1, 2016 8:11 PM, "John Bickerstaff" wrote:
> Just to be clear, I got version 2.0 of the jar from github... should I be
> looking for something in a maven repository? A bit confused at th
Flume and Logstash can both ship to Solr.
On Jun 5, 2016 2:11 PM, "Otis Gospodnetic"
wrote:
> You can ship SOLR logs to Logsene or any other log management service and
> not worry too much about their storage/size.
>
> Otis
>
> > On Jun 5, 2016, at 02:08, Anil wrote:
> >
> > Hi ,
> >
> > i would
Mary Jo.
It appears to be working correctly, but you have a very complex query going
on, so it can be confusing. Assuming you are using the queryParser as
provided in the examples, your query would look like "+sbc" when it enters the
queryParser and would look like "+((sbc)^2.0 (sb)^0.5 (small block)^0.5
>
> Advice: make sure in the schema that none of the fields you are running
> queries against do any complex query operations; especially make sure they
> aren't doing additional synonym resolution against the same file.
>
BTW, I'd do this first before messing with MM.
the help!
>
> Mary Jo
>
> On Mon, Jun 6, 2016 at 4:57 PM, Joe Lawson <
> jlaw...@opensourceconnections.com> wro
t;
> >
> > On Mon, Jun 6, 2016 at 9:39 PM, MaryJo Sminkey
> > wrote:
> >
> &g
I'm sorry I wasn't more specific; I meant we were hijacking the thread with
the question, "Anyone used a different method of
handling multi-term synonyms that isn't as global?" as the original thread
was about getting synonym_edismax running.
On Tue, Jun 7, 2016 at 2:24 PM, MaryJo Sminkey wrote:
ore precise while HLS is more flexible.
-Joe
FYI it's released
On Jun 16, 2016 11:06 AM, "Steve Rowe" wrote:
> Tomorrow-ish.
>
> --
> Steve
> www.lucidworks.com
>
> > On Jun 16, 2016, at 4:14 AM, Ramesh shankar wrote:
> >
> > Hi,
> >
> > Yes, I used the solr-6.1.0-79 nightly builds and [subquery] transformer
> is
> > working fine in, any i
FYI everyone, I've updated the README.md to be fully up to date for Solr
6.0 and the latest plugin release.
https://github.com/healthonnet/hon-lucene-synonyms/blob/master/README.md
On Fri, Jun 17, 2016 at 2:34 PM, MaryJo Sminkey wrote:
> > OK - Slapping forehead now... D'oh!
> >
> > 1.2 >
> > Fl
ing the field's query-time analyzer
> > - Create an OR query with the tokens that come out of the analysis
> >
> > You can look at the field query parser as something of a starting point
> for
> > this.
> >
> > I usually do this in the context of a boost q
Hi - I'm not sure how to enable autoAddReplicas to be true for
collections. According to here:
https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS
it is specified in solr.xml, but I tried adding:
true
and that results in an error. What am I doing wrong?
Thanks!
-Joe
Thank you Erick! I misread the webpage.
-Joe
On 7/20/2016 7:57 PM, Erick Erickson wrote:
autoAddReplicas is _not_ specified in solr.xml. The things you can
change in solr.xml are some of the properties used in dealing with
collections _created_ with autoAddReplicas. See the CREATE action
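(A sketch of setting it at CREATE time from SolrJ, using the 6.x
setter-style API; names and zkHost are placeholders:)

  import org.apache.solr.client.solrj.impl.CloudSolrClient;
  import org.apache.solr.client.solrj.request.CollectionAdminRequest;

  CloudSolrClient client = new CloudSolrClient("zkhost:2181/solr");
  CollectionAdminRequest.Create create = new CollectionAdminRequest.Create();
  create.setCollectionName("myCollection");
  create.setConfigName("myConfig");
  create.setNumShards(2);
  create.setReplicationFactor(2);
  create.setAutoAddReplicas(true); // set on the CREATE action, not in solr.xml
  create.process(client);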
have
> heard rumors occasionally about someone, also Lucene, has been working on a
> port to other languages?
>
> --
> Best regards,
>
> Eirik
>
--
-Joe
with a .NET/C# lib is a wrapper for the REST API.
> >
> >
> >
> > On 16 August 2016 at 09:08, Joe Lawson opensourceconnections.com>
> > wrote:
> >
> >> All I have seen is SolrNET, forks of SolrNET and people using RestSharp.
> >>
> >>
On Tue, Aug 16, 2016 at 12:24 PM, GW wrote:
> Interesting, I managed to do Solr SQL
>
> It is true that pretty much all operations still work by calling a
collection API directly. The benefits I'm referring to are dynamic cluster
state discovery, routing of requests automatically based on the sta
gracefully. If it does not, you end up with write.lock files for some
(if not all) of the shards, and have to delete them manually before
restarting.
-Joe
On 10/21/2016 9:01 AM, Shawn Heisey wrote:
On 10/21/2016 6:56 AM, Hendrik Haddorp wrote:
I'm running solrcloud in foreground mode
Vincenzo - we do this in our environment. ZooKeeper handles HDFS,
HBase, Kafka, and SolrCloud.
-Joe
On 7/11/2017 4:18 AM, Vincenzo D'Amore wrote:
Hi All,
in my test environment I've two Zookeeper instances one for SolrCloud
(6.6.0) and another for a Kafka server (2.11-0.10.1.0)
ideas on what the problem could be? Thank you!
-Joe
I was getting the log ready for you, but it was overwritten in the
interim. If it happens again, I'll get the log file ready.
-Joe
On 7/12/2017 9:25 AM, Shawn Heisey wrote:
On 7/12/2017 7:14 AM, Joe Obernberger wrote:
Started up a 6.6.0 solr cloud instance running on 45 machines
yest
;+tuple.fields.toString());
        if (tuple.EOF) {
          break;
        }
      }
    } catch (IOException ex) {
      logger.error("Solr stream error: " + ex);
      ex.printStackTrace();
    } finally {
      if (stream != null) {
        try {
          stream.close();
        } catch (IOException ex) {
          logger.error("Could not close stream: " + ex);
        }
      }
    }
I'm stuck! Thanks!
-Joe
:
stream = new CloudSolrStream(props.getProperty("hbase.zookeeper.solr.quorum"),
    solrCollectionName, params);
stream.setStreamContext(context);
Did the trick. I suspect it will be a problem if multiple programs use
the name workerID; will do more reading.
-Joe
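For the archives, the full working pattern ended up looking roughly like
this sketch (zkHost, collection, and query params are placeholders;
qt=/export assumes docValues on the requested fields):

  import java.io.IOException;
  import org.apache.solr.client.solrj.io.SolrClientCache;
  import org.apache.solr.client.solrj.io.Tuple;
  import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
  import org.apache.solr.client.solrj.io.stream.StreamContext;
  import org.apache.solr.common.params.ModifiableSolrParams;

  ModifiableSolrParams params = new ModifiableSolrParams();
  params.set("q", "*:*");
  params.set("fl", "id");
  params.set("sort", "id asc");
  params.set("qt", "/export"); // stream the full result set

  CloudSolrStream stream = new CloudSolrStream("zk1:2181/solr", "myCollection", params);
  StreamContext context = new StreamContext();
  SolrClientCache cache = new SolrClientCache();
  context.setSolrClientCache(cache);
  stream.setStreamContext(context); // the piece that was missing above
  try {
    stream.open();
    while (true) {
      Tuple tuple = stream.read();
      if (tuple.EOF) {
        break;
      }
      System.out.println(tuple.getString("id"));
    }
  } finally {
    stream.close();
    cache.close();
  }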
On 7/13
class)
    .withFunctionName("facet", FacetStream.class)
    .withFunctionName("sum", SumMetric.class)
    .withFunctionName("unique", UniqueStream.class)
    .withFunctionName("uniq", UniqueMetric.class)
    .withFunctionName("
0 0 /solr6.6.0/MODEL1007_1499971618545
635.8 G 2.0 T /solr6.6.0/UNCLASS
241.5 K 724.5 K /solr6.6.0/models
-Joe
heck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at
org.apache.solr.client.solrj.io.stream.TupleStream.getShards(TupleStream.java:133)
at
org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:393)
Thanks for any ideas
)
at java.lang.Thread.run(Thread.java:748)
The whole log can be found here:
http://lovehorsepower.com/solr.log
the GC log is here:
http://lovehorsepower.com/solr_gc.log.3.current
-Joe
On 7/12/2017 9:25 AM, Shawn Heisey wrote:
On 7/12/2017 7:14 AM, Joe Obernberger wrote:
Started up a 6.6.0
for a short time before
getting more indexing errors. Several of the nodes show as down in the
cloud view. Any help would be appreciated! Thank you!
-Joe
t for SolrCloud?
Thank you!
-Joe
On 7/17/2017 8:36 AM, Joe Obernberger wrote:
We've been indexing data on a 45 node cluster with 100 shards and 3
replicas, but our indexing processes have been stopping due to
errors. On the server side the error is "Error logging add". Stack
tr
checking the HDFS version (we're using
Cloudera CDH 5.10.2), and the HDFS logs.
-Joe
On 7/17/2017 10:16 AM, Susheel Kumar wrote:
There is some analysis error also. I would suggest testing the indexer on
just a one-shard setup first, then testing with a replica (1 shard and 1 replica)
and then t
rsion and the version shipped with 6.6.0; correcting that.
Thanks again!
-Joe
On 7/17/2017 11:53 AM, Erick Erickson wrote:
Joe:
I agree that 46 million docs later you'd expect things to have settled
out. However, I do note that you have
"add-unknown-fields-to-the-schema" in yo
nodes won't come up.
Is there a way around this? Re-creating these files manually isn't
realistic; do I need to re-index?
-Joe
On 7/17/2017 12:07 PM, Susheel Kumar wrote:
and there is document id mentioned above when it failed with analysis
error. You can look how those documents
Hi All - does SolrCloud support using Short Circuit Reads when using HDFS?
Thanks!
-Joe
ception ex) {
System.out.println("Error writting: "+ex);
}
}
}
Then I copied the files to the 45 servers and restarted solr 6.6.0 on
each. It came back up OK, and it has been indexing all night long.
-Joe
On 7/17/2017 3:15 PM,
results. Another question is if there is a way to
parallelize the classify call to other worker nodes? Thank you!
-Joe
classify a lot of docs, but I actually only want to return docs that
have a probability of n or higher.
-Joe
On 8/14/2017 10:46 PM, Joel Bernstein wrote:
My math was off again ... If you have 20 results from 50 shards that would
produce the 1000 results.
Joel Bernstein
http://joelsolr.blogspot
e for which
application threads were stopped: 0.0010934 seconds, Stopping threads
took: 0.0001659 seconds
-Joe
spent? It can be very helpful for debugging
this sort of problem.
On Fri, Aug 18, 2017 at 12:37 PM, Joe Obernberger <
joseph.obernber...@gmail.com> wrote:
Indexing about 15 million documents per day across 100 shards on 45
servers. Up until about 350 million documents, each of the solr ins
.
On Fri, Aug 18, 2017 at 12:37 PM, Joe Obernberger <
joseph.obernber...@gmail.com> wrote:
Indexing about 15 million documents per day across 100 shards on 45
servers. Up until about 350 million documents, each of the solr instances
was taking up about 1 core (100% CPU). Recently, they all
usage stayed low for a while, but then
eventually comes up to ~800% where it will stay.
Please let me know if there is other information that I can provide, or
what I should be looking for in the GC logs. Thanks!
-Joe
On 8/18/2017 2:25 PM, Shawn Heisey wrote:
On 8/18/2017 10:37 AM, Joe
what was requested to be allocated, not what was actually allocated;
that is RES. Unless my understanding of top is wrong.
http://www.lovehorsepower.com/Vesta/VestaSolr6.6.0_htop.jpg
atop:
http://www.lovehorsepower.com/Vesta/VestaSolr6.6.0_atop.jpg
-Joe
On 8/18/2017 3:12 PM, Walter Underwood
Ah! Yes - that makes much more sense:
CPU: http://www.lovehorsepower.com/Vesta/VestaSolr6.6.0_CPU.jpg
Mem: http://www.lovehorsepower.com/Vesta/VestaSolr6.6.0_Mem.jpg
-Joe
On 8/18/2017 3:35 PM, Michael Braun wrote:
When I recommended JVisualVM, specifically the "Sampling" portion o
0.3 0.3 0:00.93 java
Note that the OS didn't actually give PID 29566 80G of memory; it
actually gave it 275m. Right? Thanks again!
-Joe
On 8/18/2017 4:15 PM, Shawn Heisey wrote:
On 8/18/2017 1:05 PM, Joe Obernberger wrote:
Thank you Shawn. Please see:
http://www.lovehorsepower.com/V
ls,id="WeatherModel",cacheMillis=5000),search(COL1,df="FULL_DOCUMENT",q="Hawaii
AND DocTimestamp:[2017-07-23T04:00:00Z TO
2017-08-23T03:59:00Z]",fl="ClusterText,id",sort="id
asc",rows="1"),field="ClusterText")
This sends this to all the shards who can return at most 10,000 docs each.
Thanks!
-Joe
I guess it's a toss-up between what is more important -
high probability from the classifier vs high rank from the search engine.
Thanks Joel.
-Joe
On 8/23/2017 3:08 PM, Joel Bernstein wrote:
Can you describe the weather model?
In general the idea is to rerank the top N docs, because it
Very nice article - thank you! Is there a similar article available
when the index is on HDFS? Sorry to hijack! I'm very interested in how
we can improve cache/general performance when running with HDFS.
-Joe
On 9/18/2017 11:35 AM, Erick Erickson wrote:
This is suspicious too.
indexing is
going on. In reviewing the Reference Guide and doing various searches, I
haven't found anything that clearly references adding replicas to a cluster
when the cores already contain data.
Thank you for any insights,
Joe
Joe Heasly, Systems Analyst I
L.L.Bean, Inc. ~ Direct Ch
ay we
should have built the solr6 cluster?
Thank you for any insight!
-Joe
ent.java:354)
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1021)
Any idea what those could be? Those shards are not coming back up.
Sorry so many questions!
-Joe
On 11/21/2017 12:11 PM, Erick Erickson wrote:
How are you stopping Solr? Nodes should not go into recovery on
ed:
http://lovehorsepower.com/SolrClusterErrors.jpg
-Joe
On 11/21/2017 1:34 PM, Hendrik Haddorp wrote:
Hi,
the write.lock issue I see as well when Solr has not been stopped
gracefully. The write.lock files are then left in the HDFS as they do
not get removed automatically when the client d
I'll try a lower
hard commit time. Thanks again Erick!
-Joe
On 11/21/2017 2:00 PM, Erick Erickson wrote:
Frankly with HDFS I'm a bit out of my depth so listen to Hendrik ;)...
I need to back up a bit. Once nodes are in this state it's not
surprising that they need to be forceful
right now is that 6 of the 100 shards are not coming
back because of no leader. I've never seen this error before. Any
ideas? ClusterStatus shows all three replicas with state 'down'.
Thanks!
-joe
On 11/21/2017 2:35 PM, Hendrik Haddorp wrote:
We actually also have some perfo
tarts. Is that expected? Sometimes this can take longer than 20
minutes. No new data was added to the index between the restarts.
-Joe
On 11/21/2017 3:43 PM, Erick Erickson wrote:
bq: We are doing lots of soft commits for NRT search...
It's not surprising that this is slower than loc
xecutor.java:60)
at
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:354)
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1021)
... 9 more
Can I modify zookeeper to force a leader? Is there any other way to
recover from this? Thanks very much!
stack trace repeats for a long while; looks like a recursive call.
-Joe
On 11/21/2017 3:24 PM, Hendrik Haddorp wrote:
We sometimes also have replicas not recovering. If one replica is left
active the easiest is to then delete the replica and create a new
one. When all replicas are down it helps
utor.java:60)
at
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:354)
at
org.apache.solr.cloud.ZkController.getLeaderProps(ZkController.java:1021)
... 9 more
Please help. Thank you!
-Joe
lders. Usually what I do is:
hadoop fs -ls -R /solr6.6.0 | grep write.lock > out.txt
then
cat out.txt | cut --bytes 57-
to get a list of files to delete.
Glad these shards have come up! Thanks very much.
-Joe
On 11/22/2017 5:20 AM, Hendrik Haddorp wrote:
Hi Joe,
sorry, I have not seen that pr
/etc/hadoop/conf.cloudera.hdfs1
Thanks for reviewing!
-Joe
On 11/22/2017 8:20 AM, Kevin Risden wrote:
Joe,
I have a few questions about your Solr and HDFS setup that could help
improve the recovery performance.
* Is HDFS part of a distribution from Hortonworks, Cloudera, etc?
* Is Solr coloca
When this happened, I
believe there was high network contention for specific nodes in the
cluster and their network interfaces became pegged and requests for HDFS
blocks timed out. When that happened, SolrCloud went into recovery
which caused more network traffic. Fun stuff.
-Joe
On 11/22/2017 11:
.
-Joe
On 11/22/2017 8:17 PM, Erick Erickson wrote:
Hmm. This is quite possible. Any time things take "too long" it can be
a problem. For instance, if the leader sends docs to a replica and
the request times out, the leader throws the follower into "Leader
Initiated Recovery&qu
led, retry loop. Anyone else run into this?
Thanks.
-Joe
On 11/27/2017 11:28 AM, Joe Obernberger wrote:
Thank you Erick. Right now, we have our autoCommit time set to
180 (30 minutes), and our autoSoftCommit set to 12. The
thought was that with HDFS we want less frequent, but lar
Good idea? Thank you!
-Joe
On 11/22/2017 8:17 PM, Erick Erickson wrote:
Hmm. This is quite possible. Any time things take "too long" it can be
a problem. For instance, if the leader sends docs to a replica and
the request times out, the leader throws the follower into "Leader
Init
Anyone have any thoughts on this? Will TLOG replicas use less network
bandwidth?
-Joe
On 12/4/2017 12:54 PM, Joe Obernberger wrote:
Hi All - this same problem happened again, and I think I partially
understand what is going on. The part I don't know is what caused any
of the replic
regular startup cycle (replaying logs etc.) similar to the
auto add replicas capability. Not sure how one would handle a node
coming back.
I think there could be a lot to be gained by taking advantage of a
global file system with Solr. Would be fun!
-Joe
On 12/9/2017 10:26 PM, Erick
new data? If so, then re-indexing is not necessary.
-Joe
On 1/2/2018 10:41 AM, bhavin v wrote:
Hi Guys,
How often do I need to run full reindex on SolrCloud? It takes more than 12
hours for full reindex to run and we run it every night but is it really
necessary to do it as delta runs corre
Job(QueuedThreadPool.java:654)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)\n\tat
java.lang.Thread.run(Thread.java:745)\n",
"EOF": true,
"RESPONSE_TIME": 10
}
]
}
}
Thank you!
-Joe
ASS'
http://cordelia:9100/solr/UNCLASS/sql?aggregationMode=map_reduce
Any idea what I'm doing wrong?
Thank you!
-Joe
Thank you Joel - that was it; or rather a misunderstanding of how this
works on my end!
-Joe
On 11/26/2016 10:17 PM, Joel Bernstein wrote:
Hi,
It looks like the outcome field may not be correct or it may have missing
values. You'll need to populate this field for all records i
OF":true,"RESPONSE_TIME":1391}]}}
Thank you Damian and Joel!
-Joe
On 11/29/2016 9:11 AM, Joel Bernstein wrote:
I'll take a look at the StatsStream and see what the issue is.
Joel Bernstein
http://joelsolr.blogspot.com/
On Mon, Nov 28, 2016 at 8:32 PM, Damien Kamerman wro
n' succeeds
'stmt=SELECT like_count, DocumentId from UNCLASS where like_count>0'
succeeds
'stmt=SELECT like_count, DocumentId from main where like_count>0' fails
Hope that helps.
-Joe
On 11/29/2016 9:11 AM, Joel Bernstein wrote:
I'll take a look at the StatsStream
EOF":true}]}}
When trying to use the streaming random function. I'm using curl with:
curl --data-urlencode
'expr=random(MAIN,q="FULL_DOCUMENT:obamacare",rows="100",fl="DocumentId")'
http://cordelia:9100/solr/MAIN/stream
Any ideas? Thank you!
-Joe
Thanks! I'll give this a shot.
-Joe
On 1/3/2017 8:52 PM, Joel Bernstein wrote:
Luckily https://issues.apache.org/jira/browse/SOLR-9103 is available in
Solr 6.3, so you can register the random expression through the solrconfig. The
ticket shows an example.
Joel Bernstein
licas each (600 in all) with the goal being to
withstand a server going out, and future expansion as more hardware is
added? I know this is a very general question. Thanks very much in advance!
-Joe
, going with multiple shards per machine sounds like the
way to go here.
I do have a test instance, and can do some benchmarking there. Thanks
again!
-Joe
On 1/13/2017 4:16 PM, Toke Eskildsen wrote:
Joe Obernberger wrote:
[3 billion docs / 16TB / 27 shards on HDFS times 3 for replication
ad.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
Thanks for any ideas!
-Joe
Thank you Hrishikesh,
I did try recreating with async, but that just ran and ran. When I
called for the overseer status, that too hung up. I restarted the
cluster and now it is working.
-Joe
On 1/16/2017 3:01 PM, Hrishikesh Gadre wrote:
Based on the stack trace, it looks like the Solr
Thread.run(ConcurrentMergeScheduler.java:626)
-Joe