Hi,
To the best of my knowledge, the getopt Luke is not supported anymore.
Use this instead:
https://github.com/DmitryKey/luke
Regards,
Dmitry
Hi Prabaharan,
You can use Luke to open an index. http://www.getopt.org/luke/
-----Original Message-----
From: Rajendran, Prabaharan [mailto:rajendra...@d
Not if I'm reading this right. You want the docs from June 1 with a
timestamp between 13:00 and 16:00 but not one from, say, 11:00. Ditto
for the other days, right?
If it's a predictable interval or a predictable granularity (i.e. the
resolution you want is always going to be even hours) you could
bq: Zookeeper seems a step backward.
For stand-alone Solr, I tend to agree it's a bit awkward. But as Shawn
says, there's no _need_ to run Zookeeper with a more recent Solr.
Running Solr without Zookeeper is perfectly possible; we call that
"stand-alone". And, if you have no need for sharding
On 22/07/2016 5:22pm, Aristedes Maniatis wrote:
> But then what? In the production cluster it seems I then need to
>
> 1. Grab the latest configuration bundle for each core and unpack them
> 2. Launch Java
> 3. Execute the Solr jars (from the production server since it must be the
> right version
Hi Alex,
Thanks for confirming my finding.
When it comes to Solr interfacing with a client, I agree completely. However,
I was hoping to limit the noise at Solr and not have to add extra code to
filter out the exceptions. Just wondering, wouldn't it be a cleaner RESTful
interface if instead
Hi all,
Is there a way to query Solr between dates and also "intraday", i.e.
between certain hours on each of those days? Something like: I want to
search field "text" with value "test" and field "date" between 20160601
AND 20160610, but only between the hours of 1PM AND 4PM on each day?
I know I could loop over th
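In case it helps, the "loop over the days" approach can be generated on the client side. Below is a minimal sketch (the field name "date" is taken from your question and assumed to be a standard Solr date field holding UTC timestamps; the class and method names are made up for illustration) that builds a single filter query ORing a 1PM-4PM window for each day in the range:

```java
import java.time.LocalDate;
import java.util.ArrayList;
import java.util.List;

public class IntradayFilter {
    // Builds one fq clause that ORs an hour window for each day
    // between `from` and `to` (inclusive), e.g.
    // date:([2016-06-01T13:00:00Z TO 2016-06-01T16:00:00Z] OR ...)
    public static String buildHourWindowFilter(String field,
                                               LocalDate from, LocalDate to,
                                               int startHour, int endHour) {
        List<String> ranges = new ArrayList<>();
        for (LocalDate d = from; !d.isAfter(to); d = d.plusDays(1)) {
            ranges.add(String.format("[%sT%02d:00:00Z TO %sT%02d:00:00Z]",
                    d, startHour, d, endHour));
        }
        return field + ":(" + String.join(" OR ", ranges) + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildHourWindowFilter("date",
                LocalDate.of(2016, 6, 1), LocalDate.of(2016, 6, 10), 13, 16));
    }
}
```

The resulting string is passed as an fq parameter; since it is the same for repeated searches, it should also be cached well by the filterCache.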
On 7/22/16 9:56 AM, Erick Erickson wrote:
OK, scratch autowarming. In fact your autowarm counts
are quite high, I suspect far past "diminishing returns".
I usually see autowarm counts < 64, but YMMV.
Are you seeing actual hit ratios that are decent on
those caches (admin UI>>plugins/stats>>cac
Since I'm using SolrJ as a conduit to Solr, to have the searches processed on
a Solr server I need to wrap everything in a ParallelStream object. Got it,
thanks!
Joel Bernstein wrote
>
> If you just use the Java API directly, the code executes in the VM where
> the code is run. You could use
Also, here is the link to screenshot.
https://dl.dropboxusercontent.com/u/39813705/Screen%20Shot%202016-07-22%20at%2010.40.21%20AM.png
Thanks
On 7/21/16 11:22 PM, Shawn Heisey wrote:
On 7/21/2016 11:25 PM, Rallavagu wrote:
There is no other software running on the system and it is completely
Here is the snapshot of memory usage from "top" as you mentioned. First
row is "solr" process. Thanks.
  PID USER  PR  NI    VIRT    RES    SHR S  %CPU %MEM   TIME+ COMMAND
29468 solr  20   0 27.536g 0.013t 3.297g S  45.7 27.6 4251:45 java
21366 root  20   0 14.499g 217824
Hi all - please help me here
On Thursday, July 21, 2016, SRINI SOLR wrote:
> Hi All -
> Could you please help me with spell check on a multi-word phrase as a whole...
> Scenario -
> I have a problem with Solr spellcheck suggestions for multi-word phrases.
> With the query for 'red chillies'
>
>
q=r
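For whole-phrase suggestions, the collation feature is usually the relevant knob: it asks the spellchecker to return a rewritten version of the entire query, not just per-word suggestions. A minimal sketch of the request parameters (the handler path and phrase are only examples; the spellcheck.* names are standard Solr parameters):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class SpellcheckRequest {
    // Builds a /select query string that asks the spellchecker for
    // whole-phrase collations rather than only per-word suggestions.
    public static String build(String phrase) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("q", phrase);
        p.put("spellcheck", "true");
        p.put("spellcheck.q", phrase);               // phrase to check, as typed
        p.put("spellcheck.collate", "true");         // ask for whole-phrase rewrites
        p.put("spellcheck.maxCollationTries", "5");  // only keep collations that actually hit
        p.put("spellcheck.collateExtendedResults", "true");
        StringBuilder sb = new StringBuilder("/select?");
        boolean first = true;
        for (Map.Entry<String, String> e : p.entrySet()) {
            if (!first) sb.append('&');
            first = false;
            sb.append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(build("red chillies"));
    }
}
```

With spellcheck.maxCollationTries set, Solr re-runs each candidate collation against the index, so the collation it returns is one that would actually produce results.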
A Streaming Expression can be sent to any SolrCloud node in any collection.
You can set up collections that have no data and just execute the
expressions. The expressions reference other collections that hold data.
Collections that only execute expressions can be called "Worker Collections".
Collection
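As a rough sketch of the shape of such an expression (the collection names workerCollection and logs are hypothetical), a parallel() wrapper sends the inner search to the worker replicas, partitioned by a key, instead of executing it in the client JVM:

```
parallel(workerCollection,
         search(logs, q="*:*", fl="id", sort="id asc", partitionKeys="id"),
         workers="4", sort="id asc")
```

The inner expression must specify partitionKeys so the result stream can be split across the workers.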
OK, scratch autowarming. In fact your autowarm counts
are quite high, I suspect far past "diminishing returns".
I usually see autowarm counts < 64, but YMMV.
Are you seeing actual hit ratios that are decent on
those caches (admin UI>>plugins/stats>>cache>>...)
And your cache sizes are also quite h
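For reference, the autowarmCount Erick mentions is set per cache in solrconfig.xml; a sketch with counts under the 64 he suggests (the sizes here are arbitrary and should be tuned against the hit ratios in the admin UI):

```xml
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="32"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="32"/>
```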
On 7/22/16 8:34 AM, Erick Erickson wrote:
Mostly this sounds like a problem that could be cured with
autowarming. But two things are conflicting here:
1> you say "We have a requirement to have updates available immediately (NRT)"
2> your docs aren't available for 120 seconds given your autoSoft
Mostly this sounds like a problem that could be cured with
autowarming. But two things are conflicting here:
1> you say "We have a requirement to have updates available immediately (NRT)"
2> your docs aren't available for 120 seconds given your autoSoftCommit
settings unless you're specifying
-Dsol
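For context, the relevant knobs live in the updateHandler section of solrconfig.xml; the values below are illustrative, not a recommendation. The autoSoftCommit maxTime is what controls how quickly new documents become visible (NRT), while the hard autoCommit with openSearcher=false only flushes to disk:

```xml
<autoCommit>
  <!-- hard commit: flush the transaction log to disk, do not open a new searcher -->
  <maxTime>${solr.autoCommit.maxTime:60000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit: opens a new searcher; this interval bounds NRT visibility -->
  <maxTime>${solr.autoSoftCommit.maxTime:2000}</maxTime>
</autoSoftCommit>
```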
Thanks Shawn for your insight!
On Fri, Jul 22, 2016 at 6:32 PM, Shawn Heisey wrote:
> On 7/22/2016 12:41 AM, Shyam R wrote:
> > I see that SOLR returns status value as 0 for successful searches
> > org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1]
> > webapp=/solr path=/user/ping par
Well, if it is a bug you can spoof it by not issuing any commits until
the indexing
is completed. Certainly not elegant, and you risk having to re-index
from scratch
if your machine dies.
Or take explicit control over it, which in your case might be preferable through
the replication API, see:
htt
The streaming API looks like it's meant to be run from the client app server
- very similar to a standard Solr search. When I run a basic streaming
operation the memory consumption occurs on the app server jvm, not the solr
server jvm. The opposite of what I was expecting.
(pseudo code)
Stream A
Thanks for your answer Shawn,
If I got you right, you are saying that regardless of whether the
"replicateAfter" directive is "commit" or "optimize", a replication is
triggered whenever a segment merge occurs. Is that right?
Or is it triggered only when a full index merge occurs, which could happen
after a com
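For reference, the directive in question is configured on the master's replication handler in solrconfig.xml, roughly like the following (the confFiles list is illustrative):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- replicate only after an explicit optimize, per the intent above -->
    <str name="replicateAfter">optimize</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>
```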
On 7/22/2016 4:02 AM, Alessandro Bon wrote:
> Issue: Full index replications occur sometimes on master startup and after
> commits, despite only the "optimize"
> directive being specified. In the case of replication on commit, it occurs
> only for sufficiently big commits. Replication correctly starts again at
> th
On 7/22/2016 1:22 AM, Aristedes Maniatis wrote:
> I'm not new to Solr, but I'm upgrading from Solr 4 to 5 and needing to
> use the new Zookeeper configuration requirement. It is adding a lot of
> extra complexity to our deployment and I want to check that we are
> doing it right.
Zookeeper is not
On 7/22/2016 12:41 AM, Shyam R wrote:
> I see that SOLR returns status value as 0 for successful searches
> org.apache.solr.core.SolrCore; [users_shadow_shard1_replica1]
> webapp=/solr path=/user/ping params={} status=0 QTime=0 I do see that
> the status comes back as 400 whenever the search is in
Hi everyone,
I am experiencing a replication issue on a master/slave configuration,
Issue: Full index replications occur sometimes on master startup and after commits,
despite only the "optimize" directive being
specified. In the case of replication on commit, it occurs only for sufficiently
big commits. Re
Thanks for all the responses...
I have checked these options, but none of them has worked so far. The
option is giving only two results, not the third one. I am checking some
more options, and if you can share more ideas, that would be great.
Thanks,
Surender Singh
--
View this message in
Hi everyone
I'm not new to Solr, but I'm upgrading from Solr 4 to 5 and needing to use the
new Zookeeper configuration requirement. It is adding a lot of extra complexity
to our deployment and I want to check that we are doing it right.
1. We are using Saltstack to push files to deployment ser
25 matches