Hi,
Where are the logs fetched from on the Solr admin UI page
http://localhost:8983/solr/#/~logging? I am unable to see any logs there.
It's just showing the 'loading' symbol but no logs are fetched. What could be
the reason? Is there any logging setting that has to be made?
Thanks.
Thanks, Shalin! We have got by with passing our custom structure as a string in
JSON. It is still to be tested for performance.
On Sat, Aug 8, 2015 at 5:22 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Or use the XsltResponseWriter :)
>
> On Sat, Aug 8, 2015 at 7:51 PM, Shalin Shekhar Mangar
Hi,
Any help on this please?
Thanks & Regards
Vijay
From: Vijay Bhoomireddy [mailto:vijaya.bhoomire...@whishworks.com]
Sent: 14 August 2015 18:03
To: solr-user@lucene.apache.org
Subject: Issue while setting Solr on Slider / YARN
Hi,
We have a requirement of setting up of Solr
Hello,
My goal: search Solr, group the results by a field, and paginate through
the grouped results.
The query I used:
group=true&group.field=customer_company_name&group.ngroups=true
I get a result set of:
{
  responseHeader: {
    status: 0,
    QTime: 20,
    params: {
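The goal above (group by customer_company_name and paginate through the groups) hinges on the fact that, with group=true, the start/rows parameters page through *groups* rather than documents. A minimal sketch of building the per-page parameters (page size and docs-per-group values are assumptions):

```python
def grouped_page_params(page, page_size=10, docs_per_group=3):
    """Build Solr query params for page `page` (0-based) of grouped results.

    With group=true, `start`/`rows` page through groups, and
    `group.limit` caps the documents returned inside each group.
    """
    return {
        "q": "*:*",
        "group": "true",
        "group.field": "customer_company_name",
        "group.ngroups": "true",    # total group count, needed to know when to stop
        "start": page * page_size,  # offset, counted in groups
        "rows": page_size,          # number of groups per page
        "group.limit": docs_per_group,
    }

params = grouped_page_params(2)  # start=20, rows=10 -> third page of groups
```

Note that group.ngroups=true (as in the query above) is what makes the total number of pages computable, at some extra cost per request.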
Hi,
I have a Solr cluster which hosts around 200 GB of index on each of its 6
nodes. The Solr version is 5.2.1.
When a huge query is fired, it times out *(The request took too long to
iterate over terms.)*, which I can see in the log, but at the same time one
of the Solr nodes goes down and the lo
How much memory does each server have? How much of that memory is
assigned to the JVM? Is anything reported in the logs (e.g.
OutOfMemoryError)?
On Mon, Aug 17, 2015, at 12:29 PM, Modassar Ather wrote:
> Hi,
>
> I have a Solr cluster which hosts around 200 GB of index on each node and
> are 6 nod
The servers have 32g memory each. Solr JVM memory is set to -Xms20g
-Xmx24g. There are no OOM in logs.
Regards,
Modassar
On Mon, Aug 17, 2015 at 5:06 PM, Upayavira wrote:
> How much memory does each server have? How much of that memory is
> assigned to the JVM? Is anything reported in the logs
Hoping that others will chime in here with other ideas. Have you,
though, tried reducing the JVM memory, leaving more available for the OS
disk cache? Having said that, I'd expect that to improve performance,
not to cause JVM crashes.
It might also help to know what version of Java you are running
Thanks Upayavira for your inputs. The Java version is 1.7.0_79.
On Mon, Aug 17, 2015 at 5:57 PM, Upayavira wrote:
> Hoping that others will chime in here with other ideas. Have you,
> though, tried reducing the JVM memory, leaving more available for the OS
> disk cache? Having said that, I'd expe
Hi,
We are using solr cloud 5.2 version.
We have observed that intermittently querying becomes slower when the
documentCache becomes empty. The documentCache is flushed whenever a new
document is added to the collection.
Is there any way by which we can ensure that newly added documents are visi
On 8/17/2015 5:45 AM, Modassar Ather wrote:
> The servers have 32g memory each. Solr JVM memory is set to -Xms20g
> -Xmx24g. There are no OOM in logs.
Are you starting Solr 5.2.1 with the included start script, or have you
installed it into another container?
Assuming you're using the download's
When using result grouping, Solr specs state the following about the "rows"
and "group.limit" params:
rows - The number of groups to return.
group.limit - Number of rows to return in each group.
We are using Solr cloud with a single collection and 64 shards.
When grouping by field (i.e. using the
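Concretely, combining those two parameters, a request like the following sketch (collection and field names are assumptions) returns up to 10 groups with at most 5 documents inside each group:

```python
from urllib.parse import urlencode

# Hypothetical collection/field names, for illustration only.
params = {
    "q": "*:*",
    "group": "true",
    "group.field": "category",  # field to group on (assumed)
    "rows": 10,                 # number of groups to return
    "group.limit": 5,           # number of docs returned within each group
}
url = "http://localhost:8983/solr/mycollection/select?" + urlencode(params)
```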
On 8/17/2015 7:04 AM, Maulin Rathod wrote:
> We have observed that Intermittently querying become slower when
> documentCache become empty. The documentCache is getting flushed whenever new
> document added to the collection.
>
> Is there any way by which we can ensure that newly added documents
Hi Vijay,
Verify the ResourceManager URL and try passing the --manager param to
explicitly set the ResourceManager URL during the create step.
Cheers,
Tim
On Mon, Aug 17, 2015 at 4:37 AM, Vijay Bhoomireddy
wrote:
> Hi,
>
>
>
> Any help on this please?
>
>
>
> Thanks & Regards
>
> Vijay
>
>
>
>
Yes - adding to my post, I actually have a python script that verifies that
handleSelect="false" in each core's solrconfig.xml.
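A minimal sketch of such a check (parsing details are assumptions; in solrconfig.xml, handleSelect is an attribute on the <requestDispatcher> element):

```python
import xml.etree.ElementTree as ET

def handle_select_disabled(solrconfig_xml: str) -> bool:
    """Return True if the config has <requestDispatcher handleSelect="false">."""
    root = ET.fromstring(solrconfig_xml)
    dispatcher = root.find("requestDispatcher")
    if dispatcher is None:
        return False  # element or attribute absent: treat as failing the check
    return dispatcher.get("handleSelect", "").lower() == "false"

# Usage against an in-memory snippet; a real script would read each core's file.
sample = '<config><requestDispatcher handleSelect="false"/></config>'
print(handle_select_disabled(sample))  # prints True
```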
-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Saturday, August 15, 2015 11:57 PM
To: solr-user@lucene.apache.org
Subject: Re:
I place Apache Solr behind Apache httpd with a pure HTTP reverse proxy, since
most of the time it will be used as an API. I use mod_auth_cas to protect the
general /solr URL, requiring a login that refers to our common Jasig CAS
server, which in turn connects to our Microsoft Active Directory
Hi,
I have SOLR cloud running on SOLR 4.7.2
2 shards one replica each.
The size of the index directories is odd on ps03
ps01 shard1 leader
41G
/opt/solr/solr-4.7.2/example/solr/collection1/data/index.20150815024352580
ps03 shard 2 replica
59G
/opt/solr/solr-4.7.2/example/solr/collection1/da
I am a new user of Solr. I want to start my Solr service without creating any
core/collection, using the existing core/collection instead. Can you let me
know how?
I have already created a collection and have indexed my data into it. My
server is stopped and I want to restart the server using the exis
Something like this works:
bin/solr start -c -z localhost:2181 -p 8981 -s example/cloud/node1/solr
When you first start Solr with bin/solr you get a prompt asking for how many
servers you want to run and what their ports are. The script will echo
the command
you can use to start the Solr instance
A couple of things:
1> Be a little careful looking at deletedDocs, maxDocs and numDocs
when you're done. Deleted (or updated) docs are "merged away" as
segments merge. deletedDocs isn't a count of all docs that _have_
been deleted, it's just a count of the docs that have been
deleted/updated but no
True, I haven't looked at it closely. Not sure where it is in the
priority list though.
However, I would recommend you _really_ look at _why_ you
think you need cross-collection joins. Likely they will be expensive,
and whether they're performant in your situation will be a question.
If at all p
When you say "the solr node goes down", what do you mean by that? From your
comment on the logs, you obviously lose the solr core at best (you do
realize only having a single replica is inherently susceptible to failure,
right?)
But do you mean the Solr Core drops out of the collection (ZK timeout)
The question as I read it was composite documents, not cross-collection
joins.
If the joined core is small enough to be replicated across all replicas
of your main collection, then cross-core joining works well, as it is
all within one instance.
As to composite documents, I have sometimes wondere
Hi,
So it turns out that the index directory has nothing to do with what index
is actually in use.
I found that we had mismatched version numbers on our shards, so this is
what we had to do to fix that.
In production today we discovered that our shard replicas were on different
version numbers.
t
Hi,
I have data that is coming in everyday. I need to query the index for a time
range and give the facet counts ordered by different months.
For this, I just have a solr date field, entryDate which captures the time.
How do I make this query? I need the results like below.
Jan-2015 (2000)
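Counts bucketed by month over a plain date field can be obtained with range faceting on entryDate in one-month gaps. A sketch of the request parameters (the date bounds are assumptions):

```python
from urllib.parse import urlencode

# Range facet over entryDate, bucketed by month; start/end bounds are assumed.
params = {
    "q": "*:*",
    "rows": 0,  # only facet counts are needed, no documents
    "facet": "true",
    "facet.range": "entryDate",
    "facet.range.start": "2015-01-01T00:00:00Z",
    "facet.range.end": "2016-01-01T00:00:00Z",
    "facet.range.gap": "+1MONTH",
}
query_string = urlencode(params)
```

The response's facet_ranges section then lists one count per month bucket, which maps onto the "Jan-2015 (2000)" style output above.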
I want to update the index of a file only if last_modified has changed in the
file. I am running post.jar with fileTypes="*"; I would want to update the
index of the files only if there has been any change in them since the last
update of the index. Can you let me know how to achieve this?
The JSON Facet API can embed any type of facet within any other type:
http://yonik.com/json-facet-api/
json.facet={
  dates : {
    type : range,
    field : entryDate,
    start : "2001-...",  // use full solr date format
    end : "2015...",
    gap : "+1MONTH",
    facet : {
      type:terms,
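A completed sketch of that nested facet, with the date bounds and the sub-facet name and field filled in as assumptions:

```json
{
  "dates": {
    "type": "range",
    "field": "entryDate",
    "start": "2001-01-01T00:00:00Z",
    "end": "2015-12-31T00:00:00Z",
    "gap": "+1MONTH",
    "facet": {
      "top_terms": {
        "type": "terms",
        "field": "someField"
      }
    }
  }
}
```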
This will certainly force-replicate the entire index but I
question whether it was necessary.
These are basically snapshots that are different versions and
do not necessarily have any correlation to the _current_ index.
If you're not having any search problems, i.e. if all the replicas
are returni
There's no way that I know of with post.jar. Post.jar was never really intended
as a production tool, and sending all the files to Solr for parsing (pdf, word
and the like) is putting quite a load on the Solr server.
What is your use-case? You might consider a SolrJ program, it would be
simple eno
I have a file system. I have a scheduler which will call Solr at a scheduled
time interval. Any updates to the file system must be indexed by Solr. Only
changes must be re-indexed, as the file system is huge and cannot be re-indexed
every time.
Just to open the can of worms, it *can* be possible to have very low commit
times, we have 250ms currently and are in production with that. But it
does come with pain (no such thing as a free lunch!), we had to turn off
ALL the Solr caches (warming is useless at that kind of frequency, it will
tak
On Mon, Aug 17, 2015 at 11:36 PM, Daniel Collins
wrote:
> Just to open the can of worms, it *can* be possible to have very low commit
> times, we have 250ms currently and are in production with that. But it
> does come with pain (no such thing as a free lunch!), we had to turn off
> ALL the Solr
On Mon, Aug 17, 2015 at 4:36 PM, Daniel Collins wrote:
> we had to turn off
> ALL the Solr caches (warming is useless at that kind of frequency
Warming and caching are related, but different. Caching still
normally makes sense without warming, and Solr is generally written
with the assumption th
Hi Yonik,
Thank you for the reply. I followed your link and this feature is really
awesome to have.
But, unfortunately I am using solr 4.4 on cloudera right now.
I tried this. Looks like it does not work for this version.
Sorry, I forgot to mention that in my original mail.
Thanks,
Lewin
-
Well, you'll have to have some kind of timestamp that you can
reference and only re-send
files that have a newer timestamp. Or keep a DB around with file
path/last indexed timestamp
or
Best,
Erick
On Mon, Aug 17, 2015 at 12:36 PM, coolmals wrote:
> I have a file system. I have a scheduler wh
Folks:
Question regarding SolrCloud Shard Number (Ex: shard) & associated hash
ranges. We are in the process of identifying the best strategy to merge
shards that belong to collections that are chronologically older which sees
very low volume of searches compared to the collections with most recen
On Mon, Aug 17, 2015 at 8:00 PM, Sathiya N Sundararajan
wrote:
> Folks:
>
> Question regarding SolrCloud Shard Number (Ex: shard) & associated hash
> ranges. We are in the process of identifying the best strategy to merge
> shards that belong to collections that are chronologically older which see
Hello,
Have 4 nodes participating in a Solr cloud. After indexing about 2 million
documents, only two nodes are "Active" (green) while the other two are shown
as "down". How can I "initialize" replication from the leader so the other
two nodes would receive updates?
Thanks
Great thanks Yonik.
On Mon, Aug 17, 2015 at 5:16 PM, Yonik Seeley wrote:
> On Mon, Aug 17, 2015 at 8:00 PM, Sathiya N Sundararajan
> wrote:
> > Folks:
> >
> > Question regarding SolrCloud Shard Number (Ex: shard) & associated
> hash
> > ranges. We are in the process of identifying the best stra
Is this 4 shards? Two shards each with a leader and follower? Details
matter a lot
What, if anything, is in the log file for the down nodes? I'm assuming
that when you
start, all the nodes are active
You might review:
http://wiki.apache.org/solr/UsingMailingLists
Best,
Erick
On Mon, Aug
response inline..
On 8/17/15 8:40 PM, Erick Erickson wrote:
Is this 4 shards? Two shards each with a leader and follower? Details
matter a lot
It is a single collection single shard.
What, if anything, is in the log file for the down nodes? I'm assuming
that when you
start, all the node
By the time the last email was sent, the other node had also caught up. Makes
me wonder what happened and how this works.
Thanks
On 8/17/15 9:53 PM, Rallavagu wrote:
response inline..
On 8/17/15 8:40 PM, Erick Erickson wrote:
Is this 4 shards? Two shards each with a leader and follower? Details
Shawn! The container I am using is Jetty only, and the JVM settings I am
using are the defaults which come with the Solr startup scripts. Yes, I have
changed the JVM memory setting as mentioned.
Kindly help me understand why, even if there is a GC pause, the Solr node
will go down. At least for other
Hi Shawn,
Thanks for your feedback.
In our scenario documents are added frequently (approx. 10 documents per
minute) and we want to make them available for search in near real time
(within 5 seconds). Even if we set autoSoftCommit to 5 seconds (so that a
document will be available for search afte
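For roughly 5-second visibility, the usual knob is autoSoftCommit in solrconfig.xml, paired with a much longer hard commit interval; the values below are illustrative, not a recommendation:

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <!-- Hard commit: durability; does not open a new searcher -->
  <autoCommit>
    <maxTime>60000</maxTime>
    <openSearcher>false</openSearcher>
  </autoCommit>
  <!-- Soft commit: makes new documents searchable within ~5s -->
  <autoSoftCommit>
    <maxTime>5000</maxTime>
  </autoSoftCommit>
</updateHandler>
```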
I tried to profile the memory of each Solr node. I can see the GC activity
going as high as 98%, and there are many instances where it has gone above
10%. In one of the Solr nodes I can see it going to 45%.
Memory is fully used and has gone to the maximum heap usage, which is set to
24g. D
Any suggestions please.
Regards,
Modassar
On Thu, Aug 13, 2015 at 4:25 PM, Modassar Ather
wrote:
> Hi,
>
> I am getting following exception for the query :
> *q=field:query&stats=true&stats.field={!cardinality=1.0}field*. The
> exception is not seen once the cardinality is set to 0.9 or less.
>