Hi,
I am frequently getting Solr heap out-of-memory errors once or twice a day.
What could be the possible reasons for this, and is there any way to log the
memory used by a query in solr.log?
Thanks ,
Abhishek Tiwari
Hi Surender,
Please go through the stemmer documentation, which will give you an idea of
how stemmers work.
I see the following issues in the configured field types:
1. You have added the Porter stemmer as well as the English minimal stemmer.
You can remove one of those based on your requirement. The minimal stemmer
is the more conservative of the two.
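To see why the two stemmers overlap, compare their behavior on variants like "Home's"/"Homes"/"Home" from this thread. Below is a toy sketch of conservative, minimal-style stemming written from scratch for illustration; it is not the actual Lucene EnglishMinimalStemFilter or PorterStemFilter implementation:

```java
public class MinimalStemSketch {
    // Toy "minimal" stemming: strip a possessive 's, then a plural -s.
    // A Porter-style stemmer would rewrite far more suffixes than this.
    static String stem(String token) {
        String t = token.toLowerCase();
        if (t.endsWith("'s")) {
            t = t.substring(0, t.length() - 2);
        }
        if (t.endsWith("s") && !t.endsWith("ss")) {
            t = t.substring(0, t.length() - 1);
        }
        return t;
    }

    public static void main(String[] args) {
        // All three variants conflate to the same index term: "home"
        System.out.println(stem("Home's"));
        System.out.println(stem("Homes"));
        System.out.println(stem("Home"));
    }
}
```

If both a Porter and a minimal stemmer sit in the same analyzer chain, the aggressive one runs anyway, so the conservative one buys you nothing; pick one.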
Hi,
I do not want to use Synonyms.txt, as this would require building a big
library, and that would be time-consuming.
Thanks,
Surender Singh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Searching-Home-s-Homes-and-Home-tp4286341p4286897.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
The following is the analyzer information; let me know what I am missing.
There are 16 Solr nodes (Solr 5.2.1) and 5 ZooKeeper nodes (ZooKeeper 3.4.6)
in our production cluster. We had to restart the Solr nodes for some reason,
and we are doing it after 3 months. To our surprise, none of the Solr nodes
came up. We can see the Solr process running on the machine, but the Solr
Admin
Dear Mr. Heisey.
It seems that we cannot send pictures or attachments to solr-user, so I
sent the screenshot to your personal email; sorry to disturb you!
Thanks!
Kent
2016-07-13 8:13 GMT+08:00 Shawn Heisey :
> On 7/12/2016 8:30 AM, Kent Mu wrote:
> > We have configed the maxThreads in JBOSS,
We have 5 shards, and each shard has one leader and one replica.
The 3300 connections are for one JVM only; please see the following
analysis in Zabbix.
Our SolrJ code is as follows:
public synchronized static CloudSolrServer getSolrCloudReadServer() {
    if (reviewSolrCloudReadServer == null) {
        // zkHost is assumed here: the ZooKeeper ensemble address for the cluster
        reviewSolrCloudReadServer = new CloudSolrServer(zkHost);
    }
    return reviewSolrCloudReadServer;
}
Dear Mr. Wartes,
Thanks for your reply. Well, I see. For Solr we do have replicas, and for
SolrCloud we have 5 shards, each with one leader and one replica. The
document count is nearly 100 million. You mean we do not need to optimize
the index data?
Thanks!
Kent
2016-07-12 23:02 GMT+
Thank you very much for your prompt response.
I really appreciate it!
Rachid
On Jul 12, 2016 17:13, "Shawn Heisey" wrote:
> On 7/12/2016 5:54 PM, Rachid Bouacheria wrote:
> > I am running solr 4.10.4 and I would like to upgrade to the latest
> version
> > 6.1.0
> >
> > The documentation I found p
On 7/12/2016 9:45 AM, Jason wrote:
> I'm using optimize because it's a option for fast search. Our index
> updates one or more weekly. If I don't use optimize, many index files
> should be kept. Any performance issues in that case? And I'm wondering
> relation between index file size and heap size.
On 7/12/2016 8:30 AM, Kent Mu wrote:
> We have configed the maxThreads in JBOSS, and the good news is solrcloud
> now running OK. but I another issue came across. We find the number of the
> HTTP connections is very high, and the number can be around 3300. and
> solrcloud does no release the connec
On 7/12/2016 5:54 PM, Rachid Bouacheria wrote:
> I am running solr 4.10.4 and I would like to upgrade to the latest version
> 6.1.0
>
> The documentation I found provides steps to upgrade from 4.10.4 to 5.x
> And it seems like going from 4.x to 5.x is pretty consequent.
> Going from 5.x to 6.1.0 se
Hi All,
I am running solr 4.10.4 and I would like to upgrade to the latest version
6.1.0
The documentation I found provides steps to upgrade from 4.10.4 to 5.x
And it seems like going from 4.x to 5.x is a pretty substantial effort.
Going from 5.x to 6.1.0 seems to be less effort, but still non-negligible.
Hi,
we developed a custom QParserPlugin for Solr 4.3. This QParser compares
numeric values in the documents with numeric values in the search query.
The first step was to reduce the number of documents by pre-parsing the
request and creating a Lucene query:
final String query
Heap: start small and increase as necessary. Leave as much RAM as possible
for the FS cache; don't give it to the JVM until the JVM starts crying. SPM
for Solr will help you see when Solr and the JVM are starting to hurt.
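The "start small" advice translates into an explicit heap setting in the Solr start script. A sketch, assuming Solr 5.x and its bin/solr.in.sh; the 2g value is only a hypothetical starting point to grow from:

```shell
# bin/solr.in.sh -- start with a small fixed heap and raise it only
# when GC logs or monitoring show the JVM is under pressure.
SOLR_HEAP="2g"
# Equivalent explicit form:
# SOLR_JAVA_MEM="-Xms2g -Xmx2g"
```

Whatever is not given to the JVM stays available to the OS page cache, which is what actually keeps index reads fast.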
Otis
> On Jul 12, 2016, at 11:45, Jason wrote:
>
> I'm using optimize because it's a option for
It's more a matter of "is unoptimized fast enough"? If so, why bother?
The background merging will keep segment counts relatively
reasonable.
If you're updating your index only once a week, it's reasonable to
optimize. Anecdotal reports suggest a speedup on the order of 10%
_at best_.
As Yonik sa
copy in your analyzer from your schema.xml
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Tue, Jul 12, 2016 at 8:10 AM, Surender
wrote:
> Hi,
>
> I have checked the results and I am not getting desire
Hi,
I have checked the results and I am not getting desired results. Please
suggest.
Thanks,
Surender Singh
--
View this message in context:
http://lucene.472066.n3.nabble.com/Searching-Home-s-Homes-and-Home-tp4286341p4286757.html
Sent from the Solr - User mailing list archive at Nabble.com.
I'm using optimize because it's an option for faster search.
Our index updates once or more weekly.
If I don't use optimize, many index files will be kept.
Are there any performance issues in that case?
And I'm also wondering about the relation between index file size and heap size.
In case of running as master server that
Please let me know the reference guide address where it is mentioned that a
reasonable index size is around 15G.
--
View this message in context:
http://lucene.472066.n3.nabble.com/High-cpu-and-gc-time-when-performing-optimization-tp4286704p4286790.html
Sent from the Solr - User mailing list archive at Nabble.com.
Well, two thoughts:
1. If you’re not using solrcloud, presumably you don’t have any replicas. If
you are, presumably you do. This makes for a biased comparison, because
SolrCloud won’t acknowledge a write until it’s been safely written to all
replicas. In short, solrcloud write time is max(per
I’m not sure you need a custom component. Try using the standard highlighter.
Configure hl.simple.pre and hl.simple.post to be empty strings, and
configure it to return one maximum-length snippet. That should return the
entire matching fields, though I haven't tested it.
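Spelled out as request parameters, that configuration might look like the following (a sketch, untested; hl.fragsize=0 asks the standard highlighter to use the whole field value rather than a fragment, and the hl.fl field name is a placeholder):

```
hl=true
hl.fl=description
hl.simple.pre=
hl.simple.post=
hl.snippets=1
hl.fragsize=0
```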
wunder
Walter Underwood
wun
Hello, has anybody else come across this issue? Can anybody help me?
2016-07-11 23:17 GMT+08:00 Kent Mu :
> Hi friends!
>
> solr version: 4.9.0.
>
> we use Solr and SolrCloud in our project, that means we use Solr and
> SolrCloud at the same time.
> but we find a phenomenon that SolrCloud consumes
Dear Mr. Heisey.
We have configured maxThreads in JBoss, and the good news is that SolrCloud
is now running OK. But another issue came up: we find that the number of
HTTP connections is very high, around 3300, and SolrCloud does not release
the connections.
I understand that, t
Optimize is a very expensive operation. It involves reading the entire
index and merging and rewriting it into a single segment.
If you find it too expensive, do it less often, or don't do it at all.
It's an optional operation.
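For reference, an explicit optimize is issued as an update request; a sketch against a local instance, with host, port, and core name as placeholders:

```
POST http://localhost:8983/solr/mycore/update?optimize=true&maxSegments=1
```

maxSegments defaults to 1; leaving it higher makes the operation cheaper at the cost of less merging.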
-Yonik
On Mon, Jul 11, 2016 at 10:19 PM, Jason wrote:
> hi, all.
>
> I'
You should be able to send a POST to Solr that would work with larger
requests.
Postfilter performance is driven by three things:
1) How much overhead is involved in handling the fq parameter, turning it
into data structures, etc.
2) How many documents the post filter needs to look at.
3) How fa
As I said before, we also came across this issue, and I can only guess at
the possible reason; let's wait for an expert to explain it for us.
On the other hand, I see that your index data is 68G, which is too large. I
recommend you use SolrCloud; per the reference guide, a reasonable size is
around 15G.
now
Hi all,
Tested on Solr 6.1.0 (as well as 5.4.0 and 5.5.0) using the "techproducts"
example, the following query throws the same exception as in my original
question.
To reproduce:
1) set up the techproducts example:
solr start -e techproducts -noprompt
2) go to Solr Admin:
http://local
Hi Surender,
Can you share your current field configuration so that we can debug it from
there? Share your field and fieldType definitions from schema.xml.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Searching-Home-s-Homes-and-Home-tp4286341p4286768.html
Sent from the Solr - User mailing list archive at Nabble.com.
Or you can build a file called synonym.txt in the config directory of your
core.
On 11 Jul 2016 at 17:06, "Surender" wrote:
> Thanks...
>
> I am applying these filters and will share update on this issue. It will
> take couple of days.
>
> Thanks,
> Surender Singh
>
>
>
> --
> View this messag
Hi,
I am implementing a custom post filter for permission checks along the
lines described by Erik at
https://lucidworks.com/blog/2012/02/22/custom-security-filtering-in-solr/
Is there a limit to the length (number of characters) of the custom post
filter? In our case, length of this "fq" could b
I started this a while ago, but haven't found the time to finish:
https://issues.apache.org/jira/browse/SOLR-7830
-Yonik
On Tue, Jul 12, 2016 at 7:29 AM, Aditya Sundaram
wrote:
> Does solr support multilevel grouping? I want to group upto 2/3 levels
> based on different fields i.e 1st group on
Does Solr support multilevel grouping? I want to group up to 2 or 3 levels
based on different fields, i.e. first group on field one, then within each
group, group by field two, etc.
I am aware of facet.pivot, which does the same but retrieves only the count.
Is there any way to get the documents as well, along with the count?
Hi Josium,
You have to try something like this:
http://localhost:8983/solr/mycollection/select?fq=Hobbit:*&indent=on&q=*:*&wt=json
This will return only the documents that contain the field Hobbit.
Well, I'm not very sure I understand what you are seeking; excuse me if my
answer is off topic.
Be
Hi Kent,
Thanks for your reply. I think I need to explain my server status further.
I'm using Solr 4.2.1 and a master-slave replication model.
On the master server, many Solr (Tomcat) instances are running
(the server has 64 cores and 128G RAM).
Now 4 Solr (Tomcat) instances are running and are allocated 32, 16, 1
We also came across this issue. I think it is not caused by GC time but by
the optimize action. Though I did not read the source code, I think that
when the master optimizes the index internally, it produces the replication
log file, and the replicas synchronize that log file, just like the DB
master and
Hi all,
My requirement is in line with https://issues.apache.org/jira/browse/SOLR-3955
I'm working on a project that has very low network bandwidth for the clients.
I'm using Solr 4.10
The problem:
I have ~1M documents with multiple fields (~50); many of them are indexed
and stored, and some of