Hi,
Maybe you have the wrong path. Try the following:
$ sudo solr-7.1.0/bin/install_solr_service.sh
Thanks,
Yasufumi.
2017-10-24 12:11 GMT+09:00 Dane Terrell :
> Hi, I'm new to Apache Solr. I'm looking to install Apache Solr 7.1.0 on my
> localhost computer. I downloaded and extracted the tar file in my
Thanks Emir and Zisis.
I added maxRamMB to the filterCache and reduced its size. I could see the
benefit immediately: the hit ratio went to 0.97. Here's the configuration:
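(The original XML didn't survive the digest; a representative FastLRUCache
entry with maxRamMB, all values illustrative rather than the poster's actual
settings, would look like:)
<filterCache class="solr.FastLRUCache"
             size="4096"
             initialSize="512"
             autowarmCount="0"
             maxRamMB="200"/>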
It seemed to be stable for a few days; the cache hits and JVM pool utilization
seemed to be well within the expected range. But th
Pinging again. Does anyone have ideas on this? Thanks
On Sat, Oct 14, 2017 at 4:52 PM, Sundeep T wrote:
> Hello,
>
> In our scale environment, we see that the deep-paging queries using
> cursorMark are running really slowly. When we traced the calls, we saw
> that the second query, which queries the
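For context, a minimal cursorMark request sequence looks like this (field
names illustrative); the sort must include the uniqueKey field, and each
response returns a nextCursorMark to pass into the following request:
q=*:*&sort=timestamp asc,id asc&rows=100&cursorMark=*
q=*:*&sort=timestamp asc,id asc&rows=100&cursorMark=<nextCursorMark from previous response>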
Hi, I'm new to Apache Solr. I'm looking to install Apache Solr 7.1.0 on my
localhost computer. I downloaded and extracted the tar file in my tmp folder.
But when I try to run the script, I get... sudo:
solr-7.1.0/solr/bin/install_solr_service.sh: command not found
or
solr-7.1.0/solr/bin/install_solr_ser
I need to implement a rather customized sort in Solr; I would appreciate
some high-level pointers on the following:
1/ Could I make use of a customized Collector (Lucene level) in Solr?
2/ Could I make use of a function query (Lucene level) for customized sorting
in Solr?
3/ I wou
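On 2/: note that Solr can sort on a function query directly, with no custom
Lucene code; a sketch, with the field name purely illustrative:
sort=recip(ms(NOW,mod_date),3.16e-11,1,1) desc, score desc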
Thanks for the quick reply, Erick.
To follow up:
“
Well, first you can explicitly set legacyCloud=true by using the
Collections API CLUSTERPROP command. I don't recommend this, mind you,
as legacyCloud will not be supported forever.
”
Yes, but like you say: we’ll have to deal with at so
Well, first you can explicitly set legacyCloud=true by using the
Collections API CLUSTERPROP command. I don't recommend this, mind you,
as legacyCloud will not be supported forever.
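For reference, that call would look something like this (host and port
illustrative):
http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=legacyCloud&val=true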
I'm not following something here though. When you say:
"The desired final state of such a deployment is a fully co
Hi,
I'm trying to extract several similarity measures from Solr for use in a
learning to rank model. Doing this mathematically involves taking the dot
product of several different matrices, which is extremely fast for non-huge
data sets (e.g., millions of documents and queries). However, to extrac
Hi everyone,
I'm working on upgrading a set of clusters from Solr 4.10.4 to Solr 7.1.0.
Our deployment tooling no longer works given that legacyCloud defaults to false
(SOLR-8256) and I'm hoping to get some advice on what to do going forward.
Our setup is as follows:
* we run in AWS with mult
Thanks for your reply.
Can the recip function be used to boost a numeric field here:
recip(ord(rating),100,1,1)
You can pass additional bq params in the query.
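For example, extra bq parameters can simply be appended to the request; a
sketch using the fields from the question below (not tested):
q=apple&defType=edismax&bq=field1:(orange)^400&bq=field2:(4)^500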
~Aravind
On Oct 23, 2017 4:10 PM, "ruby" wrote:
> If I want to boost multiple fields using the Edismax query parser, is the
> following the correct way of doing it:
>
> defType=edismax
> bq=field1:(apple)^500
> bq=field1:(orange)^400
> bq=field1:(pear)^300
> bq=field2:
If I want to boost multiple fields using the Edismax query parser, is the
following the correct way of doing it:
defType=edismax
bq=field1:(apple)^500
bq=field1:(orange)^400
bq=field1:(pear)^300
bq=field2:(4)^500
bq=field2:(2)^100
bf=recip(ms(NOW,mod_date),3.16e-11,1,1)
bf=recip(ms(NOW,creation_date),3.16e-11,1,1)
And
John Davis wrote:
> We are seeing really slow facet performance with new solr release.
> This is on an index of 2M documents.
I am currently running some performance experiments on simple String faceting,
comparing Solr 4 & 6. There is definitely a performance difference, but it is
not trivial
1> Merging takes place up until the max segment size is reached (5G in
the default TieredMergePolicy).
2> There are a couple of options, again config changes for TieredMergePolicy;
<int name="segmentsPerTier">10</int>
might help.
You could also try upping this (the default is 5G):
<double name="maxMergedSegmentMB">5000</double>
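Putting those together, a sketch of the corresponding solrconfig.xml block
(values illustrative, not a recommendation):
<mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
  <int name="maxMergeAtOnce">10</int>
  <int name="segmentsPerTier">10</int>
  <double name="maxMergedSegmentMB">5000</double>
</mergePolicyFactory>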
Best,
Erick
On Mon, Oct 23, 2017 at 10:34
DocValues don't work for multivalued fields. I just started a separate
thread with more debug info. It is a bit surprising that facet computation
is so slow even when the query matches only hundreds of docs.
On Mon, Oct 23, 2017 at 6:53 AM, alessandro.benedetti
wrote:
> Hi John,
> first of all, I may
Thanks, Erick.
(Beginner in Solr.) A few questions:
1. Does merging take place only when we have deleted docs?
When my segment count reaches 35+, search gets slow. Only after performing a
force merge on the index is search efficient.
2. Is there any way we can reduce the number of segments
Amrit,
Thanks for your reply. I have removed that
1000
1
15
false
1024
2
2
hdfs
1
0
I find that the Lucene segments in the backend are not merging and the
segment count grows very large. I changed the merge policy from
LogByteSizeMergePolicy to TieredMergePolicy.
I tried altering properties according to the Solr documentation, but my
segment count is still high.
I am using Solr 6.1.x.
Hello,
We are seeing really slow facet performance with the new Solr release. This
is on an index of 2M documents. A few things we've tried:
1. method=uif, however that didn't help much (the facet fields have
docValues=false since they are multi-valued). Debug info below.
2. changing the query (q=) that
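For reference, the uif attempt corresponds to request parameters along these
lines (facet field name illustrative):
q=*:*&facet=true&facet.field=category&facet.method=uif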
And please define what you mean by "merging is not working". One
parameter is the max segment size, which defaults to 5G. Segments
at or near that size are not eligible for merging unless they have
around 50% deleted docs.
Best,
Erick
On Mon, Oct 23, 2017 at 3:11 AM, Amrit Sarkar wrote:
> Chandru,
Great, thanks for bringing closure to this!
Oh, and one addendum. I wrote:
"It'll probably be around forever since replication is used as a fall-back"
Forget the "probably" there. In 7.x there are new replica types that
use this as their way of distributing the index; see the PULL replica
type
In addition: bf=recip(ms(NOW/DAY,unixdate),3.16e-11,5,0.1) is an additive
boost.
I tend to prefer multiplicative ones, but that is up to you [1].
You can specify the order of magnitude of the values generated by that
function.
This means that you have control over how much the date will affect the
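A multiplicative version of the same boost goes through the edismax boost
parameter instead; a sketch (query value illustrative):
q=apple&defType=edismax&boost=recip(ms(NOW/DAY,unixdate),3.16e-11,5,0.1)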
It strictly depends on the kind of features you are using.
At the moment there is just one cache for all the features.
This means that even if you have 1 query-dependent feature and 100
document-dependent features, a different value for the query-dependent one
will invalidate the cache entry for the
Hi John,
first of all, I may state the obvious, but have you tried docValues ?
Apart from that, a friend of mine (Diego Ceccarelli) was discussing a
probabilistic implementation similar to HyperLogLog [1] to approximate
facet counting.
I didn't have time to take a look in detail / implement
Has anyone had experience tuning feature caches? Do any of the values below
look unreasonable?
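(The values themselves were lost from the digest; for comparison, the stock
LTR feature-vector cache entry from the Solr Ref Guide looks like this, with
the documented default sizes:)
<cache name="QUERY_DOC_FV"
       class="solr.search.LRUCache"
       size="4096"
       initialSize="2048"
       autowarmCount="4096"
       regenerator="solr.search.NoOpRegenerator"/>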
--Brian
-Original Message-
From: Brian Yee [mailto:b...@wayfair.com]
Sent: Friday, October 20, 2017 1:41 PM
To: solr-user@lucene.apache.org
Subject: LTR feature extraction performance issues
Hi Erick,
sorry for the slow reply. You are right, the information is not
persisted. Once I do a restart, there is no information about the
replication source anymore. That explains why I could not find it
persisted anywhere ;-) I thought I had tested that last week but must
have not done so a
You mentioned that you are on v6.6, but in case someone else uses this, just
to add that maxRamMB was added to FastLRUCache in version 6.4.
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
> On 23 Oct 201
Hi!
I have one main collection of people and a few more collections with
additional data. All search queries are on the main collection with
joins to one or more additional collections. A simple example would
be:
(*:* {!join from=people_person_id to=people_person_id
fromIndex=fundraising_donor_in
I was able to resolve the issue. I was adding the certificate, and I had also
combined my certificate and private key. So when I added both the standalone
certificate and the combined certificate-plus-private-key, it was breaking. I
removed the standalone certificate and that resolved the issue. So I had my
root certificates
shamik wrote
> I was not aware of the maxRamMB parameter; it looks like it's only available
> for queryResultCache. Is that what you are referring to? Can you please
> share your cache configuration?
I've set up the filterCache entry inside solrconfig.xml as follows:
I had a look inside the FastLRUCache code
On Thu, 2017-10-19 at 08:56 -0700, Nawab Zada Asad Iqbal wrote:
> I see three colors in the JVM usage bar: dark gray, light gray, and
> white (left to right). Only one dark and one light color made sense
> to me (as I could interpret them as used vs available memory), but
> there is light gray betwee
Hi Shamik,
I agree that your filter cache is not the reason for the OOMs. Can you confirm
that your fieldCache and fieldValueCache sizes are not consuming too much
memory? The next on the list would be some heavy faceting with pivots, but you
mentioned that all fields are low cardinality. Do you see
Hi,
I have a problem with the join function (a query across two collections).
I have two collections,
"ColAAA" and "ColBBB".
ColAAA => field ABC, fieldtype "text_general" or "string"
ColBBB => fields XYZ and DEF, fieldtype "string"
Example of field "ABC" -> "SomeWord 250kg"
With the join function I want t
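For what it's worth, a cross-collection join on those fields, run against
ColAAA, would typically be written like this (the query value is illustrative;
with fromIndex, both collections must live on the same node):
q={!join from=XYZ to=ABC fromIndex=ColBBB}DEF:somevalue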
Chandru,
I didn't try the above config, but why have you defined both "mergePolicy" and
"mergePolicyFactory", passing different values for the same parameters?
> 10
> 1
>
>
> 10
> 10
>
>
Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://