On Fri, Sep 5, 2014 at 9:34 PM, Walter Underwood
wrote:
> What would be a high mm value, 75%?
Walter, I suppose that the length of the search result influences the run
time. So, for a particular query and index, a high mm value is one that
significantly reduces the search result length.
What we need is a function like scale(field,min,max) that operates only on
the results that come back from the search.
scale() takes the min and max from the field across the whole index, not
necessarily those in the results.
I cannot think of a solution; max() only looks at one field, not across
fields.
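For reference, a sketch of how scale() is usually invoked as a boost function (the field name popularity and the query are hypothetical):

```
q=foo&defType=edismax&boost=scale(popularity,0,1)
```

scale() normalizes against the index-wide min/max of the field, which is exactly why it cannot be restricted to the current result set.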
Whoa! First, you should really NOT be using sfloat (or pfloat) or any
of their variants unless you're waaay back on 1.4. Those were fine in
their time, but the Trie numeric types (float/tfloat and the rest) are
vastly preferred; they're also more efficient in terms of CPU cycles and
storage.
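For illustration, a Trie-based float type as it appears in the stock Solr 4.x example schema.xml (the field name price is an assumption for the example):

```xml
<fieldType name="tfloat" class="solr.TrieFloatField" precisionStep="8"
           positionIncrementGap="0"/>
<field name="price" type="tfloat" indexed="true" stored="true"/>
```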
second, and assuming y
firewall effect this, try and test. good luck.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Access-solr-cloud-via-ssh-tunnel-tp4159224p4159305.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have a requirement to preserve the 0 after the decimal point; currently,
with the field type below:
27.50 is stripped to 27.5
27.00 is stripped to 27.0
27.90 is stripped to 27.9
27.5
I also tried using double, but even then the zeros are getting stripped.
27.5
Input data:
27.50
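The stripping is not Solr-specific: a float or double has no memory of its textual formatting, so "27.50" parses to exactly the same value as "27.5". A minimal sketch (class name hypothetical) showing that the trailing zero survives only in a string representation:

```java
import java.util.Locale;

// 27.50 and 27.5 are the same double; trailing zeros exist only in text.
public class TrailingZeros {
    public static void main(String[] args) {
        double d = Double.parseDouble("27.50");
        System.out.println(d); // the trailing zero is gone: 27.5

        // Re-apply the formatting at display time
        // (or keep a stored string copy of the raw input).
        System.out.println(String.format(Locale.ROOT, "%.2f", d)); // 27.50
    }
}
```

In schema terms, that means either storing the raw input in a string field alongside the numeric one, or re-formatting on the client when rendering.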
Hello,
I think the documentation and example files for Solr 4.x need to be
updated. If someone will let me know I'll be happy to fix the example
and perhaps someone with edit rights could fix the reference guide.
Due to dirty OCR and over 400 languages we have over 2 billion unique
terms in our
In a test scenario, I used stunnel for connections between some
zookeeper observers and the central ensemble, as well as between a SolrJ
4.9.0 client and the central zookeepers. This is entirely transparent
modulo performance penalties due to network latency and ssl overhead. I
finally ended up wit
Remove
On Tue, Sep 16, 2014 at 2:19 PM, Xavier Morera
wrote:
> I think what some people are actually saying is "burn in hell Aaron Susan
> for using a solr apache dl for marketing purposes"?
>
> On Tue, Sep 16, 2014 at 8:31 AM, Suman Ghosh
> wrote:
>
> > Remove
> >
> > On Mon, Sep 15, 2014 at 1
Not sure if this will work, but try using ssh to set up a SOCKS proxy via
the -D command option.
Then use the socksProxyHost and socksProxyPort properties via the java
command line (i.e. java -DsocksProxyHost="localhost") or
System.setProperty("socksProxyHost","localhost") from your code. Make sure
to sp
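A sketch of the two pieces, assuming a tunnel opened with something like `ssh -D 1080 user@gateway` (port 1080 and the class name are assumptions; any free local port works). The property names are the standard java.net SOCKS settings:

```java
// Point the JVM's socket layer at the local SOCKS proxy that ssh -D opened.
public class SocksSetup {
    public static void main(String[] args) {
        System.setProperty("socksProxyHost", "localhost");
        System.setProperty("socksProxyPort", "1080");
        // Subsequent java.net sockets (including SolrJ's) now go through the
        // proxy; equivalent to -DsocksProxyHost=localhost -DsocksProxyPort=1080
        // on the command line.
        System.out.println(System.getProperty("socksProxyHost")); // localhost
    }
}
```

Setting the properties before the first connection is opened matters; they are read when sockets are created.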
I am in a situation where I need to access a SolrCloud cluster behind a
firewall. I have a tunnel enabled to one of the ZooKeeper nodes as a
starting point and the following test code:
CloudSolrServer server = new CloudSolrServer("localhost:2181");
server.setDefaultCollection("test");
SolrPingResponse p = server.ping();
I have a very weird problem that I'm going to try to describe here to see
if anyone has any "ah-ha" moments or clues. I haven't created a small
reproducible project for this but I guess I will have to try in the future
if I can't figure it out. (Or I'll need to bisect by running long Hadoop
jobs...
I checked, and these 'insanity' cache keys correspond to fields we use for
both grouping and faceting. The same behavior is documented here:
https://issues.apache.org/jira/browse/SOLR-4866, although I have a single
shard for every replica, which the JIRA says is a setup that should not
generate these entries.
Are you asking about the consultant or about the product itself? The
product itself is free and open source, unless you want to get one of the
several commercial distributions. In the latter case, you may want to reach
out to their sales team directly.
If you are looking for a consultant/company to sup
We wrote a script which queries each Solr instance in the cloud
(http://$host/solr/replication?command=details), subtracts the
'replicableVersion' number from the 'indexVersion' number, converts to minutes,
and alerts if the minutes exceed 20. We get alerted many times a day. The soft
commit set
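The arithmetic of that check, sketched under the assumption that indexVersion and replicableVersion are millisecond timestamps (which is how Solr derives them by default); the class and method names are ours, not Solr API:

```java
// Replication lag check: the leader's indexVersion minus the follower's
// replicableVersion, converted from milliseconds to minutes.
public class ReplicationLag {
    static boolean shouldAlert(long indexVersion, long replicableVersion,
                               long thresholdMinutes) {
        long lagMinutes = (indexVersion - replicableVersion) / (60L * 1000L);
        return lagMinutes > thresholdMinutes;
    }

    public static void main(String[] args) {
        long now = 1_700_000_000_000L; // hypothetical timestamp
        System.out.println(shouldAlert(now, now - 25 * 60_000L, 20)); // true
        System.out.println(shouldAlert(now, now - 5 * 60_000L, 20));  // false
    }
}
```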
Depending on the size of the individual records returned, I'd use a
decent-size window (to minimize network and marshalling/unmarshalling
overhead) of maybe 1000 items sorted by id, and use that in
combination with cursorMark. That will be easier on the server side in
terms of garbage collection
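cursorMark itself hands back an opaque mark, but the underlying idea is keyset paging over a stable sort (here, by id). A self-contained sketch of that idea, not of the SolrJ API (all names are ours):

```java
import java.util.ArrayList;
import java.util.List;

// Keyset paging: each page resumes after the last id seen, so the server
// never has to skip over earlier rows the way start=N deep paging does.
public class KeysetPaging {
    static List<Integer> nextPage(List<Integer> sortedIds, int afterId, int rows) {
        List<Integer> page = new ArrayList<>();
        for (int id : sortedIds) {
            if (id > afterId && page.size() < rows) {
                page.add(id);
            }
        }
        return page;
    }

    public static void main(String[] args) {
        List<Integer> ids = List.of(1, 3, 5, 7, 9, 11);
        System.out.println(nextPage(ids, Integer.MIN_VALUE, 3)); // [1, 3, 5]
        System.out.println(nextPage(ids, 5, 3));                 // [7, 9, 11]
    }
}
```

With real cursorMark, the client loops: send the mark, read the page, and stop when the returned nextCursorMark equals the one sent.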
Performance would be better getting them all at the same time, but the
behavior would kind of stink (long pause before a response, big results
stuck in memory, etc).
If you're using a relatively up-to-date version of Solr, you should check
out the "cursormark" feature:
https://wiki.apache.org/solr
If I query for IDs and I do not care about order, should I still expect
better performance paging the results? (e.g. rows=1000 or rows=1) The
use case is that I need to get all of the IDs regardless (there will be
thousands, maybe 10s of thousands, but not millions)
Example query:
http://doma
Hi Team - I want to recommend the Apache Solr Enterprise Search engine for
one of our clients. Could you please send the license/support cost & features of
the product?
Rgds,
Nitin Kumar Gupta
Accenture Technology - IDC
3rd to 5th floor, Tower-B, SP Infocity, Plot No. 243,
Udyog Vihar, Phase-
Remove
On Mon, Sep 15, 2014 at 11:35 AM, Aaron Susan wrote:
> Hi,
>
> I am here to inform you that we are having a contact list of *Mongo DB
> Users *would you be interested in it?
>
> Data Field’s Consist Of: Name, Job Title, Verified Phone Number, Verified
> Email Address, Company Name & Addre
Can anyone help me with this? Or does Solr not support adding additional child
documents to an existing parent document?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Append-children-documents-for-nested-document-tp4157087p4159152.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks for the response. I've been working on solving some of the most
evident issues, and I also added your garbage collector parameters. First of
all, the Lucene field cache is being filled with some entries which are
marked as 'insanity'. Some of these were related to a custom field that we
use fo