Hello:
We are using the export handler in SolrCloud to get some data; we only
request one field, whose type is tdouble. It worked well at the
beginning, but recently we have seen high CPU usage on all the SolrCloud nodes.
We took some thread dumps and found the following information:
java.lang.Thread.
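For reference, a typical export-handler request looks like the following
(collection and field names here are only illustrative); /export requires
docValues on the exported field and an explicit sort parameter:
curl "http://localhost:8983/solr/mycollection/export?q=*:*&fl=price&sort=price+asc"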
It really sounds like you're re-inventing SolrCloud, but
you know your requirements best.
Erick
On Wed, Nov 2, 2016 at 8:48 PM, Kent Mu wrote:
> Thanks Erick!
> Actually, similar to SolrCloud, we split our data into 8 customized shards (1
> master with 4 slaves), and each with one Citrix and two Apa
Thanks Erick!
Actually, similar to SolrCloud, we split our data into 8 customized shards (1
master with 4 slaves), and each with one Citrix and two Apache web servers to
reduce server pressure through load balancing.
As we are running an e-commerce site, the number of reviews of the products we
sell grows v
@Rohit, you can look into this:
http://www.javaworld.com/article/2074996/hashcode-and-equals-method-in-java-object---a-pragmatic-concept.html
A good article on hashCode and equals.
Here is a wild guess: whenever I see a 5-second delay in networking, I
think DNS timeouts. YMMV, good luck.
cheers -- Rick
On 2016-11-01 04:18 PM, Dave Seltzer wrote:
Hello!
I'm trying to utilize SolrCloud to help with a hash search problem. The
record set has only 4,300 documents.
When I r
On 11/2/2016 4:00 PM, Rishabh Patel wrote:
> Hello, I downloaded a tar tagged as 6.2.1 from GitHub. Upon running
> "ant test" from the lucene folder, all tests pass; however, they repeatedly
> fail from the Solr folder. The tests are running on an Ubuntu 14.04 box with
> Java 8. These are the truncated statement
Then I can only guess that in the current configuration the decrypted password
is an empty string.
Try manually replacing some characters in the encpwd.txt file to see if you get
different errors; try deleting the file completely to see if you get
different errors. Try adding a new line to this file; try to
Hello,
I downloaded a tar tagged as 6.2.1 from GitHub. Upon running "ant test" from
the lucene folder, all tests pass; however, they repeatedly fail from the Solr
folder. The tests are running on an Ubuntu 14.04 box with Java 8.
These are the truncated statements from the output:
[junit4] 2> 685308 INFO
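If it helps triage, a single failing suite can usually be re-run in isolation
from the solr directory with something like the following (the test class name
is a placeholder):
ant test -Dtestcase=TestSomething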
I should have mentioned that I verified connectivity with plain passwords:
From the same machine that Solr's running on:
solr@000650cbdd5e:/opt/solr$ mysql -uroot -pOakton153 -h local.mysite.com
mysite -e "select 'foo' as bar;"
+-----+
| bar |
+-----+
| foo |
+-----+
Also, if I add the plain-te
In MySQL, this command will explicitly allow connections from the remote host
ICZ2002912; check the MySQL documentation:
GRANT ALL ON mysite.* TO 'root'@'ICZ2002912' IDENTIFIED BY 'Oakton123';
On November 2, 2016 at 4:41:48 PM, Fuad Efendi (f...@efendi.ca) wrote:
This is the root of the problem:
"Acce
This is the root of the problem:
"Access denied for user 'root'@'ICZ2002912' (using password: NO) “
First of all, ensure that plain (non-encrypted) password settings work for you.
Check that you can connect using the MySQL client from ICZ2002912 to your MySQL
instance
I suspect you need to a
I'm at a brick wall. Here's the latest status:
Here are some sample commands that I'm using:
*Create the encryptKeyFile and encrypted password:*
encrypter_password='this_is_my_encrypter_password'
plain_db_pw='Oakton153'
cd /var/docker/solr_stage2/credentials/
echo -n "${encrypter_password}" >
My 2 cents (rounded):
Quote: "the size of our index data is more than 30GB every year now”
- is it the size of *data* or the size of *index*? This is super important!
You can have petabytes of data, growing terabytes a year, and your index files
will grow only few gigabytes a year at most.
Not
Ref:
https://cwiki.apache.org/confluence/display/solr/Shards+and+Indexing+Data+in+SolrCloud
If an update specifies only the non-routed id, will SolrCloud select the
correct shard for updating?
If an update specifies a different route, will SolrCloud delete the
previous document with the same
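For illustration only (collection name and ids invented): with the compositeId
router the target shard is derived from the prefix in the id, and a _route_
parameter can be passed explicitly on requests:
curl 'http://localhost:8983/solr/mycollection/update?commit=true' -H 'Content-Type: application/json' -d '[{"id":"customer1!review42"}]'
curl 'http://localhost:8983/solr/mycollection/select?q=*:*&_route_=customer1!'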
You need to move to SolrCloud when it's
time to shard ;).
More seriously, at some point simply adding more
memory will not be adequate. Either your JVM
heap will grow to a point where you start encountering
GC pauses, or the time to serve requests will
increase unacceptably. "When?" you ask?
Hi Rafael,
I suggest checking all the fields present in your qf, looking for one (or
more) where the stopwords filter is missing; very likely there is such a field.
The issue you're experiencing is caused by an attempt to match a stopword
on a "non-stopwor
Hi guys,
I came across the following issue. I configured an edismax query parser
with *mm=100%*, and when the user types in a stopword, no result is
returned (stopwords are filtered before indexing, but, somehow, either they
are not being filtered before searching or they are taken into acco
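As a sketch of the setup described (names invented), a request of this shape
returns zero results when the query consists only of a stopword:
curl "http://localhost:8983/solr/mycollection/select?defType=edismax&q=the&qf=title+description&mm=100%25"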
Thank you
2016-11-02 7:35 GMT+03:00 David Smiley :
> I plan on adding this in the near future... hopefully for Solr 6.4.
>
> On Mon, Oct 31, 2016 at 7:06 AM Никита Веневитин <
> venevitinnik...@gmail.com>
> wrote:
>
> > I've built query as described in https://cwiki.apache.org/confluence/x/ZYDxAQ