Hello,
How does it work now? Do you have a list of slaves configured on a client
app? Btw what do you use to call Solr from .net?
On June 1, 2016 at 14:08, "shivendra.tiwari" <
shivendra.tiw...@arcscorp.net> wrote:
> Hi,
>
> I have to configure SolrCloud for load balancing on a .net applica
On Fri, Jun 3, 2016 at 1:25 AM, Erick Erickson
wrote:
> We can always use more documentation. One of the
> valuable things about people getting started is that it's an
> opportunity to clarify documents. Sometimes the people who
> develop/write the docs jump into the middle and assume
> the reade
In my question I confused you: there are 2 shards and 2 nodes on each
shard, one leader and one follower. When the collection was created, the number
of shards was 2 and the replication factor was 2.
Now the status is that shard 1 has 2 out-of-sync nodes, so they need to be
merged/synced. Do you still suggest the same? Add re
Thanks Erick,
Actually, I am using the SolrCloud fork of SolrNet from here:
https://github.com/vladen/SolrNet but I am unable to communicate with ZooKeeper.
Do you have any idea whether it is stable for SolrCloud? I am using SolrNet for
a simple master/slave setup and it works fine, but for cloud mode una
We can always use more documentation. One of the
valuable things about people getting started is that it's an
opportunity to clarify documents. Sometimes the people who
develop/write the docs jump into the middle and assume
the reader has knowledge they couldn't be expected to have
Hint, hint.
I am looking for the Lucidworks documentation.
OK Chris, I will contact Lucidworks then.
Thank you.
On Friday, June 3, 2016, Chris Hostetter wrote:
>
> Lucidworks Fusion is a commercial product, not a part of the Apache
> Software Foundation - questions about using it are not really appropriate
> for t
Yeah even though I'm still fairly new to this, I'm generally a good problem
solver or I'd never have gotten as far as I have already on my own (really
wanted to hire a Solr consultant and pushed VERY hard for it, but my boss
really likes us to figure things out on our own!) Just wish I'd found this
Lisheng:
I'm not too up on the details of Lucene block join, but I don't
think it really applies to access control. You'd have to
have documents grouped by access control (i.e. every
child doc of doc X has the same access control). If you
can do that, you can put an "authorization token" in the
do
One of the most valuable things I did when I started out
(way back in the Lucene-only days) was try to answer _one_
question every so often. Even if someone else beat me to the
punch, I benefitted from the research. And the rest of the time
I discovered things I never knew about Solr/Lucene!
I thi
Well thanks for asking the question because I had no idea what Andrew
posted was even possible... and I most definitely will be using that
myself! Totally brilliant stuff. I am so loving Solr... well, when it's not
driving me bonkers.
Mary Jo
On Thu, Jun 2, 2016 at 2:33 PM, Jamal, Sarfaraz <
sar
I think the confusion stems from the legacy implementation partially
conflating q.op with mm for users, when they are very different things.
q.op tells Solr how to insert boolean operators before they are converted
into occurs flags, and then downstream, mm applies on _only_ the SHOULD
occurs flags
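As a concrete illustration (a sketch only, assuming the stock techproducts
example collection and the edismax parser):

  # q.op=OR: both terms become SHOULD clauses, so mm=2 requires both to match
  curl 'http://localhost:8983/solr/techproducts/select?defType=edismax&q=apple+monitor&q.op=OR&mm=2'
  # q.op=AND: both terms become MUST clauses, leaving no SHOULD clauses for mm to act on
  curl 'http://localhost:8983/solr/techproducts/select?defType=edismax&q=apple+monitor&q.op=AND&mm=2'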
Fantastic! I'm sorry I couldn't find that JIRA before, and that you had
to track it down.
Yup, I noticed that for the docvalues with the ordinal map and I'm
definitely leveraging all that but I'm hitting the terms limit now and that
ends up pushing me over. I'll see about giving Zing/Azul a try
A pedantic nit... leader/replica is not much like
"old master/slave".
That out of the way, here's what I'd do.
1> use the ADDREPLICA to add a new replica for the shard
_on the same node as the bad one_.
2> Once that had recovered (green in the admin UI) and you
were confident of
its int
Basically it never reached consensus, see the discussion at:
https://issues.apache.org/jira/browse/SOLR-6638
If you can afford it I've seen people with very good results
using Zing/Azul, but that can be expensive.
DocValues can help for fields you facet and sort on,
those essentially move memory
Most people just put a hardware load balancer in front of their Solr
cluster and leave it at that. Since you're using .net, you can't
use CloudSolrClient which has a software LB built in so you'll
have to do something external.
Best,
Erick
On Wed, Jun 1, 2016 at 4:10 AM, shivendra.tiwari
wrote:
In a word "no". However, there is a _third_ option which is to explicitly
build the suggesters on whatever schedule you want by issuing a request
(using cURL or the like, perhaps with a cron job) where the
URL looks something like
http://localhost:8983/solr/techproducts/suggest?suggest=true&suggest.build=true
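For example, a nightly cron entry could look like this (a sketch only; the
dictionary name and schedule are placeholders):

  # rebuild the suggester at 2 AM every night
  0 2 * * * curl -s 'http://localhost:8983/solr/techproducts/suggest?suggest=true&suggest.dictionary=mySuggester&suggest.build=true' > /dev/null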
Lucidworks Fusion is a commercial product, not a part of the Apache
Software Foundation - questions about using it are not really appropriate
for this mailing list. You should contact Lucidworks support directly...
https://lucidworks.com/company/contact/
...with that in mind, the docs
Is the Solr Reference Guide what you are looking for?
https://www.apache.org/dyn/closer.cgi/lucene/solr/ref-guide/apache-solr-ref-guide-6.0.pdf
I don't know how to find older versions.
From: Aman Tandon [amantandon...@gmail.com]
Sent: Thursday, June 02, 2
Hi,
How can I download the Fusion documentation PDF? If anyone is aware,
please help me!
With Regards
Aman Tandon
Hi Jamal,
I assume you are using the Synonym token filter.
From that observation I assume you are using it only at indexing time.
This means that when you index you are:
1) given a row in the synonym.txt, you index all the terms in that row in place
of any of the terms in the row.
2) given any o
In addition to a separate proxy you could use iptables; I use this
technique for another app (running on port 5000, but requests come in on
port 80)...
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 5000
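The equivalent one-off command, adjusted for a default Solr install listening
on 8983, would look something like this (a sketch; the interface name and the
persistence step are assumptions, with iptables-persistent shown for
Debian/Ubuntu):

  # redirect incoming port 80 traffic to the port Solr actually listens on
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8983
  # persist across reboots (Debian/Ubuntu with iptables-persistent)
  iptables-save > /etc/iptables/rules.v4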
On 6/2/2016 12:51 PM, Teague James wrote:
> Thanks for that suggestion, but I had found that file and I had
> changed it to 80, but still no luck. Solr isn't running because it
> never started in the first place. I also tried the -p 80 flag using
> the install script and it failed.
Something I jus
Hi Shawn!
Thanks for that suggestion, but I had found that file and I had changed it to
80, but still no luck. Solr isn't running because it never started in the first
place. I also tried the -p 80 flag using the install script and it failed.
Tried: ./install_solr_service.sh solr-6.0.0.tgz –
I am having some difficulty understanding how to do something and if it is even
possible
I have tried the following sets of Synonyms:
1. sarfaraz, sas, sasjamal
2. sasjamal,sas => Sarfaraz
In the second instance, any searches with the word 'sasjamal' do not appear in
the results, as it has
On 6/2/2016 11:56 AM, Robert Brown wrote:
> My question is whether sending batches of 1,000 documents to Solr is
> still beneficial (thinking about docs that may not change), or if I
> should look at the MongoDB connector for Solr, based on the volume of
> incoming data we see.
>
> Would the connec
Hi all,
We are in the process of streamlining our indexing and trying to
increase some performance. We came across an issue where zookeeper seems to
hang for 10+ minutes (we've seen it as high as 40 min) after committing.
See the portion of the logs below.
Our indexing is being done us
Thank you Andrew, that looks like exactly what I am looking for =)
Thank you Robert, it looks like we are both doing it in similar fashion =)
Thank you MaryJo for jumping right in!
Sas
-Original Message-
From: Andrew Chillrud [mailto:achill...@opentext.com]
Sent: Thursday, June 2, 201
Ah yes I did misunderstand the question, I thought he was just saying the
count was not the same as what the facet in the first query had returned.
MJ
On Thu, Jun 2, 2016 at 2:11 PM, Robert Brown wrote:
> MaryJo, I think you've mis-understood. The counts are different simply
> because the 2n
It is possible to get the original facet counts for the field you are filtering
on (we have been using this since Solr 3.6). Don't know if this can be extended
to get the original counts for all fields however.
This syntax is described here:
https://cwiki.apache.org/confluence/display/solr/Fac
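For reference, the tag/exclude form looks roughly like this (a sketch built
from the thread's 'team' example; the tag name and filter value are arbitrary):

  q=video&facet=true
  &fq={!tag=teamfq}team:engineering
  &facet.field={!ex=teamfq}team

The fq is tagged and the facet.field excludes that tag, so the 'team' facet
still reports the counts it had before the filter was applied.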
MaryJo, I think you've misunderstood. The counts are different simply
because the 2nd query contains a filter on a facet value from the 1st
query - that's completely expected.
The issue is how to get the original facet counts (with no filters but
same q) in the same call as also filtering b
And you're saying the count for the second query is different than what was
returned in the facet? You may need to check for any defaults you have set
up in the solrconfig for the select parser. If, for instance, you have any
grouping going on but aren't doing grouping in your facet, that could
resu
Absolutely,
Here is what it looks like:
This brings the right counts as it should
http://**select?q=video&hl=true&hl.fl=*&hl.snippets=20&facet=true&facet.field=team
Then when I specify which team
http://**select?q=video&hl=true&hl.fl=*&hl.snippets=20&facet=true&facet.field=team&f
In other words... to diagnose such a problem it would really help to see
the exact parameters and filters you are using on each of the searches.
Mary Jo
On Thu, Jun 2, 2016 at 1:47 PM, Jamal, Sarfaraz <
sarfaraz.ja...@verizonwireless.com.invalid> wrote:
> Hello Everyone,
>
> I am working on impl
Hi,
Currently we import datasets from various sources (CSV, XML, JSON,
etc.) and POST them to Solr, after some pre-processing to get them into a
consistent format, and some other transformations.
We currently dump out to a json file in batches of 1,000 documents and
POST that file to Solr.
Rough
Jamal - what is your q= set to? And do you have an fq for the original
query? I have found that if you do a wildcard search (*:*) you have to be
careful about other parameters you set, as that can often result in the
numbers returned being off. In my case, my defaults had things like edismax
settings
Hello Everyone,
I am working on implementing some basic faceting into my project.
I have it working the way I want to, but I feel like there is probably a better
way than the way I went about it.
* I want to show a category and its count.
* when someone clicks a category, it sets a FQ= to that categ
I am looking to define a multi-valued field, for example a field 'links' that
extracts all links from the text field of each file.
In tika.config.xml I defined a regex for matching links, but when
the indexing process finishes I get just one value, even though in
schema.xml I defined the field link
sure.
the processes we run to do linkage take hours. we're processing ~600k
records, bouncing our users' data up against a few data sources that act as
'sources of truth' for us for the sake of this linkage. we get the top 3
results and run some quick checks on it algorithmically to determine if we
Without having a lot more data it's hard to say anything helpful.
_What_ is slow? What does "data linkage" mean exactly? Etc.
Best,
Erick
On Thu, Jun 2, 2016 at 9:33 AM, John Blythe wrote:
> hi all,
>
> having lots of processing happening using multiple solr cores to do some
> data linkage with
hi all,
having lots of processing happening using multiple solr cores to do some
data linkage with our customers' transactional data. it runs pretty slowly
at the moment. we were wondering if there were some solr or jetty tunings
that we could implement to help make it more powerful and efficient.
Hi, I would like to create a new field structure (tika-config.xml) for
indexing my files using Tika (ExtractingRequestHandler), and I just want a
working example to follow so that I can create my file. Thank you.
I forgot to mention another issue I ran into. It looks like "docValues" is
not supported with DateRangeField; is this true?
If I have:
Solr will fail to start, reporting the following error:
org.apache.solr.core.CoreContainer; Error creating core [openpages]:
Could not load conf for
Hi everyone,
This is two part question about date in Solr.
Question #1:
My understanding is, in order for me to index date types, the date data
must be formatted and indexed as such:
YYYY-MM-DDThh:mm:ssZ
What if I do not have the time part, should I be indexing it as such and
still get all
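If the time part is missing, one common approach (a sketch; the field and
collection names here are made up) is to pad it to midnight UTC when indexing:

  curl 'http://localhost:8983/solr/mycollection/update?commitWithin=10000' \
       -H 'Content-Type: application/json' \
       -d '[{"id":"doc1","published_date":"2016-06-02T00:00:00Z"}]'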
On Thu, 2016-06-02 at 09:26 -0400, Yonik Seeley wrote:
> My guess would be that the smaller limit causes large facet refinement
> requests to be sent out on the second phase.
> It's not clear what's happening after that though (i.e. why that
> causes things to crash)
The facet refinement can be a
hey Arcadius,
sorry I missed your reply and just saw it now. Thanks for the answers! I
will need to use some of those advanced settings for the suggesters, so
I'll have more questions/comments, and hopefully some fixes too (for
example for SOLR-8928 if I have the time)
xavi
On Thu, May 12, 2016
My guess would be that the smaller limit causes large facet refinement
requests to be sent out on the second phase.
It's not clear what's happening after that though (i.e. why that
causes things to crash)
-Yonik
On Thu, Jun 2, 2016 at 8:47 AM, Markus Jelsma
wrote:
> Hello,
>
> I ran accros an a
On 6/2/2016 1:28 AM, Selvam wrote:
> We need to run a heavy SOLR with 300 million documents, with each
> document having around 350 fields. The average length of the fields
> will be around 100 characters, it may have date and integers fields as
> well. Now we are not sure whether to have single se
Thanks to the many people who answered my question and gave me considerable
opinions and suggestions. I'd like to share here what came out of that help.
Installing Solarium
===================
# Under Windows
1. Solarium needs php 5.3.3 or up (5.3.4 is recommended). Make sure you install
correct php en
Hello,
I ran across an awkward situation where I collect all ~7.000.000 distinct
values for a field via faceting. To keep things optimized and reduce memory
consumption I don't do setFacetLimit(-1) but use a reasonable limit of 10.000 or
100.000.
To my surprise, Solr just stops or crashes. So, i
[Aside] Your quote style is confusing, leaving my lines unquoted and your new
lines quoted?? [/Aside]
> So in relation to the OP's sample queries I was pointing out that 'q.op=OR
> + mm=2' and 'q.op=AND + mm=2' are treated as identical queries by Solr 5.4,
> but 5.5+ will manipulate the occurs fl
I have investigated different Solr versions. I have found that 4.10.3 is
the last version that completely strips the HTML to text as expected.
4.10.4 starts introducing some HTML comments and Javascript and anything
over 5.0 is full of mangled HTML and attribute artefacts such as
"X-Parsed-By".
Hi,
On a side note, we also need all 350 fields to be stored and indexed.
On Thu, Jun 2, 2016 at 12:58 PM, Selvam wrote:
> Hello all,
>
> We need to run a heavy SOLR with 300 million documents, with each
> document having around 350 fields. The average length of the fields will be
> around 100 cha
Erick, you were right. I don't know why there is a difference when using SSL.
When I explicitly added commit=true it did enforce Last-Modified to be
updated. Case is closed, thank you.
On Jun 1, 2016 8:55 PM, "Erick Erickson" wrote:
> Issue an explicit commit to be sure.
>
> And as to whether the SSL
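For reference, an explicit commit on an update request looks something like
this (a sketch; the collection name and document are placeholders):

  curl 'http://localhost:8983/solr/mycollection/update?commit=true' \
       -H 'Content-Type: application/json' \
       -d '[{"id":"doc1","title_s":"example"}]'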
Hello all,
We need to run a heavy SOLR with 300 million documents, with each document
having around 350 fields. The average length of the fields will be around
100 characters, it may have date and integers fields as well. Now we are
not sure whether to have single server or run multiple servers (