Hi Erick,
Thanks for the reply. It seems I have not done all my homework yet.
We used to use Solr 3.6.2 in the old environment (we use it in
conjunction with Drupal). When I ran into connectivity problems on the
new server, I decided instead to install the latest version of Solr
(5.5.0). I read
Hello everyone
I am running Solr 4.10 on port 8984, having changed the default port in
etc/jetty.xml. I am now trying to index all my JSON files into the Solr
running on 8984. This is the command:
curl 'http://localhost:8984/solr/update?commit=true' --data-binary *.json
-H 'Content-type:application/json'
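For what it's worth, curl's --data-binary does not expand a *.json glob
(the shell does), and reading from a file needs a leading @. A minimal
sketch that posts each file in turn, reusing the URL above and assuming
each file holds a well-formed JSON add request:
for f in *.json; do
  curl 'http://localhost:8984/solr/update?commit=false' \
       -H 'Content-type: application/json' \
       --data-binary @"$f"
done
# one commit at the end is cheaper than one per file
curl 'http://localhost:8984/solr/update?commit=true'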
Hi Shyam,
Yes, I've stopped and restarted the process a number of times. I get the
same result every time.
J
"Getting information off the Internet is like taking a drink from a fire
hydrant." - Mitchell Kapor
Absolutely. You haven't said which version of Solr you're using,
but there are several possibilities:
1> create the collection with replicationFactor=1, then use the
ADDREPLICA command to specify exactly what node the replicas
for each shard are created on with the 'node' parameter.
2> For recent
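A sketch of option 1>, assuming a collection named mycoll, an uploaded
configset named myconf, and a node name (host2:8983_solr) taken from the
live cluster state; all three are placeholders:
# create with one replica per shard ...
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=mycoll&numShards=2&replicationFactor=1&collection.configName=myconf'
# ... then place each extra replica exactly where you want it
curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycoll&shard=shard1&node=host2:8983_solr'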
Do not, repeat NOT, try to "cure" the "Overlapping onDeckSearchers"
warning by bumping this limit! What it means is that your commits
(either hard commits with openSearcher=true or soft commits) are
happening far too frequently, and your Solr instance is trying to do
all sorts of warming work that is immediately thrown away
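The usual remedy is to lengthen the commit intervals rather than raise
maxWarmingSearchers. A sketch via the Config API (available from Solr 5
onward; on older versions edit the <autoCommit>/<autoSoftCommit> blocks
in solrconfig.xml instead; the collection name and the 60s/5min values
are placeholders to tune):
curl 'http://localhost:8983/solr/mycoll/config' \
     -H 'Content-type:application/json' -d '{
  "set-property": {
    "updateHandler.autoCommit.maxTime": 60000,
    "updateHandler.autoCommit.openSearcher": false,
    "updateHandler.autoSoftCommit.maxTime": 300000
  }
}'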
Good to meet you!
It looks like you've tried to start Solr a time or two. When you start
up the "cloud" example
it creates
/opt/solr-5.5.0/example/cloud
and puts your SolrCloud stuff under there. It also automatically uploads
your configuration sets to ZooKeeper. When I get this kind of thing, I u
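If the goal is just a clean slate, one way (a sketch, assuming nothing
under example/cloud needs to be kept, since this discards its data):
/opt/solr-5.5.0/bin/solr stop -all
rm -rf /opt/solr-5.5.0/example/cloud
/opt/solr-5.5.0/bin/solr start -e cloud -noprompt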
Hi Toke
The number of collections is just 10. One collection has 43 shards, and
each shard has two replicas. We import data from Oracle continuously
while our systems provide search service.
There are "Overlapping onDeckSearchers" warnings in my solr.logs. What is
the meaning of the "Ove
Hi Jarus,
Have you tried stopping the solr process and restarting the cluster again?
Thanks
Shyam
On Tue, Mar 29, 2016 at 8:36 PM, Jarus Bosman wrote:
> Hi,
>
> Introductions first (as I was taught): My name is Jarus Bosman, I am a
> software developer from South Africa, doing development in J
bq: where I see that the number of deleted documents just
keeps on growing and growing, but they never seem to be deleted
This shouldn't be happening. The default TieredMergePolicy weights
segments for merging (which happens automatically) heavily by the
percentage of deleted docs. Here's a
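If the ratio still gets out of hand, deleted docs can also be purged on
demand. A sketch (the core name is a placeholder; expungeDeletes merges
only segments that contain deletes, while optimize rewrites the whole
index and is much heavier):
curl 'http://localhost:8983/solr/mycore/update?commit=true&expungeDeletes=true'
curl 'http://localhost:8983/solr/mycore/update?optimize=true'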
I'm trying to index some pages from Medium, but I get a 403 error. I
believe it is because Medium does not accept the user agent Solr sends.
Has anyone experienced this? Do you know how to change it?
I appreciate any help.
500
94
Server returned HTTP response code: 403 for URL:
https://medium.com
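A quick way to test the user-agent theory from the shell (a sketch; the
browser-like UA string is just an example):
curl -s -o /dev/null -w '%{http_code}\n' -A 'Solr' https://medium.com
curl -s -o /dev/null -w '%{http_code}\n' -A 'Mozilla/5.0' https://medium.com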
Medium redirects from http to https, so you would also need the logic
for dealing with the https security handshake.
-- Jack Krupansky
On Tue, Mar 29, 2016 at 7:54 PM, Jeferson dos Anjos <
jefersonan...@packdocs.com> wrote:
> I'm trying to index some pages of the medium. But I get error 403. I
> believe
I thought I had sent this reply over the weekend. I had it all ready to
go, but it's still here waiting in my Drafts folder, so I'll send it now.
On 3/25/2016 11:05 AM, Victor D'agostino wrote:
> I am trying to set up a Solr Cloud environment of two Solr 5.4.1 nodes
> but the data are always inde
Hi Max,
Why not implement org.apache.lucene.analysis.util.ResourceLoaderAware?
Existing implementations all load/read text files.
Ahmet
On Wednesday, March 30, 2016 12:14 AM, Max Bridgewater
wrote:
HI,
I am facing the exact issue described here:
http://stackoverflow.com/questions/25623797
Alright, based on https://issues.apache.org/jira/browse/SOLR-5743 I can
assume that limit and mincount for the BlockJoin part will remain an open
issue for some time ...
Therefore, the answer is no as of Solr 5.5.0.
Thanks to Mikhail Khludnev for working on the subject.
> Tuesday, 29 March 2016,
Hi,
I am facing the exact issue described here:
http://stackoverflow.com/questions/25623797/solr-plugin-classloader.
Basically, I'm writing a Solr plugin by extending the SearchComponent
class. My new class is part of an archive, a.jar, and it depends on
another jar, b.jar. I placed both jars in my own fo
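One common workaround (a sketch, assuming the standard Solr layout; the
paths are placeholders) is to put both jars in the Solr home lib
directory, which is loaded by a single classloader shared by all cores,
instead of a per-core lib folder:
mkdir -p "$SOLR_HOME/lib"
cp a.jar b.jar "$SOLR_HOME/lib/"
# restart so the jars are picked up
bin/solr restart -p 8983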
On 3/29/2016 1:58 AM, Victor D'agostino wrote:
> Thanks for your help, here is what I've done.
>
> 1. I deleted zookeepers and Solr installations.
> 2. I setup zookeepers on my two servers.
> 3. I successfully setup Solr Cloud node 1 with the same API call (1
> collection named db and two cores) :
Mikhail,
I totally see the point: the corresponding wiki page
(https://cwiki.apache.org/confluence/display/solr/BlockJoin+Faceting)
does not mention it and says it's an experimental feature.
Is it correct that no additional options (limit, mincount, etc.) can be
set at all?
Or more s
Alisa,
There is no such thing as child.facet.limit, etc.
On Tue, Mar 29, 2016 at 6:27 PM, Alisa Z. wrote:
> So the first issue eventually solved by adding facet: {top_terms_by_doc:
> "unique(_root_)"} AND sorting the outer facet buckets by this faceting:
>
> curl http://localhost:8985/solr/enro
So the first issue was eventually solved by adding facet:
{top_terms_by_doc: "unique(_root_)"} and sorting the outer facet buckets
by this facet:
curl http://localhost:8985/solr/enron_path_w_ts/query -d
'q={!parent%20which="type_s:doc"}type_s:doc.userData%20%2BSubject_t:california&rows=0&
json
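For readability, the same request reconstructed as a sketch (FIELD_NAME
stands in for whatever child field the truncated command faceted on;
everything else is taken from the fragment above):
curl http://localhost:8985/solr/enron_path_w_ts/query \
  --data-urlencode 'q={!parent which="type_s:doc"}type_s:doc.userData +Subject_t:california' \
  --data-urlencode 'rows=0' \
  --data-urlencode 'json.facet={top: {type: terms, field: FIELD_NAME,
    sort: "top_terms_by_doc desc",
    facet: {top_terms_by_doc: "unique(_root_)"}}}'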
Hello everyone,
I apologise beforehand if this is a question that has been visited
numerous times on this list, but after hours spent on Google and talking
to Solr-savvy people in #solr on Freenode, I'm still a bit at a loss
about Solr and deleted documents.
I have quite a few indexes in both produ
Hi,
Introductions first (as I was taught): My name is Jarus Bosman, I am a
software developer from South Africa, doing development in Java, PHP and
Delphi. I have been programming for 19 years and find out more every day
that I don't actually know anything about programming ;).
My problem:
We re
On Tue, 2016-03-29 at 20:12 +0800, YouPeng Yang wrote:
> Our system still goes down as time goes on. We found lots of threads in
> the WAITING state. Here is the thread dump I copied from the web page,
> and 4 pictures of it.
> Is there any relationship with my problem?
That is a lot of commitScheduler-
Hi
Our system still goes down as time goes on. We found lots of threads in
the WAITING state. Here is the thread dump I copied from the web page,
and 4 pictures of it.
Is there any relationship with my problem?
https://www.dropbox.com/s/h3wyez091oouwck/threaddump?dl=0
https://www.dropbox.com/s/p3ctuxb3
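For what it's worth, a thread dump can also be captured straight from
the shell (a sketch, assuming a JDK on the server; <pid> is the Solr
process id reported by jps):
jps -l
jstack <pid> > threaddump.txt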
Hi,
I believe the default behavior of creating collections distributed
across shards through the following command
http://[solrlocation]:8983/solr/admin/collections?action=CREATE&name=[collection_name]&numShards=2&replicationFactor=2&maxShardsPerNode=2&collection.configName=[configuration_name]
Hi guys
It seems I tried to add two additional shards to an existing Solr
ensemble, and this is not supported (or I didn't find out how).
So after setting ZooKeeper I first setup my node n°2 and then setup my
node n°1 with
wget --no-proxy
"http://node1:8983/solr/admin/collections?&collection.confi
Moreover, I created those new collections as a workaround because my old
collections were not coming up after a complete restart of the machines
hosting the ZooKeepers and Solr. I would be interested to know the
proper procedure for bringing old collections up after a restart of the
ZooKeeper ensembl
Thanks Reth for your response. It did work.
Regards,
Salman
On Mon, Mar 28, 2016 at 8:01 PM, Reth RM wrote:
> I think it should be "zkcli.bat" (all in lower case) that is shipped with
> solr not zkCli.cmd(that is shipped with zookeeper)
>
> solr_home/server/scripts/cloud-scripts/zkcli.bat -zkho
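For reference, a typical upconfig invocation looks like this (a sketch;
the ZooKeeper address, config directory, and config name are
placeholders):
solr_home\server\scripts\cloud-scripts\zkcli.bat -zkhost localhost:2181 -cmd upconfig -confdir path\to\conf -confname myconf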
Hi Erick
Thanks for your help, here is what I've done:
1. I deleted the ZooKeeper and Solr installations.
2. I set up ZooKeeper on my two servers.
3. I successfully set up Solr Cloud node 1 with the same API call (1
collection named db and two cores):
wget --no-proxy
"http://$HOSTNAME:8983/s