Hi,
Version of Solr: 5.2.1
I am sending a large number of HTTP GET requests to the Solr server for querying
indexes. These requests to Solr are generated via a Node.js service.
When the number of requests to Solr is ~250, I am intermittently facing these
kinds of issues:
* Sometimes Soc
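The message above is cut off in the archive, but one common way to avoid connection trouble when a Node.js service fires a few hundred GETs at once is to cap concurrency on the client side. A rough sketch, not taken from this thread; the host, core name ("mycore"), id-based lookups, and Node 18+ (global fetch) are all assumptions:

const SOLR = "http://localhost:8983/solr/mycore/select";

// Query a single document by id (the id values are placeholders).
async function queryById(id: string): Promise<unknown> {
  const params = new URLSearchParams({ q: `id:${id}`, wt: "json" });
  const res = await fetch(`${SOLR}?${params}`);
  if (!res.ok) throw new Error(`Solr returned HTTP ${res.status}`);
  return res.json();
}

// Run the ~250 queries in batches of `limit` instead of all at once,
// so neither the server nor the client socket pool is flooded.
async function queryAll(ids: string[], limit = 20): Promise<unknown[]> {
  const results: unknown[] = [];
  for (let i = 0; i < ids.length; i += limit) {
    const batch = ids.slice(i, i + limit);
    results.push(...(await Promise.all(batch.map(queryById))));
  }
  return results;
}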
Thanks Erick for pointing that out. I was actually sending an “fq” parameter with a
field name called “deprecated”, and that is why it was throwing the exception.
Your help is very much appreciated.
On 2/16/18, 12:23 PM, "Erick Erickson" wrote:
2> your solrconfig.xml file has a field called "deprecated"
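For context, a filter query is just another request parameter, and the field it names must be defined in the schema or Solr raises an "undefined field" error. A minimal sketch of sending such a request from Node.js; the host, core name, and field value are assumptions, not values from this thread:

// Hypothetical sketch: send a query with an fq (filter query) parameter.
// The field used in fq ("deprecated" here) must exist in the schema,
// otherwise Solr responds with an "undefined field" error.
// Assumes Node 18+ (global fetch) and a core named "mycore".
const params = new URLSearchParams({
  q: "*:*",
  fq: "deprecated:false", // filter on the "deprecated" field
  wt: "json",
});

fetch(`http://localhost:8983/solr/mycore/select?${params}`)
  .then((res) => res.json())
  .then((body) => console.log(body.response.numFound));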
Hi everyone,
I am using Solr 5.2.1, and below is the schema.xml file for a core I am trying to
query:
Thanks Emir!
Deeksha Sharma
Software Engineer
215 2nd St #2,
San Francisco, CA 94105. United States
Desk: 6316817418
Mobile: +64 21 084 54203
dsha...@flexera.com
www.flexera.com
Hi everyone,
I have created a core and indexed data in Solr using the DataImportHandler.
The schema for the core looks like this:
This is my data in the MySQL database:
md5:"376463475574058bba96395bfb87"
rules:
{"fileRules":[{"file_id":1321241,"md5":"376463475574058bba96395bfb87",
I am trying to create indexes using the DataImportHandler (Solr 5.2.1). The data is
in a MySQL database and there are more than 3.5 million records. My Solr server
stops due to an OOM (out of memory) error. I tried starting Solr with 12GB of
RAM but still no luck.
Also, I see that Solr fetches all
Hi,
I indexed documents in Solr using the DataImportHandler.
Now when I query using the URL below, it gives me the results I want.
http://localhost:8983/solr/mycore/select?indent=on&q=id:7fd326e23ffa8d1cb9c0a7b4fc5c4269&wt=json
Can Solr handle bulk queries if I send over more than 10,000
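The question above is cut off, but if the goal is to look up a large set of ids, one option (an assumption on my part, not something stated in the thread) is to batch the ids into far fewer requests with the terms query parser rather than sending 10,000+ separate GETs. A sketch, again assuming a local core named "mycore" with an "id" field and Node 18+:

const SOLR = "http://localhost:8983/solr/mycore/select";

// Hypothetical sketch: fetch many documents by id in chunks using the
// terms query parser ({!terms f=id}v1,v2,...) instead of one GET per id.
async function fetchByIds(ids: string[], chunkSize = 500): Promise<unknown[]> {
  const docs: unknown[] = [];
  for (let i = 0; i < ids.length; i += chunkSize) {
    const chunk = ids.slice(i, i + chunkSize);
    const params = new URLSearchParams({
      q: `{!terms f=id}${chunk.join(",")}`,
      rows: String(chunk.length),
      wt: "json",
    });
    const res = await fetch(`${SOLR}?${params}`);
    const body = await res.json();
    docs.push(...body.response.docs);
  }
  return docs;
}

Very long id lists can exceed GET URL length limits, in which case the same parameters can be sent in a POST body instead.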
By default it's 10 rows in the Admin UI, and indeed it's gray. But did you try
writing a number into the text field for
start, rows? See the screenshots attached.
On 2/28/17, 10:08 AM, "OTH" wrote:
Hello,
In the browser-based Solr Admin, in the 'Query' page, the "start" and
"row
Hi,
I have an index with a field that looks like the one below:
Below are some examples of version values:
"version":"2.0.5"},
{
"version":"1.10-b04"},
{
"version":"2.3.3"},
{
"version":"2.0-M5.1"},
{
"version":"0.4.0"},
{
"v
BTW it's the Apache Solr 4 Cookbook.
From: Deeksha Sharma
Sent: Tuesday, November 15, 2016 2:06 PM
To: solr-user@lucene.apache.org
Subject: Re: book for Solr 3.4?
The Apache Solr Cookbook will definitely help you get started. This is in addition
to the Apache Solr
The Apache Solr Cookbook will definitely help you get started. This is in addition
to the official Apache Solr documentation.
Thanks
Deeksha
From: HelponR
Sent: Tuesday, November 15, 2016 2:03 PM
To: solr-user@lucene.apache.org
Subject: book for Solr 3.4?
H
So if you are looking for information on the collections that you created in
SolrCloud, you can get it via the API calls listed here. All you need
is the host and port of one of the machines in the cluster.
https://cwiki.apache.org/confluence/display/solr/Collections+API
Thanks
Deeksha
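As a small illustration of that, listing the collections is a single HTTP call to any node in the cluster, using the Collections API LIST action. The host and port below are assumptions:

// Hypothetical sketch: ask any node for the collections in the cluster
// via the Collections API (action=LIST). Assumes Node 18+ (global fetch).
async function listCollections(host = "http://localhost:8983"): Promise<string[]> {
  const res = await fetch(`${host}/solr/admin/collections?action=LIST&wt=json`);
  const body = await res.json();
  return body.collections; // e.g. ["collection1", "collection2"]
}

listCollections().then((names) => console.log(names));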
provide a screen in the new Admin UI to allow you
to add replicas to a collection and the like, but that code hasn't
been added yet.
Best,
Erick
On Fri, Jul 1, 2016 at 12:18 PM, Deeksha Sharma wrote:
> Currently I am building a SolrCloud cluster with 3 Zookeepers (ensemble) and
> 4
Currently I am building a SolrCloud cluster with 3 Zookeepers (ensemble) and 4
Solr instances. The cluster is hosting 4 collections and their replicas.
When one Solr node, say Solr1, goes down (hosting 2 replicas, of collection1 and
collection2), I add a new node to the cluster and that node in Admin
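Since the Admin UI screen mentioned in the reply above does not exist yet, the programmatic route for this scenario is the Collections API ADDREPLICA action. A hedged sketch; the collection, shard, and node names are placeholders, not values from this thread:

// Hypothetical sketch: place a replica of a collection's shard on the
// newly added node via the Collections API. Assumes Node 18+ (global fetch).
async function addReplica(
  collection: string,
  shard: string,
  node: string, // e.g. "10.0.0.5:8983_solr" (placeholder node name)
  host = "http://localhost:8983",
): Promise<void> {
  const params = new URLSearchParams({
    action: "ADDREPLICA",
    collection,
    shard,
    node,
    wt: "json",
  });
  const res = await fetch(`${host}/solr/admin/collections?${params}`);
  if (!res.ok) throw new Error(`ADDREPLICA failed: HTTP ${res.status}`);
}

addReplica("collection1", "shard1", "10.0.0.5:8983_solr");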
intermittent failures as,
say, your indexing operation happened to coincide with a segment
merge and failed. But by the time you get around to looking, the amount
of disk occupied by seg3 in the example above will be reclaimed.
Best,
Erick
On Sat, Jun 25, 2016 at 3:07 PM, Deeksha Sharma wrote:
> H
Hi,
I am currently using the JSON index handler to upload documents to a specific
collection on SolrCloud. Now what I need to know is:
If I upload documents to a SolrCloud collection and the machines hosting the shards
for this collection have no storage left, will Solr reject the commit request?
I am new to Solr and already have Lucene indexes that I want to serve through
SolrCloud.
I have a SolrCloud setup with an external ZooKeeper and 2 Solr instances, solr1 and
solr2, registered with this ZooKeeper.
On solr1 I add a symlink to my existing Lucene indexes (and not on solr2).
I create the