Hi All,
Sorry for asking the same question again, but could someone please advise me on
this.
Thanks,
Preeti
From: Preeti Bhat
Sent: Wednesday, June 22, 2016 1:58 PM
To: 'solr-user@lucene.apache.org'
Subject: Can we directly move the solr instance running as standalone to SOLR
CLOUD?
Hi,
I h
Hi,
I have Solr 6.0.0 installed on my PC (Windows 7). I was
experimenting with 'Streaming Expressions' using Oracle JDBC as the stream
source; the following is the HTTP command I am using:
http://localhost:8988/solr/document5/stream?expr=jdbc(connection="jdbc:oracle:thin:qa_docrep/a
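For anyone following along, a complete jdbc() expression generally needs at least the connection, sql and sort parameters (and the Oracle JDBC driver jar on Solr's classpath); the schema, table and credentials below are placeholders rather than values taken from the command above:

curl --data-urlencode 'expr=jdbc(
    connection="jdbc:oracle:thin:user/password@//dbhost:1521/SERVICE",
    sql="SELECT id, title FROM documents ORDER BY id",
    sort="id asc",
    driver="oracle.jdbc.driver.OracleDriver")' \
  "http://localhost:8988/solr/document5/stream"

The sort parameter declares the ordering that downstream decorators rely on, so it should match the ORDER BY clause of the SQL.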
-- Forwarded message --
From: "kostali hassan"
Date: 22 June 2016 14:00
Subject: how to collect a list of damaged files that cannot be indexed
To:
Cc:
I started Solr 5.4.1 to index rich documents (PDF and MS Word) using the Data
Import Handler.
In the tika-config.xml file I set onError="skip".
I
Oh - gotcha... Thanks for taking the time to reply. My use of the phrase
"sub query" is probably misleading...
Here's the XML (below). I'm calling the Boost Query and Boost Function
statements "sub queries"...
The thing I was referencing was this -- where I create an "alias" for the
query (tit
John:
I'm not objecting to the XML, but to the very presence of "more than
one query in a request handler". Request handlers don't have, AFAIK,
"query chains". They have a list of defaults for the _single_ query
being sent at a time to that handler. So having
blah blah
is something I've never
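For what it's worth, a boost query and boost function usually just sit in the defaults of a single handler, roughly like the sketch below (field names are made up for illustration); they get folded into the one incoming query rather than being run as separate queries:

<requestHandler name="/select_boosted" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="qf">title^2 body</str>
    <str name="bq">category:featured^5</str>
    <str name="bf">recip(ms(NOW,last_modified),3.16e-11,1,1)</str>
  </lst>
</requestHandler>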
Never mind, this was a dependency issue between Solr and JAX-RS. Setting the
managed dependency version to 4.4.1 fixed it.
On Wed, Jun 22, 2016 at 2:44 PM, Webster Homer
wrote:
> I tried adding the solr-core dependency but that caused the app to fail to
> deploy entirely.
>
> On Wed, Jun 22, 2016 at 2:36 PM, Web
I tried adding the solr-core dependency but that caused the app to fail to
deploy entirely.
On Wed, Jun 22, 2016 at 2:36 PM, Webster Homer
wrote:
> I have an application that we wrote to support solr cloud collections. It
> is a rest service that uses solrj. I am in the process of upgrading it t
I have an application that we wrote to support SolrCloud collections. It
is a REST service that uses SolrJ. I am in the process of upgrading it to
use Solr 6.1.
The application builds with Maven.
I get the following error:
Caused by: java.lang.NoClassDefFoundError:
org/apache/http/impl/client/Clo
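For anyone who hits the same stack trace: the missing class lives in Apache HttpComponents, so one sketch of a fix (the artifact choice and the 4.4.1 version are assumptions based on the follow-up in this thread) is to declare the client libraries explicitly in the pom.xml:

<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.4.1</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.4.1</version>
</dependency>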
Hi Erick -
I was trying to simplify and not waste anyone's time parsing my
requestHandler... That is, as you imply, bogus xml.
The basic question is: If I have two "sub queries" in a single
requestHandler, do they both run independently against the entire index?
Alternatively, is there some ki
Hello,
I am developing a 'Solr management tool' that can configure Solr using its REST
APIs and hit a small problem: I am able to configure pretty much everything I
need except the updateRequestProcessorChain elements. According to the Config
API page this is by design and I was wondering if
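For what it's worth, while whole updateRequestProcessorChain elements can't be created through the Config API, recent releases do let you register individual update processors and invoke them with the processor request parameter; a sketch with made-up collection and processor names:

curl http://localhost:8983/solr/mycollection/config \
  -H 'Content-type:application/json' \
  -d '{"add-updateprocessor": {"name": "trimAll", "class": "solr.TrimFieldUpdateProcessorFactory"}}'

curl 'http://localhost:8983/solr/mycollection/update?processor=trimAll&commit=true' \
  -H 'Content-type:application/json' \
  -d '[{"id": "doc1", "title_t": "  padded  "}]'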
Anyone ?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Multiple-context-field-filters-in-Solr-suggester-tp4283739p4283894.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thank you!!
Okay, I think I have that all squared away.
*SpanLastQuery*:
I need something like SpanFirstQuery, except that it would be
SpanLastQuery. Is there a way to get that to work?
*Proximity weighting getting ignored*:
I also need to get span term boosting working.
Here's my query:
"one t
This is better! At least the classifier is invoked!
How many docs in the index have the class assigned?
Take a look at the stacktrace and you should find the cause!
I am on mobile now; I will check the code tomorrow!
Cheers
On 22 Jun 2016 5:26 pm, "Tomas Ramanauskas"
wrote:
>
> I also tried with
I also tried with this config (adding **):
classification
And I get the error:
$ curl http://localhost:8983/solr/demo/update -d '
[
{"id" : "book15",
"title_t":["The Way of Kings"],
"author_s":"Brandon Sanderson",
"cat_s": null,
"pubyear_i":2010,
"ISBN_s":"978-0-765
Thanks for the response, Alessandro.
I tried this and it didn’t work either:
$ curl http://localhost:8983/solr/demo/update -d '
[
{"id" : "book14",
"title_t":["The Way of Kings"],
"author_s":"Brandon Sanderson",
"cat_s": null,
"pubyear_i":2010,
"ISBN_s":"978-0-7653-2635-5"
}
]'
{"responseHeade
Thank you Markus - they are indeed set to 1024 for the hdfs user. We'll
re-configure limits.conf and try again.
-Joe
On Tue, Jun 21, 2016 at 10:38 AM, Markus Jelsma
wrote:
> Hello Joseph,
>
> Your datanodes are in a bad state, you probably overwhelmed it when
> indexing. Check your max open fi
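For reference, raising the limit for the hdfs user usually means entries like these in /etc/security/limits.conf (65536 is just a common starting point, not a value from this thread), followed by a fresh login and a restart of the datanode:

hdfs  soft  nofile  65536
hdfs  hard  nofile  65536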
Scott's got it right. There's nothing special about a Lucene index at
that level;
Lucene doesn't have any clue it's operating in SolrCloud. So a single-shard
index is exactly the same at the _Lucene_ level in stand-alone and in
SolrCloud. You can freely copy it back and forth, even between Windows
Hi Tomas,
First consideration:
an empty string is different from a NULL string.
This is controversial; I would suggest you never use the empty String, as
this can cause other side effects.
Apart from that, the plugin will add the class only if the class field is
without any value
> Object
I also tried this configuration, but couldn't get the feature to work:
classification
title_t,author_s
cat_s
bayes
Tomas
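For reference, a chain using those values would look roughly like this with the ClassificationUpdateProcessorFactory; the parameter names are an assumption based on the factory's documented options, and the /update handler still needs update.chain=classification (or an equivalent default) for it to run:

<updateRequestProcessorChain name="classification">
  <processor class="solr.ClassificationUpdateProcessorFactory">
    <str name="inputFields">title_t,author_s</str>
    <str name="classField">cat_s</str>
    <str name="algorithm">bayes</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>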
On 22 Jun 2016, at 13:46, Tomas Ramanauskas
mailto:tomas.ramanaus...@springer.com>> wrote:
P.S. The version I use:
I started Solr 5.4.1 to index rich documents (PDF and MS Word) using the Data
Import Handler.
In the tika-config.xml file I set onError="skip".
I want to recover the corrupted files.
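For reference, onError is an attribute of the DIH entity; a minimal sketch of skipping unparsable files with the TikaEntityProcessor (paths, entity names and field names are illustrative):

<dataConfig>
  <dataSource type="BinFileDataSource"/>
  <document>
    <entity name="files" processor="FileListEntityProcessor"
            baseDir="/data/docs" fileName=".*\.(pdf|docx?)"
            recursive="true" rootEntity="false">
      <entity name="tika" processor="TikaEntityProcessor"
              url="${files.fileAbsolutePath}" format="text" onError="skip">
        <field column="text" name="content"/>
      </entity>
    </entity>
  </document>
</dataConfig>

One way to collect the list of damaged files is then to watch the Solr log for the documents skipped by this entity.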
Where does the 0145 come from? Is that the previous value of the
field? Or do you have a copyfield somewhere perhaps?
Regards,
Alex.
Newsletter and resources for Solr beginners and intermediates:
http://www.solr-start.com/
On 22 June 2016 at 22:40, Rajendran, Prabaharan wrote:
> Thanks
Could it be because of how the DirectDocValuesProducer populates the entire
bytes array for each request? i.e. for each call of getNumericDocValues(),
it reads all the bytes into an array first. The get() method itself seems to
be a simple array lookup.
Link to the loadNumeric() of DirectDocValue
P.S. The version I use:
6.1.0-68
Also, earlier I said “If I modify an existing record, I think the functionality
works:”, but I think it doesn’t work for me at all.
$ curl http://localhost:8983/solr/demo/get?id=book1
{
"doc":
{
"id":"book1",
"title_t":["The Way of Kings"],
"auth
Thanks for all the kind words everyone! I look forward to seeing you at
Revolution
It probably wasn't clear, but the discount code is in the image on my
announcement blog post.
http://opensourceconnections.com/blog/2016/06/21/relevant-search-published/
But I'll also just paste it here: *mlturnbul
Thanks Alex and Erik.
After using "example/files/update-script.js", I am able to update the document
value.
Here is my code:
function processAdd(cmd) {
  doc = cmd.solrDoc;
  var id = doc.getFieldValue("user_number");
  doc.setField("user_number ", "Hello");
}
But setField appends the value ins
Hi
I was wondering if there is support for running JettySolrRunner
including the Solr webapp. We have a use case where we start an
embedded Solr (including the web UI) in a mini-version of our application
where we simply run everything in a single JVM. JettySolrRunner is OK
but lacks the webapp. With t
Hello,
I recommend using the langdetect language detector; it supports many more
languages and has much higher precision than Tika's detector.
Markus
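A minimal sketch of wiring langdetect in, with placeholder field names (it needs the solr-langid contrib and the langdetect jars on the classpath):

<updateRequestProcessorChain name="langid">
  <processor class="solr.LangDetectLanguageIdentifierUpdateProcessorFactory">
    <str name="langid.fl">title,content</str>
    <str name="langid.langField">language_s</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>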
-Original message-
> From:Alexandre Rafalovitch
> Sent: Wednesday 22nd June 2016 12:32
> To: solr-user
> Subject: Re: Automatic Lan
And also be sure your JVM is compatible with the script you’re using - looks
fine besides that “use strict” which I’ve used in an update script.
There have been some tribulations with JVMs and some of the update-script.js's
(specifically under example/files as it uses some tricky JavaScript/Jav
On 22 June 2016 at 21:28, Rajendran, Prabaharan wrote:
> function processAdd(cmd) {
> "use strict";
> var doc = cmd.solrDoc;
> }
Did you try not using "use strict"? It seems like the kind of JavaScript
feature that may not be implemented by Java's implementation of
the interpreter.
Regar
Hi, everyone,
would someone be able to share a working example (step by step) that
demonstrates the use of Naive Bayes classifier in Solr?
I followed this Blog post:
https://alexbenedetti.blogspot.co.uk/2015/07/solr-document-classification-part-1.html?showComment=1464358093048#c248990230208500
Like Alexandre already touched on, you technically *can* "keep existing
data and still apply a new schema." You just can't expect index-level changes
to be applied retroactively to the already existing index. The deeper your
changes to the schema, the deeper the incongruities between expectation and
Joel Bernstein wrote:
> I've tested with the Direct docValuesFormat which is uncompressed
> in-memory. But I haven't seen any noticeable performance gain. I've been
> meaning to dig into exactly why I wasn't seeing a performance gain, but
> haven't had the chance to do this yet.
If this is about
Hi,
I am using Solr 5.3.1 and I am trying to use an updateRequestProcessorChain.
I used https://wiki.apache.org/solr/UpdateRequestProcessor &
https://github.com/pannapat/solr-script-update-processor-example as my
reference.
Please refer below for the js which I used (placed in collection1/conf/).
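For context, a script like this is normally wired in through solrconfig.xml roughly as follows, and invoked with update.chain=script (or set as the default chain); the chain name and script file name here are placeholders:

<updateRequestProcessorChain name="script">
  <processor class="solr.StatelessScriptUpdateProcessorFactory">
    <str name="script">update-script.js</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>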
I've tested with the Direct docValuesFormat which is uncompressed
in-memory. But I haven't seen any noticeable performance gain. I've been
meaning to dig into exactly why I wasn't seeing a performance gain, but
haven't had the chance to do this yet.
If you test out the Direct docValuesFormat, I'd
Hi Renaud,
I apologize for the typo. I have the jars in a directory called "lib" instead of
"Lib" (mistyped when I posted).
I have also added the encrypted codec JAR to the directory
[$SolrDir]\server\solr-webapp\webapp\WEB-INF\lib, and all of a sudden it seems
to be working fine now.
I don’
Hi,
I was looking for some help with the more like this query parser syntax.
Concretely, using the select handler I can do a query like:
mlt=true&
mlt.fl=name,message&
mlt.mintf=1&
mlt.qf=name^10 message^1
which lets me boost individual fields.
However, with the query parser:
{!mlt mintf=1 qf=
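For reference, the {!mlt} parser takes the source document's uniqueKey value as the query text; a sketch of the basic syntax with hypothetical collection, field and id names (whether per-field boosts are honoured inside qf here is exactly the open question):

curl 'http://localhost:8983/solr/mycollection/select' \
  --data-urlencode 'q={!mlt qf=name,message mintf=1 mindf=1}somedocid' \
  --data-urlencode 'fl=id,name,score'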
In both cases, the issue seems to be related to the library not being
loaded. For the Tika identifier, I believe it is
solr-langid-.jar, for the sia.* it is whatever the book
recommended.
Are you running SolrCloud? Additional libraries are slightly
complicated with that, you need to make sure they ar
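For the langid case specifically, the stock solrconfig.xml ships with lib directives along these lines (the relative paths depend on the install layout, and in SolrCloud the referenced directories must exist on every node):

<lib dir="${solr.install.dir:../../../..}/contrib/langid/lib/" regex=".*\.jar"/>
<lib dir="${solr.install.dir:../../../..}/dist/" regex="solr-langid-\d.*\.jar"/>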
Well, my reading of that page is that they are creating a *fake*
_val_ field which bypasses Sitecore's logic to allow it to use Solr's
_val_ query syntax without actually touching that fake _val_ field.
Since you don't actually populate that field, nor do you actually
search against it, you do no
This is what I guess you can try: if your SolrCloud has only one shard, you can
first build your SolrCloud collection (with the same schema as your standalone
core), say its name is 'mycol', and then go to the node folder; normally you'll
see a subfolder named 'mycol_shard1_replica#'. In this folder
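The usual pattern for the copy itself is roughly the following (paths are illustrative, and the node should be stopped while copying):

# with the target SolrCloud node stopped:
rm -rf /path/to/node/solr/mycol_shard1_replica1/data/index
cp -r  /path/to/standalone/solr/mycore/data/index \
       /path/to/node/solr/mycol_shard1_replica1/data/index
# then restart the node and check the document count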
Hi
We found the source of the problem. We are using Nutch crawler to crawl a
website and insert web pages and had the scoring-opic plugin on. This plugin
was adding a boost for each document at index time.
We have removed this plugin and the boost added by Nutch has become consistent
and reduc
Hi everyone,
I have 2 million documents in my Solr index. I have enabled docValues on one
of the integer fields, and set its docValuesFormat to Memory. This is
because I want to have very quick forward lookups on this field in my custom
component.
I am running my Solr installation on a 35GB RAM
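For context, a per-field docValuesFormat like that is declared on the field type and relies on the schema-aware codec factory (the default unless solrconfig.xml overrides it); a sketch with hypothetical names:

<!-- managed-schema -->
<fieldType name="int_dv_mem" class="solr.TrieIntField"
           docValues="true" docValuesFormat="Memory"/>
<field name="lookup_key" type="int_dv_mem" indexed="true" stored="false"/>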
Hi,
I am going to make a collection in SolrCloud to be used for
"automatic language identification", but it failed to create the
collection during the process:
1. The automatic language identification
ERROR: Failed to create collection 'coba' due to:
org.apache.solr.client.solrj.impl
Hi,
I have a standalone Solr instance running as a service on Windows. I would like
to move to SolrCloud, as it provides various benefits like DR,
scalability, etc. I would like to use the same instance or core from the
standalone setup in SolrCloud if possible. Can we do it? If ye
What if I add a new field to the schema - do I have to rebuild the existing index data? In
this article:
Sitecore Blogger: Boost newer documents in Sitecore 7 and Solr 4
http://www.sitecoreblogger.com/2014/09/publication-date-boosting-in-sitecore-7.html
, it adds a new field "_val" to the schema and in the pa