Hi,
The Snakeyaml jar file is located in the Jetty libs folder
(/opt/solr/server/solr-webapp/webapp/WEB-INF/lib), but I suspect I am using the wrong
driver. Could you please suggest an available driver for Cassandra + Solr
integration?
Thx
Best.
Can Ezgi Aydemir
Oracle Database Administrator & Oracle Database Ad
The problem is resolved now.
Thanks a lot, Erick! :)
Will catch you on the next problem soon :). You have saved me a lot of
time.
On Thu, Nov 9, 2017 at 2:06 AM, Erick Erickson
wrote:
> Why are you extending TokenizerFactory? What you have is a filter
> factory which should extend TokenFi
Why are you extending TokenizerFactory? What you have is a filter
factory which should extend TokenFilterFactory and optionally be
MultiTermAware. I'd use LowerCaseFilterFactory as a model. Tokenizers
break up the incoming stream, filters do something with the tokens
emitted by the tokenizer.
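For illustration, here is a minimal sketch of such a factory, modeled on
LowerCaseFilterFactory and assuming Lucene/Solr 7.x (where TokenFilterFactory lives in
org.apache.lucene.analysis.util); MyTokenFilter below is just a hypothetical
pass-through body standing in for your own token logic:

package com.example.analysis;

import java.io.IOException;
import java.util.Map;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.util.TokenFilterFactory;

public class MyTokenFilterFactory extends TokenFilterFactory {

  // Solr passes the attributes of the <filter .../> element as args.
  public MyTokenFilterFactory(Map<String, String> args) {
    super(args);
    if (!args.isEmpty()) {
      throw new IllegalArgumentException("Unknown parameters: " + args);
    }
  }

  // Wrap the stream produced by the tokenizer; a filter never reads the raw input itself.
  @Override
  public TokenStream create(TokenStream input) {
    return new MyTokenFilter(input);
  }

  // Hypothetical filter body: emits tokens unchanged; replace incrementToken() with real logic.
  private static final class MyTokenFilter extends TokenFilter {
    MyTokenFilter(TokenStream input) {
      super(input);
    }

    @Override
    public boolean incrementToken() throws IOException {
      return input.incrementToken();
    }
  }
}

The factory is then referenced by its full class name in the analyzer chain of your
field type in managed-schema, e.g. <filter class="com.example.analysis.MyTokenFilterFactory"/>.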
On W
Alessandro, thanks for your reply.
What do you mean by "In case you decide to use an entire new index for the
autosuggestion, you
can potentially manage that on your own"?
Is this duplicate issue a problem with the DocumentDictionaryFactory?
Got it to work. Thanks. I was messing up the tar xzf command.
Dane Michael Terrell
On Tuesday, November 7, 2017 11:27 AM, Shawn Heisey
wrote:
On 11/7/2017 11:51 AM, Dane Terrell wrote:
> I'm afraid that method doesn't work either. I am still perplexed as to how to
> install Solr 7
When I talk about efficiency I'm talking about computational and network
round-trip efficiency. In its regular setup, Solr allows you to get the
documents and facets in a single request, in one efficient call. If that's
the main use case, using standard Solr makes sense.
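For instance, a minimal SolrJ sketch of that single round trip (the collection name
"techproducts" and the field "cat" are just illustrative, not from this thread):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class DocsAndFacetsInOneCall {
  public static void main(String[] args) throws Exception {
    // Illustrative collection name; adjust to your setup.
    HttpSolrClient solr = new HttpSolrClient.Builder(
        "http://localhost:8983/solr/techproducts").build();

    SolrQuery q = new SolrQuery("*:*");
    q.setRows(10);            // the documents ...
    q.setFacet(true);
    q.addFacetField("cat");   // ... and their facet counts, in the same request

    QueryResponse rsp = solr.query(q);
    System.out.println("docs: " + rsp.getResults().getNumFound());
    System.out.println("cat facets: " + rsp.getFacetField("cat").getValues());
    solr.close();
  }
}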
Streaming Expressions sacrific
Thank you very much for the reply, Erick!
Now the ClassCastException is gone. That corrected my mistake.
So I am loading the plugin correctly, because it is not giving me a class-not-found
exception
in solrconfig.xml
in managed-schema
Now I am facing a new error that is foll
Hi Ruby,
I participated in the discussion at the time;
it's definitely still open.
It's on my long TO DO list; I hope I will be able to contribute a solution
sooner or later.
In case you decide to use an entire new index for the autosuggestion, you
can potentially manage that on your own.
But out
Apart from the performance aspect, getting a "word cloud" from a subset of documents
is a slightly different problem from getting the facets out of it.
If my understanding is correct, what you want is to extract the "significant
terms" out of your results set.[1]
Using faceting is a rough approximation
I opened a ticket for RankLib a long time ago to provide support for the Solr model
JSON format [1].
It is on my TO DO list but unfortunately very low in priority.
Anyone who wants to contribute is welcome; I will help and commit it when
ready.
Cheers
[1] https://sourceforge.net/p/lemur/feature-requests
We have a web site with traditional search capabilities, faceting, sorting
and so on. It had many problems before we took over, and we need to
refactor it all. It has a single index for different types of documents,
and it has a very small amount of data.
The situation is that we are developing a P
Hello isspek,
Unfortunately no; it would be nice to patch RankLib to output the model in
JSON.
JFYI, I have a script to convert the XML into the JSON format:
https://github.com/bloomberg/lucene-solr/blob/ltr-demo-lucene-solr/py-solr-buzzwords/tree_model.py
Cheers,
Diego
From: solr-user@lucene.a
Hi,
I want to know the best option for getting a word cloud in Solr.
Is it saving the data as multivalued, using term vectors, or JSON faceting (which didn't
work for me)? The Terms component doesn't work because I can't provide any criteria.
I don't mind changing the design, but I need to know the best feasible way
that won't
OK, if you're compiling and running against the same versions, then
that error means that you haven't set your paths correctly, so Solr cannot
find your custom jar. In solrconfig.xml you should add a <lib>
directive that points to your custom jar file.
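Something along these lines, with an example path only:

<!-- in solrconfig.xml, next to any other <lib .../> entries -->
<lib path="/opt/solr/custom-plugins/my-filter-plugin-1.0.jar"/>
<!-- or load every jar in a directory -->
<lib dir="/opt/solr/custom-plugins/" regex=".*\.jar"/>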
I usually start by using an absolute path here until I
Hi all,
Is it possible to give the path of a model file trained by RankLib directly to
Solr 7.0.0, without converting that model to JSON?
I tried to upload the Solr model as in the example below; I saw the param "model-file"
given at this link:
http://lucene.472066.n3.nabble.com/jira-Comment-Edited-SOLR-8542-Integrate
It would be useful if you could describe your use case more fully. For
example, are the users looking mainly for search results with facets, or are
they looking for more flexibility and data analysis capabilities?
Streaming Expressions really lend themselves to non-traditional search use
cases. If you
Amrit,
as far as I understand, in your example I get the documents
aggregated by the rollup function, but to get the documents themselves I
need to make another query that will hit the fq's cached results; is that
correct?
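Something like this is what I mean (collection and field names are only illustrative):

rollup(
  search(products, q="*:*", qt="/export", fl="brand_s,price_f", sort="brand_s asc"),
  over="brand_s",
  count(*),
  sum(price_f)
)

As I read it, this returns one tuple per brand_s bucket with the metrics, not the
underlying documents themselves.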
And thanks for the pointer about fq in Streaming Expressions. I was looking
I'm not sure this is what's affecting you, but you might try upgrading to
Lucene/Solr 7.1; in 7.0 there were big improvements in using multiple
threads to resolve deletions:
http://blog.mikemccandless.com/2017/07/lucene-gets-concurrent-deletes-and.html
Mike McCandless
http://blog.mikemccandless.c
Hi Wael,
You can try out JSON faceting - it's not just about the request/response format; it
uses a different implementation as well. In any case, you will have to index documents
differently in order to be able to use docValues.
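E.g., a quick SolrJ sketch of a JSON Facet request (the collection name and the
tag_s field are just examples; the field should have docValues enabled):

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class JsonFacetSketch {
  public static void main(String[] args) throws Exception {
    HttpSolrClient solr = new HttpSolrClient.Builder(
        "http://localhost:8983/solr/mycollection").build();

    SolrQuery q = new SolrQuery("*:*");
    q.setRows(0);  // only the facet buckets are of interest here
    // JSON Facet API request passed as a raw parameter
    q.add("json.facet", "{tags:{type:terms, field:tag_s, limit:20}}");

    QueryResponse rsp = solr.query(q);
    System.out.println(rsp.getResponse().get("facets"));
    solr.close();
  }
}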
HTH
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr &