I'm trying to use the spell check component.
My *schema* is: (I have included only the fields necessary for spell check, not
the entire schema.)
My *solrconfig* is:
  <searchComponent name="spellcheck" class="solr.SpellCheckComponent">
    <str name="queryAnalyzerFieldType">text</str>
    <lst name="spellchecker">
      <str name="name">direct</str>
      <str name="field">contents</str>
      <str name="classname">solr.DirectSolrSpellChecker</str>
      <str name="distanceMeasure">internal</str>
      <float name="accuracy">0.8</float>
      <int name="maxEdits">1</int>
      <int name="minPrefix">1</int>
      <int name="maxInspections">5</int>
      <int name="minQueryLength">3</int>
      <float name="maxQueryFrequency">0.01</float>
    </lst>
    <lst name="spellchecker">
      <str name="name">wordbreak</str>
      <str name="classname">solr.WordBreakSolrSpellChecker</str>
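For context, a component like the one above is normally attached to a request
handler via last-components; a minimal sketch, assuming a handler named
"/spell" (the handler name and default field are assumptions, not from the
original config):

  <requestHandler name="/spell" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="df">contents</str>
      <str name="spellcheck">on</str>
      <str name="spellcheck.dictionary">direct</str>
      <str name="spellcheck.dictionary">wordbreak</str>
    </lst>
    <arr name="last-components">
      <str>spellcheck</str>
    </arr>
  </requestHandler>

Querying /spell?q=...&spellcheck=true should then return suggestions from both
dictionaries.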
I had to do a double take when I read this sentence...
: Even with any improvements to 'scale', all function queries will add a
: linear increase to the Qtime as index size increases, since they match all
: docs.
...because that smelled like either a bug in your methodology, or a bug in
Solr.
On 12/6/2013 8:58 AM, Peri Stracchino wrote:
I'm trying to upgrade a solr installation from 1.4 (yes, really) to 4.6.0,
and I find our requesthandler was solr.DisMaxRequestHandler, which is now
not only deprecated but deleted from solr-core-4.6.0.jar. Can anyone
advise on suitable alternatives, or was there any form of direct replacement?
Try edismax; it's an improved dismax. A warning, though: it behaves a bit
differently than dismax, so you'll have to look again at the results and
perhaps tweak.
Best,
Erick
On Dec 6, 2013 10:58 AM, "Peri Stracchino"
wrote:
> Hi
> I'm trying to upgrade a solr installation from 1.4 (yes, really) to
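For anyone making the same migration: in 4.x there is no separate dismax
handler class; you use solr.SearchHandler and pick the parser with defType.
A minimal sketch (the handler name and qf fields here are assumptions, not
taken from the original config):

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="defType">edismax</str>
      <str name="qf">title^2.0 body</str>
    </lst>
  </requestHandler>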
Hi Erwin,
If you want to run Solr within a servlet container and you are new to
Solr, you should examine the example folder of Solr. Run it and configure
its files. You can start reading here:
http://lucene.apache.org/solr/4_6_0/tutorial.html If you look at that
example, you can customize it
I looked at SOLR-4465 and SOLR-5045, where it appears that there is a goal
to be able to do custom sorting and ranking in a PostFilter. So far, it
looks like only custom aggregation can be implemented in PostFilter (5045).
Custom sorting/ranking can be done in a pluggable collector (4465), but
this
Thanks all. Yes, we can differentiate between content types by URL.
Everything else being equal, Wiki posts should always be returned higher
than blog posts, and blog posts should always be returned higher than forum
posts.
Within forum posts, we want to rank Verified answered and Suggested answer
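One way to express that ordering, sketched with edismax boost queries (the
content_type field name and the weights are assumptions):

  <str name="bq">content_type:wiki^100 content_type:blog^10 content_type:forum^1</str>

Since bq boosts are additive rather than absolute, a stricter alternative is to
index a small integer rank per content type and sort on it, e.g.
sort=type_rank asc, score desc.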
In my previous posting, I said:
"Subsequent calls to ScaleFloatFuntion.getValues bypassed
'createScaleInfo and added ~0 time."
These subsequent calls are for the remaining segments in the index reader
(21 segments).
Peter
On Fri, Dec 6, 2013 at 2:10 PM, Peter Keegan wrote:
> I added some
I added some timing logging to IndexSearcher and ScaleFloatFunction and
compared a simple DisMax query with a DisMax query wrapped in the scale
function. The index size was 500K docs, 61K docs match the DisMax query.
The simple DisMax query took 33 ms, the function query took 89 ms. What I
found wa
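For readers following along, the wrapped form being timed looks roughly like
this (the qq parameter name and field are placeholders; URL left unescaped for
readability):

  http://localhost:8983/solr/collection1/select?q={!func}scale(query($qq),0,1)&qq={!dismax qf=text}foo bar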
I would like to know how to set the Velocity Template
Directory in Solr.
About 6 months ago I asked this question on this list:
http://lucene.472066.n3.nabble.com/Change-Velocity-Template-Directory-td4078120.html
At that time Erik Hatcher advised me to use
the v.base_dir in solrconfig.xml. Th
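Following that advice, a sketch of setting v.base_dir as a request-handler
default in solrconfig.xml (the handler name and path are assumptions):

  <requestHandler name="/browse" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="wt">velocity</str>
      <str name="v.base_dir">/path/to/velocity/templates</str>
    </lst>
  </requestHandler>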
Exception in thread "pool-1-thread-4" java.lang.NoSuchMethodError:
org.apache.solr.util.SimplePostTool
I am getting this error while posting data to Solr from a generated XML file,
although the Solr post.jar is present in the library classpath, and I also
tried keeping the source class of the post tool.
Urgent Call.
Hi Daniel
Thanks for the heads up. I'll try to get the patch integrated.
Regards
Puneet
On 6 Dec 2013 16:39, "Daniel Collins" wrote:
> You are right that the XmlQueryParser isn't completely/yet implemented in
> Solr. There is the JIRA mentioned above, which is still WIP, so you could
> use that as a basis and extend it.
Hi
I'm trying to upgrade a solr installation from 1.4 (yes, really) to 4.6.0,
and I find our requesthandler was solr.DisMaxRequestHandler, which is now
not only deprecated but deleted from solr-core-4.6.0.jar. Can anyone
advise on suitable alternatives, or was there any form of direct
replacement?
Thanks Michael. I didn't realize the cores and collections APIs were
interchangeable like that. I'd assumed that the cores API was meant for
vanilla Solr, while Collections was specific to SolrCloud. I appreciate
you clarifying that. Thanks.
Use the Core API, which provides the "UNLOAD" operation.
Simply unload the cores you don't need and they'll be automatically
removed from SolrCloud. You can also specify options like "deleteDataDir" or
"deleteIndex" to clean up the disk space, or you can do it in your script.
http://wiki.apache.org
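A concrete call might look like this (host, port, and core name are
placeholders):

  http://localhost:8983/solr/admin/cores?action=UNLOAD&core=collection1_shard2_replica1&deleteDataDir=true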
I'm writing a script so that when my SolrCloud setup is slowing down, I can
add a new physical machine and run a script to split the shard with the most
data and send half of the shard to the new machine. Here's the general
thinking I'm following:
- Pick the machine with the most data currently i
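For the split step itself, the Collections API provides SPLITSHARD; a sketch
(collection and shard names are placeholders):

  http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1

Note that the two resulting sub-shards are created on the original node, so the
"send half to the new machine" step still has to be done separately, e.g. by
creating a replica core on the new machine and unloading the old one.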
+1 on this.
- Original message -
From: "Otis Gospodnetic"
To: solr-user@lucene.apache.org
Sent: Friday, December 6, 2013 9:35:25
Subject: Re: Introducing Luwak for high-performance stored Lucene queries
Hi Charlie,
Very nice - thanks!
I'd love to see a side-by-side comparison with ES percolator. Got
something like that in your blog topic queue?
Hi Andrea,
I've been looking for an archetype because I am using Eclipse, and the specific
Solr config must be easy to deploy (we are now using Maven: mvn package,
mvn assembly, or similar). The idea is to keep the Solr config easy to package
and well stored in SVN.
I saw that this is a very common way to use Solr
On 06/12/2013 14:35, Otis Gospodnetic wrote:
Hi Charlie,
Very nice - thanks!
I'd love to see a side-by-side comparison with ES percolator. Got
something like that in your blog topic queue?
It's a good idea, I'll add it to the list. May need some more roundtuits.
Charlie
Hi Charlie,
Very nice - thanks!
I'd love to see a side-by-side comparison with ES percolator. Got
something like that in your blog topic queue?
Otis
--
Performance Monitoring * Log Analytics * Search Analytics
Solr & Elasticsearch Support * http://sematext.com/
On Fri, Dec 6, 2013 at 9:29
Hi all,
We've now released the library we mentioned in our presentation at
Lucene Revolution: https://github.com/flaxsearch/luwak
You can use this to apply tens of thousands of stored Lucene queries to
an incoming document in a second or so on relatively modest hardware. We
use it for media
Hi,
if you want to deploy the Solr war on Tomcat you only need to do it once, so
why do you need a Maven archetype? You can just get the war from the website
and deploy it to your server.
If you need to use Maven because you are, for example, developing in
Eclipse and you want to just launch jetty:run and
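If that Eclipse/jetty:run route is the goal, one common pattern (a sketch, not
an official archetype; the version and coordinates are assumptions) is a war
overlay: declare your project with war packaging, depend on the Solr war so
maven-war-plugin overlays it with your own config, and add jetty-maven-plugin
for jetty:run:

  <dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr</artifactId>
    <version>4.6.0</version>
    <type>war</type>
  </dependency>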
Hi everybody.
I'm not going to say that I'm new to Solr, but I'm new to Solr.
I've been googling a lot of things to get started with Solr, but I would like
to know if there is a Maven archetype for the 4.6 version (to deploy in
Tomcat).
Also I would like to know (based on best practices) what the community
Termvectors have nothing to do with any of this.
Please fix your analyzer first. If you want to add a synonym, it
should have a position increment of zero.
I bet exact phrase queries aren't working correctly either.
On Fri, Dec 6, 2013 at 12:50 AM, Isaac Hebsh wrote:
> 1) positions look all right
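For comparison, this is what correct stacking looks like in a stock schema:
SynonymFilterFactory emits the extra tokens with a position increment of zero,
so the original and the synonym occupy the same position (the field type name
here is an assumption):

  <fieldType name="text_syn" class="solr.TextField" positionIncrementGap="100">
    <analyzer type="index">
      <tokenizer class="solr.WhitespaceTokenizerFactory"/>
      <!-- stacked tokens: synonyms are emitted at positionIncrement=0 -->
      <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" expand="true"/>
    </analyzer>
  </fieldType>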
Obviously, there is the option of an external parameter ({...
v=$nestedq}&nestedq=...)
This is a good solution, but it is not practical when there are a lot of such
nested queries.
Any ideas?
On Friday, December 6, 2013, Isaac Hebsh wrote:
> We want to set a LocalParam on a nested query. When querin
We want to set a LocalParam on a nested query. When querying with the "v"
inline parameter, it works fine:
http://localhost:8983/solr/collection1/select?debugQuery=true&defType=lucene&df=id&q=TERM1 AND
{!lucene df=text v="TERM2 TERM3 \"TERM4 TERM5\""}
the parsedquery_toString is
+id:TERM1 +(text:term2 t
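The external-parameter form referenced above ({... v=$nestedq}&nestedq=...)
would make the same query look like this (left unescaped for readability):

  http://localhost:8983/solr/collection1/select?defType=lucene&df=id&q=TERM1 AND {!lucene df=text v=$nestedq}&nestedq=TERM2 TERM3 "TERM4 TERM5"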
You are right that the XmlQueryParser isn't completely/yet implemented in
Solr. There is the JIRA mentioned above, which is still WIP, so you could
use that as a basis and extend it. If you aren't familiar with Solr and
Java, you might find that a struggle, in which case you might want to
conside
Hi,
I am using Solr 3.3 for index generation with SQL Server, generating the index
successfully; now I am trying to generate it with an Oracle DB. I am using the
"*UDP_Getdetails*" procedure to generate the required indexes. This
procedure takes 2 input and 1 output parameters.
*input params :
id
na
I guess you refer to this post?
http://1opensourcelover.wordpress.com/2013/07/02/solr-external-file-fields/
If so, he already provides at least one possible use case:
*snip*
We use Solr to serve our company’s browse pages. Our browse pages are similar
to how a typical Stackoverflow tag page
1) Positions look all right (to me).
2) fieldNorm is determined by the size of the termVector, isn't it? The
termVector size isn't affected by the positions.
On Fri, Dec 6, 2013 at 10:46 AM, Robert Muir wrote:
> Your analyzer needs to set positionIncrement correctly: sounds like its
> broken.
Thank you, Chris.
I noticed the crontabs run at different times on the replicas (delayed by 10
minutes relative to the leader), and these crontabs reload the dictionary
files. Therefore, the terms are slightly different between replicas,
so the maxScore shows a difference.
Best,
Sling
Your analyzer needs to set positionIncrement correctly: sounds like it's broken.
On Thu, Dec 5, 2013 at 1:53 PM, Isaac Hebsh wrote:
> Hi,
> we implemented a morphologic analyzer, which stems words on index time.
> For some reasons, we index both the original word and the stem (on the same
> positi