Sure would like to see it work!
Regards,
Lajos Moczar
On 17/03/2014 22:11, Steve Huckle wrote:
Hi,
The Suggest Search Component that comes preconfigured in Solr 4.7.0
solrconfig.xml seems to thread dump when I call it:
http://localhost
have access to the fields
you want.
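To sketch that idea (field and type names here are hypothetical, not from the thread): in Solr 4.x the Suggester returns only suggestion strings, so a common way to get stored fields such as an id back with each suggestion is to query a normal search handler against an edge-ngrammed copy of the title, then request whatever fields you need via fl:

```xml
<!-- schema.xml: a prefix-search type so a plain /select query can serve
     suggestions while returning any stored fields (names hypothetical) -->
<fieldType name="text_prefix" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="20"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
<field name="movie_title_prefix" type="text_prefix" indexed="true" stored="false"/>
<copyField source="movie_title" dest="movie_title_prefix"/>
```

A query like q=movie_title_prefix:oce&fl=movie_id,movie_title would then return ids alongside titles.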
Regards,
Lajos
On 17/03/2014 14:05, omer sonmez wrote:
I am using Solr 4.5.1 to suggest movies for my system. What I need is for Solr to return not
only the movie_title but also the movie_id that belongs to the movie. As an example;
this is kind of what i
Hi Hamish,
Are you running Jetty?
In Tomcat, I've put jts-1.13.jar in the WEB-INF/lib directory of the
unpacked distribution and restarted. It worked fine.
Maybe check file permissions as well ...
Regards,
Lajos
On 16/03/2014 10:18, Hamish Campbell wrote:
Hey all,
Trying t
and collections, but via different strategies.
More inline ...
On 15/03/2014 19:17, shushuai zhu wrote:
Hi Lajos, thanks again.
Your suggestion is to support multi-tenant via collection in a Solr Cloud:
putting small tenants in one collection and big tenants in their own
collections.
My
's 6,000 shards. Hence my argument for multiple low-end
tenants per collection, and then only give your higher-end tenants their
own collections. Just to make things simpler for you ;)
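A minimal sketch of that routing decision (collection and field names are hypothetical, not from the thread): small tenants share one collection and are isolated by a filter query on a tenant id field, while high-end tenants map to dedicated collections.

```python
# Hypothetical tenant-to-collection routing for a multi-tenant SolrCloud setup:
# low-end tenants share one collection (isolated via a tenant_id filter query),
# high-end tenants get their own collection.

SHARED_COLLECTION = "tenants_shared"
DEDICATED = {"bigcorp": "tenant_bigcorp"}  # high-end tenants -> own collection

def route(tenant_id: str):
    """Return (collection, extra_filter_queries) for a tenant's search."""
    if tenant_id in DEDICATED:
        return DEDICATED[tenant_id], []
    # shared collection: isolate the tenant with a filter query
    return SHARED_COLLECTION, [f"tenant_id:{tenant_id}"]
```

The point is that the application, not Solr, decides per request which collection to hit and which filters to force.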
Regards,
Lajos
____
From: Lajos
To: solr-user@lucene.apache.org
Sent: Satu
e
developed my own set of tools over the years and they work quite well.
Finally, I would (in general) argue for cloud-based implementations to
give you data redundancy, but that decision would require more information.
HTH,
Lajos Moczar
theconsultantcto.com
Enterprise Lucene/Solr
On 1
alOperations.getFiniteStrings(SpecialOperations.java:259)
etc etc
Any ideas??
Thanks,
Lajos
t the replicas for that node, as listed in
clusterstate.json, are present and accounted for.
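As a sketch of that check (the dict layout follows the Solr 4.x clusterstate.json format; node names are hypothetical), you can walk the cluster state and list which replicas lived on the lost node:

```python
# Given clusterstate.json content (Solr 4.x layout: collection -> shards ->
# replicas, each replica carrying a node_name), list the replicas hosted on a
# given node so you can verify they come back or add replacements.

def replicas_on_node(clusterstate: dict, node_name: str):
    found = []
    for coll, cdata in clusterstate.items():
        for shard_name, sdata in cdata.get("shards", {}).items():
            for replica_name, rdata in sdata.get("replicas", {}).items():
                if rdata.get("node_name") == node_name:
                    found.append((coll, shard_name, replica_name))
    return found
```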
HTH,
Lajos
On 28/02/2014 16:17, Jan Van Besien wrote:
Hi,
I am a bit confused about how SolrCloud disaster recovery is supposed
to work exactly in the case of losing a single node completely.
Thanks Hoss, that makes sense.
Anyway, I like the new paradigm better ... it allows for more
intelligent elevation control.
Cheers,
L
On 25/02/2014 23:26, Chris Hostetter wrote:
: What it seems is happening is that excludeIds or elevateIds ignores
: what's in elevate.xml. I would hav
Hit the send button too fast ...
What it seems is happening is that excludeIds or elevateIds ignores
what's in elevate.xml. I would have expected (hoped) that it would layer
on top of that, which makes a bit more sense, I think.
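For illustration (handler path, core name, and document ids are hypothetical), the request-time form of these parameters looks like:

```
http://localhost:8983/solr/collection1/elevate?q=ipod&elevateIds=DOC-1,DOC-2&excludeIds=DOC-3
```

Per the observation in this thread, passing elevateIds/excludeIds this way replaces, rather than layers on top of, the entries configured in elevate.xml for that query.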
Thanks,
Lajos
On 25/02/2014 22:58, Lajos wrote:
hing or should I open a JIRA? Looking at the
source code I can't immediately see what would be wrong.
Thanks,
Lajos
o verify whether this is a bug or intended behavior. Lots of older docs
say to use UNLOAD for these situations.
Thanks,
Lajos
There's always http://projects.apache.org/feeds/rss.xml.
L
On 03/02/2014 14:59, Arie Zilberstein wrote:
Hi,
Is there a mailing list for getting just announcements about new versions?
Thanks,
Arie
ailed nodes to running nodes.
- Mark
On Jan 22, 2014, 12:57:46 PM, Lajos wrote:
Thanks Mark ...
indeed, some doc updates would help.
Regarding what seems to be a popular question on sharding: it seems that
it would be a Good Thing that the shards for a collection running on HDFS
essentially be poi
been following your work recently, would be interested in helping
out on this if there's the chance.
Is there a JIRA yet on this issue?
Thanks,
lajos
On 22/01/2014 16:57, Mark Miller wrote:
Right - solr.hdfs.home is the only setting you should use with SolrCloud.
The documentat
Uugh. I just realised I should have taken out the data dir and update log
definitions! Now it works fine.
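For reference, a minimal sketch of the resulting setup (the HDFS URI is hypothetical): solr.hdfs.home is set on the directory factory, and there are no explicit data dir or update-log directory overrides in solrconfig.xml.

```xml
<!-- solrconfig.xml: HDFS directory factory for SolrCloud. With
     solr.hdfs.home set, each core places its data under HDFS on its own;
     do NOT also define <dataDir> or an <updateLog dir="..."> override. -->
<directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
  <str name="solr.hdfs.home">hdfs://namenode:8020/solr</str>
  <bool name="solr.hdfs.blockcache.enabled">true</bool>
</directoryFactory>
```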
Cheers,
L
On 22/01/2014 11:47, Lajos wrote:
Hi all,
I've been running Solr on HDFS, and that's fine.
But I have a Cloud installation I thought I'd try on HDFS. I uploa
Hi all,
I've been running Solr on HDFS, and that's fine.
But I have a Cloud installation I thought I'd try on HDFS. I uploaded
the configs for the core that runs in standalone mode already on HDFS
(on another cluster). I specify the HdfsDirectoryFactory, HDFS data dir,
solr.hdfs.home, and HDF
layer I can use to validate
input; if ever the situation warranted, I could use a filter to check
for anything malicious. I can also layer security on top as well.
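A toy sketch of such a validation layer (the parameter whitelist and the row limit are made up for illustration): the fronting application drops anything not explicitly allowed before forwarding the request to Solr.

```python
# Hypothetical app-side validation in front of Solr: whitelist request
# parameters and clamp obviously abusive values before forwarding.

ALLOWED_PARAMS = {"q", "fq", "fl", "rows", "start", "sort"}

def sanitize(params: dict) -> dict:
    """Keep only whitelisted parameters; clamp rows to a sane maximum."""
    clean = {k: v for k, v in params.items() if k in ALLOWED_PARAMS}
    if int(clean.get("rows", 10)) > 100:
        clean["rows"] = "100"
    return clean
```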
Cheers,
Lajos
On 22/01/2014 06:45, Alexandre Rafalovitch wrote:
So, everybody so far is exposing Solr directly to the web, but
Just go for Tomcat. For all its problems (and I should know, having used
it since it was originally JavaWebServer), it is perfectly capable of
handling high-end production environments, provided you tune it
correctly. We use it with our customized Solr 1.3 version without any
problems.
Lajos
considered that but our
requirements essentially mean we have to do the heavy lifting within the
filter itself. Not that I'm opposed, it is just that I'm apparently
missing something simple still.
Thanks for the replies.
Lajos
Smiley, David W. wrote:
Although this is not a direct answer
Hi all,
I've been writing some custom synonym filters and have run into an issue
with returning a list of tokens. I have a synonym filter that uses the
WordNet database to extract synonyms. My problem is how to define the
offsets and position increments in the new Tokens I'm returning.
For a
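The convention Lucene's own synonym filters follow can be illustrated outside Java (this is a plain-Python sketch, not Lucene API code): an injected synonym token gets a position increment of 0, so it occupies the same position as the original token, and it copies the original token's start/end offsets, since it maps back to the same span of input text.

```python
from dataclasses import dataclass

# Illustration of how injected synonym tokens are positioned: pos_incr = 0
# stacks the synonym on the original token's position, and the offsets are
# copied from the original token (they point at the same input text).

@dataclass
class Token:
    text: str
    pos_incr: int   # positions advanced relative to the previous token
    start: int      # character offset into the original input
    end: int

def inject_synonyms(tokens, synonyms):
    out = []
    for tok in tokens:
        out.append(tok)
        for syn in synonyms.get(tok.text, []):
            # same position (pos_incr=0), same offsets as the source token
            out.append(Token(syn, 0, tok.start, tok.end))
    return out
```

In Lucene terms, the same rule is expressed via PositionIncrementAttribute and OffsetAttribute on the injected token.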