Hi
I am new to Solr and am trying to run through the quick start guide (
http://lucene.apache.org/solr/quickstart.html).
The installation seems fine but then I run:
bin/solr start -e cloud -noprompt
I get:
Welcome to the SolrCloud example!
Starting up 2 Solr nodes for your example SolrCloud cluster.
I just started up a two shard cluster on two machines using HDFS. When I
started to index documents, the log shows errors like this. They repeat
when I execute searches. All seems well - searches and indexing appear
to be working.
Possibly a configuration issue?
My HDFS config:
true
Erick,
Thank you very much, the ms(NOW) was all I needed.
Best,
Fabricio
On Fri, Mar 27, 2015 at 15:26, Erick Erickson [via Lucene] <
ml-node+s472066n4195883...@n3.nabble.com> wrote:
> Why do you want to in the first place? I ask because it's a common
> trap to think the server time is something that is useful...
Dear SOLR users,
I have been using the /terms component to find low occurrence terms in a large
SOLR index, and this works very well, but it is not possible to filter (fq) the
results so you are stuck analyzing the whole index.
Other options might be to use SOLR faceting, but I don't see how t
To pile on: If you're talking about pointing two Solr instances at the
_same_ index, it doesn't matter whether you are on NFS or not, you'll
have all sorts of problems. And if this is a SolrCloud installation,
it's particularly hard to get right.
Please do not do this unless you have a very good reason.
You can simplify things a bit by indexing a "batch number" guaranteed
to be different between two runs for the same keyField. In fact I'd
make sure it was unique amongst all my runs. Simplest is a timestamp
(assuming you don't start two batches within a millisecond!). So it
looks like this.
get a
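Erick's batch-number idea can be sketched like this (a minimal sketch; the field name `batch_ts` and the exact delete-by-query pattern are illustrative, not from the thread):

```python
import time

def make_batch_id():
    # Epoch milliseconds: unique per run as long as two batches
    # never start within the same millisecond, as noted above.
    return int(time.time() * 1000)

def delete_old_batches_query(current_batch, field="batch_ts"):
    # Delete-by-query string that removes every document indexed
    # with an older batch id ('}' makes the upper bound exclusive,
    # so the batch we just indexed survives).
    return "%s:[* TO %d}" % (field, current_batch)

batch = make_batch_id()
# every document in this run would be indexed with {"batch_ts": batch},
# then the cleanup query is sent once the run finishes:
cleanup_q = delete_old_batches_query(batch)
```

After a successful re-index, sending `cleanup_q` as a delete-by-query would drop all documents from earlier runs, including keys no longer on the new input list.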
You say you re-indexed, did you _completely_ remove the data directory
first, i.e. the parent of the "index" and, maybe, "tlog" directories?
I've occasionally seen remnants of old definitions "pollute" the new
one, and since the key is so fundamental I can see it
being a problem.
Best,
Erick
Why do you want to in the first place? I ask because it's a common
trap to think the server time is something that is useful...
That said, it would require a little fiddling, but you can return the
number of milliseconds since January 1, 1970 (standard Unix epoch) by
adding ms(NOW) to your fl parameter.
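For reference, ms(NOW) evaluates to the server's milliseconds since the Unix epoch. A Python equivalent of that value (the function name here is ours, just for illustration):

```python
import time

def server_epoch_millis():
    # Approximately what Solr's ms(NOW) function query evaluates to:
    # milliseconds since the Unix epoch, 1970-01-01T00:00:00Z.
    return int(time.time() * 1000)
```

In a request this might look like `fl=id,server_ms:ms(NOW)`, where `server_ms` is just an alias we chose for the pseudo-field.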
There's a JIRA ( https://issues.apache.org/jira/browse/SOLR-4722 )
describing a highlighter which returns term positions rather than
snippets, which could then be mapped to the matching words in the indexed
document (assuming that it's stored or that you have a copy elsewhere).
-Simon
On Wed, M
You could pre-process the field values in an update processor. You can even
write a snippet in JavaScript. You could check one field and then redirect
a field to an alternate field which has a different analyzer.
What expectations do you have as to what analysis should occur at query
time?
-- Jac
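The field-redirect idea above, sketched in plain Python rather than as an actual Solr script update processor (all field names here are illustrative assumptions, not from the thread):

```python
def route_by_locale(doc, check_field="locale", source_field="body",
                    targets=None):
    # Mimics what a script update processor could do: inspect one
    # field and move another field's value into a locale-specific
    # field, which in the schema would have its own analyzer.
    if targets is None:
        targets = {"de": "body_de", "fr": "body_fr"}
    target = targets.get(doc.get(check_field))
    if target and source_field in doc:
        doc[target] = doc.pop(source_field)
    return doc
```

In Solr itself this logic would typically live in a JavaScript snippet run by a script-based update processor, as mentioned above.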
The main goal is to allow each user to use their own stop words list. For example, a user
types "th"
and currently sees these results in his terms search:
the
the one
the then
then
then and
But the user has the stop word "the" and wants to get these results:
then
then and
I am trying to write a custom analyzer , whose execution is determined by
the value of another field within the document.
For example if the locale field in the document has 'de' as the value, then
the analizer would use the German set of tokenizers/filters to process the
value of a field.
My que
Several years ago, I accidentally put Solr indexes on an NFS volume and it was
100X slower.
If you have enough RAM, query speed should be OK, but startup time (loading
indexes into file buffers) could be really long. Indexing could be quite slow.
wunder
Walter Underwood
wun...@wunderwood.org
On 3/27/2015 7:45 AM, afrooz wrote:
> My main issue is that I am a .NET developer, but I need to use this class
> within Solr and call it somehow from .NET. The issue is that I want the jar
> file built from this source code; from my searches, I think I have to install Ant and
> run it within Eclipse...
>
Thanks,
My main issue is that I am a .NET developer, but I need to use this class
within Solr and call it somehow from .NET. The issue is that I want the jar
file built from this source code; from my searches, I think I have to install Ant and
run it within Eclipse...
I tried creating a jar file
On 3/27/2015 8:10 AM, phi...@free.fr wrote:
>> You must send indexing requests to Solr,
>
> Are you referring to posting queries to SOLR, or to something
> else?
>
>> If you can set up multiple threads or processes...
>
> How do you do that?
Yes, I am referring to posting requests to the
On 3/27/2015 12:30 AM, abhi Abhishek wrote:
> i am trying to use ZFS as filesystem for my Linux Environment. are
> there any performance implications of using any filesystem other than
> ext-3/ext-4 with SOLR?
That should work with no problem.
The only time Solr tends to have problems is
Hi Shawn,
> You must send indexing requests to Solr,
Are you referring to posting queries to SOLR, or to something
else?
> If you can set up multiple threads or processes...
How do you do that?
> https://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.LengthFilterFactory
Can y
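For reference, a minimal fieldType using that filter might look like the following; the type name and the min/max values are just illustrative examples:

```xml
<fieldType name="text_limited" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- drop tokens shorter than 2 or longer than 50 characters -->
    <filter class="solr.LengthFilterFactory" min="2" max="50"/>
  </analyzer>
</fieldType>
```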
Yes that works and now I have a better understanding of the soft and hard
commits to boot.
Thanks again Shawn.
Russ.
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: 27 March 2015 13:22
To: solr-user@lucene.apache.org
Subject: Re: Replacing a group of documents
On 3/27/2015 4:14 AM, phi...@free.fr wrote:
> Hi,
>
> my SOLR 5 solrconfig.xml file contains the following lines:
>
>
>on
> text
>100
>
>
> where the 'text' field contains thousands of words.
>
> When I start SOLR, the search engine
On 3/27/2015 7:07 AM, Russell Taylor wrote:
> Hi Shawn, thanks for the quick reply.
>
> I've looked at both methods and I think that they won't work for a number of
> reasons:
>
> 1)
> uniqueKey:
> I could use the uniqueKey and overwrite the original document but I need to
> remove the documents which are not on my new input list
Hi Shawn, thanks for the quick reply.
I've looked at both methods and I think that they won't work for a number of
reasons:
1)
uniqueKey:
I could use the uniqueKey and overwrite the original document but I need to
remove the documents which
are not on my new input list and the issue with the
Alex - that’s definitely possible, with performance being the main
consideration here.
But since this is for query time stop words, maybe instead your fronting
application could take the users list and remove those words from the query
before sending it to Solr?
I’m curious what the ultimate goal is here.
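The "remove them in the fronting application" idea could be as simple as the following sketch (plain string handling, not a Solr API):

```python
def strip_user_stopwords(query, stopwords):
    # Remove a user's personal stop words from the raw query string
    # before it is sent to Solr; whitespace tokenization only,
    # which is enough for simple term queries.
    kept = [t for t in query.split() if t.lower() not in stopwords]
    return " ".join(kept)
```

With the user's stop word "the", the query "the one then" would be sent to Solr as "one then".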
We need an advanced stop words filter in Solr.
We need the stopwords to be stored in a db, with the ability for users to change them
(each user should have their own stopwords). That's why I am thinking about
sending stop words to Solr from our app, or connecting to our db from Solr and
using the updated stop words in a custom StopFilterFactory
Hi,
I never used that but I think you should
- get the source code / clone the repository
- run the ant build (I see a "dist" target)
- put the artifact in your core / shared lib dir so Solr can see that
library
- have a look at the README [1] for how to use that
Best,
Andrea
[1]
https://git
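Those steps might look roughly like this on the command line (a sketch under the assumptions above; `<repository-url>`, the directory name, and the lib path are placeholders, not from the thread):

```
# clone the project and build it with Ant
git clone <repository-url> auto-phrase-tokenfilter
cd auto-phrase-tokenfilter
ant dist

# copy the built jar somewhere Solr loads shared libraries from
cp dist/*.jar $SOLR_HOME/server/solr/lib/
```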
I am also having this issue. Can anyone help us?
On 23/03/15 20:05, Erick Erickson wrote:
you don't run a SQL engine from a servlet
container, why should you run Solr that way?
https://twitter.com/steff1193/status/580491034175660032
https://issues.apache.org/jira/browse/SOLR-7236?focusedCommentId=14383624&page=com.atlassian.jira.plugin.system.
Hi,
my SOLR 5 solrconfig.xml file contains the following lines:
on
text
100
where the 'text' field contains thousands of words.
When I start SOLR, the search engine takes several minutes to index the words
in the 'text' field (although
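If this is the suggest (or spellcheck) component rebuilding its dictionary from the 'text' field at startup, one common fix is to disable building on startup and on commit. An illustrative solrconfig.xml fragment (the names and values are examples, not the poster's actual config):

```xml
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="field">text</str>
    <!-- avoid rebuilding the dictionary on every startup/commit;
         build it explicitly with suggest.build=true instead -->
    <str name="buildOnStartup">false</str>
    <str name="buildOnCommit">false</str>
  </lst>
</searchComponent>
```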
For a single 'where' clause, an RDBMS with an index performs about the same as an
inverted index. The inverted index wins on multiple 'where' clauses, where it
doesn't need composite indices; multivalued fields are also an intrinsic
advantage. More details at
http://www.slideshare.net/lucenerevolution/what-is-in
I think it is very likely that it is due to Solr nodes losing
ZK connections (after a timeout). We have experienced that a lot. One
thing you want to do is to make sure your ZK servers do not run on
the same machines as your Solr nodes - that helped us a lot.
On 24/03/15 13:57, Gopal Jee wrote:
Hi Edwin,
please provide some more details about your context (e.g. the complete
stacktrace, the query you're issuing)
Best,
Andrea
On 03/27/2015 09:38 AM, Zheng Lin Edwin Yeo wrote:
Hi everyone,
I've changed my uniqueKey to another name, instead of using id, on the
schema.xml.
> However, after I have done the indexing
Hi everyone,
I've changed my uniqueKey to another name, instead of using id, on the
schema.xml.
However, after I have done the indexing (the indexing is successful), I'm
not able to perform a search query on it. It gives the error
java.lang.NullPointerException.
Is there any other place which I need
>
> so you’ll end up forever invalidating your cache.
What if we have 1 million ids assigned to different users and each user
performs the query on Solr daily. Will those entries then be there forever?
With Regards
Aman Tandon
On Fri, Mar 27, 2015 at 1:50 PM, Upayavira wrote:
> The below won’t perform well.
The below won’t perform well. You’ve used a filter query, which will be
cached, so you’ll end up forever invalidating your cache.
Better would be http://localhost:8983/solr/select?q=id:153
Perhaps better still would be http://localhost:8983/solr/get?id=153
The latter is a “real time get”, which will return the latest version of the
document even before it has been committed to the index.
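The two request styles compared above, written out as URL-building helpers (a sketch; the base URL is taken from the examples in this message):

```python
from urllib.parse import urlencode

BASE = "http://localhost:8983/solr"  # base URL from the examples above

def select_url(doc_id):
    # Ordinary search request: goes through the query parser,
    # scoring, and the caches.
    return BASE + "/select?" + urlencode({"q": "id:%s" % doc_id})

def realtime_get_url(doc_id):
    # Real-time get: fetches the current version of one document by
    # its unique key, without churning the filter cache.
    return BASE + "/get?" + urlencode({"id": doc_id})
```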
Okay. Thanks Shawn..
On Thu, Mar 26, 2015 at 12:25 PM, Shawn Heisey wrote:
> On 3/26/2015 12:03 AM, Nitin Solanki wrote:
> > Great thanks Shawn...
> > As you said - **For 204GB of data per server, I recommend at least 128GB
> > of total RAM,
> > preferably 256GB**. Therefore, if I have 204GB of