It does. Absolutely. But it depends on what you mean by it. Start from
http://wiki.apache.org/solr/UpdateXmlMessages#add.2Freplace_documents
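That page documents the XML update message format; a minimal add request looks like this (the field names here are just examples):

```xml
<add>
  <doc>
    <field name="id">doc42</field>
    <field name="title">New document appended to the existing index</field>
  </doc>
</add>
```

Posting this to the collection's /update handler adds (or, with the same id, replaces) the document.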
On Fri, Jun 19, 2015 at 7:54 AM, 步青云 wrote:
> Hello,
> I'm a solr user with a question. I want to append new data to the
> existing index. Does Solr support appending new data to the index?
Steve,
Thank you so much. You guys are awesome.
Steve, how can I learn about the Lucene indexing process in more
detail? E.g., after we send documents for indexing, which functions are
called until the doc is actually stored in the index files?
I would be thankful if you could guide me here.
With Regards
Aman Tandon
I'm implementing an auto-suggest feature in Solr, and I'd like to achieve
the following:
For example, if the user enters "mp3", Solr might suggest "mp3 player",
"mp3 nano" and "mp3 music".
When the user enters "mp3 p", the suggestion should narrow down to "mp3
player".
Currently, when I type "mp3
Aman,
Solr uses the same token filter instances over and over, calling reset() before
sending each document through. Your code sets "exhausted" to true and then
never sets it back to false, so the next time the token filter instance is
used, its "exhausted" value is still true, so no input str
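A minimal, self-contained sketch of the pattern Steve describes (class and method names are illustrative only; Aman's real filter would extend org.apache.lucene.analysis.TokenFilter):

```java
import java.util.List;

// Sketch of the reuse bug: Solr reuses one filter instance per field,
// so any per-document state (like an "exhausted" flag) must be
// cleared in reset(), or every document after the first produces nothing.
class ConcatFilterSketch {
    private List<String> tokens = List.of();
    private boolean exhausted = false;

    void setInput(List<String> tokens) {
        this.tokens = tokens;
    }

    // Analogous to incrementToken(): emit the concatenated token once,
    // then nothing until reset() is called.
    String next() {
        if (exhausted) {
            return null; // a real TokenFilter would return false here
        }
        exhausted = true;
        return String.join("", tokens);
    }

    // Analogous to TokenFilter.reset(): without clearing the flag here,
    // the second document sees exhausted == true and emits nothing.
    void reset() {
        exhausted = false;
    }
}
```

In the real Lucene filter, reset() must also call super.reset() so the upstream token stream is reset as well.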
Yes I just saw.
With Regards
Aman Tandon
On Fri, Jun 19, 2015 at 10:39 AM, Steve Rowe wrote:
> Aman,
>
> My version won’t produce anything at all, since incrementToken() always
> returns false…
>
> I updated the gist (at the same URL) to fix the problem by returning true
> from incrementToken()
Hi Steve,
> you never set exhausted to false, and when the filter got reused, *it
> incorrectly carried state from the previous document.*
Thanks for replying, but I am not able to understand this.
With Regards
Aman Tandon
On Fri, Jun 19, 2015 at 10:25 AM, Steve Rowe wrote:
> Hi Aman,
>
>
Aman,
My version won’t produce anything at all, since incrementToken() always returns
false…
I updated the gist (at the same URL) to fix the problem by returning true from
incrementToken() once and then false until reset() is called. It also handles
the case when the concatenated token is zer
Hi Aman,
The admin UI screenshot you linked to is from an older version of Solr - what
version are you using?
Lots of extraneous angle brackets and asterisks got into your email and made
for a bunch of cleanup work before I could read or edit it. In the future,
please put your code somewhere
Hello,
I'm a solr user with a question. I want to append new data to the
existing index. Does Solr support appending new data to the index?
Thanks for any reply.
Best wishes.
Jason
You've repeated your original statement. Shawn's
observation is that 10M docs is a very small corpus
by Solr standards. You either have very demanding
document/search combinations or you have a poorly
tuned Solr installation.
On reasonable hardware I expect 25-50M documents to have
sub-second resp
See particularly the ADDREPLICA command and the
"node" parameter. You might not even need the "node"
parameter since when you add a replica Solr does its
best to put the new replica on an underutilized node.
Best,
Erick
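For reference, a Collections API ADDREPLICA call looks roughly like this (the collection, shard, and node names are placeholders for your own cluster):

```
http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard1&node=192.168.1.5:8983_solr
```

Omitting the node parameter lets Solr choose where to place the replica, as described above.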
On Thu, Jun 18, 2015 at 2:58 PM, Shawn Heisey wrote:
> On 6/18/2015 3:23 PM,
The stack trace is what gets returned to the client, right? It's often
much more informative to see the Solr log output, the error message
is often much more helpful there. By the time the exception bubbles
up through the various layers, vital information is sometimes not returned
to the client in t
No clue whatsoever - you haven't provided nearly enough details. I rather
doubt that many people
on this list really understand the interactions of that technology
stack, I certainly don't.
I'd ask on the ColdFusion list, as they're (apparently) the ones
who've integrated a Solr
connector of sorts. W
The query without load is still under 1 second. But under load, response time
can be much longer due to queued-up queries.
We would like to shard the data to something like 6M docs/shard, which should
still give an under-1-second response time under load.
What are some best practices to shard the dat
On 6/18/2015 3:23 PM, Jim.Musil wrote:
> Let's say I have a zookeeper ensemble with several Solr nodes connected to
> it. I've created a collection successfully and all is well.
>
> What happens when I want to add another solr node?
>
> I've tried spinning one up and connecting it to zookeeper, bu
Hi,
Let's say I have a zookeeper ensemble with several Solr nodes connected to it.
I've created a collection successfully and all is well.
What happens when I want to add another solr node?
I've tried spinning one up and connecting it to zookeeper, but the new node
doesn't "join" the collectio
10M doesn't sound too demanding.
How complex are your queries?
How complex is your data - e.g., number of fields and their sizes? Do you have
very large documents?
Are you sure you have enough RAM to fully cache your index?
Are your queries compute-bound or I/O bound? If I/O-bound, get more RAM. If
compute-bo
Hi Dmitry,
It’s weird that start and end offsets are the same - what do you see for the
start/end of ‘$’, i.e. if you take out MCFF? (I think it should be start:5,
end:6.)
As far as offsets "respecting the remapped token", are you asking for offsets
to be set as if 'dollarsign' were part of t
Hi,
We would probably like to shard the data, since the response time for
demanding queries at > 10M records is getting > 1 second in a single-request
scenario.
I have not done any data sharding before. What are some recommended ways to
do data sharding? For example, maybe by a criterion with a lis
Just rolling out a little bit more information as it is coming. I changed the
field type in the schema to text_general and that didn't change a thing.
Another thing is that it's consistently submitting/not submitting the same
documents. I will run over it one time and it won't index a set of
docu
Sent from my iPhone
Thanks :)
Exactly what I was looking for... As I only need to create the signature once,
this works perfectly for me :)
Cheers,
Markus
Sent from my iPhone
> On 17.06.2015, at 20:32, Shalin Shekhar Mangar wrote:
>
> Comments inline:
>
> On Wed, Jun 17, 2015 at 3:18 PM, Markus.Mirsberger
> wrot
Using Solr 5.1.0
This is the schema file
filepath
Hi,
I want to log Solr search queries/response times and the Solr indexing log
separately, in different sets of log files.
Is there any convenient framework/way to do it?
Thanks
Bharath
Please help - what am I doing wrong here? Please guide me.
With Regards
Aman Tandon
On Thu, Jun 18, 2015 at 4:51 PM, Aman Tandon
wrote:
> Hi,
>
> I created a *token concat filter* to concat all the tokens from token
> stream. It creates the concatenated token as expected.
>
> But when I am posti
On 6/18/2015 8:10 AM, Steven White wrote:
> In 5.1.0 (and maybe prior ver.?) when I enable managed schema per the
> above, the existing schema.xml file is left as-is, a copy of it is created
> as schema.xml.bak and a new one is created based on the name I gave it
> "my-schema.xml".
>
> With 5.2.1
We would like more information, but the first thing I notice is that it would
hardly make any sense to use a "string" type for file content.
Can you give more details about the exception?
Have you debugged a little bit?
How does the Solr input document look before it is sent to Solr?
Furthermor
On 6/18/2015 8:05 AM, Bence Vass wrote:
> Is there any documentation on how to start Solr 5.2.1 on Solaris (Solaris
> 10)? The script (solr start) doesn't work out of the box, is anyone running
> Solr 5.x on Solaris?
I think the biggest problem on Solaris will be the options used on the
ps comm
Hello,
I'm using Solr to pull information from a Database and a file system
simultaneously. The database houses the file path of the file in the file
system. It pulls all of those just fine. In fact, it combines the metadata
from the database and the metadata from the file system great. The probl
Hello,
Is there any documentation on how to start Solr 5.2.1 on Solaris (Solaris
10)? The script (solr start) doesn't work out of the box. Is anyone running
Solr 5.x on Solaris?
- Thanks
Hi everyone,
I just upgraded from 5.1.0 to 5.2.1 and noticed a behavior change which I
consider a bug.
In my solrconfig.xml, I have the following:
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">my-schema.xml</str>
</schemaFactory>
In 5.1.0 (and maybe prior ver.?) when I enable managed schema per the
above, the existing schema.xml file is left a
I had the very same issue,
because I had some documents with a redundant field, and I was using the
Infix Suggester as well.
Because the Infix Suggester returns the whole field content, if you have
duplicated fields across your docs, you will see duplicate suggestions.
Do you have any intermediate
I got this working - the errors were due to a mistake in letter case - was
using 'datasource' instead of 'dataSource' in the entity that was using
XpathEntityProcessor. Hence this was being ignored and was inheriting the JDBC
Datasource of the parent entity.
I am pasting the complete data-con
Hi Advait,
First of all, I suggest you study Solr a little bit [1], because your
requirements are actually really simple:
1) You can simply use more than one suggest dictionary if you care to keep
the suggestions separated (keeping track of whether a term is coming from the
name or from the category)
Hi,
We run an ecommerce company and would like to use SOLR for our product database
searches.
We have products along with the categories that they belong to. In case the
product belongs to more than 1 category, we have a comma separated field of
categories.
How do we do auto complete on -
1.
Our web site is created using PaperThin's CommonSpot CMS in a ColdFusion 10 and
Windows Server 2008 R2 environment, using Apache Solr 4.10.4 instead of CF
Solr. We create collections through the CMS interface and they do appear in
both the CMS and the Solr dashboard when created. However, when w
This is the error cause reported. I also see that it has been reported earlier
(http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201103.mbox/%3cd0f0d26c-3ac0-4982-9e2b-09dc96937...@535consulting.com%3E)
but could not find a solution.
I am nesting the FieldReaderDataSource within the
Hi,
I created a *token concat filter* to concat all the tokens from token
stream. It creates the concatenated token as expected.
But when I post XML containing more than 30,000 documents, only the
first document has the data for that field.
*Schema:*
* required="false" omitNorms
If he has not put any appends or invariants in the request handler,
facet=true is mandatory to activate the facets.
I haven't tried those specific facet queries.
I hope the problem was not simply that he didn't activate faceting ...
2015-06-18 10:35 GMT+01:00 Mikhail Khludnev :
> isn't facet=true ne
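If missing faceting was indeed the cause, adding facet=true to the query from the original mail should activate the facet.query parameters:

```
http://localhost:8983/solr/col/select?q=*:*&sfield=geolocation&pt=26.697,83.1876&facet=true&facet.query={!frange%20l=0%20u=50}geodist()&facet.query={!frange%20l=50.001%20u=100}geodist()&wt=json
```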
Hi,
I am using Solr 5.1. I'm getting duplicate suggestions when using my
Solr suggester. I'm using AnalyzingInfixLookupFactory and
DocumentDictionaryFactory. Can I configure it to return only distinct
suggestions?
here are details about my configuration:
from schema.xml:
mySuggeste
isn't facet=true necessary?
On Thu, Jun 18, 2015 at 12:03 PM, Midas A wrote:
>
> http://localhost:8983/solr/col/select?q=*:*&sfield=geolocation&pt=26.697,83.1876&facet.query={!frange%20l=0%20u=50}geodist()&facet.query={!frange%20l=50.001%20u=100}geodist()&&wt=json
>
>
> I am not getting facet re
http://localhost:8983/solr/col/select?q=*:*&sfield=geolocation&pt=26.697,83.1876&facet.query={!frange%20l=0%20u=50}geodist()&facet.query={!frange%20l=50.001%20u=100}geodist()&&wt=json
I am not getting facet results .
schema:
<dynamicField name="*_coordinate" type="tdouble" indexed="true" store
Hi Aman,
https://wiki.apache.org/solr/HowToContribute
HTH
On Thu, Jun 18, 2015 at 12:11 PM, Aman Tandon
wrote:
> Hi,
>
> We created a new phonetic filter. It is working great on our products;
> most of our suppliers are Indian, and it is quite helpful for us to provide
> the exact results, e.g.
Hello,
I have a question about the extended dismax query parser. If the default operator
is changed to AND (q.op=AND), then the search results seem to be incorrect. I
will explain it with some examples. For this test I use Solr v5.1 and the tika
core from the example directory.
== Preparation ==
Add
Hi,
It looks like MappingCharFilter sets the start and end offset to the same
value. Can this be affected by some setting?
For a string: test $ test2 and mapping "$" => " dollarsign " (we insert
extra space to separate $ into its own token)
we get: http://snag.gy/eJT1H.jpg
Ideally, we would like
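For context, a mapping like the one described is typically wired into the analyzer like this (the mapping file name is illustrative):

```
<!-- in the field type's analyzer, in schema.xml -->
<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-chars.txt"/>
```

with mapping-chars.txt containing:

```
"$" => " dollarsign "
```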