Hi,
There's https://github.com/flaxsearch/luwak, which isn't integrated into Solr
yet but could be added as a SearchComponent with a bit of work. It's running
off a Lucene fork at the moment, but I cut a 4.8 branch at Berlin Buzzwords
which I will push to GitHub later today.
Alan Woodward
Hi,
I've been reading up a lot on what David has written about GeoHash fields
and would like to use them.
I'm trying to create a nice way to display cluster counts of geo points on
a google map. It's naturally not going to be possible to send 40k marker
information over the wire to cluster... so
Hi All,
We are using EdgeNGramFilterFactory for searching with minGramSize="3"; as
per business logic, autofill suggestions should appear on entering 3
characters in the search filter. While searching for a contact named "Bill
Moor", the value does not get listed when we type 'Bill M' but wh
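The behaviour described is consistent with how edge n-grams work: a query
token shorter than minGramSize produces no grams at all, so the single
character 'M' cannot match anything. A small Python sketch of front edge
n-gram generation (parameter names mirror the filter's attributes, but this
is an illustration, not Solr's implementation):

```python
def edge_ngrams(token, min_gram=3, max_gram=10):
    """Generate front edge n-grams, like Solr's EdgeNGramFilterFactory
    with side="front": prefixes of length min_gram..max_gram."""
    return [token[:n] for n in range(min_gram, min(len(token), max_gram) + 1)]

# "Bill" indexes as ['Bil', 'Bill'], but the query token "M" is shorter
# than min_gram, so it yields no grams and cannot match "Moor".
print(edge_ngrams("Bill"))  # ['Bil', 'Bill']
print(edge_ngrams("Moor"))  # ['Moo', 'Moor']
print(edge_ngrams("M"))     # []
```

A common fix is to lower minGramSize (typically on the index-time analyzer
only) so that short query tokens can still match.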
Hi,
How do we achieve distributed indexing in SolrCloud? I have an external
ZooKeeper ensemble with two separate machines acting as leaders.
In researching further I found
As of now we are specifying the port id in our "update" call, and if the
leader is down ZooKeeper does not forward the request to the other leader.
If you are using Java to index/query, then use CloudSolrServer which
accepts the ZooKeeper connection string as a constructor parameter and it
will take care of routing requests and failover.
On Thu, May 29, 2014 at 2:41 PM, Priti Solanki wrote:
> Hi,
>
> How to achieve distributed indexing in s
Hi,
Thanks for your valuable inputs... Find below my code and config in
solrconfig.xml. Index update is successful but I am not able to see any data
from solr admin console. What could be the issue? Any help here is highly
appreciated.
I can see the data in the solr admin gui a
I am not using DIH to index data, I use the post.jar & .XML file to load in
SOLR.
I am not sure whether I can still use DIH, importstart and importend?
-Original Message-
From: Stefan Matheis [mailto:matheis.ste...@gmail.com]
Sent: Wednesday, May 28, 2014 11:55 AM
To: solr-user@lucene.apach
Sounds like you are tokenizing your string when you don't really want to.
Either you want all queries to only search against prefixes of the whole
value without tokenization, or you need to produce several copyFields with
different analysis applied and use dismax to let Solr know which should
rank
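As a sketch of the copyField approach (field and type names here are
hypothetical, not taken from the original message), the schema would carry a
tokenized field for normal search plus an untokenized prefix copy:

```xml
<!-- schema.xml: "text_prefix" would be a type using KeywordTokenizer
     followed by EdgeNGramFilterFactory, so the whole value is treated
     as one token and n-grammed from the front -->
<field name="name"        type="text_general" indexed="true" stored="true"/>
<field name="name_prefix" type="text_prefix"  indexed="true" stored="false"/>
<copyField source="name" dest="name_prefix"/>
```

Queries could then use the edismax parser over both fields, e.g.
defType=edismax&qf=name name_prefix^5, so whole-value prefix matches rank
above ordinary tokenized matches.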
We've definitely looked at Luwak before... nice to hear it might be brought
closer into the Solr ecosystem!
On 5/29/2014 4:18 AM, M, Arjun (NSN - IN/Bangalore) wrote:
> Thanks for your valuable inputs... Find below my code and config in
> solrconfig.xml. Index update is successful but I am not able to see any data
> from solr admin console. What could be the issue? Any help here is highly
> appr
On 5/29/2014 12:50 AM, Elran Dvir wrote:
> In my index, I have an EnumField called severity. This is its configuration
> in enumsConfig.xml:
>
> <enum name="severity">
>   <value>Not Available</value>
>   <value>Low</value>
>   <value>Medium</value>
>   <value>High</value>
>   <value>Critical</value>
> </enum>
>
> My index contains documents with these values.
> When I se
Thanks Shawn... Just one more question..
Can both autoCommit and autoSoftCommit be enabled? If both are enabled, which
one takes precedence?
Thanks & Regards,
Arjun M
-Original Message-
From: ext Shawn Heisey [mailto:s...@elyograg.org]
Sent: Thursday, May 29, 2014 7:02 PM
To:
At a minimum, the doc is too skimpy to say whether this should work or
whether this is forbidden. That said, I wouldn't have expected wildcard to
be supported for enum fields since they are really storing small integers.
Ditto for regular expressions on enum fields.
See:
https://cwiki.apache.o
Hi,
On Wed, May 28, 2014 at 4:25 AM, Vineet Mishra wrote:
> Hi All,
>
> Has anyone tried with building Offline indexes with EmbeddedSolrServer and
> posting it to Shards.
>
What do you mean by "posting it to shards"? How is that different than
copying them manually to the right location in FS?
Hi Bihan,
That's a lot of parameters and without trying one can't really give you
very specific and good advice. If I had to suggest something quickly I'd
say:
* go back to the basics - remove most of those params and stick with the
basic ones. Look at GC and tune slowly by changing/adding para
I managed to figure it out! I did a full commit to the index by:
1. Creating an update.xml file with the commands:
... with the pasted index in the data folder.
2. Running the command from the web browser:
/solr/update?commit=true
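The XML commands did not survive the archive; a typical update.xml for this
purpose would contain just a commit element (a sketch, not necessarily the
poster's exact file):

```xml
<commit/>
```

This can be posted with post.jar or curl, or skipped entirely in favour of
the bare /solr/update?commit=true request shown above.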
--
View this message in context:
http://lucene.472066.n3.
You will probably also want to get some better visibility into what is going
on with your JVM and GC. The easiest way is to enable some GC logging
options. The following additional options will give you a good deal of
information in the GC logs:
-Xloggc:$JETTY_LOGS/gc.log
-verbose:gc
-XX:+PrintGCDateStamps
Hi,
1. openSearcher (autoCommit)
According to the Apache Solr reference, "autoCommit/openSearcher" is set to
false by default.
https://cwiki.apache.org/confluence/display/solr/UpdateHandlers+in+SolrConfig
But on Solr v4.8.1, if "openSearcher" is omitted from the autoCommit config,
new searcher
Thanks for looking into this.
These are our static queries. We only see one of them getting executed. If it
fails to execute the others, shouldn't it show an error in the log?
*:*
field1:"abc"
-field2:"xy
Hi,
What are ways to prevent someone executing random delete commands against Solr?
Like:
curl http://solr.com:8983/solr/core/update?commit=true -H "Content-Type:
text/xml" --data-binary '<delete><query>*:*</query></delete>'
I understand we can do IP based access (change /etc/jetty.xml). Is there
anything Solr provides out
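Beyond IP filtering in jetty.xml, one container-level option is a servlet
security constraint in Solr's web.xml (core name and role name here are
hypothetical; the syntax is standard Servlet spec, not Solr-specific):

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>updates</web-resource-name>
    <!-- servlet url-patterns only allow exact paths or /prefix/* forms,
         so each core's update path is listed explicitly -->
    <url-pattern>/core1/update</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>solr-admin</role-name>
  </auth-constraint>
</security-constraint>
```

Requests to the update handler would then require container-managed
authentication, while searches remain open.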
Hi All
Thanks for the suggestion. I will implement these changes and let you know
the outcome.
Regards
Bihan
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-High-GC-issue-tp4138570p4138694.html
Sent from the Solr - User mailing list archive at Nabble.com.
And I'm not even sure what the actual use case is here. I mean, the values
of an enum field must be defined in advance, so if you think a value starts
with "H", just eyeball that static list and see that the only predefined
value starting with "H" is "High", so you can simply replace your "*" wi
Agreed, that is a LOT of options.
First, check the defaults and remove any flags that are setting something to
the default. You can see all the flags and the default values with this command:
java -XX:+PrintFlagsFinal -version
For example, the default for ParallelGCThreads is 8, so you do not
On 5/29/2014 7:52 AM, M, Arjun (NSN - IN/Bangalore) wrote:
> Thanks Shawn... Just one more question..
>
> Can both autoCommit and autoSoftCommit be enabled? If both are enabled, which
> one takes precedence?
Yes, and it's a very common configuration. If you do enable both, you
want openSearcher
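A common solrconfig.xml sketch for enabling both (the times are illustrative,
not recommendations): hard commits flush to disk without opening a searcher,
while soft commits control when new documents become visible.

```xml
<autoCommit>
  <!-- hard commit: durability only; do not open a new searcher -->
  <maxTime>15000</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <!-- soft commit: makes new documents visible to searches -->
  <maxTime>1000</maxTime>
</autoSoftCommit>
```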
On IRC you said you found out the answers before I came along. For
everyone else’s benefit:
* Solr’s “documentation” is essentially the “Solr Reference Guide”. Only
look at the wiki as a secondary source.
* See “location_rpt” in the example schema.xml which supports multi-valued
spatial data. I
On 5/29/2014 9:21 AM, Boon Low wrote:
> 1. openSearcher (autoCommit)
> According to the Apache Solr reference, "autoCommit/openSearcher" is set to
> false by default.
>
> https://cwiki.apache.org/confluence/display/solr/UpdateHandlers+in+SolrConfig
>
> But on Solr v4.8.1, if "openSearcher" is omit
Hi all,
At the moment I am reviewing the code to determine whether this is a
legitimate bug that needs to be filed as a JIRA ticket.
Any insight or recommendation is appreciated.
Including the replication steps as text:
-
Solr versions wh
bihan.chandu [bihan.cha...@gmail.com] wrote:
> I am currently using Solr 3.6.1 and my system handles a lot of requests. Now
> we are facing a high GC issue in the system.
Maybe it would help to get an idea of what is causing all the allocations?
- How many documents in your index?
- How many queries/sec?
-
On 5/29/2014 12:06 PM, Ronald Matamoros wrote:
> Hi all,
>
> At the moment I am reviewing the code to determine if this is a legitimate
> bug that needs to be set as a JIRA ticket.
> Any insight or recommendation is appreciated.
> Note: the value in changes with every other
> refres
I am wondering the best way to debug an error I am getting in Solr. The
error is below, but as far as I can tell, PDFBox cannot read a font and
returns a null pointer, which is passed to Tika and then to Solr. Even
though it is only a warning, this appears to terminate the indexing and I
get an e
Thanks Shalin. I will have a look at this. Currently we are using 4.3.1 so
it should not be much trouble to patch it.
Regards
Mohit
On Wed, May 28, 2014 at 6:57 PM, Shalin Shekhar Mangar <
shalinman...@gmail.com> wrote:
> Support for keys, tagging and excluding filters in StatsComponent was add
I'm trying to write some integration tests against SolrCloud for which I'm
setting up a solr instance backed with a zookeeper and pointing it to a
namenode (all in memory using hadoop testing utilities and JettySolrRunner).
I'm getting the following error when I'm trying to create a collection (btw
We were running Solr 4.2, and are in the process of upgrading. I believe that
the particular scenario that was clogging our queue was resolved in 4.7.1 -
https://issues.apache.org/jira/browse/SOLR-5811
--
View this message in context:
http://lucene.472066.n3.nabble.com/overseer-queue-clogged-tp
Hi Shawn,
Thanks a lot for your nice explanation. Now I understand the
difference between autoCommit and autoSoftCommit. My config now looks like
the below.
1
false
15000
With this, I am now getting some other error, like this:
org.
Hi,
What would happen to DataImportHandler that is setup on the master when the
slave is in the process of replicating the index.
Is there any way to configure DataImportHandler to not do anything if
replication is in progress, and/or to disable replication before
DataImportHandler starts its process?
James,
Thanks for clearly stating this; I was not able to find it documented
anywhere. Yes, I am using it with another spellchecker (Direct) with
collation on. I will try maxChanges and let you know.
On a side note, whenever I change the spellchecker parameters, I need to
rebuild the
Hi all:
I'm using Solr 4.7 and my application uses local param syntax to specify a
different facet.prefix on the same field. It works fine without the
facet.threads parameter, but if I specify facet.threads then all the facet
count results have the same prefix even though I specify different pre
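For reference, the local-param form being described looks like this (the
field name, keys, and prefix values are hypothetical):

```
facet=true
&facet.threads=4
&facet.field={!key=names_a facet.prefix=a}name
&facet.field={!key=names_b facet.prefix=b}name
```

Each facet.field carries its own key and per-facet facet.prefix override, so
the same underlying field can be faceted twice with different prefixes in one
request.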