Ok Thanks Erick, for your help.
Tony.
On Sun, Jul 14, 2013 at 5:12 PM, Erick Erickson wrote:
> Not sure how to do the "pass to another request handler" thing, but
> the debugging part is pretty straightforward. I use IntelliJ, but as far
> as I know Eclipse has very similar capabilities.
>
> Fi
Hi,
I have extended Solr's SearchComponent class and I am iterating through all
the docs in ResponseBuilder in the overridden process() method.
Here I want to get the value of the FunctionQuery result, but in the Document
object I am only seeing the standard fields of the document, not the
FunctionQuery result.
Thi
Hi,
How can I have faceting on a subset of the query docset e.g. with something
akin to:
SimpleFacets.base =
SolrIndexSearcher.getDocSet(
Query mainQuery,
SolrIndexSearcher.getDocSet(Query filter)
)
Is there anything like facet.fq?
Cheers,
Dan
Hi,
I'm trying to write a validation test that reads some statistics by querying
Solr 4.3 via HTTP, namely the number of indexed documents (`numDocs`) and the
number of pending documents (`pendingDocs`) from the Solr4 cluster. I believe
that in Solr3 there was a `stats.jsp` page that offered b
Thank you!
I really need to eventually increase the number of shards, so I cannot
directly use numShards=X; the only way out is shard splitting, but then I
encountered the following problem:
1. run empty node1
java -Dbootstrap_confdir=./solr/collection1/conf
-Dcollection.configName=myconf -Dz
Hi,
I got the solution to the above problem. Sharing the code so that it could
help people in the future.
PoolingClientConnectionManager poolingClientConnectionManager = new
PoolingClientConnectionManager();
poolingClientConnectionManager.setMaxTotal(2);
poolingClientConnectionManager.setDefaultMaxPerRoute(2);
Hi,
maybe someone here can help me with my solr-4.3.1 issue.
I've successfully deployed the solr.war on a Tomcat 7 instance.
Starting Tomcat with only the solr.war deployed works nicely.
I can see the admin interface and the logs are "clean".
If I
deploy my wicket-spring-data-solr based app (usin
Hi,
I am having an issue with a network failure to one of the nodes (or many).
When the network is down, the number of sockets on that machine keeps
increasing; at some point it throws a "too many open file descriptors"
exception.
When the network becomes available before that exception, all the open
sockets are getting closed
Hi,
I am using solr-4.3.0 with zookeeper-3.4.5. My scenario is, users will
communicate with Solr via the ZooKeeper ports.
*My question is: how many users can simultaneously access Solr?*
In ZooKeeper I configured maxClientCnxns, but that is for max connections
from a single host (user?).
Note:
Please any help on how to get the value of 'freq' field in my custom
SearchComponent ?
http://localhost:8080/solr/collection2/demoendpoint?q=spider&wt=xml&indent=true&fl=*,freq:termfreq%28product,%27spider%27%29
(XML response flattened by the archive; it shows the document's stored fields,
e.g. "Video Games", "xbox 360", "The Amazing Spider-Man", plus numeric values.)
Here is my code
Hi all,
I have the following case.
Solr documents have fields --> id and status. Id is not unique; unique is the
combination of these two elements.
Documents with the same id have different statuses.
List of Documents
-ID-  -STATUS-
id1   1
id1   2
id1   3
id1   4
id2
Manuel:
First off, anything that Mike McCandless says about low-level
details should override anything I say. The memory savings
he's talking about there are actually something he tutored me
in once on a chat.
The savings there, as I understand it, aren't huge. For large
sets I think it's a 25% s
I'm going to let someone who knows the splitting details
take over ...
Best
Erick
On Mon, Jul 15, 2013 at 5:19 AM, Evgeny Salnikov wrote:
> Thank you!
> I really need to eventually increase the number of shards, so I can not
> directly use numshards = X and the only way out - splitshards, but th
Hi,
I'm trying to change the default tempDir where the solr.war file is extracted
to. If I change the context or webapps XML it works, but I need to do it from
the command line and don't know how. I tried to run:
java -Djava.io.tmpdir=/path/to/my/dir -jar start.jar
or
java -Djavax.servlet.context.tempdir=/path
1> You have to re-index after changing your schema; did you?
2> The admin/analysis page is your friend. It'll show you exactly
what transformations are applied both at query and index time.
3> WhitespaceTokenizerFactory is only _part_ of the solution, it
just breaks up the incoming. WordD
Hello,
it sounds like FieldCollapsing or Join scenarios, but given the limited
information you provided, it can be solved by indexing statuses as a
multivalued field:
-ID-  -STATUS-
id1   (1 2 3 4)
id2   (1 2)
id3   (1)
q=*:*&fq=STATUS:1&fq=NOT STATUS:3
On Mon, Jul 15, 2
Hi,
I would like to run two Solr instances on different computers as a cluster.
My main interest is high availability: in case one server crashes or is down,
there will always be another one.
(My performance on a single instance is great. I do not need to split the
data across two servers.)
Hello,
I am able to configure solr 4.3.1 version with tomcat6.
I followed these steps:
1. Extract solr431 package. In my case I did in
"E:\solr-4.3.1\example\solr"
2. Now copied solr dir from extracted package (E:\solr-4.3.1\example\solr)
into TOMCAT_HOME dir.
In my case TOMCAT_HOME dir is pointe
* Go with SolrCloud - unless you think you're smarter than Yonik and Mark
Miller.
* "Replicas" are used for both query capacity and resilience (HA).
* "Shards" are used for increased index capacity (number of documents) and
to reduce query latency (parallel processing of portions of a query.)
*
When I search for something which has non-ASCII characters at Google, it
returns results for both the original and ASCII-folded versions and
*highlights both of them*. For example, if I search *çiğli* at Google, the
first result is:
*Çiğli* Belediyesi
www.*cigli*.bel.tr/
How can I do that at Solr? How can I ind
Either do a custom highlighter or preprocess the query and generate an "OR"
of the accented and unaccented terms. Solr has no magic feature to do both.
Sure, you could do a token filter that duplicated each term and included
both the accented and unaccented versions, but... it gets messy and is
Hi Furkan,
Using MappingCharFilterFactory with mapping-FoldToASCII.txt or
mapping-ISOLatin1Accent.txt
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#solr.MappingCharFilterFactory
From: Furkan KAMACI
To: solr-user@lucene.apache.org
Sent: Monda
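The fold these mapping files perform can be illustrated with just the JDK; this is a rough sketch for experimentation, not what MappingCharFilterFactory actually does internally (that uses the mapping file directly):

```java
import java.text.Normalizer;

public class AccentFold {
    // Decompose to NFD so each accented letter becomes a base letter
    // plus combining marks, then strip the marks (Unicode category M).
    static String fold(String s) {
        return Normalizer.normalize(s, Normalizer.Form.NFD).replaceAll("\\p{M}", "");
    }

    public static void main(String[] args) {
        // "\u00E7i\u011Fli" is "çiğli"
        System.out.println(fold("\u00E7i\u011Fli")); // prints "cigli"
    }
}
```

A token filter built on this idea could emit both the original and the folded token at the same position, which is what the "duplicate each term" suggestion amounts to.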
Actually, on second thought, I think you should be able to do this directly,
but I don't have the highlighter magic at my fingertips. The field type
analyzer simply needs to map the accented characters; the character
positions of the accented and unaccented tokens should line up fine. Really,
i
Yes, I know about that, but the schema design cannot be changed. This is not
my decision :)
--
View this message in context:
http://lucene.472066.n3.nabble.com/Nested-query-in-SOLR-filter-query-fq-tp4078020p4078047.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hello, first time writing to the list. I am a developer for a company where we
recently switched all of our search cores from Sphinx to Solr with great
results. In general we've been very happy with the switch, and everything
seems to work just as we want it to.
Today however we've run into
Hi Sandeep,
Thank you for your extensive answer :)
Before I go through all your steps, I noticed you mentioning something
about a data import handler.
Now, after I've completed the basic setup of Tomcat6 and Solr431, I want to
migrate my Solr350 (now running on Cygwin
Hi Henrik,
Try setting up a copyfield in your schema and set the copied field to use
something like 'text_ws' which implements LowerCaseFilterFactory. Then sort on
the copyfield.
Regards,
DQ
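A sketch of what that copyField setup might look like in schema.xml; the names here are illustrative, and the "alphaOnlySort" pattern mirrors the type shipped in the stock example schema:

```xml
<!-- Keep the whole value as one token, lowercased, for sorting only. -->
<fieldType name="alphaOnlySort" class="solr.TextField" sortMissingLast="true" omitNorms="true">
  <analyzer>
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.TrimFilterFactory"/>
  </analyzer>
</fieldType>

<field name="name_sort" type="alphaOnlySort" indexed="true" stored="false"/>
<copyField source="name" dest="name_sort"/>
```

You would then sort on name_sort while still displaying the original name field.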
-Original Message-
From: Henrik Ossipoff Hansen [mailto:h...@entertainment-trading.com]
Sent:
Hi,
I see there are a few ways in Solr which can "almost" be used for my use
case, but all of them appear to fall short eventually.
Here is what I am trying to do: consider the following document structure
(there are many more fields in play, but this is enough for example):
Manufacturer
Produ
On 7/15/2013 3:08 AM, Federico Ragona wrote:
> Hi,
>
> I'm trying to write a validation test that reads some statistics by
> querying
> Solr 4.3 via HTTP, namely the number of indexed documents (`numDocs`)
> and the
> number of pending documents (`pendingDocs`) from the Solr4 cluster. I
> belie
Hello, thank you for the quick reply!
But given that facet.sort=index just sorts by the faceted index (and I don't
want the facet itself to be in lower-case), would that really work?
Regards,
Henrik Ossipoff
-Original Message-
From: David Quarterman [mailto:da...@corexe.com]
Sent: 15.
On 7/15/2013 5:45 AM, wolbi wrote:
> I'm trying to change default tempDir where solr.war file is extracted to.
> If I change context or webbaps XML it works, but I need to do it from
> commandline and don't know how. I tried to run:
>
> java -Djava.io.tmpdir=/path/to/my/dir -jar start.jar
>
> or
Hi Henrik,
We did something related to this that I'll share. I'm rather new to Solr so
take this idea cautiously :-)
Our requirement was to show exact values but have case-insensitive sorting and
facet filtering (prefix filtering).
We created an index field (type="string") for creating facets
Sounds like Wicket and Solr are using the same port(s)...
If you start Wicket first then look at the Solr logs, you might
see some message about "port already in use" or some such.
If this is SolrCloud, there are also the ZooKeeper ports to
wonder about.
Best
Erick
On Mon, Jul 15, 2013 at 6:49
With only two instances, replication may be the way to go. Or send updates to
both.
Solr Cloud is much more tightly coupled, requires Zookeeper, etc. There are
more ways for two Solr Cloud nodes to fail, compared with two Solr nodes using
old-style replication. In general, a loosely-coupled sys
Hi Henrik,
If I understand the question correctly (case-insensitive sorting of the
facet values), then this is the limitation of the current Facet component.
You can see the full implementation at:
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/handler/compone
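Until the facet component grows such an option, one workaround (a sketch, done client-side after the response comes back) is to re-sort the returned facet values case-insensitively:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FacetSort {
    // Re-sort facet values case-insensitively on the client; the values
    // themselves keep their original casing.
    static List<String> sortInsensitive(List<String> values) {
        List<String> out = new ArrayList<>(values);
        out.sort(String.CASE_INSENSITIVE_ORDER);
        return out;
    }

    public static void main(String[] args) {
        System.out.println(sortInsensitive(Arrays.asList("banana", "Apple", "cherry")));
        // prints [Apple, banana, cherry]
    }
}
```

This only works if you can fetch all facet values in one response; with facet.limit paging it breaks down, which is why a server-side option would be the real fix.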
Ok, I have resolved the OutOfMemory problem by increasing the JVM
parameters... and now I have another problem. My index had been growing since
yesterday evening: the number of documents increased (I run the bin/crawl
script every 3 hours and I have 27040 documents now), but the last increase
was 6 hours ago.
As I can see, this is the same problem like one from older posts -
http://lucene.472066.n3.nabble.com/strange-utf-8-problem-td3094473.html
...but it was without any response.
Hi,
I am trying to pass empty values to fq parameter but passing null (or empty)
doesn't seem to work for fq.
Something like...
q=*:*&fq=(field1:test OR null)
We are trying to make fq more tolerant by making it not fail whenever a
particular variable value is not passed.
Ex:
/select?q=*:*&fq=ln
I'm more than a little skeptical about your intentions here... just clean up
your code and pass clean parameters ONLY!!!
Why is that so difficult?
You should have an application layer between your application client and
Solr, anyway, so what's the difficulty? I mean, why are you just trying so
Walter,
Could you provide some more details about your staggered replication
approach?
We are currently running into similar issues and looks like staggered
replication is a better approach to address the performance issues on
Slaves.
thanks
Aditya
Hi,
Is there an easy way to clear zookeeper of all offline solr nodes without
restarting the cluster? We are having some stability issues and we think it
maybe due to the leader querying old offline nodes.
thank you,
Luis Guerrero
Jack,
First, thanks a lot for your response.
We hardcode certain queries directly in the search component as it's easy for
us to make changes to the query on the Solr side compared to changing them in
applications (as many applications, mobile, desktop etc., use a single Solr
instance). We don't want to chang
We ran replication at ten minute intervals. One master, five slaves, and
replication on the hour on the first slave, ten minutes after the hour on the
second, twenty minutes after on the third, and so on.
You could do this with a single crontab on the master. Send requests to each
slave to repl
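A hedged sketch of that crontab on the master; hostnames and core name are hypothetical, command=fetchindex is the replication handler's pull trigger on a slave, and it assumes the slaves have their own polling disabled so cron controls the schedule:

```
# m  h dom mon dow  command
0  * * * * curl -s "http://slave1:8983/solr/core1/replication?command=fetchindex"
10 * * * * curl -s "http://slave2:8983/solr/core1/replication?command=fetchindex"
20 * * * * curl -s "http://slave3:8983/solr/core1/replication?command=fetchindex"
30 * * * * curl -s "http://slave4:8983/solr/core1/replication?command=fetchindex"
40 * * * * curl -s "http://slave5:8983/solr/core1/replication?command=fetchindex"
```

The point of the stagger is that at most one slave is pulling a new index (and invalidating its caches) at any given time.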
Great explanation and article.
Yes, this buffer for merges seems very small, and still optimized. That's
impressive.
I am new to using Velocity, esp. with Solr. In the Velocity example provided,
I am curious where #url_for_home is set, i.e. where its value is assigned?
(It is used a lot in the macros defined in VM_global_library.vm.)
Thank you in advance,
O. O.
I have a solr 4.3 instance I am in the process of standing up. It
started out with an empty index.
In its solrconfig.xml I have an autoCommit section configured with a
threshold of 10 and openSearcher set to false.
I have an index process running, that has currently added around 400k
documents to Solr.
I had expected that a 'commit'
Jonathan,
Please note the openSearcher=false part of your configuration. This is why you
don't see documents. The commits are occurring, and being written to segments
on disk, but they are not visible to the search engine because a Solr searcher
class has not opened them for visibility.
You
Is there any way to change the default Velocity directory where the Velocity
templates are stored? In the example download, I modified the solrconfig.xml
under the Solr Request Handler to add:
conf/mycustom/
I have a mycustom directory under the conf directory for the example core,
but I still g
Try supplying an absolute path. I'm away from my computer so can't check just
yet, but it is probably coded to consider that value absolute since moving it
generally means you want templates outside of your Solr conf/.
Erik
On Jul 15, 2013, at 13:25, "O. Olson" wrote:
> Is there any way
Ah, thanks for this explanation. Although I don't entirely understand
it, I am glad there is an expected explanation!
This Solr instance is actually set up to be a replication master. It
never gets searched itself, it just replicates to slaves that get searched.
Perhaps some time in the past
Hi,
I want to dynamically specify the data source in the URL when invoking data
import handler. I'm looking at this :
http://wiki.apache.org/solr/DataImportHandler#solrconfigdatasource
config: /home/username/data-config.xml, driver: com.mysql.jdbc.Driver, url: jdbc:mysql://localhost/d
Any help please!!!
On Mon, Jul 15, 2013 at 4:13 PM, Tony Mullins wrote:
> Please any help on how to get the value of 'freq' field in my custom
> SearchComponent ?
>
>
> http://localhost:8080/solr/collection2/demoendpoint?q=spider&wt=xml&indent=true&fl=*,freq:termfreq%28product,%27spider%27%29
>
>
I don't think you can get there from here.
But you can specify config file on a query line. If you only have a couple
of configurations, you could have them in different files and switch that
way.
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: http://www.linkedin.com
#url_for_home is defined in conf/velocity/VM_global_library.vm. Note that
it builds upon #url_root defined just above it, so maybe that's what you
want to adjust if you need to tinker with it.
Erik
On Jul 15, 2013, at 12:49, "O. Olson" wrote:
> I am new to using Velocity esp. with Solr. In
Hi,
I think the process of retrieving a stored field (through fl) happens after
the SearchComponents run.
One solution: if you wrap the q param with a function, your score will be the
result of the function.
For example,
http://localhost:8080/solr/collection2/demoendpoint?q=termfreq%28product,%27spider%27%
Newbie question:
I have a Flume server, where I am writing to a sink which is a RollingFile
Sink.
I have to take these files from the sink and send them to Solr, which can
index them and provide search.
Do I need to configure MorphlineSolrSink?
What is the mechanism to do this or to send this data over to S
How to get a different field list in the first X results? For example, in the
first 5 results I want fields A, B, C, and on the next results I need only
fields A, and B.
It is not really possible. Why do you actually need it?
Regards,
Alex.
Personal website: http://www.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.
I'm trying to index documents containing geo-spatial coordinates using
Solr 4.3.1 and am running into some difficulties. Whenever I attempt to
index a particular document containing a geospatial coordinate pair
(using post.jar), the operation fails as follows:
SimplePostTool version 1.5
Po
Thank you very much Erik. That’s exactly what I was looking for. I can swear
I looked into VM_global_library.vm. I'm not sure how I missed it :-(
O. O.
Erik Hatcher-4 wrote
> #url_for_home is defined in conf/velocity/VM_global_library.vm. Note that
> it builds upon #url_root defined just above i
1. Request all fields needed for all results and simply ignore the extra
field(s) (which can be empty or missing and will automatically be ignored by
Solr anyway).
2. Two separate query requests.
3. A custom search component.
4. Wait for the new scripted query request handler that gives you full
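Option 1 above can be done entirely client-side; a rough sketch in plain Java (the field name "C" and the helper are hypothetical, standing in for whatever extra fields only the first X results should carry):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class TrimFields {
    // Request every field for every document, then drop the extra
    // field(s) from documents after the first x results.
    static List<Map<String, Object>> trimAfter(List<Map<String, Object>> docs,
                                               int x, String... extraFields) {
        List<Map<String, Object>> out = new ArrayList<>();
        for (int i = 0; i < docs.size(); i++) {
            Map<String, Object> copy = new LinkedHashMap<>(docs.get(i));
            if (i >= x) {
                for (String f : extraFields) {
                    copy.remove(f);
                }
            }
            out.add(copy);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Map<String, Object>> docs = new ArrayList<>();
        for (int i = 0; i < 7; i++) {
            Map<String, Object> d = new LinkedHashMap<>();
            d.put("A", i);
            d.put("B", "b" + i);
            d.put("C", "only wanted in the top results");
            docs.add(d);
        }
        // Keep field C only for the first 5 documents.
        System.out.println(trimAfter(docs, 5, "C").get(6).keySet()); // prints [A, B]
    }
}
```

This trades a little extra bandwidth for a single query, which is usually simpler than two requests or a custom component.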
Thank you Erik. I did not think the Windows file/directory path format would
work for Solr. For others the following worked for me:
C:\Users\MyUsername\Solr\example\example-DIH\solr\db\conf\mycustom\
Erik Hatcher-4 wrote
> Try supplying an absolute path. I'm away from my computer so can't check
Make sure that the elements are placed within the correct enclosing section
rather than elsewhere in the configuration.
Solr tends to ignore misplaced configuration elements.
-- Jack Krupansky
-Original Message-
From: Scott Vanderbilt
Sent: Monday, July 15, 2013 5:10 PM
To: solr-user@lucene.apache.org
Subject: Solr 4.3.1: Errors When Attempting to Index LatL
Brilliant. That's precisely what the issue was.
The Wiki didn't give a context for where the element was supposed to go, and
I assumed (incorrectly) the wrong section. Of course, I should not have
assumed that and should have verified it independently.
Mea culpa.
Thanks the gentle application of the clue
On Sun, Jul 14, 2013 at 1:45 PM, Oleg Burlaca wrote:
> Hello Erick,
>
> > Join performance is most sensitive to the number of values
> > in the field being joined on. So if you have lots and lots of
> > distinct values in the corpus, join performance will be affected.
> Yep, we have a list of uni
Is there a JIRA number for the last one?
Regards,
Alex
On 15 Jul 2013 17:21, "Jack Krupansky" wrote:
> 1. Request all fields needed for all results and simply ignore the extra
> field(s) (which can be empty or missing and will automatically be ignored
> by Solr anyway).
> 2. Two separate qu
SOLR-5005 - JavaScriptRequestHandler
https://issues.apache.org/jira/browse/SOLR-5005
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Monday, July 15, 2013 6:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Different 'fl' for first X results
Is there a JIRA num
Hi All,
I changed the name of the queryAnalyzerFieldType for my spellcheck
component and the corresponding field and now when solr starts up, it hangs
at this point:
5797 [searcherExecutor-4-thread-1] INFO org.apache.solr.core.SolrCore –
QuerySenderListener sending requests to
Searcher@153d12bf
Hello,
I am trying to join data between two cores: merchant and location
This is my query:
http://_server_.com:8983/solr/location/select?q={!join from=merchantId
to=merchantId fromIndex=merchant}walgreens
Ref: http://wiki.apache.org/solr/Join
Merchants core has documents for the query: "walgree
I have also tried these queries (as per this SO answer:
http://stackoverflow.com/questions/12665797/is-solr-4-0-capable-of-using-join-for-multiple-core
)
1. http://_server_.com:8983/solr/location/select?q=*:*&fq={!join
from=merchantId to=merchantId fromIndex=merchant}walgreens
And I get this:
{
Yandong,
have you figured out if it works for you to use one collection per customer?
We have a similar use-case to yours: customer ids are used as core names.
That was the reason our company did not upgrade to SolrCloud... I might
remember it wrong, but I vaguely remember I looked into using
Hi there,
I run 2 Solr instances (Tomcat 7, Solr 4.3.0, one shard), one external
ZooKeeper instance, and have lots of cores.
I use collection API to create the new core dynamically after the
configuration for the core is uploaded to the Zookeeper and it all works
fine.
As there are so many core
Thank you Alex.
On Mon, Jul 15, 2013 at 12:37 PM, Alexandre Rafalovitch
wrote:
> I don't think you can get there from here.
>
> But you can specify config file on a query line. If you only have a couple
> of configurations, you could have them in different files and switch that
> way.
>
> Regard
Hello,
Packt Publishing has kindly agreed to let me run a contest with e-copies of
my book as prizes:
http://www.packtpub.com/apache-solr-for-indexing-data/book
Since my book is about learning Solr and targeted at beginners and early
intermediates, here is what I would like to do. I am asking for
Rajesh,
I think this question is better suited for the FLUME user mailing list.
You will need to configure the sink with the expected values so that the
events from the channels can head to the right place.
On Mon, Jul 15, 2013 at 4:49 PM, Rajesh Jain wrote:
> Newbie question:
>
> I have a Flu
I know that you can clear ZooKeeper's data directory using the CLI with the
clear command; I just want to know if it's possible to update the cluster's
state without wiping everything out. Anyone have any ideas/suggestions?
On Mon, Jul 15, 2013 at 11:21 AM, Luis Carlos Guerrero Covo <
lcguerreroc..
Hello Alex,
This sounds like an excellent idea! :)
Saqib
On Mon, Jul 15, 2013 at 8:11 PM, Alexandre Rafalovitch
wrote:
> Hello,
>
> Packt Publishing has kindly agreed to let me run a contest with e-copies of
> my book as prizes:
> http://www.packtpub.com/apache-solr-for-indexing-data/book
>
>
Hello Luis,
I don't think that is possible. If you delete clusterstate.json from
ZooKeeper, you will need to restart the nodes... I could be very wrong
about this.
Saqib
On Mon, Jul 15, 2013 at 8:50 PM, Luis Carlos Guerrero Covo <
lcguerreroc...@gmail.com> wrote:
> I know that you can cl
On Tue, Jul 16, 2013 at 2:19 AM, Rajesh Jain wrote:
> Newbie question:
>
> I have a Flume server, where I am writing to sink which is a RollingFile
> Sink.
>
> I have to take this files from the sink and send it to Solr which can index
> and provide search.
>
> Do I need to configure MorphlineSolr
Alex,
You could submit a JIRA ticket, and add an option like facet.sort =
insensitive, and f.<> syntax
Then we all get the benefit of the new feature.
On Mon, Jul 15, 2013 at 9:16 AM, Alexandre Rafalovitch
wrote:
> Hi Henrik,
>
> If I understand the question correctly (case-insensitive sortin
Hi Alex,
great please go ahead..
-Sandeep
On Tue, Jul 16, 2013 at 9:40 AM, Ali, Saqib wrote:
> Hello Alex,
>
> This sounds like an excellent idea! :)
>
> Saqib
>
>
> On Mon, Jul 15, 2013 at 8:11 PM, Alexandre Rafalovitch
> wrote:
>
> > Hello,
> >
> > Packt Publishing has kindly agreed to let
No sorry, I am still not getting the termfreq() field in my 'doc' object.
I do get the _version_ field in my 'doc' object, which I think is
realValue=StoredField.
At which point does termfreq() or any other FunctionQuery field become part
of the doc object in Solr? And at that point can I perform some
This is indeed an interesting idea, but I think it's a bit too manual, so to
speak, for our use case. I do see it would solve the problem though, so thank
you for sharing it with the community! :)
-Original Message-
From: James Thomas [mailto:jtho...@camstar.com]
Sent: 15.
Hi Alex,
Yes, this makes sense. My Java is a bit dusty, but depending on how much we
come to need this feature, it's definitely something we will look into
creating, and if successful, we will definitely be submitting a patch. Thank
you for your time and detailed answer!
Best regards,
Hi,
You should use the CoreAdmin API (or the Solr Admin page) and UNLOAD unneeded
cores. This will unregister them from ZooKeeper (the cluster state will be
updated), so they won't be used for querying any longer. A SolrCloud restart
is not needed in this case.
Regards.
On 16 July 2013 06:18, Ali, Saqib
I am using Solr 4.3 and have 2 collections: coll1, coll2.
After searching in coll1 I get field1 values, which are a comma-separated
list of strings like val1, val2, val3, ... valN.
How can I use that list to match field2 in coll2 against those values,
separated by an OR clause?
So I want to return all do
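Turning the comma-separated field1 value into the second query's clause is plain string work; a minimal sketch (assumes the values contain no characters that need Lucene query escaping or quoting):

```java
public class OrQueryBuilder {
    // Turn coll1's comma-separated field1 value into a coll2 query clause:
    // "val1, val2, val3" -> "field2:(val1 OR val2 OR val3)"
    static String buildOrQuery(String field, String csv) {
        return field + ":(" + String.join(" OR ", csv.trim().split("\\s*,\\s*")) + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildOrQuery("field2", "val1, val2, val3"));
        // prints field2:(val1 OR val2 OR val3)
    }
}
```

The resulting string would go into the q (or fq) parameter of the coll2 request; with real-world values, escape each term first.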