Thanks Andrea and Erick
--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
bq. The lucene-core7 had some useful functions like incrementToken which I
could not find in previous versions because of that I used this version.
Do not do this. You simply cannot mix jar versions just because the old
version had a function you want to use. The support for that function
Hi,
I mean you should use Maven, which would pick up, starting from a version
number (e.g. 6.6.1), all the correct dependencies you need for developing the
plugin.
Yes, the "top" libraries (e.g. Solr and Lucene) should have the same
version, but on top of that the plugin could require some other direct
Thanks Andrea. Do you mean all of my jar file versions should be 6.6.1?
The lucene-core7 had some useful functions like incrementToken which I
could not find in previous versions because of that I used this version.
Hi Zahra,
I think your guess is right: I see some mess in the library versions.
If I got you right:
* the target platform is Solr 6.6.1
* the compile classpath includes solr-core 4.1.0, 1.4.0 (!) and lucene
7.4.0?
If that is correct, with a ClassCastException you're just scraping the
surface
I am using solr 6.6.1. I want to write my own analyzer for the field type
"text_general" in schema. the field type in schema is as follows:
When I test the filter in Java, everything is all right; however, when I start
my Solr I get the following error:
Haven't done it myself, but maybe these could be useful:
https://github.com/DiceTechJobs/SolrPlugins
https://github.com/leonardofoderaro/alba
Regards,
Alex.
http://www.solr-start.com/ - Resources for Solr users, new and experienced
On 21 November 2017 at 05:22, Zara Parst wrote:
> Hi,
Zara,
If you're looking for custom search components, request handlers or update
processors, you can check out my github repo with examples here:
https://github.com/bdalal/SolrPluginsExamples/
On Tue, Nov 21, 2017 at 3:58 PM Emir Arnautović <
emir.arnauto...@sematext.com> wrote:
> Hi Zara,
> What
Hi Zara,
What sort of plugins are you trying to build? What sort of issues did you run
into? Maybe you are not too far from having a running custom plugin. I would
recommend you try running some of the existing plugins as your own - just to make
sure that you are able to build and configure a custom plugin
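Following that suggestion, wiring any custom component into solrconfig.xml looks roughly like this (the component and handler names here are hypothetical, not from the thread):

```xml
<!-- solrconfig.xml: register the class from your plugin jar -->
<searchComponent name="myComponent" class="com.example.MySearchComponent"/>

<!-- attach it to a handler so it actually runs -->
<requestHandler name="/myselect" class="solr.SearchHandler">
  <arr name="last-components">
    <str>myComponent</str>
  </arr>
</requestHandler>
```

Getting a trivial component registered and visible in the logs first makes it much easier to tell build/classpath problems apart from bugs in the plugin itself.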
Hi,
I have spent too much time learning plugin development for Solr. I am about to give up. If
someone has experience writing one, please contact me. I am open to all
options. I want to learn it at any cost.
Thanks
Zara
corresponding JIRA issue [2].
I’ll share the result of my work, including the results from this poll
in my Lucene Revolution 2017 talk titled "Solr's Missing Plugin Ecosystem" [3],
where I hope to see many of you too!
[1] https://s.apache.org/solr-plugin
[2] https://issues.apache.org/jira/browse/SOLR-106
I am looking for best practices for when a search component in one handler
needs to invoke another handler, say /basic. So far, I got this working
prototype:
public void process(ResponseBuilder rb) throws IOException {
SolrQueryResponse response = new SolrQueryResponse();
ModifiableSolrParams params = new ModifiableSolrParams();
: I have a few classes that are Analyzers, Readers, and TokenFilters. These
: classes use a large hashmap to map tokens to another value. The code is
: working great. I go to the Analysis page on the Solr dashboard and everything
: works as I would like. The problem is that the first time each one
I have a few classes that are Analyzers, Readers, and TokenFilters.
These classes use a large hashmap to map tokens to another value. The
code is working great. I go to the Analysis page on the Solr dashboard
and everything works as I would like. The problem is that the first time
each one of t
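One common cause of slow first use is that every Analyzer/TokenFilter instance builds its own copy of the large map. A static holder shares a single copy and pays the load cost exactly once, thread-safely. A minimal stdlib-only sketch (the class name, entries, and loading logic are hypothetical stand-ins for the real resource parsing):

```java
import java.util.HashMap;
import java.util.Map;

// Share one expensive-to-build map across all filter instances.
// The holder idiom defers loading until first use and the JVM
// guarantees the static initializer runs exactly once.
public class TokenMap {
    private static class Holder {
        static final Map<String, String> MAP = load(); // runs once, lazily
    }

    private static Map<String, String> load() {
        // A real plugin would parse the large resource file here;
        // these entries are just placeholders.
        Map<String, String> m = new HashMap<>();
        m.put("colour", "color");
        m.put("grey", "gray");
        return m;
    }

    // Every Analyzer/TokenFilter calls this instead of building its own copy.
    public static String map(String token) {
        return Holder.MAP.getOrDefault(token, token);
    }

    public static void main(String[] args) {
        System.out.println(TokenMap.map("colour")); // mapped entry
        System.out.println(TokenMap.map("solr"));   // unmapped, passes through
    }
}
```

Alternatively, loading the map eagerly in the factory's `inform()`/init path moves the cost to core startup instead of the first query.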
Hi Folks,
Thanks for all the great suggestions. i will try and see which one works
best.
@Hoss: The WEB-INF folder is just in my dev environment. I have a local
Solr instance and I point it to the target/WEB-INF. Simple convenient
setup for development purposes.
Much appreciated.
Max.
On Wed,
Max,
Have you looked at the External File Field, which is reloaded on every hard commit?
The only disadvantage of this is that the file (personal-words.txt) has to be placed
in all data folders in each solr core,
for which we have a bash script to do this job.
https://cwiki.apache.org/confluence/display/solr/Work
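For reference, the wiring for that approach looks roughly like this (field, type, and file names here are illustrative, not from the thread):

```xml
<!-- schema.xml: values come from external_rank* files in each core's data dir -->
<fieldType name="externalRank" class="solr.ExternalFileField"
           keyField="id" defVal="0" valType="float"/>
<field name="rank" type="externalRank" indexed="false" stored="false"/>

<!-- solrconfig.xml: re-read the file whenever a new searcher opens,
     i.e. after every (hard) commit -->
<listener event="newSearcher"
          class="org.apache.solr.schema.ExternalFileFieldReloader"/>
<listener event="firstSearcher"
          class="org.apache.solr.schema.ExternalFileFieldReloader"/>
```

The external file itself lives next to the index (hence the need to copy it into every core's data folder, as noted above).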
1) as a general rule, if you have a declaration which includes
"WEB-INF" you are probably doing something wrong.
Maybe not in this case -- maybe "search-webapp/target" is a completely
distinct java application and you are just re-using its jars. But 9
times out of 10, when people have
Basically I'm writing a solr plugin by extending SearchComponent class. My
new class is part of a.jar archive. Also my class depends on a jar b.jar. I
placed both jars in my own folder and declared it in solrconfig.xml with:
I also declared my new compone
Hi,
I am facing the exact issue described here:
http://stackoverflow.com/questions/25623797/solr-plugin-classloader.
Basically I'm writing a solr plugin by extending SearchComponent class. My
new class is part of a.jar archive. Also my class depends on a jar b.jar. I
placed both jars in m
Hi Sara,
The error is clear: class not found exception, which means solr couldn't
locate that jar file.
If you are not using solr-cloud then place that custom jar under
solr_home/lib folder.
You can also hard code the path of this jar file in solrconfig.xml under
/lib element.
If you are using solr-cloud
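The solrconfig.xml declaration mentioned above looks roughly like this (directory and jar paths are examples, not from the thread):

```xml
<!-- load every jar in a custom plugin directory -->
<lib dir="${solr.solr.home}/myplugins" regex=".*\.jar"/>

<!-- or reference the jars individually -->
<lib path="/opt/solr/custom/a.jar"/>
<lib path="/opt/solr/custom/b.jar"/>
```

Both the plugin jar and any jars it depends on must be visible to the same classloader, which is why placing them all in one `<lib dir=.../>` directory is the usual advice.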
Hi, I want to have my own normalization.
I wrote two classes: one is a normalization filter factory that extends
TokenFilterFactory and implements MultiTermAwareComponent,
and the other is a normalization filter that extends TokenFilter.
Then I created a jar from these classes with dependenc
Hi folks,
I'm playing with a custom SOLR plugin and I'm trying to retrieve the value
for a multivalued field, using the code below.
==
schema.xml:
==
input data:
83127
somevalue
some other value
some other value 3
some other value 4
Hi,
I've got a problem with a self written component that is configured in
the "first-components" section of my SearchHandler ("/select"). When a
certain keyword is given in the query string, I want to write a redirect
URL to the response and stop processing
Thanks.
It is a fairly large ACL, so I am hoping to avoid any sort of application
redirect. That is sort of the problem we are trying to solve actually. Our
list was getting too large and we were maxing out maxBooleanQueries.
And I don't know which shard the user document is located on, just its
How much information do you need from this document? If it's a reasonably small
amount, can you read it at the application layer and attach it as a
set of parameters
to the query that are then available to the post filter. Or is it a
huge ACL list or something?
In this latter case, if you know
I have created a PostFilter. PostFilter creates a DelegatingCollector,
which provides a Lucene IndexSearcher.
However, I need to query for an object that may or may not be located on
the shard that I am filtering on.
Normally, I would do something like:
searcher.search(new TermQuery(new Term("fi
Hi all!
After the initial release I finally got around to updating the content-based
image retrieval plugin LIRE Solr to the current version, and it has been
extended to support more CBIR features.
https://bitbucket.org/dermotte/liresolr
I also took the liberty of updating the web client and the dem
denormalizing data is a very common practice in Solr,
don't be afraid to try it if it makes your problem easier.
Also, take a look at the distributed analytics plugin, it allows
you to fairly painlessly add custom code to do whatever
you want, see: https://issues.apache.org/jira/browse/SOLR-6150
What about parent/child record? With availability in the child record?
You'd have to reindex the whole block together, but you should be able
to get away with one request.
Regards,
Alex.
No idea on the primary question, as you can probably guess.
Personal: http://www.outerthoughts.com/ and @ara
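The parent/child (block-join) suggestion above might look like this in practice; everything here (field names, values, the `doc_type` discriminator) is illustrative. The whole block is indexed in one request, and children are reindexed together with their parent:

```xml
<add>
  <doc>
    <field name="id">house42</field>
    <field name="doc_type">house</field>
    <field name="name">Seaside cottage</field>
    <!-- child documents: one per availability period -->
    <doc>
      <field name="id">house42-avail1</field>
      <field name="doc_type">availability</field>
      <field name="price">450.0</field>
    </doc>
  </doc>
</add>
```

At query time a block-join parser along the lines of `q={!parent which="doc_type:house"}doc_type:availability AND price:[* TO 500]` would return the matching parent houses in a single request.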
well we thought of that but there are some problems with a second core
for availability:
we already have a core containing a lot of house information (id, name,
latitude, longitude, city, country etc.) which means we would have to
do 2 solr queries just to build 1 search result or add a lot of dou
Have you thought of storing 'availability' as a document instead of
'house'? So the house, date range and price are a single document.
Then you group them, sum them and sort them in a post-filter?
Some ideas may come from:
http://www.slideshare.net/trenaman/personalized-search-on-the-largest-flash-sa
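For illustration, that flattened design might produce one document per house/period (field names invented here):

```xml
<add>
  <doc>
    <field name="id">house42_2014-07-01</field>
    <field name="house_id">42</field>
    <field name="date">2014-07-01T00:00:00Z</field>
    <field name="price">450.0</field>
  </doc>
</add>
```

Queried with a date-range filter plus something like `group=true&group.field=house_id`, each group then represents one house and the group's documents carry the per-period prices to sum or sort on.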
Hello,
*Usercase:*
At this moment we have the current situation :
We have customers that want to rent houses to visitors of our website.
Customers can vary the prices according to specific dates, so renting a
house at holidays will cost more.
*The problems:*
- prices may vary according to the
Querying nested data is very easy in MarkLogic, it was built for that. I used
to work there.
The founder is a former search engine guy from Infoseek and Ultraseek, so it
has a lot of familiar behavior, like merging segments automatically.
wunder
Walter Underwood
wun...@wunderwood.org
http://obs
Querying nested data is very difficult in any modern db that I have seen.
If it works as you suggest, then it would be cool if the feature were
eventually maintained inside Solr.
> On Jul 23, 2014, at 7:13 AM, Renaud Delbru wrote:
>
> One of the coolest features of Lucene/Solr is
One of the coolest features of Lucene/Solr is its ability to index
nested documents using a Blockjoin approach.
While this works well for small documents and document collections, it
becomes unsustainable for larger ones: Blockjoin works by splitting the
original document into many documents, on
You probably want to implement the QParserPlugin as a PostFilter.
On Sun, Dec 1, 2013 at 3:46 PM, Thomas Seidl wrote:
Hi,
I'm currently looking at writing my first Solr plugin, but I could not
really find any "overview" information about how a S
4_6_0/solr/core/src/java/org/apache/solr/search/FunctionRangeQParserPlugin.java?revision=1544545&view=markup
You probably want to implement the QParserPlugin as a PostFilter.
On Sun, Dec 1, 2013 at 3:46 PM, Thomas Seidl wrote:
> Hi,
>
> I'm currently looking at writing
as PostFilter.
On Sun, Dec 1, 2013 at 3:46 PM, Thomas Seidl wrote:
> Hi,
>
> I'm currently looking at writing my first Solr plugin, but I could not
> really find any "overview" information about how a Solr request works
> internally, what the control flow is
Hi,
I'm currently looking at writing my first Solr plugin, but I could not
really find any "overview" information about how a Solr request works
internally, what the control flow is and what kind of plugins are
available to customize this at which point. The Solr wiki page on
it to a SolrInputDocument and then just add the
document through Solr but I need first to convert all fields.
Thanks in advance,
Avner
--
View this message in context:
http://lucene.472066.n3.nabble.com/Adding-documents-in-Solr-plugin-tp4071574p4097168.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi Everyone,
Is there a way to index data into Solr from Kettle?
If so, could you please tell me how to do that?
On Sat, Jun 22, 2013, at 03:40 AM, Chris Hostetter wrote:
>
> : This could be a very useful feature. To do it properly, you'd want some
> : new update syntax, extending that of the atomic updates. That is, a new
> : custom request handler could do it, but might not be the best way.
>
> the bigg
: This could be a very useful feature. To do it properly, you'd want some
: new update syntax, extending that of the atomic updates. That is, a new
: custom request handler could do it, but might not be the best way.
the biggest complexity to implementing this in a general way would be
dealing w
r doing it on the Solr server side for avoiding sending millions of
> documents to the client and back.
> I'm thinking of writing a solr plugin which will receive a query and update
> some fields on the query documents (like the delete by query handler).
> Are existing solutions
te and add document) based on a condition
> (age>12 for example).
> All fields are stored so there is no problem to recreate the document
> from the search result.
> I prefer doing it on the Solr server side for avoiding sending millions
> of documents to the client and back.
> I
ult.
I prefer doing it on the Solr server side to avoid sending millions of
documents to the client and back.
I'm thinking of writing a solr plugin which will receive a query and update
some fields on the query documents (like the delete by query handler).
Are existing solutions o
to make it read the hibernate.cfg.xml file,
because I always got a hibernate.cfg.xml not found. This is not a big issue,
I configured hibernate entirely with code and that problem was solved (even
if I'd like to understand how I can use it inside a solr plugin). Now the
problem is that
Hi all,
I would like to partition my data (by date, for example) into Solr Cores by
implementing some sort of *pluggable component* for Solr.
In other words, I want Solr to handle distribution to partitions (rather than
implementing an external "solr proxy" for sending requests to the right Solr
Co
: As the solr-web plugin is still not available, I wanted to configure Liferay 6.1
: GA2 to use solr-web-6.1.10.1, which is throwing the following error when deployed;
: appreciate it if someone could throw some light on how to resolve it. Spent almost a
: couple of weeks and could not find any resolution.
Your error message does not see
In this case we have installed both Liferay+Tomcat bundle and Solr over
Tomcat on the same Linux Box.
Thanks
Anand
On Wed, Sep 26, 2012 at 10:32 PM, Anand Sudabattula <
anand.sudabatt...@gmail.com> wrote:
> Hi,
>
>
> As solr-web plugin still not available I wanted to configure Liferay 6.1
> GA2
On Thu, Jun 21, 2012 at 12:45 AM, Shameema Umer wrote:
>> I am going to write an update handler solr plugin. The original
>> DirectUpdateHandler2 imports org.apache.lucene.index and
>> org.apache.lucene.search.
>> But to add the lucene jar files into the classpath, I
ing my IDE correctly. that way, the jar
I produce doesn't have anything in it except my code and relies
on Solr to resolve all the solr/lucene specific stuff...
Best
Erick
On Thu, Jun 21, 2012 at 12:45 AM, Shameema Umer wrote:
> I am going to write an update handler solr plugin. The original
>
Hello,
Spatial-solr-2.0-RC5.jar works successfully with Solr-1.4.1.
With the release of solr-3.1, is the support for the spatial-solr-plugin
going to continue or not?
Thanks!
Isha
ndexSchema.access$100(IndexSchema.java:58)
> >> >
> >> >
> >> > I am basically trying to enable this jar functionality to solr. Please
> >> let
> >> > me know the mistake here.
> >> >
> >> > Rajani
> >> >
>> >>
>> >> synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>> >>
>> >> as org.apache.solr.common.SolrException: Error loading class
>> >> 'pointcross.orchSynonymFilterFactory'
> >>
> >> This seems to indicate that your config file is really looking for
> >> "pointcross.orchSynonymFilterFactory" rather than
> >> "org.apachepco.search.orchSynonymFilterFactory".
> >>
> >> Do you perhaps have another definition in your config
>
orrespond to
>> the class you defined) I'd also look to see if you have any old
>> jars lying around that you somehow get to first.
>>
>> Finally, is there any chance that your
>> "pointcross.orchSynonymFilterFactory"
>> is a depend
> which case Solr may be finding
> "org.apache....pco.search.orchSynonymFilterFactory"
> but failing to load a dependency (that would have to be put in the lib
> or the jar).
>
> Hope that helps
> Erick
>
>
>
> On Fri, Apr 22, 2011 at 3:00 AM, rajini maski
.search.orchSynonymFilterFactory"
but failing to load a dependency (that would have to be put in the lib
or the jar).
Hope that helps
Erick
On Fri, Apr 22, 2011 at 3:00 AM, rajini maski wrote:
> One doubt regarding adding the solr plugin.
>
>
> I have a new java file c
One doubt regarding adding the solr plugin.
I have a new java file created that includes a few changes in
SynonymFilterFactory.java. I want this java file to be added to the solr
instance.
I created a package as : org.apache.pco.search
This includes OrcSynonymFilterFactory java class
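If the class really lives in the `org.apache.pco.search` package, the schema declaration has to use that exact fully-qualified name (a sketch built only from the names mentioned above):

```xml
<filter class="org.apache.pco.search.OrcSynonymFilterFactory"
        synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
```

A "pointcross.orchSynonymFilterFactory"-style error, as discussed later in this thread, usually means the name in the config and the package actually compiled into the jar do not match.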
: Hello, I am writing a clustering component for Solr. It registers, loads and
: works properly. However, whenever there is an exception inside my plugin, I
: cannot get tomcat to show me the line numbers. It always says "Unknown
source"
: for my classes. The stack trace in tomcat shows line n
Hello, I am writing a clustering component for Solr. It registers, loads and
works properly. However, whenever there is an exception inside my plugin, I
cannot get tomcat to show me the line numbers. It always says "Unknown source"
for my classes. The stack trace in tomcat shows line numbers for
> thanks
On Mar 23, 2010, at 7:29 PM, brad anderson wrote:
> I see, so when you do a commit it adds it to Zoie's ramdirectory. So, could
> you just commit after every document without having a performance impact and
> have real time search?
>
Not likely, maybe on really, really small indexes. Zoie also
I see, so when you do a commit it adds it to Zoie's ramdirectory. So, could
you just commit after every document without having a performance impact and
have real time search?
Thanks,
Brad
On 20 March 2010 00:34, Janne Majaranta wrote:
> To my understanding it adds an in-memory index which holds
To my understanding it adds an in-memory index which holds the recent
commits and which is flushed to the main index based on the config
options. Not sure if it helps to get solr near real time. I am
evaluating it currently, and I am really not sure if it adds anything
because of the cache r
Indeed, which is why I'm wondering what is Zoie adding if you still need to
commit to search recent documents. Does anyone know?
Thanks,
Brad
On 18 March 2010 19:41, Erik Hatcher wrote:
> "When I don't do the commit, I cannot search the documents I've indexed." -
> that's exactly how Solr witho
"When I don't do the commit, I cannot search the documents I've
indexed." - that's exactly how Solr without Zoie works, and it's how
Lucene itself works. Gotta commit to see the documents indexed.
Erik
On Mar 18, 2010, at 5:41 PM, brad anderson wrote:
Tried following their tutori
Tried following their tutorial for plugging zoie into solr:
http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Server
It appears it only allows you to search on documents after you do a commit?
Am I missing something here, or does the plugin not do anything?
Their tutorial tells you to do a co
2010/3/9 Shalin Shekhar Mangar
> I think Don is talking about Zoie - it requires a long uniqueKey.
>
Yep; we're using UUIDs.
r, and thought you guys would be interested.. I
> >> haven't tried it, but it looks interesting.
> >>
> >> http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Solr+Plugin
> >>
> >> Thanks for the RT Shalin!
> >>
> >
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>
--
Regards,
Shalin Shekhar Mangar.
res integer (long) primary keys... :/
>
> 2010/3/8 Ian Holsman
>
>>
>> I just saw this on twitter, and thought you guys would be interested.. I
>> haven't tried it, but it looks interesting.
>>
>> http://snaprojects.jira.com/wiki/display/ZOIE/
Too bad it requires integer (long) primary keys... :/
2010/3/8 Ian Holsman
>
> I just saw this on twitter, and thought you guys would be interested.. I
> haven't tried it, but it looks interesting.
>
> http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Solr+Plugin
>
> Thanks for the RT Shalin!
>
I just saw this on twitter, and thought you guys would be interested.. I
haven't tried it, but it looks interesting.
http://snaprojects.jira.com/wiki/display/ZOIE/Zoie+Solr+Plugin
Thanks for the RT Shalin!
What you are describing corresponds pretty closely to some work currently
in progress to make the DataImportHandler integrate with the
ExtractingRequestHandler/Tika ...
https://issues.apache.org/jira/browse/SOLR-1358
...in the meantime, your options are either to extract all the metad
an entity). Has anyone tried something like this or have suggestions how
best to implement this requirement?
: plugin.query.parser.QueryParserPluginOne" in logs. I am sure that the
: request handler with which this query parser plugin is linked is
: working,Because I could find the results of System.out.println()s
: (those included in requesthandler) in log, but not query parser
: plugin's System.outs or
Hi,
I am not changing any URL while querying because the custom query
parser plugin is linked with the default request handler. You may have
a look at my first mail for the xml snippets which are included in
solrconfig.xml.
Yeah.. I found the line .. "INFO: created queryParserPluginOne:
plugi
On Oct 27, 2009, at 12:58 PM, Phanindra Reva wrote:
Hello All,
I am a newbie, learning Solr - plugins concept. While
following the tutorials on the same from
http://wiki.apache.org/solr/SolrPlugins , I have tried to work on
Query Parser plugin concept by extending QParserPlugin class.
Hello All,
I am a newbie, learning Solr - plugins concept. While
following the tutorials on the same from
http://wiki.apache.org/solr/SolrPlugins , I have tried to work on
Query Parser plugin concept by extending QParserPlugin class.
I have registered my custom plugin class in solrconf
Teruhiko Kurosaka wrote:
I see Solr uses the JDK java.util.logging.Logger.
I should also be using this Logger when I write
a plugin, correct?
You can use whichever logging you like ;) solr uses JDK logging. If
you want to contribute the plugin back to solr, it will need to use JDK
logging
I see Solr uses the JDK java.util.logging.Logger.
I should also be using this Logger when I write
a plugin, correct?
I am asking only because I see commons-logging.jar
in apache-solr-1.1.0-incubating/example/ext
What is this for?
-kuro