SolrCloud with custom package in dataimport

2020-06-26 Thread stefan
Hey,

Is it possible to reference a custom Java class during the data import? The
dataimport configuration looks something like this:

```
<requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
  <lst name="defaults">
    <str name="config">db-data-config.xml</str>
  </lst>
</requestHandler>
```
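
What I have in mind is referencing the class as a transformer in the DIH data config, along these lines (class and field names here are made up; I assume the jar would need to be on Solr's classpath or installed as a package in SolrCloud):

```
<dataConfig>
  <document>
    <!-- transformer points to a custom class implementing
         org.apache.solr.handler.dataimport.Transformer -->
    <entity name="item" query="select id, title from item"
            transformer="com.example.dih.MyCustomTransformer">
      <field column="id" name="id"/>
      <field column="title" name="title"/>
    </entity>
  </document>
</dataConfig>
```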

Sadly I was unable to find any information on this topic.

Thanks for your help!


Re: very slow frequent updates

2016-02-24 Thread Stefan Matheis
Depending on what features you actually need, it might be worth a look
at "External File Fields", Roland?

-Stefan

On Wed, Feb 24, 2016 at 12:24 PM, Szűcs Roland
 wrote:
> Thanks Jeff for your help,
>
> Can it work in a production environment? Imagine my customer initiates a
> query with 1,000 docs in the result set. I cannot use the pagination of
> SOLR, as the field which is the basis of the sort - for example the price -
> is not included in the schema. The customer wants the list in descending
> order of the price.
>
> So I have to get all 1,000 doc ids from Solr and find their metadata in a
> SQL database, or in a cache in the best case. Is this the way you
> suggested? Is it not too slow?
>
> Regards,
> Roland
>
> 2016-02-23 19:29 GMT+01:00 Jeff Wartes :
>
>>
>> My suggestion would be to split your problem domain. Use Solr exclusively
>> for search - index the id and only those fields you need to search on. Then
>> use some other data store for retrieval. Get the id’s from the solr
>> results, and look them up in the data store to get the rest of your fields.
>> This allows you to keep your solr docs as small as possible, and you only
>> need to update them when a *searchable* field changes.
>>
>> Every “update" in solr is a delete/insert. Even the "atomic update”
>> feature is just a shortcut for that. It requires stored fields because the
>> data from the stored fields gets copied into the new insert.
>>
>>
>>
>>
>>
>> On 2/22/16, 12:21 PM, "Roland Szűcs"  wrote:
>>
>> >Hi folks,
>> >
>> >We use SOLR 5.2.1. We have ebooks stored in SOLR. The majority of the
>> >fields do not change at all, like content, author, publisher. Only the
>> >price field changes frequently.
>> >
>> >We let the customers make full-text searches, so we indexed the content
>> >field. Due to the frequency of the price updates we use the atomic update
>> >feature. As a requirement of the atomic updates we have to store all the
>> >fields, even the content field, which is 1 MB/document and which we did
>> >not want to store, just index.
>> >
>> >When we wanted to update 100 documents with atomic updates, it took about 3
>> >minutes. Taking into account that our metadata/document is 1 KB and our
>> >content field/document is 1 MB, we use 1,000 times more memory to accelerate
>> >the update process.
>> >
>> >I am almost 100% sure that we are doing something wrong.
>> >
>> >What is the best practice for frequent updates when 99% of a given
>> >document is constant forever?
>> >
>> >Thanks in advance
>> >
>> >--
>> ><https://www.linkedin.com/pub/roland-sz%C5%B1cs/28/226/24/hu> Roland
>> Szűcs
>> ><https://www.linkedin.com/pub/roland-sz%C5%B1cs/28/226/24/hu> Connect
>> with
>> >me on Linkedin <
>> https://www.linkedin.com/pub/roland-sz%C5%B1cs/28/226/24/hu>
>> ><https://bookandwalk.hu/>
>> >CEO Phone: +36 1 210 81 13
>> >Bookandwalk.hu <https://bokandwalk.hu/>
>>
>
>
>
> --
> <https://www.linkedin.com/pub/roland-sz%C5%B1cs/28/226/24/hu> Szűcs Roland
> <https://www.linkedin.com/pub/roland-sz%C5%B1cs/28/226/24/hu> Ismerkedjünk
> meg a Linkedin <https://www.linkedin.com/pub/roland-sz%C5%B1cs/28/226/24/hu>
> -en <https://bookandwalk.hu/>
> Ügyvezető Telefon: +36 1 210 81 13
> Bookandwalk.hu <https://bokandwalk.hu/>


Re: Solr 5.5.0: SearchHandler: Appending a Join query

2016-04-06 Thread Stefan Matheis
Anand,

have a look at the example solrconfig.xml; there is a section that explains
"invariants", which could be one solution to your question.

-Stefan

On Wed, Apr 6, 2016 at 12:01 PM, Anand Chandrashekar
 wrote:
> Greetings.
>
> 1) A join query creates an array of "q" parameter. For example, the query
>
> http://localhost:8983/solr/gettingstarted/select?q=projectdata%3A%22top+secret+data2%22&q=%7B!join+from=locatorByUser+to=locator%7Dusers=joe
>
> creates the following array elements for the "q" parameter.
>
> [array entry #1] projectdata:"top secret data2"
> [array entry #2] {!join from=locatorByUser to=locator}users=joe
>
> 2) I would like to enforce the join part as a mandatory parameter with the
> "users" field added programmatically. I have extended the search handler,
> and am mimicking the array entry # 2 and adding it to the SolrParams.
>
> Pseudocode handleRequestBody:
> ModifiableSolrParams modParams=new
> ModifiableSolrParams(request.getParams());
> modParams.set("q",...);//adding the join (array entry # 2) part and the
> original query
> request.setParams(modParams);
> super.handleRequestBody(request, response);
>
> I am able to mimic the exact array, but the query does not enforce the
> join. Seems to only pick the first entry. Any advice/suggestions?
>
> Thanks and regards.
> Anand.


Re: Search opening hours

2015-08-26 Thread Stefan Matheis
Have a look at the links that Alexandre mentioned. It's a somewhat non-obvious
solution, because you'd probably not think of spatial features while dealing
with opening times - but it's worth having a look.

-Stefan 


On Wednesday, August 26, 2015 at 10:16 AM, O. Klein wrote:

> Thank you for responding.
> 
> Yonik's solution is what I had in mind. Was hoping for something more
> elegant, as he said, but it will work.
> 
> The thing I haven't figured out is how to deal with closing times early
> morning next day.
> 
> So it's 22:00 now and opening hours are 20:00 to 03:00
> 
> Can this be done with either or both approaches?
> 
> 
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Search-opening-hours-tp4225250p4225339.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
> 
> 




Re: refresh solr stats on plugin/stats

2015-09-22 Thread Stefan Matheis
Hi Lorenzo  

That is not supposed to happen on 5.0.0 or on 5.3.0 - no matter whether you're
using the current or the new admin UI (talking about the angular.js app).

Are you able to have a look at your browser console while that happens?
Do you get error messages? Any other output?

What should happen is that an overlay opens, showing a spinner and
"waiting for changes" as long as this modal overlay is open. As soon as
you hit "stop & show changes" it should close and reload the view, telling
you which handlers changed between the time when you started watching
and when you stopped.

-Stefan  


On Tuesday, September 22, 2015 at 3:42 PM, Lorenzo Fundaró wrote:

> Hello folks,
>  
> when selecting a core, under the tab Plugin/Stats, if I click on "Refresh
> Values" I get redirected to the dashboard, is this the right behaviour ? or
> is it a bug ? I think it should stay on the page and refresh the stats. I
> am using solr 5.0.0, but I found the same behaviour on 5.3.0.
>  
> Cheers !
>  
> --  
>  
> --  
> Lorenzo Fundaro
> Backend Engineer
> E-Mail: lorenzo.fund...@dawandamail.com 
> (mailto:lorenzo.fund...@dawandamail.com)
>  
> Fax + 49 - (0)30 - 25 76 08 52
> Tel + 49 - (0)179 - 51 10 982
>  
> DaWanda GmbH
> Windscheidstraße 18
> 10627 Berlin
>  
> Geschäftsführer: Claudia Helming, Michael Pütz
> Amtsgericht Charlottenburg HRB 104695 B
>  
>  




Re: refresh solr stats on plugin/stats

2015-09-22 Thread Stefan Matheis
dang, sounds like Shawn is right on .. Lorenzo can you tell us more about  
the system you're using including browser specifics?

-Stefan  


On Tuesday, September 22, 2015 at 4:17 PM, Shawn Heisey wrote:

> On 9/22/2015 7:42 AM, Lorenzo Fundaró wrote:
> > when selecting a core, under the tab Plugin/Stats, if I click on "Refresh
> > Values" I get redirected to the dashboard, is this the right behaviour ? or
> > is it a bug ? I think it should stay on the page and refresh the stats. I
> > am using solr 5.0.0, but I found the same behaviour on 5.3.0.
> >  
>  
>  
> Looks like a browser-related issue. I tried 5.2.1 and 5.3.0 with both
> Chrome and Firefox, it worked with no problem. Then I tried it with
> Microsoft Edge. It behaved exactly as you said.
>  
> I then tried it in IE11. It behaves even worse than Edge ... I get a
> popup saying "404 Not Found get".
>  
> Getting Microsoft browsers to work right in web interfaces while
> maintaining compatibility with other browsers is always challenging. I
> recommend using something else.
>  
> The new AngularJS admin UI, which is slated to become the default fairly
> soon, seems to work correctly in Microsoft browsers.
>  
> I opened a bug issue for the problem:
>  
> https://issues.apache.org/jira/browse/SOLR-8084
>  
> Thanks,
> Shawn
>  
>  




Re: refresh solr stats on plugin/stats

2015-09-22 Thread Stefan Matheis
Thanks Lorenzo  

Don't know why, but I didn't get it when I first read your mail. Pretty sure
that worked earlier; the fix itself should be rather easy - I'll attach a
patch to Shawn's ticket.

What's supposed to happen (in the current UI): reload the current address,
which is the same as if you'd hit F5 or CTRL+R (depending on your OS) ...
nothing special :)

-Stefan  


On Tuesday, September 22, 2015 at 4:43 PM, Lorenzo Fundaró wrote:

> @Stefan
>  
> I checked the browser console, no errors, and a bunch of requests doing
> GETs, either from scripts or XHR, some of them getting 200 and others
> 304.
>
> I am able to see the overlay, but this is when I click on "*watch changes*";
> this feature actually works as expected.
>
> It is the "Refresh values" that's not showing any overlay or anything at
> all but is redirecting to the dashboard instead.
>  
> cheers !
>  
> On 22 September 2015 at 16:35, Lorenzo Fundaró <
> lorenzo.fund...@dawandamail.com (mailto:lorenzo.fund...@dawandamail.com)> 
> wrote:
>  
> > @Shawn, here's my browser config:
> >  
> > Google Chrome: 45.0.2454.93 (Official Build) (64-bit)
> > Revision: ba1cb72081c2c07e4b689082852b1463fbca95f5-refs/branch-heads/2454@{#466}
> > OS: Mac OS X
> > Blink: 537.36 (@202161)
> > JavaScript: V8 4.5.103.31
> > Flash: 18.0.0.232
> > User Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/45.0.2454.93 Safari/537.36
> > Command Line: /Applications/Google Chrome.app/Contents/MacOS/Google Chrome --enable-avfoundation --enable-avfoundation --flag-switches-begin --flag-switches-end
> >  
> >  
> >  
> > On 22 September 2015 at 16:26, Stefan Matheis  > (mailto:ste...@mathe.is)> wrote:
> >  
> > > dang, sounds like Shawn is right on .. Lorenzo can you tell us more about
> > > the system you're using including browser specifics?
> > >  
> > > -Stefan
> > >  
> > >  
> > > On Tuesday, September 22, 2015 at 4:17 PM, Shawn Heisey wrote:
> > >  
> > > > On 9/22/2015 7:42 AM, Lorenzo Fundaró wrote:
> > > > > when selecting a core, under the tab Plugin/Stats, if I click on
> > > >  
> > > >  
> > >  
> > > "Refresh
> > > > > Values" I get redirected to the dashboard, is this the right
> > > >  
> > >  
> > > behaviour ? or
> > > > > is it a bug ? I think it should stay on the page and refresh the
> > > >  
> > >  
> > > stats. I
> > > > > am using solr 5.0.0, but I found the same behaviour on 5.3.0.
> > > >  
> > > >  
> > > >  
> > > > Looks like a browser-related issue. I tried 5.2.1 and 5.3.0 with both
> > > > Chrome and Firefox, it worked with no problem. Then I tried it with
> > > > Microsoft Edge. It behaved exactly as you said.
> > > >  
> > > > I then tried it in IE11. It behaves even worse than Edge ... I get a
> > > > popup saying "404 Not Found get".
> > > >  
> > > > Getting Microsoft browsers to work right in web interfaces while
> > > > maintaining compatibility with other browsers is always challenging. I
> > > > recommend using something else.
> > > >  
> > > > The new AngularJS admin UI, which is slated to become the default fairly
> > > > soon, seems to work correctly in Microsoft browsers.
> > > >  
> > > > I opened a bug issue for the problem:
> > > >  
> > > > https://issues.apache.org/jira/browse/SOLR-8084
> > > >  
> > > > Thanks,
> > > > Shawn
> > > >  
> > >  
> > >  
> >  
> >  
> >  
> > --
> >  
> > --
> > Lorenzo Fundaro
> > Backend Engineer
> > E-Mail: lorenzo.fund...@dawandamail.com 
> > (mailto:lorenzo.fund...@dawandamail.com)
> >  
> > Fax + 49 - (0)30 - 25 76 08 52
> > Tel + 49 - (0)179 - 51 10 982
> >  
> > DaWanda GmbH
> > Windscheidstraße 18
> > 10627 Berlin
> >  
> > Geschäftsführer: Claudia Helming, Michael Pütz
> > Amtsgericht Charlottenburg HRB 104695 B
> >  
>  
>  
>  
>  
> --  
>  
> --  
> Lorenzo Fundaro
> Backend Engineer
> E-Mail: lorenzo.fund...@dawandamail.com 
> (mailto:lorenzo.fund...@dawandamail.com)
>  
> Fax + 49 - (0)30 - 25 76 08 52
> Tel + 49 - (0)179 - 51 10 982
>  
> DaWanda GmbH
> Windscheidstraße 18
> 10627 Berlin
>  
> Geschäftsführer: Claudia Helming, Michael Pütz
> Amtsgericht Charlottenburg HRB 104695 B
>  
>  




Re: Long Running Data Import Handler - Notifications

2015-12-08 Thread Stefan Matheis
https://wiki.apache.org/solr/DataImportHandler#EventListeners might be
worth a look
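
For illustration, a rough sketch of such a listener (the class name is made up); it is referenced from the <document> element of the DIH data config and gets called once the import has ended:

```
// data-config.xml:
//   <document onImportEnd="com.example.dih.ImportEndListener"> ... </document>
package com.example.dih;

import org.apache.solr.handler.dataimport.Context;
import org.apache.solr.handler.dataimport.EventListener;

public class ImportEndListener implements EventListener {
  @Override
  public void onEvent(Context ctx) {
    // runs when the import finishes - send a mail, ping a monitoring hook, etc.
    System.out.println("DIH import finished for core: " + ctx.getSolrCore().getName());
  }
}
```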

-Stefan

On Wed, Dec 9, 2015 at 2:51 AM, Walter Underwood  wrote:
> Not that I know of. I wrote a script to check the status and sleep until 
> done. Like this:
>
> SOLRURL='http://solr-master.prod2.cloud.cheggnet.com:6090/solr/textbooks/dataimport'
>
> while : ; do
> echo `date` checking whether Solr indexing is finished
> curl -s "${SOLRURL}" | fgrep '"status":"idle"' > /dev/null
> [ $? -ne 0 ] || break
> sleep 300
> done
>
> echo Solr indexing is finished
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
>> On Dec 8, 2015, at 5:37 PM, Brian Narsi  wrote:
>>
>> Is there a way to receive notifications when a Data Import Handler finishes
>> up and whether it succeeded or failed. (typically runs about an hour)
>>
>> Thanks
>


Re: Can we have [core name] in each log entry?

2015-04-21 Thread Stefan Moises

+1 :)

That would be very helpful!

Thanks,
Stefan

On 21.04.2015 at 09:07, forest_soup wrote:

Can we have [core name] in each log entry?
It's hard for us to know the exact core having such an issue, and the
sequence, if there are too many cores in a Solr node in a SolrCloud env.

I post the request to this JIRA ticket.
https://issues.apache.org/jira/browse/SOLR-7434



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Can-we-have-core-name-in-each-log-entry-tp4201186.html
Sent from the Solr - User mailing list archive at Nabble.com.


--
With best regards from Nuremberg,
Stefan Moises

***
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



Re: Can a DocTransformer access the whole results tree?

2016-05-28 Thread Stefan Matheis
Isn't that exactly what [explain] and [child] are doing? They locate
whatever data they're working on alongside the document it's related to

What Upayavira asks for looks the very same to me, doesn't it?
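
For reference, both are simply requested through the fl parameter; a sketch (field and filter values are only examples):

```
fl=id,title,[explain style=nl],[child parentFilter=isparent:true limit=10]
```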

-Stefan
On May 27, 2016 7:27 PM, "Erick Erickson"  wrote:

> Maybe you'd be better off using a custom search component.
> instead of a doc transformer. The intent of a doc transformer
> is, as you've discovered, working on single docs at a time. You
> want to manipulate the whole response which seems to fit more
> naturally into a search component. Make sure to put it after
> the highlight component (i.e. last-components).
>
> Best,
> Erick
>
> On Fri, May 27, 2016 at 6:55 AM, Upayavira  wrote:
> > In a JSON response, we get this:
> >
> > {
> >   "responseHeader": {...},
> >   "response": { "docs": [...] },
> >   "highlighting": {...}
> >   ...
> > }
> >
> > I'm assuming that the getProcessedDocuments call would give me the docs:
> > {} element, whereas I'm after the whole response so I can retrieve the
> > "highlighting" element.
> >
> > Make sense?
> >
> > On Fri, 27 May 2016, at 02:45 PM, Mikhail Khludnev wrote:
> >> Upayavira,
> >>
> >> It's not clear what do you mean in "results themselves", perhaps you
> mean
> >> SolrDocuments ?
> >>
> >> public abstract class ResultContext {
> >>  ..
> >>   public Iterator getProcessedDocuments() {
> >> return new DocsStreamer(this);
> >>   }
> >>
> >> On Fri, May 27, 2016 at 4:15 PM, Upayavira  wrote:
> >>
> >> > Yes, I've seen that. I can see the getDocList() method will presumably
> >> > give me the results themselves, but I need the full response so I can
> >> > get the highlighting details, but I can't see them anywhere.
> >> >
> >> > On Thu, 26 May 2016, at 09:39 PM, Mikhail Khludnev wrote:
> >> > > public abstract class ResultContext {
> >> > >
> >> > >  /// here are all results
> >> > >   public abstract DocList getDocList();
> >> > >
> >> > >   public abstract ReturnFields getReturnFields();
> >> > >
> >> > >   public abstract SolrIndexSearcher getSearcher();
> >> > >
> >> > >   public abstract Query getQuery();
> >> > >
> >> > >   public abstract SolrQueryRequest getRequest();
> >> > >
> >> > > On Thu, May 26, 2016 at 11:25 PM, Upayavira  wrote:
> >> > >
> >> > > > Hi Mikhail,
> >> > > >
> >> > > > Is there really? If I look at ResultContext, I see it is an
> abstract
> >> > > > class, completed by BasicResultContext. I don't see any context
> method
> >> > > > there. I can see a getContext() on SolrQueryRequest which just
> returns
> >> > a
> >> > > > hashmap. Will I find the response in there? Is that what you are
> >> > > > suggesting?
> >> > > >
> >> > > > Upayavira
> >> > > >
> >> > > > On Thu, 26 May 2016, at 06:28 PM, Mikhail Khludnev wrote:
> >> > > > > Hello,
> >> > > > >
> >> > > > > There is a protected ResultContext field named context.
> >> > > > >
> >> > > > > On Thu, May 26, 2016 at 5:31 PM, Upayavira 
> wrote:
> >> > > > >
> >> > > > > > Looking at the code for a sample DocTransformer, it seems
> that a
> >> > > > > > DocTransformer only has access to the document itself, not to
> the
> >> > whole
> >> > > > > > results. Because of this, it isn't possible to use a
> >> > DocTransformer to
> >> > > > > > merge, for example, the highlighting results into the main
> >> > document.
> >> > > > > >
> >> > > > > > Am I missing something?
> >> > > > > >
> >> > > > > > Upayavira
> >> > > > > >
> >> > > > >
> >> > > > >
> >> > > > >
> >> > > > > --
> >> > > > > Sincerely yours
> >> > > > > Mikhail Khludnev
> >> > > > > Principal Engineer,
> >> > > > > Grid Dynamics
> >> > > > >
> >> > > > > <http://www.griddynamics.com>
> >> > > > > 
> >> > > >
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > Sincerely yours
> >> > > Mikhail Khludnev
> >> > > Principal Engineer,
> >> > > Grid Dynamics
> >> > >
> >> > > <http://www.griddynamics.com>
> >> > > 
> >> >
> >>
> >>
> >>
> >> --
> >> Sincerely yours
> >> Mikhail Khludnev
> >> Principal Engineer,
> >> Grid Dynamics
> >>
> >> <http://www.griddynamics.com>
> >> 
>


CPU hangs at LeapFrogScorer.advanceToNextDoc() under high load

2016-07-10 Thread Stefan Moises

Hi,

we are experiencing problems on our live system. We use a single Solr
server with 7 live cores, and as soon as there is some traffic on the
website (Solr is used for filtering an e-commerce site, with filters on
category lists and of course for searching), all available CPUs (no
matter how many we assign to the Solr node) go up to 100% and never go
down again.


I've stared at many thread dumps etc. over the last days, and every time
the most time-consuming thread (which seems to "hang up" forever) is in
Lucene's LeapFrogScorer.advanceToNextDoc() method. Here is a profiler
snapshot when the CPU is at 100%:


We are still on Solr 4.8, since we have some plugins extending the
JoinQParser so that we can join child docs to parent docs to handle
product variants in the shop. Therefore we also have our own
DirectUpdateHandler plugin for indexing the documents, so that stacks
of a parent doc and its variants/children are always added as a block.


Might that changed indexing cause the LeapFrogScorer to run into a problem
calculating scores? Or does anybody have an idea what else might be
causing this?


Unfortunately it only happens on the live system; I can't reproduce it
on my local test system, although I am emulating some example requests
with a JMeter setup...


Thanks for any hints!!

Best regards,

Stefan


--
--
****
Stefan Moises
Manager Research & Development
shoptimax GmbH
Ulmenstraße 52 H
90443 Nürnberg
Tel.: 0911/25566-0
Fax: 0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de

Geschäftsführung: Friedrich Schreieck
Ust.-IdNr.: DE 814340642
Amtsgericht Nürnberg HRB 21703
  





Tagging and excluding Filters with BlockJoin Queries and BlockJoin Faceting

2016-08-17 Thread Stefan Moises

Hey girls and guys,

for a long time we have been using our own BlockJoin implementation, because
a lot of requirements that we had for our shop systems were not implemented in
Solr.

As we now had a deeper look into how far the standard has come, we saw that 
BlockJoin and faceting on children is now part of the standard, which is pretty 
cool.
When I tried to refactor our external code to use that now, I stumbled upon one 
non-working feature with BlockJoins that still keeps us from using it:

It seems that tagging and excluding Filters with BlockJoin Faceting simply does 
not work yet.

Simple query:

&qt=products
&q={!parent which='isparent:true'}shirt AND isparent:false
&facet=true
&fq={!parent which='isparent:true'}{!tag=myTag}color:grey
&child.facet.field={!ex=myTag}color


Gives us:
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: undefined field: 
"{!ex=myTag}color"
at org.apache.solr.schema.IndexSchema.getField(IndexSchema.java:1231)


Does somebody have an idea?


Best,
Stefan

--
--

Stefan Moises
Manager Research & Development
shoptimax GmbH
Ulmenstraße 52 H
90443 Nürnberg
Tel.: 0911/25566-0
Fax: 0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de

Geschäftsführung: Friedrich Schreieck
Ust.-IdNr.: DE 814340642
Amtsgericht Nürnberg HRB 21703
  





Re: Tagging and excluding Filters with BlockJoin Queries and BlockJoin Faceting

2016-08-17 Thread Stefan Moises

Hi Mikhail,

thanks for the info ... what is the advantage of using the JSON FACET 
API compared to the standard BlockJoinQuery features?


Is there already anybody working on the tagging/exclusion feature or is 
there any timeframe for it? There wasn't any discussion yet in SOLR-8998 
about exclusions, was there?
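
For what it's worth, a child facet expressed through the JSON Facet API looks roughly like the sketch below; whether excludeTags can be combined with the block-join domain change depends on the Solr version and on how SOLR-8998 turns out:

```
json.facet={
  colors: {
    type: terms,
    field: color,
    domain: { excludeTags: "myTag", blockChildren: "isparent:true" }
  }
}
```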


Thank you very much,

best,

Stefan


On 17.08.16 at 15:26, Mikhail Khludnev wrote:

Stefan,
child.facet.field was never intended to support exclusions. My preference is to
implement it under json.facet that's discussed under
https://issues.apache.org/jira/browse/SOLR-8998.

On Wed, Aug 17, 2016 at 3:52 PM, Stefan Moises  wrote:


Hey girls and guys,

for a long time we have been using our own BlockJoin Implementation,
because for our Shop Systems a lot of requirements that we had were not
implemented in solr.

As we now had a deeper look into how far the standard has come, we saw
that BlockJoin and faceting on children is now part of the standard, which
is pretty cool.
When I tried to refactor our external code to use that now, I stumbled
upon one non-working feature with BlockJoins that still keeps us from using
it:

It seems that tagging and excluding Filters with BlockJoin Faceting simply
does not work yet.

Simple query:

&qt=products
&q={!parent which='isparent:true'}shirt AND isparent:false
&facet=true
&fq={!parent which='isparent:true'}{!tag=myTag}color:grey
&child.facet.field={!ex=myTag}color


Gives us:
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
undefined field: "{!ex=myTag}color"
 at org.apache.solr.schema.IndexSchema.getField(IndexSchema.
java:1231)


Does somebody have an idea?


Best,
Stefan

--
--

Stefan Moises
Manager Research & Development
shoptimax GmbH
Ulmenstraße 52 H
90443 Nürnberg
Tel.: 0911/25566-0
Fax: 0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de

Geschäftsführung: Friedrich Schreieck
Ust.-IdNr.: DE 814340642
Amtsgericht Nürnberg HRB 21703
   ****






--
--

Stefan Moises
Manager Research & Development
shoptimax GmbH
Ulmenstraße 52 H
90443 Nürnberg
Tel.: 0911/25566-0
Fax: 0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de

Geschäftsführung: Friedrich Schreieck
Ust.-IdNr.: DE 814340642
Amtsgericht Nürnberg HRB 21703
  





Re: [Ext] Influence ranking based on document committed date

2016-08-17 Thread Stefan Matheis
Erick already gave you the solution; in addition to that, there's a wiki
page that contains a few more things about relevancy:

https://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_change_the_score_of_a_document_based_on_the_.2Avalue.2A_of_a_field_.28say.2C_.22popularity.22.29
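
A common variant of that, using the timestamp field directly as a multiplicative boost with edismax (the field name is just an example):

```
q=Olympic&defType=edismax&boost=recip(ms(NOW/HOUR,last_modified),3.16e-11,1,1)
```

3.16e-11 is roughly 1 divided by the number of milliseconds in a year, so the boost decays gradually over about a year; play with the constants to change how fast recency matters.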

-Stefan


On August 17, 2016 at 5:35:10 PM, Erick Erickson
(erickerick...@gmail.com) wrote:
> Try:
> recip(rord(creationDate),1,1000,1000)
>
> See:
> https://wiki.apache.org/solr/FunctionQuery
>
> You can play with the magic numbers to influence how this scales your docs.
>
> Best,
> Erick
>
> On Wed, Aug 17, 2016 at 7:11 AM, Jay Parashar wrote:
> > This is correct: " I index it and feed it the timestamp at index time".
> > You can sort desc on that field (can be a TrieDateField)
> >
> >
> > -Original Message-
> > From: Steven White [mailto:swhite4...@gmail.com]
> > Sent: Wednesday, August 17, 2016 9:01 AM
> > To: solr-user@lucene.apache.org
> > Subject: [Ext] Influence ranking based on document committed date
> >
> > Hi everyone
> >
> > Let's say I search for the word "Olympic" and I get a hit on 10 documents 
> > that have similar
> content (let us assume the content is at least 80%
> > identical) how can I have Solr rank them so that the ones with most 
> > recently updated doc
> gets ranked higher? Is this something I have to do at index time or search 
> time?
> >
> > Is the trick to have a field that holds the committed timestamp and boost 
> > on that field
> during search? If so, is this field something I can configure in Solr's 
> schema.xml or
> must I index it and feed it the timestamp at index time? If I'm on the right 
> track, does this
> mean I have to always append this field base boost to each query a user 
> issues?
> >
> > If there is a wiki or article written on this topic, that would be a good 
> > start.
> >
> > In case it matters, I'm using Solr 5.2 and my searches are utilizing 
> > edismax.
> >
> > Thanks in advanced!
> >
> > Steve
>


Re: Atomic Update w/ Date Copy Field

2016-08-30 Thread Stefan Matheis
To me, it sounds more like you shouldn’t have to care about such gory details 
as a user - at all.

would you mind opening an issue in JIRA, Todd? Including all the details you
already provided, as well as a link to this thread, would be best.

Depending on what you actually did to find this all out, you probably even
have a test case at hand which demonstrates the behaviour? If not, that's
obviously not a problem :)

-Stefan


On August 30, 2016 at 3:51:42 PM, Todd Long (lon...@gmail.com) wrote:
> It looks like the issue has to do with the Date object. When the document is
> fully updated (with the date specified) the field is created with a String
> object so everything is indexed as it appears. When the document is
> partially updated (with the date omitted) the field is re-created using the
> previously stored Date object which takes the "toString" representation
> (i.e. EEE MMM dd HH:mm:ss zzz ).
>  
> I ended up creating a DateTextField which extends TextField and simply
> overrides the "FieldType.createField(SchemaField, Object, float)" method. I
> then check for a Date instance and format as necessary.
>  
> Any ideas on a better approach or does it sound like this is the way to go?
> I wasn't sure if this could be accomplished in a filter or some other way.
>  
>  
>  
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Atomic-Update-w-Date-Copy-Field-tp4293779p4293968.html
>   
> Sent from the Solr - User mailing list archive at Nabble.com.
>  



JSON Facets and excluded tags - not working for empty results

2016-09-14 Thread Stefan Matheis
I'm not entirely sure I'm describing the correct problem here - for now this
looks like the only way it occurs, and I hope it's not misleading any pointers
that would be helpful. So in case you think I got it wrong, please say so.

I have two documents in the index [{"source":"foo"}, {"source":"bar”}] where 
source is a simple string field (indexed as well as stored, if that’ll matter).

Using

> ?q=*:*
> &fq={!tag=source}source:"meh"
> &json.facet={"source":{"type":"terms","field":"source","domain":{"excludeTags":"source"}}}

where meh is a value that is not available for source, I get no results
(expected) but no facets either - which is rather unexpected to me. As soon as
I go with source:"bar" (or something else that yields at least one record) I
get a record back, and facets as well.

Which is why I started off with the idea that there might be a correlation
between those things. Verifying the situation using the old facet approach, I
always get the expected facets back, no matter whether the result is empty or not.

Or isn’t it supposed to work like this anymore and i’m the guy who didn’t get 
the memo?

Thanks
Stefan



Re: JSON Facets and excluded tags - not working for empty results

2016-09-15 Thread Stefan Matheis

Thanks for the Pointer Mikhail,

i didn’t ;o i’ve seen it in some tests but i didn’t realize that it might help 
(now pretty obvious) .. and now i’m finding all the relevant threads as well.

Thanks again,
Stefan

On September 15, 2016 at 10:10:32 AM, Mikhail Khludnev (m...@apache.org) wrote:
> Hello Stefan,
> Have you tried to add processEmpty:true ?
>  
> json.facet={processEmpty:true,"source":{"type":"terms","field":"source","  
> domain":{"excludeTags":"source"}}}
>  
> On Wed, Sep 14, 2016 at 5:12 PM, Stefan Matheis wrote:
>  
> > I’m not entirely sure i’m describing the correct problem here - for now it
> > looks like the only way it occurs and i hope it’s not misleading any
> > pointers that would be helpful. so in case you think i got it wrong, please
> > say so
> >
> > I have two documents in the index [{"source":"foo"}, {"source":"bar”}]
> > where source is a simple string field (indexed as well as stored, if
> > that’ll matter).
> >
> > Using
> >
> > > ?q=*:*
> > > &fq={!tag=source}source:"meh"
> > > &json.facet={"source":{"type":"terms","field":"source","
> > domain":{"excludeTags":"source"}}}
> >
> > where meh is a value that is not available for source, i get no results
> > (expected) but no facets as well - which is rather unexpected to me. as
> > soon as i go with source:”bar” (or something else that yields at least one
> > record) i’m getting a record back and as well facets.
> >
> > which is why i’ve started of with the idea that there might be a
> > correlation between those things. verifying the situation using the old
> > facet approach i always get the expected facets back, no matter if the
> > result is empty or not.
> >
> > Or isn’t it supposed to work like this anymore and i’m the guy who didn’t
> > get the memo?
> >
> > Thanks
> > Stefan
> >
> >
>  
>  
> --
> Sincerely yours
> Mikhail Khludnev
>  



Re: Miserable Experience Using Solr. Again.

2016-09-16 Thread Stefan Matheis
> … choice between better docs and better UI, I’ll choose a better UI every time

Aaron, you (as well as all others) are more than welcome to help out - no 
matter what you do / how you do it.

While we'd obviously love to get some more hands helping out with the
coding parts, improving the UI in terms of wording (as you just pointed out)
helps equally as much, if not more.

When I started this whole new Admin UI thing, my intentions were primarily
to make it look like it was from recent times and not a century ago. Afterwards
Upayavira joined in to upgrade the frontend architecture to the current state
of the art - which, by now, hasn't helped as much as we'd expected in getting
others to contribute. I'm running out of ideas what else could help. We are in
backend country here, and not that attractive for capable frontend developers.

We both came up with whatever we could - neither of us is a designer, at most a
random guy with two eyes. In certain situations I'm the same as you: I'm the
first person to criticize this and that - I often see what others could improve,
but just as often I do not realize that I could do the very same for the projects
I'm involved in. And that's for a variety of reasons.

To sum it up: if you (again, as well as others) do not speak up, our hands are
tied. Of course it's easier to report a specific bug that gets fixed, but
nobody said that's the only thing you can do. As helpful and needed as it is to
have people working on the documentation instead of contributing code,
suggestions on the UI itself are just as important. You don't have to actually
do it, especially not if it's an area where you can't help .. but you are one
of many using it day in, day out.

And that goes for all of it .. wording, usability and especially design. The
UI still looks like (actually is) my first work-in-progress draft from years
ago - and the reason for that is certainly not that we all love it to death
and refuse to change the smallest bits.

Those were a bit more than my two cents, but they needed to get off my chest.

-Stefan


On September 16, 2016 at 5:56:34 AM, Aaron Greenspan 
(aaron.greens...@plainsite.org) wrote:
> Hi again,
>  
> My two cents: I’m glad to see the discussion over improved documentation, but 
> if you give  
> me a choice between better docs and better UI, I’ll choose a better UI every 
> time. If contributors  
> are going to spend real time on the concerns raised in this thread, spend the 
> time on making  
> the software better to the point where more docs are unnecessary. All sorts 
> of things  
> could improve that would make the product far more intuitive (and I know, 
> there are probably  
> JIRA entries on most of these already…).
>  
> - The pseudo-frames in the web UI are the source of all kinds of problems, 
> with lots of weird  
> horizontal scrolling I’ve noticed over the years. It makes the Logging screen 
> in particular  
> infuriating to use. When I click on certain log entries an arbitrary-seeming 
> "false"  
> flips to "true" under the "WARN" statement in the Level column. But on other 
> log entries,  
> it all just goes haywire all over the screen because it’s too big both 
> horizontally and  
> vertically, and then re-condenses as though I’d never clicked, as I mentioned 
> before.  
>  
> - The top menu on the left is in plain English. The core menu on the bottom 
> is written as though  
> it’s being viewed by a person who only speaks UNIX. For example, there is no 
> space between  
> "Data" and "Import" in "DataImport" and "Segments info" could just be 
> "Segments". Is  
> "Plugins / Stats" two menus in one?
>  
> - "Ping" in the menu takes you nowhere in particular and shouldn’t really be 
> a menu item.  
> It should be part of the main dashboard with all of the other tech stats 
> (which I do like)  
> or a menu called "Status". (Why would one core ping faster than another 
> anyway? If this  
> is really for "cloud" installations where cores can be split up on different 
> servers,  
> why am I seeing it when everything is local and immediate?)
>  
> - On the Data Import page, the expandable icons are [-] when they’re expanded 
> and still  
> [-] when they’re collapsed. Extremely confusing.
>  
> - The Data Import UI makes no mention anywhere of the ability to import from 
> MySQL, which  
> is 99% of what I want to do with this product. It doesn’t tell me how to set 
> up the MySQL connector,  
> doesn’t give me a button that turns it on in some modular fashion, doesn’t 
> tell me if the  
> server connection is successful, doesn’t let me easily enter or edit 
> credentials, doesn’t  
> let me edit my q

Re: Zero value fails to match Positive, Negative, or Zero interval facet

2016-10-21 Thread Stefan Matheis
Hi Andy

> How should I proceed from here?

I'd say this qualifies as an issue in JIRA - if you're able to come up with
a test, that would be great, but not needed

Patches are typically created against the master branch, but as long as you
include all needed information (version, file, ..) - we're good to go :)

-Stefan

On Oct 21, 2016 6:11 PM, "Andy C"  wrote:

Upon further investigation this is a bug in Solr.

If I change the order of my interval definitions to be Negative, Zero,
Positive, instead of Negative, Positive, Zero it correctly assigns the
document with the zero value to the Zero interval.

I dug into the 5.3.1 code and the problem is in the
org.apache.solr.request.IntervalFacets class. When the getSortedIntervals()
method sorts the interval definitions for a field by their starting value
is doesn't take into account the startOpen property. When two intervals
have equal start values it needs to sort intervals where startOpen == false
before intervals where startOpen == true.

In the accumIntervalWithValue() method it checks which intervals each
document value should be considered a match for. It iterates through the
sorted intervals and stops checking subsequent intervals when
LOWER_THAN_START result is returned. If the Positive interval is sorted
before the Zero interval it never checks a zero value against the Zero
interval.

I modified the compareStart() implementation and it seems to work correctly
now (see below). I also compared the 5.3.1 version of the IntervalFacets
class against the 6.2.1 code, and it looks like the same issue will occur
in 6.2.1.

How should I proceed from here?

Thanks,
- Andy -

  private int compareStart(FacetInterval o1, FacetInterval o2) {
if (o1.start == null) {
  if (o2.start == null) {
return 0;
  }
  return -1;
}
if (o2.start == null) {
  return 1;
}
//return o1.start.compareTo(o2.start);
int startComparison = o1.start.compareTo(o2.start);
if (startComparison == 0) {
  if (o1.startOpen != o2.startOpen) {
if (!o1.startOpen) {
  return -1;
}
else {
  return 1;
}
  }
}
return startComparison;
  }

On Wed, Oct 19, 2016 at 2:47 PM, Andy C  wrote:

> I have a field called "SCALE_double" that is defined as multivalued with
> the fieldType "tdouble".
>
> "tdouble" is defined as:
>
> <fieldType name="tdouble" class="solr.TrieDoubleField" omitNorms="true" positionIncrementGap="0"/>
>
> I have a document with the value "0" indexed for this field. I am able to
> successfully retrieve the document with the range query "SCALE_double:[0
TO
> 0]". However it doesn't match any of the interval facets I am trying to
> populate that match negative, zero, or positive values:
>
> "{!key=\"Negative\"}(*,0)",
> "{!key=\"Positive\"}(0,*]",
> "{!key=\"Zero\"}[0,0]"
>
> I assume this is some sort of precision issue with the TrieDoubleField
> implementation (if I change the Zero interval to
> "(-.01,+.01)" it now considers the document a match).
> However the range query works fine (I had assumed that the interval was
> just converted to a range query internally), and it fails to show up in
the
> Negative or Positive intervals either.
>
> Any ideas what is going on, and if there is anything I can do to get this
> to work correctly? I am using Solr 5.3.1. I've pasted the output from the
> Solr Admin UI query below.
>
> Thanks,
> - Andy -
>
> {
>   "responseHeader": {
> "status": 0,
> "QTime": 0,
> "params": {
>   "facet": "true",
>   "fl": "SCALE_double",
>   "facet.mincount": "1",
>   "indent": "true",
>   "facet.interval": "SCALE_double",
>   "q": "SCALE_double:[0 TO 0]",
>   "facet.limit": "100",
>   "f.SCALE_double.facet.interval.set": [
> "{!key=\"Negative\"}(*,0)",
> "{!key=\"Positive\"}(0,*]",
> "{!key=\"Zero\"}[0,0]"
>   ],
>   "_": "1476900130184",
>   "wt": "json"
> }
>   },
>   "response": {
> "numFound": 1,
> "start": 0,
> "docs": [
>   {
> "SCALE_double": [
>   0
> ]
>   }
> ]
>   },
>   "facet_counts": {
> "facet_queries": {},
> "facet_fields": {},
> "facet_dates": {},
> "facet_ranges": {},
> "facet_intervals": {
>   "SCALE_double": {
> "Negative": 0,
> "Positive": 0,
> "Zero": 0
>   }
> },
> "facet_heatmaps": {}
>   }
> }
>


edismax, phrase field gets ignored for keyword tokenizer

2016-11-07 Thread Stefan Matheis
I'm guessing that I'm missing something obvious here - so feel free to
ask for more details, and also to point out other directions I should
follow.

the problem goes as follows: the input in one case might be a phone
number (like +49 1234 12345678); since we're using edismax, the parts
get split on whitespace - which is fine. Bringing the same field
(based on TextField) to the party (using qf) doesn't change a thing.

> responseHeader:
>     params:
>         q: '+49 1234 12345678'
>         defType: edismax
>         qf: person_mobile
>         pf: person_mobile^5
> debug:
>     rawquerystring: '+49 1234 12345678'
>     querystring: '+49 1234 12345678'
>     parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:49)) 
>DisjunctionMaxQuery((person_mobile:1234)) 
>DisjunctionMaxQuery((person_mobile:12345678))) ())/no_coord'
>     parsedquery_toString: '+(+(person_mobile:49) (person_mobile:1234) 
>(person_mobile:12345678)) ()’

but .. as far as I was able to narrow down the culprit, that only happens
when I'm using solr.KeywordTokenizerFactory. As soon as I change
that to solr.StandardTokenizerFactory, the phrase query appears as
expected:

> responseHeader:
>     params:
>         q: '+49 1234 12345678'
>         defType: edismax
>         qf: person_mobile
>         pf: person_mobile^5
> debug:
>     rawquerystring: '+49 1234 12345678'
>     querystring: '+49 1234 12345678'
>     parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:49)) 
>DisjunctionMaxQuery((person_mobile:1234)) 
>DisjunctionMaxQuery((person_mobile:12345678))) 
>DisjunctionMaxQuery(((person_mobile:"49 1234 12345678")^5.0)))/no_coord'
>     parsedquery_toString: '+(+(person_mobile:49) (person_mobile:1234) 
>(person_mobile:12345678)) ((person_mobile:"49 1234 12345678")^5.0)’

removing the + at the beginning, doesn’t make a difference either
(just mentioning since tokee already asked this on #solr, where i’ve
brought up the question earlier)

it’s absolutely possible i’m focusing on a very wrong assumption - but
since switching the tokenizer does result in such a rather large
behaviour change, i think something is spooky here.

i’ve read older issues and posts from the list, some of them pointed
out that it might be a optimization that edismax brings to the table -
i didn’t find anything specific about that.

oh, and btw: if that were working - my idea is to strip out
everything in a given phrase that is not a number, to match the phone
number, like this:

> <fieldType name="..." class="solr.TextField">
>   <analyzer>
>     <tokenizer class="solr.KeywordTokenizerFactory"/>
>     <filter class="solr.PatternReplaceFilterFactory" pattern="[^0-9]" replacement=""/>
>   </analyzer>
> </fieldType>

any thoughts? or wild guesses?

Thanks Stefan


Re: edismax, phrase field gets ignored for keyword tokenizer

2016-11-07 Thread Stefan Matheis
Vincenzo,

thanks for the response - I know that the Keyword Tokenizer by
itself does not do anything. As pointed out at the end of the initial
mail, I'm applying a pattern replace for everything non-numeric to
make it actually useful.

And especially because of the tokenization based on whitespace, I'd
like to use the very same field once again as a phrase field to get around
this issue. Shawn mentioned on #solr in the meantime that there is
SOLR-9185, which is similar and would be helpful, but currently very
much in-the-works.

The Standard Tokenizer you've mentioned splits on whitespace - as
edismax does by default in the first place. So I'm not sure how that
would help? For now, I don't want to have partial matches on phone
numbers .. at least not yet.

-Stefan


On November 7, 2016 at 4:41:50 PM, Vincenzo D'Amore (v.dam...@gmail.com) wrote:
> Hi Stefan,
>
> I think the problem is solr.KeywordTokenizerFactory.
> This tokeniser does not make any tokenisation to the string, it returns
> exactly what you have.
>
> '+49 1234 12345678' -> '+49 1234 12345678'
>
> On the other hand, using edismax you are looking for '+49', '1234' and
> '12345678' and none of these keywords match your phone_number field.
>
> Try using a different tokenizer like solr.StandardTokenizerFactory, this
> should change your results.
>
> Bests,
> Vincenzo
>
> On Mon, Nov 7, 2016 at 4:05 PM, Stefan Matheis
> wrote:
>
> > I’m guessing that i’m missing something obvious here - so feel free to
> > ask for more details and as well point out other directions i should
> > following.
> >
> > the problem goes as follows: the input in one case might be a phone
> > number (like +49 1234 12345678), since we’re using edismax the parts
> > gets split on whitespaces - which is fine. bringing the same field
> > (based on TextField) to the party (using qf) doesn’t change a thing.
> >
> > > responseHeader:
> > > params:
> > > q: '+49 1234 12345678'
> > > defType: edismax
> > > qf: person_mobile
> > > pf: person_mobile^5
> > > debug:
> > > rawquerystring: '+49 1234 12345678'
> > > querystring: '+49 1234 12345678'
> > > parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:49))
> > DisjunctionMaxQuery((person_mobile:1234)) 
> > DisjunctionMaxQuery((person_mobile:12345678)))
> > ())/no_coord'
> > > parsedquery_toString: '+(+(person_mobile:49) (person_mobile:1234)
> > (person_mobile:12345678)) ()’
> >
> > but .. as far as i was able to reduce the culprit, that only happens
> > when i’m using solr.KeywordTokenizerFactory . as soon as i’m changing
> > that to solr.StandardTokenizerFactory the phrase query appears as
> > expected:
> >
> > > responseHeader:
> > > params:
> > > q: '+49 1234 12345678'
> > > defType: edismax
> > > qf: person_mobile
> > > pf: person_mobile^5
> > > debug:
> > > rawquerystring: '+49 1234 12345678'
> > > querystring: '+49 1234 12345678'
> > > parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:49))
> > DisjunctionMaxQuery((person_mobile:1234)) 
> > DisjunctionMaxQuery((person_mobile:12345678)))
> > DisjunctionMaxQuery(((person_mobile:"49 1234 12345678")^5.0)))/no_coord'
> > > parsedquery_toString: '+(+(person_mobile:49) (person_mobile:1234)
> > (person_mobile:12345678)) ((person_mobile:"49 1234 12345678")^5.0)’
> >
> > removing the + at the beginning, doesn’t make a difference either
> > (just mentioning since tokee already asked this on #solr, where i’ve
> > brought up the question earlier)
> >
> > it’s absolutely possible i’m focusing on a very wrong assumption - but
> > since switching the tokenizer does result in such a rather large
> > behaviour change, i think something is spooky here.
> >
> > i’ve read older issues and posts from the list, some of them pointed
> > out that it might be a optimization that edismax brings to the table -
> > i didn’t find anything specific about that.
> >
> > oh, and btw: if that would be working - my idea is to drop out
> > everything for a given phrase that is not a number, to match the phone
> > number, like this:
> >
> > >
> > >
> > >
> > > > > replacement=""/>
> > >
> > >
> >
> > any thoughts? or wild guesses?
> >
> > Thanks Stefan
> >
>
>
>
> --
> Vincenzo D'Amore
> email: v.dam...@gmail.com
> skype: free.dev
> mobile: +39 349 8513251
>


Re: edismax, phrase field gets ignored for keyword tokenizer

2016-11-07 Thread Stefan Matheis
Which is all fine by itself - but it doesn't shed more light on my
initial question, Vincenzo, does it? Probably I shouldn't have mentioned
partial matches in the first place; that might have led in the
wrong direction - they are not relevant for now / not for this
question.

I'd like to know why & where edismax drops phrase fields which are
using a Keyword Tokenizer. Maybe there is a larger idea behind this
behavior, but I don't see it (yet).

-Stefan


On November 7, 2016 at 5:09:04 PM, Vincenzo D'Amore (v.dam...@gmail.com) wrote:
> If you don't want partial matches with edismax you should always use
> StandardTokenizerFactory and play with mm parameter.
>
> On Mon, Nov 7, 2016 at 4:50 PM, Stefan Matheis
> wrote:
>
> > Vincenzo,
> >
> > thanks for the response - i know that only the Keyword Tokenizer by
> > itself does not do anything. as pointed at the end of the initial
> > mail, i’m applying a pattern replace for everything non-numeric to
> > make it actually useful.
> >
> > and especially because of the tokenization based on whitespaces i’d
> > like to use the very same field once again as phrase field to around
> > this issue. Shawn mentioned in #solr in the meantime that there is
> > SOLR-9185 which is similar and would be helpful, but currently very
> > very in-the-works.
> >
> > Standard Tokenizer you’ve mentioned does split on whitespace - as
> > edismax does by default in the first place. so i’m not sure how that
> > would help? For now, i don’t want to have partial matches on phone
> > numbers .. at least not yet.
> >
> > -Stefan
> >
> >
> > On November 7, 2016 at 4:41:50 PM, Vincenzo D'Amore (v.dam...@gmail.com)
> > wrote:
> > > Hi Stefan,
> > >
> > > I think the problem is solr.KeywordTokenizerFactory.
> > > This tokeniser does not make any tokenisation to the string, it returns
> > > exactly what you have.
> > >
> > > '+49 1234 12345678' -> '+49 1234 12345678'
> > >
> > > On the other hand, using edismax you are looking for '+49', '1234' and
> > > '12345678' and none of these keywords match your phone_number field.
> > >
> > > Try using a different tokenizer like solr.StandardTokenizerFactory, this
> > > should change your results.
> > >
> > > Bests,
> > > Vincenzo
> > >
> > > On Mon, Nov 7, 2016 at 4:05 PM, Stefan Matheis
> > > wrote:
> > >
> > > > I’m guessing that i’m missing something obvious here - so feel free to
> > > > ask for more details and as well point out other directions i should
> > > > following.
> > > >
> > > > the problem goes as follows: the input in one case might be a phone
> > > > number (like +49 1234 12345678), since we’re using edismax the parts
> > > > gets split on whitespaces - which is fine. bringing the same field
> > > > (based on TextField) to the party (using qf) doesn’t change a thing.
> > > >
> > > > > responseHeader:
> > > > > params:
> > > > > q: '+49 1234 12345678'
> > > > > defType: edismax
> > > > > qf: person_mobile
> > > > > pf: person_mobile^5
> > > > > debug:
> > > > > rawquerystring: '+49 1234 12345678'
> > > > > querystring: '+49 1234 12345678'
> > > > > parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:49))
> > > > DisjunctionMaxQuery((person_mobile:1234)) DisjunctionMaxQuery((person_
> > mobile:12345678)))
> > > > ())/no_coord'
> > > > > parsedquery_toString: '+(+(person_mobile:49) (person_mobile:1234)
> > > > (person_mobile:12345678)) ()’
> > > >
> > > > but .. as far as i was able to reduce the culprit, that only happens
> > > > when i’m using solr.KeywordTokenizerFactory . as soon as i’m changing
> > > > that to solr.StandardTokenizerFactory the phrase query appears as
> > > > expected:
> > > >
> > > > > responseHeader:
> > > > > params:
> > > > > q: '+49 1234 12345678'
> > > > > defType: edismax
> > > > > qf: person_mobile
> > > > > pf: person_mobile^5
> > > > > debug:
> > > > > rawquerystring: '+49 1234 12345678'
> > > > > querystring: '+49 1234 12345678'
> > > > > parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:

Re: edismax, phrase field gets ignored for keyword tokenizer

2016-11-08 Thread Stefan Matheis
Any more thoughts on this? The longer I look at this situation, the
more I'm thinking I'm at fault here - expecting something that isn't
to be expected at all?

Whatever is on your mind once you've read this mail - don't keep it to yourself, let me know.

-Stefan


On November 7, 2016 at 5:23:58 PM, Stefan Matheis
(matheis.ste...@gmail.com) wrote:
> Which is everything fine by itself - but doesn’t shed more light on my 
> initial question
> Vincenzo, does it? probably i shoudn’t have mentioned partial matches in the 
> first place,
> that might have lead into the wrong direction - they are not relevant for now 
> / not for this
> question.
>
> I’d like to know why & where edismax drops out phrase fields which are using 
> a Keyword Tokenizer.
> Maybe there is a larger idea behind this behavior, but i don’t see it (yet).
>
> -Stefan
>
>
> On November 7, 2016 at 5:09:04 PM, Vincenzo D'Amore (v.dam...@gmail.com) 
> wrote:
> > If you don't want partial matches with edismax you should always use
> > StandardTokenizerFactory and play with mm parameter.
> >
> > On Mon, Nov 7, 2016 at 4:50 PM, Stefan Matheis
> > wrote:
> >
> > > Vincenzo,
> > >
> > > thanks for the response - i know that only the Keyword Tokenizer by
> > > itself does not do anything. as pointed at the end of the initial
> > > mail, i’m applying a pattern replace for everything non-numeric to
> > > make it actually useful.
> > >
> > > and especially because of the tokenization based on whitespaces i’d
> > > like to use the very same field once again as phrase field to around
> > > this issue. Shawn mentioned in #solr in the meantime that there is
> > > SOLR-9185 which is similar and would be helpful, but currently very
> > > very in-the-works.
> > >
> > > Standard Tokenizer you’ve mentioned does split on whitespace - as
> > > edismax does by default in the first place. so i’m not sure how that
> > > would help? For now, i don’t want to have partial matches on phone
> > > numbers .. at least not yet.
> > >
> > > -Stefan
> > >
> > >
> > > On November 7, 2016 at 4:41:50 PM, Vincenzo D'Amore (v.dam...@gmail.com)
> > > wrote:
> > > > Hi Stefan,
> > > >
> > > > I think the problem is solr.KeywordTokenizerFactory.
> > > > This tokeniser does not make any tokenisation to the string, it returns
> > > > exactly what you have.
> > > >
> > > > '+49 1234 12345678' -> '+49 1234 12345678'
> > > >
> > > > On the other hand, using edismax you are looking for '+49', '1234' and
> > > > '12345678' and none of these keywords match your phone_number field.
> > > >
> > > > Try using a different tokenizer like solr.StandardTokenizerFactory, this
> > > > should change your results.
> > > >
> > > > Bests,
> > > > Vincenzo
> > > >
> > > > On Mon, Nov 7, 2016 at 4:05 PM, Stefan Matheis
> > > > wrote:
> > > >
> > > > > I’m guessing that i’m missing something obvious here - so feel free to
> > > > > ask for more details and as well point out other directions i should
> > > > > following.
> > > > >
> > > > > the problem goes as follows: the input in one case might be a phone
> > > > > number (like +49 1234 12345678), since we’re using edismax the parts
> > > > > gets split on whitespaces - which is fine. bringing the same field
> > > > > (based on TextField) to the party (using qf) doesn’t change a thing.
> > > > >
> > > > > > responseHeader:
> > > > > > params:
> > > > > > q: '+49 1234 12345678'
> > > > > > defType: edismax
> > > > > > qf: person_mobile
> > > > > > pf: person_mobile^5
> > > > > > debug:
> > > > > > rawquerystring: '+49 1234 12345678'
> > > > > > querystring: '+49 1234 12345678'
> > > > > > parsedquery: '(+(+DisjunctionMaxQuery((person_mobile:49))
> > > > > DisjunctionMaxQuery((person_mobile:1234)) DisjunctionMaxQuery((person_
> > > mobile:12345678)))
> > > > > ())/no_coord'
> > > > > > parsedquery_toString: '+(+(person_mobile:49) (person_mobile:1234)
> > > > > (person_mobile:12345678)) ()’
> > > > >
> > > > > but .. as fa

Re: international characters in facet.prefix

2017-06-07 Thread Stefan Matheis
If you don't mind, my question is: what are you trying to do in the first place?

And please don't describe it with the technical approach you're already
using (or at least trying to) but rather in basic/business terms.

-Stefan

On Jun 8, 2017 3:03 AM, "arik"  wrote:

> Thanks Erick, indeed your hunch is correct, it's the analyzing filters that
> facet.prefix seems to bypass, and getting rid of my
> ASCIIFoldingFilterFactory and MappingCharFilterFactory make it work ok.
>
> The problem is I need those filters... otherwise how should I create facets
> which match against both Anglicized as well as international prefix
> spellings?  I could of course maintain separate fields and do multiple
> queries, but seems like that quickly gets out of hand if I also want to
> support mixed case and other filtering dimensions.
>
> Is there a way to route facet.prefix through the field type filters like
> all
> the other params? I suppose I could manually instantiate and pre-apply the
> filters in the client code... any other ideas?
>
>
>
> --
> View this message in context: http://lucene.472066.n3.
> nabble.com/international-characters-in-facet-prefix-tp4339415p4339534.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Apache 4.9.1 - trouble trying to use complex phrase query parser.

2017-06-28 Thread Stefan Matheis
If you'd include the actual error message you get .. it might be easier to try
and help?

-Stefan

On Jun 28, 2017 6:24 PM, "Michael Craven"  wrote:

> Hi -
>
> I am trying to use the complex phrase query parser on my Drupal
> installation. Our core is Solr 4.9.1, so I thought it should be no problem.
> Search works fine when I use a local parameter to do a search of type
> lucene, dismax, or edismax, (a la {!lucene} etc.), but when I try to do a
> search of type complex phrase, I get an error. Does anyone know why that
> might be? Is this maybe a Drupal specific problem? We are running Drupal
> 7.56.
>
> Thanks
>
> -M


Re: Optimizing Dataimport from Oracle; cursor sharing; changing oracle session parameters

2017-08-15 Thread Stefan Matheis
Birgit,

any chance to utilise one of the caching strategies that DIH offers?

Like building a complete map for one of the subentities? That would mean
reading the whole table at the beginning and then only doing lookups by key.

Or getting data from subentities with joins in your main entity?

Heavily depends on the amount of data we're talking about - but might be
worth a thought.
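
Roughly like this - untested sketch, table and column names are made up:

  <entity name="child" processor="SqlEntityProcessor"
          query="select item_id, item_value from tbl_child"
          cacheImpl="SortedMapBackedCache"
          cacheKey="item_id" cacheLookup="parent.id">
    ...
  </entity>

that way the sub-entity table is read once and every parent row only does an
in-memory lookup instead of firing another select against oracle.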

Best
-Stefan

On Aug 15, 2017 4:33 PM, "Erick Erickson"  wrote:

I presume you're using Data Import Handler? An alternative when you
get into complex imports is to use a SolrJ client, here's a sample.
That way you can use whatever tools the particular JDBC connector will
allow and can be much faster.

https://lucidworks.com/2012/02/14/indexing-with-solrj/

Best,
Erick

On Tue, Aug 15, 2017 at 7:09 AM, Mannott, Birgit 
wrote:
> Hi,
>
> I'm using solr 6.6.0 and I have to do a complex data import from an
oracle db concerning 3.500.000 data rows.
> For each row I have 15 additional entities. That means that more than 52
Million selects are sent to the database.
> For every select that is done I optimized the oracle execution path by
> creating indexes.
> The execution plans are ok.
> But the import still lasts 12 hours.
>
> I think, the main remaining problem is that oracle cursor sharing is not
used and that for every select a hard parse is done.
> Solr does not use binding variables. This would be the easiest way to use
oracle cursor sharing. But I didn't find anything about influencing the way
solr builds select statements.
> I could force oracle cursor sharing without binding variables but I have
to do this configuration for the session. I'm not allowed to change the
configuration of the whole database system.
>
> Is there a way to execute a command like "ALTER SESSION SET
cursor_sharing = FORCE;" after getting the connection for processing an
entity?
>
> Thanks,
> Birgit


group.truncate for json facets not working?

2017-08-22 Thread Stefan Matheis
Hi all,

just a quick sanity check - group.truncate does apply for old-school facets, 
but it seems to be ignored for json.facets. I’m trying to strip my 
configuration down to verify that i’m not mistaken.

for others that are using json facets together with grouping, did you 
experience something like this?

-Stefan



Re: "What is Solr" in Google search results

2017-08-31 Thread Stefan Matheis
Well, isn't it always the same with Wikipedia?

It's already there .. so it has to be correct. If you're trying to remove
it, you have to prove it - but there is not even proof it should be there
in the first place oO

You really need to have time to go through that kind of argument ...

-Stefan

On Aug 31, 2017 4:37 PM, "Vincenzo D'Amore"  wrote:

Hi Rick,

right, I've already tried to correct the wikipedia page, to be honest, I've
just removed the sentence "Solr is the second-most... etc."
But my change has been discarded because I failed to add a valid motivation.

Anyway, not sure I'm the most representative person to discuss this in the
wikipedia talk page :) but I'll try to do whatever I can

And just to share my thought with you: my principal motivation is that even
if DB Engines has proven accuracy, the sentence in question should not be
considered so relevant to explaining what Solr is. For sure, it should not be
used as the first one.


On Thu, Aug 31, 2017 at 5:53 AM, Rick Leir  wrote:

> Vincenzo,
> This is a discussion for the wikipedia 'talk' page. My sense is that
> information must be verifiable, and that the popularity rating at
> db-engines is not transparent. Would you like to start the discussion?
> Cheers -- Rick
>
> On August 30, 2017 5:17:25 PM MDT, Vincenzo D'Amore 
> wrote:
> >Hi All,
> >
> >googling for "what is Solr" I found this as *first* sentence:
> >
> >"Solr is the second-most popular enterprise search engine after
> >Elasticsearch. ... "
> >
> >The description comes from wikipedia https://en.
> >wikipedia.org/wiki/Apache_Solr
> >
> >Now, well, I'm a little upset, because I think this is a misleading
> >description, this answer does not really... well, answer the question.
> >
> >And even... because Solr is not the first most popular :)))
> >
> >Ok, seriously, the first sentence (or the answer at all) should not
> >define
> >the position of the search engine in a list, in a kind of competition
> >where
> >Solr has the second place.
> >If it is the first, the second or whatever most popular is not the
> >right
> >answer.
> >
> >So I want to inform the community and ask for advice, if any, on how to
> >have a better description in the Google results page.
> >
> >If you have any comments or questions, please let me know.
> >
> >Best regards,
> >Vincenzo
> >
> >
> >--
> >Vincenzo D'Amore
> >email: v.dam...@gmail.com
> >skype: free.dev
> >mobile: +39 349 8513251 <349%20851%203251>
>
> --
> Sorry for being brief. Alternate email is rickleir at yahoo dot com




--
Vincenzo D'Amore
email: v.dam...@gmail.com
skype: free.dev
mobile: +39 349 8513251 <349%20851%203251>


Re: Filter Factory question

2017-09-27 Thread Stefan Matheis
> In any case I figured out my problem. I was over thinking it.

Mind to share?

-Stefan

On Sep 27, 2017 4:34 PM, "Webster Homer"  wrote:

> There is a need for a special filter since the input has to be normalized.
> That is the main requirement, splitting into pieces is optional. As far as
> I know there is nothing in solr that knows about molecular formulas.
>
> In any case I figured out my problem. I was over thinking it.
>
> On Wed, Sep 27, 2017 at 3:52 AM, Emir Arnautović <
> emir.arnauto...@sematext.com> wrote:
>
> > Hi Homer,
> > There is no need for special filter, there is one that is for some reason
> > not part of documentation (will ask why so follow that thread if decided
> to
> > go this way): You can use something like:
> >  > pattern=“([A-Z][a-z]?\d+)” preserveOriginal=“true” />
> >
> > This will capture all atom counts as a separate tokens.
> >
> > HTH,
> > Emir
> >
> > > On 26 Sep 2017, at 23:14, Webster Homer 
> wrote:
> > >
> > > I am trying to create a filter that normalizes an input token, but also
> > > splits it inot multiple pieces. Sort of like what the
> WordDelimiterFilter
> > > does.
> > >
> > > It's meant to take a molecular formula like C2H6O and normalize it to
> > C2H6O1
> > >
> > > That part works. However I was also going to have it put out the
> > individual
> > > atom counts as tokens.
> > > C2H6O1
> > > C2
> > > H6
> > > O1
> > >
> > > When I enable this feature in the factory, I don't get any output at
> all.
> > >
> > > I looked over a couple of filters that do what I want and it's not
> > entirely
> > > clear what they're doing. So I have some questions:
> > > Looking at ShingleFilter and WordDelimitierFilter
> > > They both set several attributes:
> > > CharTermAttribute : Seems to be the actual terms being set. Seemed
> > straight
> > > forward, works fine when I only have one term to add.
> > >
> > > PositionIncrementAttribute: What does this do? It appears that
> > > WordDelimiterFilter sets this to 0 most of the time. This has decent
> > > documentation.
> > >
> > > OffsetAttribute: I think that this tracks offsets for each term being
> > > processed. Not really sure though. The documentation mentions tokens.
> So
> > if
> > > I have multiple variations for for a token is this for each variation?
> > >
> > > TypeAttribute: default is "word". Don't know what this is for.
> > >
> > > PositionLengthAttribute: WordDelimiterFilter doesn't use this but
> Shingle
> > > does. It defaults to 1. What's it good for and when should I use it?
> > >
> > > Here is my incrementToken method.
> > >
> > >@Override
> > >public boolean incrementToken() throws IOException {
> > >while(true) {
> > >if (!hasSavedState) {
> > >if (! input.incrementToken()) {
> > >return false;
> > >}
> > >if (! generateFragments) { // This part works fine!
> > >String normalizedFormula = molFormula.normalize(new
> > > String(termAttribute.buffer()));
> > >char[]newBuffer = normalizedFormula.toCharArray();
> > >termAttribute.setEmpty();
> > >termAttribute.copyBuffer(newBuffer, 0, newBuffer.length);
> > >return true;
> > >}
> > >formulas = molFormula.normalizeToList(new
> > > String(termAttribute.buffer()));
> > >iterator = formulas.listIterator();
> > >savedPositionIncrement += posIncAttribute.getPositionIncrement();
> > >hasSavedState = true;
> > >first = true;
> > >saveState();
> > >}
> > >if (!iterator.hasNext()) {
> > >posIncAttribute.setPositionIncrement(savedPositionIncrement);
> > >savedPositionIncrement = 0;
> > >hasSavedState = false;
> > >continue;
> > >}
> > >String formula = iterator.next();
> > >int startOffset = savedStartOffset;
> > >
> > >if (first) {
> > >termAttribute.setEmpty();
> > >}
> > >int endOffset = savedStartOffset + formula.length();
> > >System.out.printf("Writing formula %s %d to %d%n", formula,
> > > startOffset, endOffset);;
> > >termAttribute.append(formula);
> > >  

Re: Solr 7.0.0 -- can it use a 6.5.0 data repository (index)

2017-09-27 Thread Stefan Matheis
That sounds like https://issues.apache.org/jira/browse/SOLR-11406 if i'm
not mistaken?

-Stefan

On Sep 27, 2017 8:20 PM, "Wayne L. Johnson" 
wrote:

> I’m testing Solr 7.0.0.  When I start with an empty index, Solr comes up
> just fine, I can add documents and query documents.  However when I start
> with an already-populated set of documents (from 6.5.0), Solr will not
> start.  The relevant portion of the traceback seems to be:
>
> Caused by: java.lang.NullPointerException
>
> at java.util.Objects.requireNonNull(Objects.java:203)
>
> …
>
> at java.util.stream.ReferencePipeline.reduce(
> ReferencePipeline.java:479)
>
> at org.apache.solr.index.SlowCompositeReaderWrapper.(
> SlowCompositeReaderWrapper.java:76)
>
> at org.apache.solr.index.SlowCompositeReaderWrapper.wrap(
> SlowCompositeReaderWrapper.java:57)
>
> at org.apache.solr.search.SolrIndexSearcher.(
> SolrIndexSearcher.java:252)
>
> at org.apache.solr.core.SolrCore.openNewSearcher(SolrCore.java:
> 2034)
>
> ... 12 more
>
>
>
> In looking at the de-compiled code (SlowCompositeReaderWrapper), lines
> 72-77, it appears that one or more “leaf” files don’t have a
> “min-version” set.  That’s a guess.  If so, does this mean Solr 7.0.0 can’t
> read a 6.5.0 index?
>
>
>
> Thanks
>
>
>
> Wayne Johnson
>
> 801-240-4024
>
> wjohnson...@ldschurch.org
>
>
>
>


Re: request dependent analyzer

2017-12-18 Thread Stefan Matheis
Hendrik,

this doesn't exactly answer your question, but I do remember reading a
thread on the lucene-dev list which became a jira ticket eventually - not
that long ago.

Doug asked for something that sounds at least a little bit similar to what
you're asking: https://issues.apache.org/jira/browse/SOLR-11698

Hope it's worth reading
- Stefan

On Dec 18, 2017 8:35 AM, "Hendrik Haddorp"  wrote:

> Hi,
>
> currently we use a lot of small collections that all basically have the
> same schema. This does not scale too well. So we are looking into combining
> multiple collections into one. We would however like some analyzers to
> behave slightly differently depending on the logical collection. We would
> for example like to use different synonyms in the different logical
> collections. Is there any clean way on how to do that, like somehow access
> request parameters from an analyzer?
>
> regards,
> Hendrik
>


DIH XPathEntityProcessor XPath subset?

2018-01-03 Thread Stefan Moises

Hi there,

I'm trying to index a wordpress site using DIH XPathEntityProcessor... 
I've read it only supports a subset of XPath, but I couldn't find any 
docs on what exactly is supported.


After some painful trial and error, I've found that xpath expressions 
like the following don't work:


    xpath="/methodResponse/params/param/value/array/data/value/struct/member[name='post_title']/value/string" 
/>


I want to find elements like this ("the 'value' element after a 'member' 
element with a name element 'post_title'"):


<methodResponse>
  <params>
    <param>
      <value>
        <array>
          <data>
            <value>
              <struct>
                <member><name>post_id</name><value><string>11809</string></value></member>
                <member><name>post_title</name><value><string>Some titel</string></value></member>
                ...

Unfortunately that is the default output structure of Wordpress' XMLrpc 
calls.


My Xpath expression works e.g. when testing it with 
https://www.freeformatter.com/xpath-tester.html but not if I try to 
index it with Solr - any ideas? Or do I have to pre-transform the XML 
myself to match XPathEntityProcessor's limited abilities?


Thanks in advance,

Stefan

--
--

Stefan Moises
Manager Research & Development
shoptimax GmbH
Ulmenstraße 52 H
90443 Nürnberg
Tel.: 0911/25566-0
Fax: 0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de

Geschäftsführung: Friedrich Schreieck
Ust.-IdNr.: DE 814340642
Amtsgericht Nürnberg HRB 21703
  





Re: Easy way to preserve Solr Admin form input

2016-12-27 Thread Stefan Matheis
Sebastian,

currently not - i'm sorry to say. We did it for the analysis screen but not
for the query screen. Shouldn't be too hard to add this kind of persistence.

Would you mind opening a ticket, so we can track the progress. Depending on
your knowledge, you might be willing to give it a first whirl?

-Stefan

On Dec 27, 2016 12:09 PM, "Sebastian Riemer"  wrote:

Hi,

is there an easy way to preserve the query data I input in SolrAdmin?

E.g. when debugging a query, I often have the desire to reopen the current
query in solrAdmin in a new browser tab to make slight adaptations to the
query without losing the original query.  What happens instead is the form
is opened blank in the new tab and I have to manually copy/paste the
entered form values.

This is not such a big problem, when I only use the "Raw Query Parameters"
field, but editing something in that tiny input is a real pain ...

I wonder how others come around this?

Sebastian


Re: Easy way to preserve Solr Admin form input

2016-12-28 Thread Stefan Matheis
Alex, this is what we already do for the analysis screen - not that much
magic happening there ;)

-Stefan

On Dec 27, 2016 11:54 PM, "Alexandre Rafalovitch" 
wrote:

> I think there may be a ticket for something similar. Or related to
> rerunning a same query/configuration on a new core.
>
> Worth having a quick look anyway.
>
> The challenge would be to write the infrastructure that will unpack
> those parameters back into the boxes. Because some go into raw query,
> some go into specific boxes, etc.
>
> Regards,
>Alex.
> 
> http://www.solr-start.com/ - Resources for Solr users, new and experienced
>
>
> On 27 December 2016 at 16:02, Stefan Matheis 
> wrote:
> > Sebastian,
> >
> > currently not - i'm sorry to say. We did it for the analysis screen but
> not
> > for the query screen. Shouldn't be too hard to add this kind of
> persitence.
> >
> > Would you mind opening a ticket, so we can track the progress. Depending
> on
> > your knowledge, you might be willing to give it a first whirl?
> >
> > -Stefan
> >
> > On Dec 27, 2016 12:09 PM, "Sebastian Riemer" 
> wrote:
> >
> > Hi,
> >
> > is there an easy way to preserve the query data I input in SolrAdmin?
> >
> > E.g. when debugging a query, I often have the desire to reopen the
> current
> > query in solrAdmin in a new browser tab to make slight adaptions to the
> > query without losing the original query.  What happens instead is the
> form
> > is opened blank in the new tab and I have to manually copy/paste the
> > entered form values.
> >
> > This is not such a big problem, when I only use the "Raw Query
> Parameters"
> > field, but editing something in that tiny input is a real pain ...
> >
> > I wonder how others come around this?
> >
> > Sebastian
>


Re: Facet date - autogap

2017-01-08 Thread Stefan Matheis
What about requesting all of them in a single request and decide on the client 
side of things which one to actually use?
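
Something along these lines - untested, field name and start/end made up:

  facet=true
  &facet.range={!key=per_year facet.range.gap='+1YEAR'}created_at
  &facet.range={!key=per_month facet.range.gap='+1MONTH'}created_at
  &facet.range={!key=per_day facet.range.gap='+1DAY'}created_at
  &f.created_at.facet.range.start=NOW/YEAR-10YEARS
  &f.created_at.facet.range.end=NOW

that gives you all three resolutions in one response and the client picks the
one that fits the actual spread of the results.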

-Stefan


On January 5, 2017 at 8:13:18 PM, sn0...@ulysses-erp.com 
(sn0...@ulysses-erp.com) wrote:
> Is it possible to make an "autogap" for a daterange?
> 
> I would like to send a query and depending on the daterange, the gap should be
> 1 Year
> 1 Month
> 1 Day
> depending on the date range of the results
> 
> My only possibility i see at the moment is to make a query to get
> first and last date and send the query a second time ... but i would
> like to get it all in one query.
> 
> Some ideas on it?
> 
> 
> This message was sent using IMP, the Internet Messaging Program.
> 
> 



RE: Facet? Search problem

2017-03-14 Thread Stefan Matheis
Scott

Depending on what you're looking for
https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results
might be worth a look as well.
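
In short, something like this - assuming the field is really called "category":

  q=dog&fq={!collapse field=category}&rows=50

which returns at most one document per category - and expand=true would get you
the other documents of each group back, should you need them later on.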

-Stefan

On Mar 14, 2017 7:25 PM, "Scott Smith"  wrote:

> Grouping appears to be exactly what I'm looking for.  I added
> "group=true&group.field=category" to my search and It appears that I get
> a list of groups, one document in each group that matches the search along
> with (bonus) the number of documents in the category that match that
> search. Perfect.  Thank you very much.
>
> -Original Message-
> From: Dave [mailto:hastings.recurs...@gmail.com]
> Sent: Monday, March 13, 2017 7:59 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Facet? Search problem
>
> Perhaps look into grouping on that field.
>
> > On Mar 13, 2017, at 9:08 PM, Scott Smith 
> wrote:
> >
> > I'm trying to solve a search problem and wondering if facets (or
> something else) might solve the problem.
> >
> > Let's assume I have a bunch of documents (100 million+).  Each document
> has a category (keyword) assigned to it.  A single document my only have
> one category, but there may be multiple documents with the same category (1
> to a few hundred documents may be in any one category).  There are several
> million categories.
> >
> > Suppose I'm doing a search with a page size of 50.  What I want to do
> is do a search (e.g., "dog") and get back the top 50 documents that
> contain the word "dog" and are all in different categories.  So, there
> needs to be one document from 50 different categories.
> >
> > If that's not possible, then is it possible to do it if I know the 50
> categories up-front and hand that off as part of the search (so "find 50
> documents that match the term 'dog' and there is one document from each of
> 50 specified categories").
> >
> > Is there a way to do this?
> >
> > I'm not extremely knowledgeable about facets, but thought that might be
> a solution.  But, it doesn't have to be facets.
> >
> > Thanks for any help
> >
> > Scott
> >
> >
>


Re: Solr-Ajax client

2014-03-12 Thread Stefan Matheis
Hey Davis 

I've added you to the Contributors Group :)

-Stefan 


On Wednesday, March 12, 2014 at 11:49 PM, Davis Marques wrote:

> Shawn;
> 
> My user name is "davismarques" on the wiki.
> 
> Yes, I am aware that its a bad idea to expose Solr directly to the
> Internet. As you've discovered, we filter all requests to the server so
> that only select requests make it through. I do not yet have documentation
> for the Javascript application, nor advice on configuring a proxy. However,
> documentation and setup instructions are on my to-do list so I'll get to
> that soon.
> 
> Davis
> 
> 
> On Wed, Mar 12, 2014 at 6:03 PM, Shawn Heisey  (mailto:s...@elyograg.org)> wrote:
> 
> > On 3/11/2014 11:48 PM, Davis Marques wrote:
> > > Just a quick announcement and request for guidance:
> > > 
> > > I've developed an open source, Javascript client for Apache Solr. Its
> > very
> > > easy to implement and can be configured to provide faceted search to an
> > > existing Solr index in just a few minutes. The source is available online
> > > here:
> > > 
> > > https://bitbucket.org/esrc/eaccpf-ajax
> > > 
> > > I attempted to add a note about it into the Solr wiki, at
> > > https://wiki.apache.org/solr/IntegratingSolr, but was prevented by the
> > > system. Is there some protocol for posting information to the wiki?
> > > 
> > 
> > 
> > Just give us your username on the wiki and someone will get you added.
> > Note that it is case sensitive, at least when adding it to the
> > permission group.
> > 
> > This is a nice bit of work that you've done, but I'm sure you know that
> > it is inherently unsafe to use a javascript Solr client on a website
> > that is accessible to the Internet. Exposing a Solr server directly to
> > the Internet is a bad idea.
> > 
> > Do you offer any documentation telling potential users how to configure
> > a proxy server to protect Solr? It looks like the Solr server in your
> > online demo is protected by nginx. I'm sure that building its
> > configuration was not a trivial task.
> > 
> > Thanks,
> > Shawn
> > 
> 
> 
> 
> -- 
> Davis M. Marques
> 
> t: 61 0418 450 194
> e: dmarq@gmail.com (mailto:dmarq@gmail.com)
> w: http://www.davismarques.com/
> 
> 




Re: [solr 4.7.0] analysis page: issue with HTMLStripCharFilterFactory

2014-03-16 Thread Stefan Matheis
Hey Dmitry 

We had a similar issue reported and already fixed: 
https://issues.apache.org/jira/browse/SOLR-5800
i'd suspect that this patch fixes your issue too? would like to hear back from 
you, if that's the case :)

-Stefan 


On Saturday, March 15, 2014 at 6:58 PM, Dmitry Kan wrote:

> Hello,
> 
> The following type does not get analyzed properly on the solr 4.7.0
> analysis page:
> 
>  positionIncrementGap="100" autoGeneratePhraseQueries="true">
> 
> 
> 
> 
>  ignoreCase="true"
> words="lang/stopwords_en.txt"
> />
>  generateWordParts="1" generateNumberParts="1" catenateWords="1"
> catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
> 
>  protected="protwords.txt"/>
> 
> 
> 
> 
>  ignoreCase="true" expand="true"/>
>  ignoreCase="true"
> words="lang/stopwords_en.txt"
> />
>  generateWordParts="1" generateNumberParts="1" catenateWords="0"
> catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
> 
>  protected="protwords.txt"/>
> 
> 
> 
> 
> Example text:
> fox jumps
> 
> Screenshot:
> http://pbrd.co/1lEVEIa
> 
> This works fine in solr 4.6.1.
> 
> -- 
> Dmitry
> Blog: http://dmitrykan.blogspot.com
> Twitter: http://twitter.com/dmitrykan
> 
> 




Re: [solr 4.7.0] analysis page: issue with HTMLStripCharFilterFactory

2014-03-16 Thread Stefan Matheis
Oh .. i'm sorry .. late to the party - didn't see the response from Doug .. so 
feel free to ignore that mail (: 


On Sunday, March 16, 2014 at 9:38 PM, Stefan Matheis wrote:

> Hey Dmitry 
> 
> We had a similar issue reported and already fixed: 
> https://issues.apache.org/jira/browse/SOLR-5800
> i'd suspect that this patch fixes your issue too? would like to hear back 
> from you, if that's the case :)
> 
> -Stefan 
> 
> On Saturday, March 15, 2014 at 6:58 PM, Dmitry Kan wrote:
> 
> > Hello,
> > 
> > The following type does not get analyzed properly on the solr 4.7.0
> > analysis page:
> > 
> >  > positionIncrementGap="100" autoGeneratePhraseQueries="true">
> > 
> > 
> > 
> > 
> >  > ignoreCase="true"
> > words="lang/stopwords_en.txt"
> > />
> >  > generateWordParts="1" generateNumberParts="1" catenateWords="1"
> > catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
> > 
> >  > protected="protwords.txt"/>
> > 
> > 
> > 
> > 
> >  > ignoreCase="true" expand="true"/>
> >  > ignoreCase="true"
> > words="lang/stopwords_en.txt"
> > />
> >  > generateWordParts="1" generateNumberParts="1" catenateWords="0"
> > catenateNumbers="0" catenateAll="0" splitOnCaseChange="1"/>
> > 
> >  > protected="protwords.txt"/>
> > 
> > 
> > 
> > 
> > Example text:
> > fox jumps
> > 
> > Screenshot:
> > http://pbrd.co/1lEVEIa
> > 
> > This works fine in solr 4.6.1.
> > 
> > -- 
> > Dmitry
> > Blog: http://dmitrykan.blogspot.com
> > Twitter: http://twitter.com/dmitrykan
> > 
> > 
> > 
> 
> 



Re: dih data-config.xml onImportEnd event

2014-03-27 Thread Stefan Matheis
I would suggest you read the replies to your last mail (containing the very 
same question) first? 

-Stefan 


On Thursday, March 27, 2014 at 1:56 PM, Andreas Owen wrote:

> i would like to call a url after the import is finished with the onImportEnd
> event. how can i do this?
> 
> 




Re: Show the score in the search result

2014-04-17 Thread Stefan Matheis
That's exactly what Jack mentioned, you're defining an invariant for fl, which 
ignores everything you provide at runtime.  

From http://wiki.apache.org/solr/SearchHandler#Configuration

"invariants - provides param values that will be used in spite of any values 
provided at request time. They are a way of letting the Solr maintainer lock 
down the options available to Solr clients."
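
For example - a stripped-down sketch, not your actual handler config:

  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="fl">*,score</str>   <!-- can be overridden per request -->
    </lst>
    <lst name="invariants">
      <str name="fl">*</str>         <!-- wins, no matter what the request says -->
    </lst>
  </requestHandler>

if your fl lives under "invariants", move it to "defaults" (or drop it entirely)
and the fl=*,score you send with the request will be honored again.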

-Stefan  


On Thursday, April 17, 2014 at 9:38 AM, Croci Francesco Luigi (ID SWS) wrote:

> Hello Chris:
>  
> trying to execute 
> http://localhost:7001/solr/collection1/select?q=*%3A*&rows=1&fl=score&wt=json&indent=true&echoParams=true
>  
> I get  
>  
> {
> "error": {
> "msg": "Invalid value 'true' for echoParams parameter, use 'EXPLICIT' or 
> 'ALL'",
> "code": 400
> }
> }
>  
> With echoParams=ALL:
>  
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0,
> "params": {
> "defType": "edismax",
> "echoParams": "ALL",
> "fl": "*,fullText:fullText",
> "indent": "true",
> "q": "*:*",
> "_": "1397719590902",
> "wt": "json",
> "rows": "1",
> "uf": "* -fullText_*",
> "f.all.qf": "rmDocumentTitle rmDocumentArt rmDocumentClass rmDocumentSubclass 
> rmDocumentCatName rmDocumentCatNameEn fullText",
> "fq": "* -language:en -language:de"
> }
> },
> "response": {
> "numFound": 842,
> "start": 0,
> "docs": [
> {
> "rmDocumentTitle": [
> "Ersterfassung"
> ],
> "rmDocumentClass": [
> "Einführung Records Management"
> ],
> "rmDocumentSubclass": [
> "Einführung Records Management"
> ],
> "id": "aabziwlc4hkvgojtzyb4wbebqr4m3",
> "rmDocumentArt": [
> "Ersterfassung"
> ],
> "fullText": [
> " \n \n \n \n \n \n \n \n "
> ],
> "signatureField": "d41d8cd98f00b204e9800998ecf8427e"
> }
> ]
> }
> }
>  
> I adapted the sample on "Instant Apache Solr for Indexing Data How-to" 
> Chapter: Indexing multiple languages(advanced)
>  
>  
> here is the schema:
>  
> 
> 
> 
> 
> 
> 
> 
> 
>  
>  
>  
>  ignoreCase="true"/> 
> 
> 
> 
> 
> 
> 
>  ignoreCase="true"/>
> 
> 
> 
> 
> 
> 
> 
>  positionIncrementGap="100" autoGeneratePhraseQueries="true">
> 
> 
>  
>  
>  
>  ignoreCase="true"/> 
> 
> 
> 
> 
> 
> 
>  ignoreCase="true"/>
> 
> 
> 
> 
> 
> 
>  positionIncrementGap="100">
> 
> 
>  
>  
>  
>  ignoreCase="true" format="snowball" enablePositionIncrements="true"/> 
> 
> 
> 
> 
> 
> 
>  ignoreCase="true"/>
> 
> 
> 
> 
> 
> 
> 
>  
> 
>  multiValued="false" />
> 
>  indexed="false" stored="false" />
> 
> 
>  multiValued="false" />
>  multiValued="true"/>
>  multiValued="true"/>
>  multiValued="true"/>
>  multiValued="true"/>
>  multiValued="true"/>
>  multiValued="true"/>
> 
> 
>  
> fullText
>  
> 
> id
> 
>  
>  
>  
> Here the solrconfig:
>  
> 
> 
> LUCENE_45
> 
>  
> 
>  
> 
> 
> 
> 
> 
>  
>  default="true" />
>  
>  class="org.apache.solr.handler.admin.LukeRequestHandler" />
>  
> 
> 
> deduplication
> 
> 
>  
>  class="solr.extraction.ExtractingRequestHandler">
> 
> true
> false
> false
> true
> true
> ignored_
> link
> fullText
> 
> deduplication
> 
> 
>  
> 
>  class="org.apache.solr.update.processor.SignatureUpdateProcessorFactory">
> false
> signatureField
> true
> content
> 10
> .2
> solr.update.processor.TextProfileSignature
> 
> 
> 
>  
> 
> 
> fullText
> en,de
> en
> language
> true
> false
> 
> 
> 
> 
>  
> 
> 
> edismax
> 
> 
> * -language:en -language:de
> rmDocumentTitle rmDocumentArt rmDocumentClass 
> rmDocumentSubclass rmDocumentCatName rmDocumentCatNameEn fullText
> * -fullText_*
> *,fullText:fullText
> 
> 
>  
> 
> 
> edismax
> fullText_en
> full_Text
> json
> true
> 
> 
> language:en
> fullText_en
> rmDocumentTitle rmDocumentArt rmDocumentClass 
> rmDocumentSubclass rmDocumentCatName rmDocumentCatNameEn fullText_en
> * -fullText_*
> *,fullText:fullText_en
> 
> 
>  
> 
> 
> edismax
> fullText_de
> full_Text
> json
> true
> 
> 
> language:de
> fullText_de
> rmDocumentTitle rmDocumentArt rmDocumentClass 
> rmDocumentSubclass rmDocumentCatName rmDocumentCatNameEn fullText_de
> * -fullText_*
> *,fullText:fullText_de
> 
> 
>  
>  class="org.apache.solr.handler.admin.AdminHandlers" />
>  
> none
>  
> 
> *:*
> 
>  
> 
>  
>  
> Hope this will help.
>  
> Francesco
>  
> -Original Message-
> From: Chris Hostetter [mailto:hossman_luc...@fucit.org]  
> Sent: Mittwoch, 16. April 2014 19:09
> To: solr-user@lucene.apache.org (mailto:solr-user@lucene.apache.org)
> Subject: RE: Show the score in the search result
>  
>  
> : here is the query:
> : 
> http://localhost:7001/solr/collection1/select?q=*%3A*&rows=5&fl=*%2Cscore&wt=json&indent=true&debugQuery=true
> :  
> :  
> : and here the response:
>  
> that's bizarre.
>  
> Do me a favor, and:
>  
> * post the results of 
> .../select?q=*%3A*&rows=1&fl=score&wt=json&indent=true&echoParams=true
> * show us your schema.xml
> * show us your solrconfig.xml
>  
>  
>  
> -Hoss
> http://www.lucidworks.com/
>  
>  




Re: Show the score in the search result

2014-04-17 Thread Stefan Matheis
The wiki contains an explanation for that as well :)

http://wiki.apache.org/solr/CommonQueryParameters#fl

It includes all fields the document actually has. And since there is no 'score' 
field included in your document, it won't get displayed. It's a so-called 
virtual field, which you have to request explicitly.
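
so something like

  /solr/collection1/select?q=foo&fl=*,score

(core name made up) brings it back - or add score to the fl under "defaults" in
your solrconfig.xml if you want it on every request.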

-Stefan 


On Thursday, April 17, 2014 at 11:09 AM, Croci Francesco Luigi (ID SWS) wrote:

> I think you mean this row:
> 
> * ,fullText: ...
> 
> Ok, but what I understood is that the "*" means that ALL the fields are 
> displayed anyway. Or not?
> 
> Francesco
> 
> -Original Message-
> From: Stefan Matheis [mailto:matheis.ste...@gmail.com] 
> Sent: Donnerstag, 17. April 2014 10:04
> To: solr-user@lucene.apache.org (mailto:solr-user@lucene.apache.org)
> Subject: Re: Show the score in the search result
> 
> That's exactly what Jack mentioned, you're defining an invariant for fl, 
> which ignores everything you provide at runtime. 
> 
> From http://wiki.apache.org/solr/SearchHandler#Configuration
> 
> "invariants - provides param values that will be used in spite of any values 
> provided at request time. They are a way of letting the Solr maintainer lock 
> down the options available to Solr clients."
> 
> -Stefan 



Re: Please add me as Solr Contributor

2014-05-01 Thread Stefan Matheis
I’ve added you Keith, go ahead :)

-Stefan  


On Thursday, May 1, 2014 at 4:42 PM, Keith Thoma wrote:

> my wiki username is KeithThoma
>  
> Please add me to the list so I will be able to make updates to the Solr
> Wiki.
>  
>  
> Keith Thoma  



Re: Please add me to Contributors Group

2014-05-15 Thread Stefan Matheis
Hey

I’ve added you, thanks for contributing :)

-Stefan  


On Tuesday, May 13, 2014 at 1:03 PM, Gireesh C. Sahukar wrote:

> Hi,
>  
> I'd like to be added to the contributors group. My wiki username is gireesh
>  
>  
> Thanks
>  
> Gireesh
>  



Re: ContributorsGroup add request - Username: al.krinker

2014-05-16 Thread Stefan Matheis
Al  

i’ve added you :)

minor note aside: being listed in the contributors group in the wiki doesn’t 
mean you can change/commit to the lucene/solr repository automatically. but 
improvements are always welcome, you can read about it on 
https://wiki.apache.org/solr/HowToContribute

-Stefan  


On Thursday, May 15, 2014 at 10:19 PM, Al Krinker wrote:

> Please add me to the list of contributors. Username: al.krinker
>  
> There is some minor css tweaks that I would like to fix.
>  
> I work with Solr almost daily, so I would love to contribute to make it
> better.
>  
> Thanks,
> Al
>  
>  




Re: Email Notification for Success/Failure of Import Process.

2014-05-28 Thread Stefan Matheis
How about using DIH’s EventListeners? 
http://wiki.apache.org/solr/DataImportHandler#EventListeners  
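
A rough sketch of how that looks in data-config.xml - the class name is a
placeholder, you'd implement org.apache.solr.handler.dataimport.EventListener
yourself and do the mail sending in its onEvent method:

  <dataConfig>
    <document onImportEnd="com.example.ImportFinishedListener">
      <entity name="..." query="...">
        ...
      </entity>
    </document>
  </dataConfig>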

-Stefan  


On Wednesday, May 28, 2014 at 5:31 PM, EXTERNAL Taminidi Ravi (ETI, 
Automotive-Service-Solutions) wrote:

> Hi, I am using an XML file for indexing in Solr. I am planning to make this 
> process more automated: creating the XML file and loading it into Solr.
>  
> I would like to get an email once the process is completed. Is there any way 
> this can be achieved in Solr? I am not finding much information on configuring 
> notifications in Solr.
>  
> Also, I am trying DIH using MS SQL. Can someone help by sharing a 
> data-config.xml they are already using for MSSQL, with a few basic steps?
>  
> Thanks
>  
> Ravi  



Re: Analysis browser not working in solr 4.8.1

2014-06-06 Thread Stefan Matheis
I’m not sure that’s a bug in the UI .. in case the underlying service is 
barking with an exception we can’t do anything else than show it to you.  

are you sure the custom filter works as expected? like, verified with a 
unit-test or something along the lines? i can still work with the examples 
provided in the tutorial, so in general .. it works, looks like the only thing 
that doesn’t work is related to your custom components

-Stefan  


On Friday, June 6, 2014 at 1:25 PM, Aman Tandon wrote:

> Hi,
>  
> I created a custom filter for my field named text_reversed, i tried my
> custom filter in solr 4.7.1 and i was able to analyse the result, it works
> fine, but in solr 4.8.1 it gives me the error: *Missing required parameter:
> analysis.fieldvalue.* It is also not working with any field. Here are the
> logs of the error
>  
> 2090419 [http-bio-8984-exec-8] ERROR org.apache.solr.core.SolrCore –
> org.apache.solr.common.SolrException: Missing required parameter:
> analysis.fieldvalue
> at
> org.apache.solr.common.params.RequiredSolrParams.get(RequiredSolrParams.java:49)
> at
> org.apache.solr.handler.FieldAnalysisRequestHandler.resolveAnalysisRequest(FieldAnalysisRequestHandler.java:142)
> at
> org.apache.solr.handler.FieldAnalysisRequestHandler.doAnalysis(FieldAnalysisRequestHandler.java:99)
> at
> org.apache.solr.handler.AnalysisRequestHandlerBase.handleRequestBody(AnalysisRequestHandlerBase.java:60)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at
> org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:241)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1952)
> at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:774)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
> at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
> at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
> at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222)
> at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123)
> at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171)
> at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99)
> at
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:953)
> at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
> at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
> at
> org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1023)
> at
> org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589)
> at
> org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
>  
>  
>  
> With Regards
> Aman Tandon
>  
>  




Re: Request to be added to the ContributorsGroup

2014-07-17 Thread Stefan Matheis
Hey Vitaly  

i’ve added you to the contributors group

-Stefan  


On Thursday, July 17, 2014 at 12:58 PM, p...@satisware.com wrote:

> Hi there,
> Could you please add me as a contributor?
> My username is VitaliyVerbenko
> Thanks in advance!
> -
> Vitaliy Verbenko
> Marketing Guy
> Helprace by Satisware (http://satisware.com/help-desk-software)



Re: solr wiki: 'Support for Solr' page edit policy

2014-07-17 Thread Stefan Matheis
Xavi  

It’s the former :) I’ve added you to the contributors group

-Stefan  


On Thursday, July 17, 2014 at 5:19 PM, jmlucjav wrote:

> Hi guys,
>  
> I don't remember anymore what is the policy to have someone added to this
> page:
>  
> - ask for edit rights and add your own line where needed
> - send someone your line and they'll add it for you.
>  
> If the former, could I get edit permissions for the wiki? My login is
> jmlucjav. If the later, who could I send it to?
>  
> thanks!
> xavi
>  
>  




Re: loading SolrInfoMBeanHandler is slow?

2013-10-15 Thread Stefan Matheis
Shinichiro

Perhaps i don't see it, but nowhere in your log is something related to this 
handler? For me it looks like this:

Oct 15, 2013 4:36:47 PM org.apache.solr.core.SolrCore execute
INFO: [collection1] webapp=/solr path=/admin/mbeans params={stats=true&wt=json} 
status=0 QTime=3


Stefan 


On Tuesday, October 15, 2013 at 6:00 PM, Shinichiro Abe wrote:

> Hi,
> In my Mac OSX when starting Solr as OOTB,
> I always have to wait 30 sec for completely loading.
> It seems that loading SolrInfoMBeanHandler is slow in Solr 4.x (also 4.5).
> Does anyone have the same problem?
> 
> 
> 
> log:
> DEBUG - 2013-10-16 00:51:14.144; 
> org.apache.solr.handler.component.SearchHandler; Adding debug 
> component:org.apache.solr.handler.component.DebugComponent@584391f0
> DEBUG - 2013-10-16 00:51:14.151; org.eclipse.jetty.webapp.WebAppClassLoader; 
> loaded class org.apache.solr.handler.admin.AdminHandlers$StandardHandler from 
> WebAppClassLoader=1510130526@5a02c35e
> DEBUG - 2013-10-16 00:51:14.158; org.eclipse.jetty.webapp.WebAppClassLoader; 
> loaded class org.apache.solr.handler.admin.LukeRequestHandler from 
> WebAppClassLoader=1510130526@5a02c35e
> DEBUG - 2013-10-16 00:51:44.166; org.eclipse.jetty.webapp.WebAppClassLoader; 
> loaded class org.apache.solr.handler.admin.SolrInfoMBeanHandler from 
> WebAppClassLoader=1510130526@5a02c35e
> DEBUG - 2013-10-16 00:51:44.168; org.eclipse.jetty.webapp.WebAppClassLoader; 
> loaded class org.apache.solr.handler.admin.PluginInfoHandler from 
> WebAppClassLoader=1510130526@5a02c35e
> DEBUG - 2013-10-16 00:51:44.169; org.eclipse.jetty.webapp.WebAppClassLoader; 
> loaded class org.apache.solr.handler.admin.ShowFileRequestHandler from 
> WebAppClassLoader=1510130526@5a02c35e
> :
> :
> :
> INFO - 2013-10-16 00:51:44.499; 
> org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener; 
> Loading spell index for spellchecker: default
> INFO - 2013-10-16 00:51:44.499; 
> org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener; 
> Loading spell index for spellchecker: wordbreak
> INFO - 2013-10-16 00:51:44.500; org.apache.solr.core.SolrCore; [collection1] 
> Registered new searcher Searcher@3b46ad8b 
> main{StandardDirectoryReader(segments_1:1:nrt)}
> DEBUG - 2013-10-16 00:51:53.870; org.eclipse.jetty.http.HttpParser; filled 
> 402/402
> DEBUG - 2013-10-16 00:51:53.882; org.eclipse.jetty.server.Server; REQUEST 
> /solr/select on 
> BlockingHttpConnection@4302df5,g=HttpGenerator{s=0,h=-1,b=-1,c=-1},p=HttpParser{s=-5,l=23,c=0},r=1
> 
> 
> 
> Regards,
> Shinichiro Abe
> 
> 




Re: Data import handler with multi tables

2013-10-28 Thread Stefan Matheis
> I think because  is unique. When importing tbl_tableA import first,
> tbl_tableB import after. tbl_tableB has id which the same id in tableA, so
> only data of tableB had indexed with unique id.
> 
> 

That's exactly what happens here :) If the second table had fewer 
records than the first one, you'd still see (some) records from the first table.

> Anyone can help me to configure data import handler that can index all data
> of two (more) tables which have the same id in each table.
> 
> 

that requires the use of a key which is known as "compound key" 
(http://en.wikipedia.org/wiki/Compound_key), f.e. if data comes from Table A .. 
make it A1 instead of (only) 1, A2, B1, B2 .. and so on. you can still index 
the raw id's in another field .. but for the unique key .. you need something 
like that, to get it working.
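
in DIH terms that can be as simple as building the key in the query itself, e.g.

  <entity name="tableA" query="select concat('A', id) as id, id as raw_id, nameA from tbl_tableA"/>
  <entity name="tableB" query="select concat('B', id) as id, id as raw_id, nameB from tbl_tableB"/>

(just a sketch, adjust to your actual columns) .. so the uniqueKey never collides
between the two tables.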


HTH
Stefan



On Monday, October 28, 2013 at 10:45 AM, dtphat wrote:

> Hi,
> I want to import many tables from MySQL. Assume that I have two tables:
> *** Tables 1: tbl_tableA(id, nameA) with data (1, A1), (2, A2), (3, A3).
> *** Tables 2: tbl_tableB(id, nameB) with data (1, B1), (2, B2), (3, B3), (4,
> B4), (5, B5).
> 
> I configure:
> 
>  driver="com.mysql.jdbc.Driver" 
> url="jdbc:mysql://xx" 
> user="xxx" password="xxx" batchSize="1" />
> 
> 
> 
>  query="select * from tbl_tableA">
> 
> 
> 
> 
> 
>  query="select * from tbl_tableB">
> 
> 
> 
> 
> 
> 
> I define nameA, nameB in schema.xml and id is configured by
> id
> 
> When I import data by
> http://localhost:8983/solr/dataimport?command=full-import
> 
> It's successfull. But only data of tbl_tableB had indexed.
> 
> I think because  is unique. When importing tbl_tableA import first,
> tbl_tableB import after. tbl_tableB has id which the same id in tableA, so
> only data of tableB had indexed with unique id.
> 
> Anyone can help me to configure data import handler that can index all data
> of two (more) tables which have the same id in each table.
> 
> Thanks.
> 
> 
> 
> -
> Phat T. Dong
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Data-import-handler-with-multi-tables-tp4098026.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
> 
> 




Re: Data import handler with multi tables

2013-10-29 Thread Stefan Matheis
I've never looked for another way, what's the problem using a compound key?


On Monday, October 28, 2013 at 1:38 PM, dtphat wrote:

> Hi,
> is there no other way to import all the data in this case, other than
> using a compound key?
> Thanks.
> 
> 
> 
> -
> Phat T. Dong
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Re-Data-import-handler-with-multi-tables-tp4098048p4098056.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
> 
> 




Re: Data import handler with multi tables

2013-10-30 Thread Stefan Matheis
that is what i'd call a compound key? :) using multiple attributes to generate a 
unique key across multiple tables ..


On Wednesday, October 30, 2013 at 2:10 AM, dtphat wrote:

> yes, I've just used concat(id, '_', tableName) instead using compound key. I
> think this is an easy way.
> Thanks.
> 
> 
> 
> -
> Phat T. Dong
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Re-Data-import-handler-with-multi-tables-tp4098048p4098328.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
> 
> 




Re: How to Configure Highlighting for Solr

2013-11-20 Thread Stefan Matheis
Solr is using the UniqueKey you defined for your documents, that shouldn't be a 
problem, since you can look up the document from the list of documents in the 
main response?

And there is actually a ticket, which would allow it to inline the highlight 
response with DocTransformers: https://issues.apache.org/jira/browse/SOLR-3479

-Stefan 


On Wednesday, November 20, 2013 at 4:37 PM, Furkan KAMACI wrote:

> I have setup my highlight as follows:
> 
> true
> name age address
> 
> However I don't want *name* be highlighted *but *included inside response:
> 
> "highlighting": {
> Something_myid: {
> name: "Something bla bla",
> age: "Something age bla bla",
> address: "Something age bla bla"
> }
> }
> 
> *or:*
> 
> I want to group them on name field instead of id:
> 
> "highlighting": {
> Something bla bla: {
> age: "Something age bla bla",
> address: "Something age bla bla"
> }
> }
> 
> *or*
> 
> "highlighting": {
> Something bla bla: {
> name: "Something bla bla",
> age: "Something age bla bla",
> address: "Something age bla bla"
> }
> }
> 
> How can I do *any* of them at Solr 4.5.1? 



Re: What is the right way to list top terms for a given field?

2013-11-27 Thread Stefan Matheis
Since your users shouldn't be allowed at any time to access Solr directly, it's 
up to you to implement that on the client side anyway?

I can't tell if there is a technical difference between the two calls you 
named, but i'd guess that the second might be a more direct way to access this 
information (and probably a bit faster?).

-Stefan 


On Wednesday, November 27, 2013 at 5:22 PM, Dave Seltzer wrote:

> Hello,
> 
> I'm trying to get a list of top terms for a field called "Tags".
> 
> One way to do this would be to query all data *:* and then facet by the
> Tags column:
> /solr/collection/admin/select?q=*:*&rows=0&facet=true&facet.field=Tags
> 
> I've noticed another way to do this is using the luke interface like this:
> /solr/collection/admin/luke?fl=Tags&numTerms=20
> 
> One problem I see with the luke interface is that its inside the /admin/
> path, which to me means that my users shouldn't be able to access it.
> 
> Whats the most SOLRy way to do this?
> 
> Thanks!
> 
> -D 



Block join and child doc facet counts

2013-11-29 Thread Stefan Moises

Hi,

I was playing around with the relatively new block join features in Solr 
4.6, because I need parent-child relations in my documents.
Until now, we are using Solr 4.2.1 with custom plugins, where we have 
integrated some JIRA patches (mainly 
https://issues.apache.org/jira/browse/SOLR-2272) to support (block) 
joins, but now I thought we might get rid of our patched version with 
custom extensions which we need to update with every Solr release.


Anyhow, the block join itself seems to work, but there is still one 
major drawback for me - faceting doesn't really make too much sense for 
me using the block join, because the facet counts are based on the 
parent docs if I join from childs to parents.


So in my example, I have e.g. 6 blue t-shirts in different sizes, when I 
join them to parent docs, I have 2 t-shirts, which is fine because I 
only want to list the parent articles in my results.. but when I facet 
over sizes and colors, I *expect* to get the child facet counts, e.g.

att_color
  blue   6
  red    4
  green  2
att_size
  S      4
  L      2
  XL     6
att_price
  12.99  6

so that I can filter further down to my right child article.

But all I *get* is the parent counts, e.g.
att_color
  blue   0
  red    0
  green  0
att_size
  S      0
  L      0
  XL     0
att_price
  12.99  6

where I have no chance to filter down to my desired size etc.

Here is a sample query I use:
http://localhost:8983/solr-4.6.0/ee/select?q={!parent%20which=%27type_s:parent%27}%2Batt_Farbe:Dark-Blue&wt=xml&facet=true&facet.field=att_Groesse&facet.field=oxprice&facet.field=att_Farbe

There is a good description of block joins and the current drawbacks 
here: http://blog.griddynamics.com/2013/09/solr-block-join-support.html

where the author also states:
"


 Faceting

Facet component for block indexes is quite useful in eCommerce. The 
trickiest thing is to count SKU field values and aggregate them into 
product counts like it was described at the earlier posts 
<http://blog.griddynamics.com/2011/10/solr-experience-search-parent-child.html>. 


"

Will this be fixed / added in one of the upcoming versions or do I have 
to stick with our custom plugins (where we /do/ aggregate the child 
counts into the parent counts) and update them for Solr 4.6.?


Thank you very much,
Stefan


Re: Protect admin pages with jetty

2013-12-01 Thread Stefan Matheis
The thing is, basic auth doesn't work with Ajax requests .. which is why you 
don't see the page loaded.

The Server normally responds in such cases with an 401 Header, which makes your 
browser prompt _you_ for the credentials, sending it back to the server which 
then delivers the page you ask for with an 200.

Since the Ajax library (in that case we use jQuery) treats the 401 Header as a 
typical "error" (and doesn't do anything further with that information) you 
don't get the page nor are you prompted for the credentials.

if possible, i'd suggest you use either access based on ip-addresses or -ranges 
.. or you make the jetty instance listen to localhost only and open an ssh 
tunnel, if needed.
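
for the latter: the jetty that ships with solr usually reads the bind address from
a system property - check your etc/jetty.xml for a line that picks up "jetty.host" -
so something along the lines of

  java -Djetty.host=127.0.0.1 -jar start.jar

would keep it off the public interface. treat that as a sketch, the property name
depends on what your jetty.xml actually does.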

HTH
Stefan



On Sunday, December 1, 2013 at 12:23 PM, Jean-Pierre Lauris wrote:

> Hi,
> I'm using solr 4.4 with jetty and I'm trying to password-protect the
> admin pages.
> 
> I've read many posts from this list, as well as the main solr security doc :
> http://wiki.apache.org/solr/SolrSecurity#Jetty_realm_example
> 
> and added this to my web.xml:
> 
> <security-constraint>
>   <web-resource-collection>
>     <web-resource-name>Solr authenticated application</web-resource-name>
>     <url-pattern>/admin/*</url-pattern>
>   </web-resource-collection>
>   <auth-constraint>
>     <role-name>admin-role</role-name>
>   </auth-constraint>
> </security-constraint>
> 
> <login-config>
>   <auth-method>BASIC</auth-method>
>   <realm-name>Test Realm</realm-name>
> </login-config>
> 
> 
> I also managed my realm settings with jetty, and I guess I'm correct
> on this side, since a simple "/*" protection (password protection for
> all pages) works fine.
> 
> However, for /admin/* the password is asked (and rejected if not
> correct), but the page never loads (I see the left menu, as well as a
> loading image on the main frame, but it never stops loading and never
> show me an error message).
> 
> I use the default start.jar with a custom solr.solr.home.
> 
> There I'm at a point where any help will be appreciated!
> Thanks,
> 
> 




Re: Question about external file fields

2013-12-06 Thread Stefan Matheis
I guess you refer to this post? 
http://1opensourcelover.wordpress.com/2013/07/02/solr-external-file-fields/

If so .. he already provides at least one possible use case:

*snip*

We use Solr to serve our company’s browse pages. Our browse pages are similar 
to how a typical Stackoverflow tag page looks. That “browse” page has the 
question title (which links to the actual page that contains the question, 
comments and replies), view count, snippet of the question text, questioner’s 
profile info, tags and time information. One thing that can change quite 
frequently on such a page is the view count. I believe Stackoverflow uses Redis 
to keep track of the view counts, but we have to currently manage this in Solr, 
since Solr is our only datastore to serve these browse pages.

The problem before Solr 4.0 was that you could not update a single field in a 
document. You have to form the entire document first (either by querying Solr 
or using an alternate data source which contains all the info), update the view 
count and then post the entire document to Solr. With Solr 4+, you can do 
atomic update of a single field – the Solr server internally handles fetching 
the entire document, updating the field and updating its index. But atomic 
update comes with some caveats – you must store all your Solr fields (other 
than copyFields), which can increase your storage space and enable updateLog, 
which can slow down Solr start-up.

For this specific problem of updating a field more frequently than the rest of 
the document, external file fields (EFFs) can come in quite handy. They have 
one main restriction though – you cannot use them in your queries directly i.e. 
they cannot be used in the q parameter directly. But we will see how we can 
circumvent this problem at least partially using function query hacks.

*/snip*

another case, off the top of my head, might be product pricing or updates on stock 
count.
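
to make that a bit more concrete, a minimal sketch - field and file names are made up:

  schema.xml:
    <fieldType name="extPrice" class="solr.ExternalFileField" keyField="id" defVal="0"/>
    <field name="current_price" type="extPrice" indexed="false" stored="false"/>

  in the data dir, a plain text file named external_current_price with lines like:
    doc1=12.99
    doc2=7.50

you can't use such a field in q directly, but function queries and sorting work,
e.g. sort=field(current_price) desc - and the file can be reloaded without
re-indexing the documents (it's re-read when a new searcher is opened).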

- Stefan  


On Thursday, December 5, 2013 at 11:11 PM, yriveiro wrote:

> Hi,
>  
> I read this post http://1opensourcelover.wordpress.com/ about EEF's and I
> found very interesting.
>  
> Can someone give me more use cases about the utility of EEF's?
>  
> /Yago
>  
>  
>  
> -
> Best regards
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Question-about-external-file-fields-tp4105213.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
>  
>  




Re: Cloud graph gone after manually editing clusterstate.json

2013-12-12 Thread Stefan Matheis
Michael

that only shows that the http request is a success .. the white page might be 
caused through
a) invalid json structure -- which should be easy to check
b) missing information inside the clusterstate -- therefore it would be good to 
know the difference between the original file and your modified one.

-Stefan 


On Wednesday, December 11, 2013 at 5:06 PM, michael.boom wrote:

> I had a look, but all looks fine there too:
> 
> [Wed Dec 11 2013 17:04:41 GMT+0100 (CET)] runRoute get #/~cloud
> GET tpl/cloud.html?_=1386777881244
> 200 OK
> 57ms 
> GET /solr/zookeeper?wt=json&_=1386777881308
> 200 OK
> 509ms 
> GET /solr/zookeeper?wt=json&path=%2Flive_nodes&_=1386777881822
> 200 OK
> 62ms 
> GET
> /solr/zookeeper?wt=json&detail=true&path=%2Fclusterstate.json&_=1386777881886
> 200 OK
> 84ms 
> 
> 
> 
> 
> -
> Thanks,
> Michael
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Cloud-graph-gone-after-manually-editing-clusterstate-json-tp4106142p4106172.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
> 
> 




Re: Analysis page broken on trunk?

2014-01-08 Thread Stefan Matheis
Hey Markus

i'm not up to date with the latest changes, but if you can describe how to 
reproduce it, i can try to verify that?

-Stefan  


On Wednesday, January 8, 2014 at 12:44 PM, Markus Jelsma wrote:

> Hi - it seems the analysis page is broken on trunk and it looks like our 4.5 
> and 4.6 builds are unaffected. Can anyone on trunk confirm this? 
> Markus
> 
> 




Re: How to index data in muliValue field with key

2014-01-10 Thread Stefan Matheis
Doesn't work like that - a multivalued field is like a list. PHP doesn't make a 
difference between a list and a map - but Solr does. you can't have a key in 
those fields.  

But based on what info you've provided .. it looks more like you do in fact 
need different analyzers to get the english vs. the thai text properly. You 
could try title_th and title_en and configure those fields according to your 
needs.
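
roughly like this in the schema.xml - the type names depend on which analyzers you
have configured for thai and english:

  <field name="title_en" type="text_en" indexed="true" stored="true"/>
  <field name="title_th" type="text_th" indexed="true" stored="true"/>

and on the PHP side you'd set title_en / title_th as two plain fields instead of
trying to send a map.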

-Stefan  


On Friday, January 10, 2014 at 10:06 AM, rachun wrote:

> *This might be a very simple question but I can't figure it out even after
> googling all day.
>  
> I just want the data to show like this*
>  
> /"record": [
> {
> id: "product001"
> name: "iPhone case",
> title: {
> th: "เคส ไอโฟน5 iphone5 Case วิบวับ ลายผสมมุกสีชมพู back case",
> en: "iphone5 Case pinky pearl back case"
> }
> ]/
>  
> *and this is my schema.xml*
>  
> / multiValued="true"/>
> /
>  
> *this is my php code*
>  
>   
> require_once( 'SolrPhpClient/Apache/Solr/Service.php' );
> $solr = new Apache_Solr_Service( 'localhost', '8983', './solr' );
>  
> if( !$solr->ping() ) {
> echo "Solr service is not responding";
> exit;
> }
>  
> $parts = array(
> 'spark_plug' => array(
> 'id' => 11,
> 'name' => 'Spark plug',
> 'title' => array(
> 'th' => 'เคส sdsdไอโฟน4 iphone4 Case วิบวับ ลายหอไอเฟลสุดเก๋ สีชมพูเข้ม
> ปปback case',
> 'en' => 'New design Iphone 4 case with pink and beutiful back case '
> ),
> 'model' => array( 'a'=>'Boxster', 'b'=>'924' ),
> 'price' => 25.00,
> 'inStock' => true,
> ),
> 'windshield' => array(
> 'id' => 2,
> 'name' => 'Windshield',
> 'model' => '911',
> 'price' => 15.50,
> 'inStock' => false,
> 'url'=>'http://store.weloveshopping.com/joeishiablex12'
> )
> );
>  
> $documents = array();
>  
> foreach ( $parts as $item => $fields ) {
> $doc = new Apache_Solr_Document();
>  
> foreach ( $fields as $key => $value ) {
> if ( is_array( $value ) ) {
> foreach ( $value as $datum ) {
> $doc->setMultiValue( $key, $datum );
> }
> }
> else {
> $doc->$key = $value;
> }
> }
>  
> $documents[] = $doc;
> }
>  
> try
> {
> $solr->addDocuments($documents);
> $solr->commit();
> $solr->optimize();
> }
> catch(Exeption $e)
> {
> echo $e->getMessage();
> }
>  
> ?>
>  
> *but the response that I'm getting now is like below; as you see it has no key (
> th or en) in the response*
>  
> /"record": [
> {
> id: "product001"
> name: "iPhone case",
> title: {
> "เคส ไอโฟน5 iphone5 Case วิบวับ ลายผสมมุกสีชมพู back case",
> "iphone5 Case pinky pearl back case"
> }
> ]/
>  
>  
> *Please help, million thanks  
> Chun.*
>  
>  
>  
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/How-to-index-data-in-muliValue-field-with-key-tp4110653.html
> Sent from the Solr - User mailing list archive at Nabble.com 
> (http://Nabble.com).
>  
>  




Re: Analysis page broken on trunk?

2014-01-10 Thread Stefan Matheis
Sorry for not getting back on this earlier - i've tried several fields w/ 
values from the example docs and that looks pretty okay to me, nothing unusual 
noticed there.

Can you share a screenshot or something like that? And perhaps the input and 
field/fieldtype which doesn't work for you?

-Stefan 


On Wednesday, January 8, 2014 at 2:24 PM, Markus Jelsma wrote:

> Hi - You will see on the left side each filter abbreviation but you won't see 
> anything in the right container. No terms, positions, offsets, nothing.
> 
> Markus
> 
> 
> -Original message-
> > From:Stefan Matheis  > (mailto:matheis.ste...@gmail.com)>
> > Sent: Wednesday 8th January 2014 14:10
> > To: solr-user@lucene.apache.org (mailto:solr-user@lucene.apache.org)
> > Subject: Re: Analysis page broken on trunk?
> > 
> > Hey Markus
> > 
> > i'm not up to date with the latest changes, but if you can describe how to 
> > reproduce it, i can try to verify that?
> > 
> > -Stefan 
> > 
> > 
> > On Wednesday, January 8, 2014 at 12:44 PM, Markus Jelsma wrote:
> > 
> > > Hi - it seems the analysis page is broken on trunk and it looks like our 
> > > 4.5 and 4.6 builds are unaffected. Can anyone on trunk confirm this? 
> > > Markus
> > > 
> > 
> > 
> 
> 
> 




Re: Please add me to wiki contributors

2014-03-04 Thread Stefan Matheis
I've added you Susheel, go ahead :) 

-Stefan 


On Tuesday, March 4, 2014 at 5:09 AM, Susheel Kumar wrote:

> My user name is SusheelKumar for solr wiki.
> 
> -Original Message-
> From: Susheel Kumar [mailto:susheel.ku...@thedigitalgroup.net] 
> Sent: Monday, March 03, 2014 9:36 PM
> To: solr-user@lucene.apache.org (mailto:solr-user@lucene.apache.org)
> Subject: Please add me to wiki contributors
> 
> Hi,
> 
> Can you please add me to wiki contributors. I wanted to add some stats on 
> Linux vs Windows we came across recently, CSV update handler examples, and 
> also wanted to add company name to public server page.
> 
> Thanks,
> Susheel
> 
> 




Re: Too much mail

2014-09-01 Thread Stefan Matheis
Almost, try https://lucene.apache.org/solr/discussion.html

-Stefan 


On Monday, September 1, 2014 at 5:39 PM, William von Hagen wrote:

> unsubscribe
> 
> 




Re: Schemaless configuration using 4.10.2/API returning 404

2014-12-01 Thread Stefan Moises

Hi,

I've had the same problem - double-check your web.xml and make sure all 
the required REST stuff is in there, that is:

...

<servlet>
  <servlet-name>SolrRestApi</servlet-name>
  <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
  <init-param>
    <param-name>org.restlet.application</param-name>
    <param-value>org.apache.solr.rest.SolrSchemaRestApi</param-value>
  </init-param>
</servlet>

<servlet>
  <servlet-name>SolrConfigRestApi</servlet-name>
  <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
  <init-param>
    <param-name>org.restlet.application</param-name>
    <param-value>org.apache.solr.rest.SolrConfigRestApi</param-value>
  </init-param>
</servlet>

...

<servlet-mapping>
  <servlet-name>SolrRestApi</servlet-name>
  <url-pattern>/schema/*</url-pattern>
</servlet-mapping>

<servlet-mapping>
  <servlet-name>SolrConfigRestApi</servlet-name>
  <url-pattern>/config/*</url-pattern>
</servlet-mapping>

...

Cheers,
Stefan

Am 07.11.2014 um 02:30 schrieb nbosecker:

I have some level of logging in Tomcat, and I can see that SolrDispatchFilter
is being invoked:
2014-11-06 17:23:19,016 [catalina-exec-3] DEBUG SolrDispatchFilter
- Closing out SolrRequest: {}

But that really isn't terribly helpful. Is there more logging that I could
invoke to get more info from the Solr side?

Some other logs from admin-type requests look like this:
2014-11-06 17:23:16,547 [catalina-exec-7] INFO  SolrDispatchFilter
- [admin] webapp=null path=/admin/info/logging
params={set=com.scitegic.web.catalog:ALL&wt=json} status=0 QTime=4
2014-11-06 17:23:16,551 [catalina-exec-7] DEBUG SolrDispatchFilter
- Closing out SolrRequest: {set=com.scitegic.web.catalog:ALL&wt=json}

I don't have a proxy in between.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Schemaless-configuration-using-4-10-2-API-returning-404-tp4167869p4168091.html
Sent from the Solr - User mailing list archive at Nabble.com.


--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

***
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



Problem with additional Servlet Filter (SolrRequestParsers Exception)

2014-12-03 Thread Stefan Moises

Hi Folks,

I have a problem with an additional servlet filter defined in my web.xml 
(Tomcat 7.x).
In Solr 4.2.1. we've successfully used a filter for processing POST 
request data (basically, the filter reads the POST data, collects some 
parameters from it and writes it back to the request, based on this 
example: 
http://www.coderanch.com/t/484631/Tomcat/configure-Tomcat-log-POST-data)
To make this work, the filter has to be the first one defined in the  
web.xml.


But now in Solr 4.8.0, if we define that filter, Solr complains that 
there is a filter before it and claims that we have to remove it:


null:org.apache.solr.common.SolrException: Solr requires that request 
parameters sent using application/x-www-form-urlencoded content-type can 
be read through the request input stream. Unfortunately, the stream was 
empty / not available. This may be caused by another servlet filter 
calling ServletRequest.getParameter*() before SolrDispatchFilter, please 
remove it.
at 
org.apache.solr.servlet.SolrRequestParsers$FormDataRequestParser.getParameterIncompatibilityException(SolrRequestParsers.java:622)


Here is my web.xml:

 
<filter>
  <filter-name>post-data-dumper-filter</filter-name>
  <filter-class>filters.PostDataDumperFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>post-data-dumper-filter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

...

<filter>
  <filter-name>SolrRequestFilter</filter-name>
  <filter-class>org.apache.solr.servlet.SolrDispatchFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>SolrRequestFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Any idea how to solve this? Why does Solr have a problem now if there is 
any pre-filter defined?


Thanks a lot,
Stefan



Re: Problem with additional Servlet Filter (SolrRequestParsers Exception)

2014-12-03 Thread Stefan Moises

Hi again,

just for reference, here is my filter class (taken from the example 
posted earlier) - as soon as I iterate over the request parameters, Solr 
gets angry... :(
I have also tried HttpServletRequestWrapper, but that didn't help 
either... nor did this: 
http://ocpsoft.org/opensource/how-to-safely-add-modify-servlet-request-parameter-values/ 
(because I don't want to only add some static values, I still have to 
iterate over the original request's parameters to get my desired data 
out of the request's POST data)

Here goes...

import java.io.IOException;
import java.util.Enumeration;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

public final class PostDataDumperFilter implements Filter {

    private FilterConfig filterConfig = null;

    public void init(FilterConfig filterConfig) throws ServletException {
        this.filterConfig = filterConfig;
    }

    public void destroy() {
        this.filterConfig = null;
    }

    public void setFilterConfig(FilterConfig fc) {
        filterConfig = fc;
    }

    public FilterConfig getFilterConfig() {
        return filterConfig;
    }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        if (filterConfig == null)
            return;

        HttpServletRequest httpRequest = (HttpServletRequest) request;

        // The variable "postdata" is used in the Solr Tomcat Valve
        Enumeration names = httpRequest.getParameterNames();
        StringBuffer output = new StringBuffer();
        boolean skipRequest = false;
        while (names.hasMoreElements()) {
            String name = (String) names.nextElement();
            output.append(name + "=");
            String values[] = httpRequest.getParameterValues(name);
            for (int i = 0; i < values.length; i++) {
                if (i > 0) {
                    output.append("&" + name + "=");
                }
                output.append(values[i]);
                // ignore paging requests, where request parameter "start" is > 0, e.g. "&start=12":
                if (name.equalsIgnoreCase("start") && !values[i].equals("0")) {
                    skipRequest = true;
                }
            }
            if (names.hasMoreElements())
                output.append("&");
        }
        if (!skipRequest) {
            request.setAttribute("postdata", output);
        }

        chain.doFilter(request, response);
    }
}

Thanks,
Stefan


Am 03.12.2014 um 09:47 schrieb Stefan Moises:

Hi Folks,

I have a problem with an additional servlet filter defined in my 
web.xml (Tomcat 7.x).
In Solr 4.2.1. we've successfully used a filter for processing POST 
request data (basically, the filter reads the POST data, collects some 
parameters from it and writes it back to the request, based on this 
example: 
http://www.coderanch.com/t/484631/Tomcat/configure-Tomcat-log-POST-data)
To make this work, the filter has to be the first one defined in the  
web.xml.


But now in Solr 4.8.0, if we define that filter, Solr complains that 
there is a filter before it and claims that we have to remove it:


null:org.apache.solr.common.SolrException: Solr requires that request 
parameters sent using application/x-www-form-urlencoded content-type 
can be read through the request input stream. Unfortunately, the 
stream was empty / not available. This may be caused by another 
servlet filter calling ServletRequest.getParameter*() before 
SolrDispatchFilter, please remove it.
at 
org.apache.solr.servlet.SolrRequestParsers$FormDataRequestParser.getParameterIncompatibilityException(SolrRequestParsers.java:622)


Here is my web.xml:

 
post-data-dumper-filter
filters.PostDataDumperFilter
  
  
post-data-dumper-filter
/*
  

  

  
SolrRequestFilter
org.apache.solr.servlet.SolrDispatchFilter
 
  
SolrRequestFilter
/*
  

Any idea how to solve this? Why does Solr have a problem now if there 
is any pre-filter defined?


Thanks a lot,
Stefan



--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

***
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



Re: Problem with additional Servlet Filter (SolrRequestParsers Exception)

2014-12-04 Thread Stefan Moises
At least I found a good explanation here: 
https://issues.apache.org/jira/browse/STANBOL-437


"This is because of the Filter introduced for STANBOL-401. I have seen 
this as well, have looked into it in more detail and come to the 
conclusion that it is safe.

Quote from the first resource linked below:
> It is informing you that the request entity of media type "application/
> x-www-form-urlencoded" has been consumed by the Servlet layer,
> probably by a servlet filter when one of the methods
> ServletRequest.getParameter* is called.
>
> Servlet does not distinguish between query parameters and form
> parameters when parameters are obtained. So if a form-based request
> entity is present and parameters are obtained it will consume that
> entity thus leaving Jersey with no entity to consume. So Jersey can
> only keep on working in this respect using @FormParam in conjunction
> with the Servlet parameters.
>
> Jersey cannot reliably construct bytes of the request entity because
> query and form parameters are merged together into the servlet
> parameters."

I'm wondering how to solve this now, though... :(

Cheers,
Stefan


Am 03.12.2014 um 17:06 schrieb Michael Sokolov:
Stefan I had problems like this -- and the short answer is -- it's a 
PITA.  Solr is not really designed to be extended in this way.  In 
fact I believe they are moving towards an architecture where this is 
even less possible - folks will be encouraged to run solr using a 
bundled exe, perhaps with jetty embedded (I'm not all up to speed on 
this, maybe someone will correct me), and no war file shipped.


So perhaps a better strategy is to wrap the service at the HTTP layer 
using a proxy.


Still, you can probably fix your immediate problem by extending Solr's 
SolrDispatchFilter class.  Here's how I did that:


https://github.com/msokolov/lux/blob/master/src/main/java/lux/solr/LuxDispatchFilter.java 



-Mike

On 12/03/2014 08:02 AM, Stefan Moises wrote:

Hi again,

just for reference, here is my filter class (taken from the example 
posted earlier) - as soon as I iterate over the request parameters, 
Solr gets angry... :(
I have also tried HttpServletRequestWrapper, but that didn't help 
either... nor did this: 
http://ocpsoft.org/opensource/how-to-safely-add-modify-servlet-request-parameter-values/ 
(because I don't want to only add some static values, I still have to 
iterate over the original request's parameters to get my desired data 
out of the request's POST data)

Here goes...

public final class PostDataDumperFilter implements Filter {
private FilterConfig filterConfig = null;
public void init(FilterConfig filterConfig) throws 
ServletException {

this.filterConfig = filterConfig;
}
public void destroy() {
this.filterConfig = null;
}
public void setFilterConfig(FilterConfig fc) {
filterConfig=fc;
}
public FilterConfig getFilterConfig() {
return filterConfig;
}
public void doFilter(ServletRequest request, ServletResponse 
response,

FilterChain chain) throws IOException, ServletException {
if (filterConfig == null)
return;
HttpServletRequest httpRequest = (HttpServletRequest) request;

// The variable "postdata" is used in the Solr Tomcat Valve
Enumeration names = httpRequest.getParameterNames();
StringBuffer output = new StringBuffer();
boolean skipRequest = false;
while (names.hasMoreElements()) {
String name = (String) names.nextElement();
output.append(name + "=");
String values[] = httpRequest.getParameterValues(name);
for (int i = 0; i < values.length; i++) {
if (i > 0) {
output.append("&" + name + "=");
}
output.append(values[i]);
// ignore paging requests, where request 
parameter "start" is > 0, e.g. "&start=12":
if(name.equalsIgnoreCase("start") &&! 
values[i].equals("0")) {

skipRequest = true;
}
}
if (names.hasMoreElements())
        output.append("&");
}
if(!skipRequest) {
request.setAttribute("postdata", output);
}

chain.doFilter(request, response);
}
}

Thanks,
Stefan


Am 03.12.2014 um 09:47 schrieb Stefan Moises:

Hi Folks,

I have a problem with an additional servlet filter defined in my 
web.xml (Tomcat 7.x).
In Solr 4.2.1. we've successfully used a filter for processing POST 
request data (basically, the filter reads the POST data, collects 
some paramet

Re: Problem with additional Servlet Filter (SolrRequestParsers Exception)

2014-12-04 Thread Stefan Moises

Thanks for your reply!
I've tried to extend Solr's SolrDispatchFilter class, but that doesn't 
work either... as soon as I do anything with the POST data in 
doFilter(), I get that error again ... works fine with GET, though 
(that's what you are using in your class, too...)


So I'm kinda stuck now... :(

The main problem is, we need to use POST in our application since the 
GET requests get too long with all filters applied etc.
And we can't really log the requests then, because you don't get much info 
from POST requests in the Tomcat log... but we need to look into the 
requests for logging and monitoring. That's why I need to read the POST 
data and store it somewhere else to be able to log it with Tomcat later on.


Any more ideas anyone?

Thanks a lot!
Stefan


Am 03.12.2014 um 17:06 schrieb Michael Sokolov:
Stefan I had problems like this -- and the short answer is -- it's a 
PITA.  Solr is not really designed to be extended in this way.  In 
fact I believe they are moving towards an architecture where this is 
even less possible - folks will be encouraged to run solr using a 
bundled exe, perhaps with jetty embedded (I'm not all up to speed on 
this, maybe someone will correct me), and no war file shipped.


So perhaps a better strategy is to wrap the service at the HTTP layer 
using a proxy.


Still, you can probably fix your immediate problem by extending Solr's 
SolrDispatchFilter class.  Here's how I did that:


https://github.com/msokolov/lux/blob/master/src/main/java/lux/solr/LuxDispatchFilter.java 



-Mike

On 12/03/2014 08:02 AM, Stefan Moises wrote:

Hi again,

just for reference, here is my filter class (taken from the example 
posted earlier) - as soon as I iterate over the request parameters, 
Solr gets angry... :(
I have also tried HttpServletRequestWrapper, but that didn't help 
either... nor did this: 
http://ocpsoft.org/opensource/how-to-safely-add-modify-servlet-request-parameter-values/ 
(because I don't want to only add some static values, I still have to 
iterate over the original request's parameters to get my desired data 
out of the request's POST data)

Here goes...

public final class PostDataDumperFilter implements Filter {
private FilterConfig filterConfig = null;
public void init(FilterConfig filterConfig) throws 
ServletException {

this.filterConfig = filterConfig;
}
public void destroy() {
this.filterConfig = null;
}
public void setFilterConfig(FilterConfig fc) {
filterConfig=fc;
}
public FilterConfig getFilterConfig() {
return filterConfig;
}
public void doFilter(ServletRequest request, ServletResponse 
response,

FilterChain chain) throws IOException, ServletException {
if (filterConfig == null)
return;
HttpServletRequest httpRequest = (HttpServletRequest) request;

// The variable "postdata" is used in the Solr Tomcat Valve
Enumeration names = httpRequest.getParameterNames();
StringBuffer output = new StringBuffer();
boolean skipRequest = false;
while (names.hasMoreElements()) {
String name = (String) names.nextElement();
output.append(name + "=");
String values[] = httpRequest.getParameterValues(name);
for (int i = 0; i < values.length; i++) {
if (i > 0) {
output.append("&" + name + "=");
}
output.append(values[i]);
// ignore paging requests, where request 
parameter "start" is > 0, e.g. "&start=12":
if(name.equalsIgnoreCase("start") &&! 
values[i].equals("0")) {

skipRequest = true;
}
}
if (names.hasMoreElements())
output.append("&");
}
    if(!skipRequest) {
request.setAttribute("postdata", output);
}

chain.doFilter(request, response);
}
}

Thanks,
Stefan


Am 03.12.2014 um 09:47 schrieb Stefan Moises:

Hi Folks,

I have a problem with an additional servlet filter defined in my 
web.xml (Tomcat 7.x).
In Solr 4.2.1. we've successfully used a filter for processing POST 
request data (basically, the filter reads the POST data, collects 
some parameters from it and writes it back to the request, based on 
this example: 
http://www.coderanch.com/t/484631/Tomcat/configure-Tomcat-log-POST-data) 

To make this work, the filter has to be the first one defined in 
the  web.xml.


But now in Solr 4.8.0, if we define that filter, Solr complains that 
there is a filter before it and claims that we have to remove it:


null:org.apache.solr.comm

Re: Schemaless configuration using 4.10.2/API returning 404

2014-12-04 Thread Stefan Moises

Oh no, now *I* have that same problem again... :(

I have copied my (running) schemaless core to another server, the core 
runs schemaless (managed-schema is created etc.), solrconfig.xml and 
web.xml are identical besides the paths on the server ...

And yet on one Tomcat (7.0.28) the URL
/solr/schema/fields
is working, but on the other server (7.0.53) it is NOT and I get a 404 
error  :(


I don't see any Exception in the logs and have no idea what's happening 
there...


My solrconfig.xml has this:

<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>

and my web.xml has all the required REST settings:

<servlet>
  <servlet-name>SolrRestApi</servlet-name>
  <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
  <init-param>
    <param-name>org.restlet.application</param-name>
    <param-value>org.apache.solr.rest.SolrRestApi</param-value>
  </init-param>
</servlet>
<servlet>
  <servlet-name>SolrConfigRestApi</servlet-name>
  <servlet-class>org.restlet.ext.servlet.ServerServlet</servlet-class>
  <init-param>
    <param-name>org.restlet.application</param-name>
    <param-value>org.apache.solr.rest.SolrConfigRestApi</param-value>
  </init-param>
</servlet>
...
<servlet-mapping>
  <servlet-name>SolrRestApi</servlet-name>
  <url-pattern>/schema/*</url-pattern>
</servlet-mapping>
<servlet-mapping>
  <servlet-name>SolrConfigRestApi</servlet-name>
  <url-pattern>/config/*</url-pattern>
</servlet-mapping>

but the URL
http://localhost:8983/solr/logstash_logs/schema/fields
gives me a 404 error.

Any ideas anyone? What else may be missing here? Is there anything else 
I need to configure to make the REST schema 


Thanks,
Stefan

Am 01.12.2014 um 20:51 schrieb nbosecker:

Perfect - the web.xml configuration was exactly what was missing.

Thanks so much! ;)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Schemaless-configuration-using-4-10-2-API-returning-404-tp4167869p4171932.html
Sent from the Solr - User mailing list archive at Nabble.com.


--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

*******
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



Re: Schemaless configuration using 4.10.2/API returning 404

2014-12-04 Thread Stefan Moises

Hi,

Yeah, that's the strange thing: admin UI, /select-URLs etc. are 
working fine... just the REST related URLs give me 404 errors... :(
I'll double check if it's the correct Solr instance, but I'm pretty sure 
it is, since the requested core is only running on this instance.


Regards,
Stefan


Am 04.12.2014 um 19:32 schrieb Alexandre Rafalovitch:

The other option is that you are not running your - expected - Solr on
that port but are running a different instance. I found that when I
use the new background scripts, I keep forgetting I have another Solr
running.

Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 4 December 2014 at 13:30, Alexandre Rafalovitch  wrote:

Does the Admin UI work? Because that API endpoint is called by the
Admin UI (forgot which screen though).

Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 4 December 2014 at 13:21, Stefan Moises  wrote:

Oh no, now *I* have that same problem again... :(

I have copied my (running) schemaless core to another server, the core runs
schemaless (managed-schema is created etc.), solrconfig.xml and web.xml are
identical besides the paths on the server ...
And yet on one Tomcat (7.0.28) the URL
/solr/schema/fields
is working, but on the other server (7.0.53) it is NOT and I get a 404 error
 :(

I don't see any Exception in the logs and have no idea what's happening
there...

My solrconfig.xml has this:

   
 true
 managed-schema
   

and my web.xml has all the required REST Settings:

   
 SolrRestApi
org.restlet.ext.servlet.ServerServlet
 
   org.restlet.application
org.apache.solr.rest.SolrRestApi
 
   
   
 SolrConfigRestApi
org.restlet.ext.servlet.ServerServlet
 
   org.restlet.application
org.apache.solr.rest.SolrConfigRestApi
 
   
...
   
 SolrRestApi
 /schema/*
   
   
 SolrConfigRestApi
 /config/*
   

but the URL
http://localhost:8983/solr/logstash_logs/schema/fields
gives me a 404 error.

Any ideas anyone? What else may be missing here? Is there anything else I
need to configure to make the REST schema 

Thanks,
Stefan

Am 01.12.2014 um 20:51 schrieb nbosecker:


Perfect - the web.xml configuration was exactly what was missing.

Thanks so much! ;)



--
View this message in context:
http://lucene.472066.n3.nabble.com/Schemaless-configuration-using-4-10-2-API-returning-404-tp4167869p4171932.html
Sent from the Solr - User mailing list archive at Nabble.com.


--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

*******
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

*******
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



Re: Schemaless configuration using 4.10.2/API returning 404

2014-12-04 Thread Stefan Moises
Don't ask, but I've deleted the webapp and re-deployed it in Tomcat and 
everything is working now...

Thanks for the input!

Regards,
Stefan

Am 04.12.2014 um 19:53 schrieb Stefan Moises:

Hi,

yeah, that's the strange thing admin UI, /select-URLs etc. are 
working fine... just the REST related URLs give me 404 errors... :(
I'll double check if it's the correct Solr instance, but I'm pretty 
sure it is since the requested core is only running on this instance.


Regards,
Stefan


Am 04.12.2014 um 19:32 schrieb Alexandre Rafalovitch:

The other options is that you not running your - expected - Solr on
that port but are running a different instance. I found that when I
use the new background scripts, I keep forgetting I have another Solr
running.

Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and @solrstart
Solr popularizers community: https://www.linkedin.com/groups?gid=6713853


On 4 December 2014 at 13:30, Alexandre Rafalovitch 
 wrote:

Does Admin UI works? Because that API end-points is called by the
Admin UI (forgot which screen though).

Regards,
Alex.
Personal: http://www.outerthoughts.com/ and @arafalov
Solr resources and newsletter: http://www.solr-start.com/ and 
@solrstart
Solr popularizers community: 
https://www.linkedin.com/groups?gid=6713853



On 4 December 2014 at 13:21, Stefan Moises  wrote:

Oh no, now *I* have that same problem again... :(

I have copied my (running) schemaless core to another server, the 
core runs
schemaless (managed-schema is created etc.), solrconfig.xml and 
web.xml are

identical besides the paths on the server ...
And yet on one Tomcat (7.0.28) the URL
/solr/schema/fields
is working, but on the other server (7.0.53) it is NOT and I get a 
404 error

 :(

I don't see any Exception in the logs and have no idea what's 
happening

there...

My solrconfig.xml has this:

   
 true
 managed-schema
   

and my web.xml has all the required REST Settings:

   
 SolrRestApi
org.restlet.ext.servlet.ServerServlet
 
org.restlet.application
org.apache.solr.rest.SolrRestApi
 
   
   
SolrConfigRestApi
org.restlet.ext.servlet.ServerServlet
 
org.restlet.application
org.apache.solr.rest.SolrConfigRestApi
 
   
...
   
 SolrRestApi
 /schema/*
   
   
SolrConfigRestApi
 /config/*
   

but the URL
http://localhost:8983/solr/logstash_logs/schema/fields
gives me a 404 error.

Any ideas anyone? What else may be missing here? Is there anything 
else I

need to configure to make the REST schema 

Thanks,
Stefan

Am 01.12.2014 um 20:51 schrieb nbosecker:


Perfect - the web.xml configuration was exactly what was missing.

Thanks so much! ;)



--
View this message in context:
http://lucene.472066.n3.nabble.com/Schemaless-configuration-using-4-10-2-API-returning-404-tp4167869p4171932.html 


Sent from the Solr - User mailing list archive at Nabble.com.


--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

***
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***





--
Mit den besten Grüßen aus Nürnberg,
Stefan Moises

***
Stefan Moises
Senior Softwareentwickler
Leiter Modulentwicklung

shoptimax GmbH
Ulmenstrasse 52 H
90443 Nürnberg
Amtsgericht Nürnberg HRB 21703
GF Friedrich Schreieck

Fax:  0911/25566-29
moi...@shoptimax.de
http://www.shoptimax.de
***



[POLL] Who & how does use "admin-extra" ?

2013-07-16 Thread Stefan Matheis
Hey List  

I would be interested to hear who is using the "admin-extra" functionality in the 
4.x UI and especially _how_ it is used: for displaying graphs, providing 
links for $other_tool, adding other menu items … ?

The main reason i'm asking is .. i don't use it myself and i'm never quite sure 
what to watch out for whenever i have to touch it. I can test the example we provide, 
but that is very basic and doesn't necessarily reflect real-world scenarios.

So .. tell me - I'm happy to hear everything .. reports on usage, suggestions 
for improvements, … :)

- Stefan  



Re: Request to be added to the ContributorsGroup

2013-07-19 Thread Stefan Matheis
Sure :) Done!

- Stefan  


On Friday, July 19, 2013 at 9:28 PM, ricky gill wrote:

>  
> Hello,
>  
>  
>   
>  
>  
> Would someone please be kind enough and add me to the “ContributorsGroup”? My 
> Wiki Username is: RickyGill
>  
>  
>   
>  
>  
> Thanks again.
>  
>  
>   
>  
>  
> Regards
>  
>  
>   
>  
>  
> Ricky Gill | Managing Director | Jobuzu.co.uk (http://Jobuzu.co.uk)
> Mob: 07455071710 (Any Time) | Tel: 0845 805 2162 (11:00am - 5:30pm)
> Skype: JobuzuLTD | Email: ricky.g...@jobuzu.co.uk 
> (mailto:ricky.g...@jobuzu.co.uk)
> Web: http://jobuzu.co.uk (http://jobuzu.co.uk/)
>  
>  
>  
>  
>  
> We are a NO-SPAM company and respect your privacy if you would like not to 
> receive further emails from us please reply back with the following subject: 
> Remove Me
>  
>  
>  
> Jobuzu Ltd or any of its subsidiary companies may not be held responsible for 
> the content of this email as it may reflect the personal view of the sender 
> and not that of the company. Should you receive this email in error, please 
> notify the sender immediately and do not disclose copy or distribute it. 
> While Jobuzu Ltd runs anti-virus software on all servers and all 
> workstations, it cannot be held responsible for any infected files that you 
> may receive Jobuzu Ltd advises all recipients to virus scan any files.
>  
>  
>   
>  
>  
>  
>  




Re: Top Ten Terms

2013-08-05 Thread Stefan Matheis
Perhaps the question mark is not the best icon for what it does .. suggestions 
always welcome :) But indeed it "only" leads you to the query interface, 
pre-defining "content:[* TO *]" (or whatever the name of the selected field is) 
as a default query.

It's not especially related to the top terms .. it's more about the field itself 
and the possibility to query the records which have a value for the specific 
field you're actually looking at.

HTH
Stefan



On Monday, August 5, 2013 at 10:48 AM, Furkan KAMACI wrote:

> Hi;
> 
> When I click Schema Browser at Admin Page and load term info for a field I
> get top ten terms. When I click question mark near it, it redirects me to
> Solr query page. Query is that at page:
> 
> http://localhost:8983/solr/#/collection1/query?q=content:[* TO *]
> 
> What is the relation between that query and top ten terms and what does
> that query means?
> 
> 




Re: [solr 4.3.1 admin ui] bug in Plugins / Stats Refresh Values option?

2013-08-07 Thread Stefan Matheis
It shouldn't .. but from your description it sounds as if the javascript onclick 
handler doesn't work on the second click (which would then do a plain page reload).

If you use Chrome, Firefox or Safari .. can you open the "developer tools" and 
check if they report any javascript error? That would explain why ..

BTW: You don't have to use that button in the meantime .. just refresh the page 
(that is exactly what the button does). Sure, it should work, but that shouldn't 
stop you from refreshing the page :)

- Stefan 


On Wednesday, August 7, 2013 at 3:00 PM, Dmitry Kan wrote:

> On the first click the values are refreshed. On the second click the page
> gets redirected:
> 
> from: http://localhost:8983/solr/#/statements/plugins/cache
> to: http://localhost:8983/solr/#/
> 
> Is this intentional?
> 
> Regards,
> 
> Dmitry 



Re: [solr 4.3.1 admin ui] bug in Plugins / Stats Refresh Values option?

2013-08-07 Thread Stefan Matheis
Hey Dmitry

That sounds a bit odd .. those are more like notices than real errors .. 
are you sure those are stopping the UI from working? If so .. we should see more 
reports like these.

Can you verify the problem by using another browser?

I mean .. that is really a basic javascript handler .. written directly in the 
DOM, so there's no chance that it doesn't get loaded. And it normally only stops 
working if something really bad happens ;o

- Stefan 


On Wednesday, August 7, 2013 at 4:23 PM, Dmitry Kan wrote:

> Hi Stefan,
> 
> I was able to debug the second click scenario (was tricky to catch it,
> since on click redirect happens and logs statements of the previous are
> gone; worked via setting break-points in plugins.js) and got these errors
> (firefox 23.0 ubuntu):
> 
> [17:20:00.731] TypeError: anonymous function does not always return a value
> @ http://localhost:8983/solr/js/scripts/logging.js?_=4.3.1:294
> [17:20:00.743] TypeError: anonymous function does not always return a value
> @ http://localhost:8983/solr/js/scripts/plugins.js?_=4.3.1:371
> [17:20:00.769] TypeError: anonymous function does not always return a value
> @ http://localhost:8983/solr/js/scripts/replication.js?_=4.3.1:35
> [17:20:00.771] TypeError: anonymous function does not always return a value
> @ http://localhost:8983/solr/js/scripts/schema-browser.js?_=4.3.1:68
> [17:20:00.772] TypeError: anonymous function does not always return a value
> @ http://localhost:8983/solr/js/scripts/schema-browser.js?_=4.3.1:1185
> 
> 
> 
> Dmitry
> 
> 
> On Wed, Aug 7, 2013 at 4:35 PM, Stefan Matheis  (mailto:matheis.ste...@gmail.com)>wrote:
> 
> > It shouldn't .. but from your description sounds as the javascript-onclick
> > handler doesn't work on the second click (which would do a page reload).
> > 
> > if you use chrome, firefox or safari .. can you open the "developer tools"
> > and check if they report any javascript error? which would explain why ..
> > 
> > BTW: You don't have to use that button in the meantime .. just refresh the
> > page (that is exactly what the button does). sure, it should work, but
> > shouldn't stop you from refreshing the page :)
> > 
> > - Stefan
> > 
> > 
> > On Wednesday, August 7, 2013 at 3:00 PM, Dmitry Kan wrote:
> > 
> > > On the first click the values are refreshed. On the second click the page
> > > gets redirected:
> > > 
> > > from: http://localhost:8983/solr/#/statements/plugins/cache
> > > to: http://localhost:8983/solr/#/
> > > 
> > > Is this intentional?
> > > 
> > > Regards,
> > > 
> > > Dmitry 



Re: [POLL] Who & how does use "admin-extra" ?

2013-08-07 Thread Stefan Matheis
Hmmm .. Didn't get a single answer (except from Shawn in #solr, telling me 
he's using a 0 byte file to avoid errors :p) - does that mean that really no 
one is using it?

Don't be afraid .. tell me, one way or another :)

- Stefan  


On Wednesday, July 17, 2013 at 8:50 AM, Stefan Matheis wrote:

> Hey List  
>  
> I would be interested to hear who is using "admin-extra" Functionality in the 
> 4.x UI and especially _how_ that is used: for displaying graphs, providing 
> links for $other_tool, adding other menu items … ?
>  
> The main reason i'm asking is .. i don't use it myself and i'm always curious 
> while i have to touch it. I can test the example we provide, but that is very 
> basic and doesn't necessarily reflect real-world scenarios.
>  
> So .. tell me - I'm happy to hear everything .. reports on usage, suggestions 
> for improvements, … :)
>  
> - Stefan  



Re: Solr4.4 DIH Headache

2013-08-08 Thread Stefan Matheis

> First things first, for the dataimport handler. Is it correct that when I
> visit it from the admin panel it takes me to this URL:
> 
> *http://x.com:8080/solr/#/collection1/dataimport//dataimport
> *
> 
> 
> 

That one is correct. The trailing "/dataimport" is the name of the handler 
you've defined. The former "/dataimport/" is the area in the UI you've 
selected. 

> When I visit it on this page, it seems to load my config correctly in the
> right panel. At the top of the page it has a purple bar at the top which
> says:
> 
> *Last Update: 14:52:25 - No information available (idle)*
After you reloaded the core (for example) there is no information available - 
you have to run a full- or delta-import .. while that one is running and after 
it has finished, you'll see some statistics about the last run.
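You can kick off a full-import either from that screen or directly via the handler 
URL, e.g. http://x.com:8080/solr/collection1/dataimport?command=full-import (add 
&clean=false if you don't want the index emptied before the import) - afterwards 
the bar shows the stats of that run.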

> When I visit this URL it still works except my config doesn't show
> 
> http://x.com:8080/solr/#/collection1/dataimport/
> 
> Is it right to have this as my URL or have a miss configured something?
That URL isn't linked anywhere .. if you remove the trailing slash .. it'll 
redirect you to the first available handler (which might end up being the URL 
you provided at the beginning of your mail).

> Question 2
> 
> In my data-config.xml I have two entities specified, but in the entity
> dropdown box it doesn't give me any options. Does anyone have any ideas what
> might have caused this?
> 
> 

To answer that question, it would be good to actually see your config .. use 
some service like https://paste.apache.org/ to paste your config. 
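
For comparison, a minimal data-config.xml sketch with two named root entities (all names and queries invented here) - the entity dropdown should offer exactly these name values once the UI can load your config:

```
<dataConfig>
  <dataSource driver="com.mysql.jdbc.Driver" url="jdbc:mysql://localhost/shop" user="user" password="pass"/>
  <document>
    <entity name="products" query="SELECT id, name FROM products"/>
    <entity name="categories" query="SELECT id, title FROM categories"/>
  </document>
</dataConfig>
```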


- Stefan 

Re: Issue in Swap Space display at Solr Admin

2013-08-19 Thread Stefan Matheis
Vladimir

Would you mind attaching the output of /solr/admin/system?wt=json ? The last 
20 lines or so should be enough .. i'm only interested in the "system" key 
which contains the memory information. Is that completely missing .. or 
literally 0?

- Stefan 


On Monday, August 19, 2013 at 1:29 PM, Vladimir Vagaitsev wrote:

> Hi,
> 
> I've found an issue in displaying of Swap Space at Solr Admin page. When
> swap page is not used, the admin page shows a NaN percent of usage. Since
> used and total space are stored in double variables, the result of division
> of the used space (0.0Mb) by the total space (0.0Mb) is NaN. Maybe it's
> better to filter this case and put here something like "not used" instead
> of "NaN"?
> 
> You can see the screenshot here: http://pbrd.co/1cTvWPF
> 
> Vladimir 



Re: Issue in Swap Space display at Solr Admin

2013-08-20 Thread Stefan Matheis
Vladimir

That shouldn't matter .. perhaps i did not provide enough information? It depends 
on which host & port you have Solr running on .. and the path you have defined.

based on the tutorial (host + port configuration) you would use something like 
this:

http://localhost:8983/solr/admin/system?wt=json

and that works in single- as well in multicore mode ..

Let me know if that still doesn't work? if so .. which is the address you're 
using to access the UI?

- Stefan 


On Tuesday, August 20, 2013 at 5:32 PM, Vladimir Vagaitsev wrote:

> Stefan,
> 
> I tried to execute the query what you mentioned, but http 404 error was
> returned. We are using multicore solr, I think query format depends on it.
> 
> - Vladimir.
> 
> 
> 2013/8/19 Stefan Matheis  (mailto:matheis.ste...@gmail.com)>
> 
> > Vladimir
> > 
> > Would you mind attaching the output of /solr/admin/system?wt=json ? The
> > last about 20 lines should be enough .. i'm only interested in the "system"
> > key which contains the memory informations. if that is completely missing
> > .. or literally 0?
> > 
> > - Stefan
> > 
> > 
> > On Monday, August 19, 2013 at 1:29 PM, Vladimir Vagaitsev wrote:
> > 
> > > Hi,
> > > 
> > > I've found an issue in displaying of Swap Space at Solr Admin page. When
> > > swap page is not used, the admin page shows a NaN percent of usage. Since
> > > used and total space are stored in double variables, the result of
> > > 
> > 
> > division
> > > of the used space (0.0Mb) by the total space (0.0Mb) is NaN. Maybe it's
> > > better to filter this case and put here something like "not used" instead
> > > of "NaN"?
> > > 
> > > You can see the screenshot here: http://pbrd.co/1cTvWPF
> > > 
> > > Vladimir 



Re: Issue in Swap Space display at Solr Admin

2013-08-21 Thread Stefan Matheis
Vladimir

As Shawn said .. there is/was a change in configuration - my explanation was 
perhaps not the best.
If you try this one, it should work: 
http://localhost:8983/solr/collection1/admin/system?wt=json
Otherwise, let us know which URL you're using to access the Admin UI.

- Stefan 


On Wednesday, August 21, 2013 at 11:50 AM, Vladimir Vagaitsev wrote:

> Stefan. the link still doesn't work.
> 
> I'm usiing solr-4.3.1 and I have the following solr.xml file:
> 
> 
> 
> 
> 
> 
> 
> 
>  hostPort="${jetty.port:8983}" hostContext="${hostContext:solr}">
> 
> 
> 
> 
> 
> 
> 2013/8/20 Shawn Heisey mailto:s...@elyograg.org)>
> 
> > On 8/20/2013 9:49 AM, Stefan Matheis wrote:
> > 
> > > Vladimir
> > > 
> > > That shouldn't matter .. perhaps i did not provide enough information?
> > > depends on which host & port you have solr running .. and the path you 
> > > have
> > > defined.
> > > 
> > > based on the tutorial (host + port configuration) you would use something
> > > like this:
> > > 
> > > http://localhost:8983/solr/**admin/system?wt=json<http://localhost:8983/solr/admin/system?wt=json>
> > > 
> > > and that works in single- as well in multicore mode ..
> > > 
> > > Let me know if that still doesn't work? if so .. which is the address
> > > you're using to access the UI?
> > > 
> > 
> > 
> > That URL doesn't have a core name.
> > 
> > If defaultCoreName is missing from an old-style solr.xml, if it's not a
> > valid core name, or if the user is running 4.4 and has a new-style
> > solr.xml, that URL will not work.
> > 
> > The old-style solr.xml will continue to work in all 4.x versions, you
> > don't need to use the new style.
> > 
> > Thanks,
> > Shawn
> > 
> 
> 
> 




Re: Issue in Swap Space display at Solr Admin

2013-08-21 Thread Stefan Matheis
Thanks Vladimir, i've created SOLR-5178

- Stefan 


On Wednesday, August 21, 2013 at 1:29 PM, Vladimir Vagaitsev wrote:

> Stefan,
> 
> It's done! Here is the "system" key:
> 
> "system":{"name":"Linux","version":"3.2.0-39-virtual","arch":"amd64","systemLoadAverage":3.38,"committedVirtualMemorySize":32454287360,"freePhysicalMemorySize":912945152,"freeSwapSpaceSize":0,"processCpuTime":5627465000,"totalPhysicalMemorySize":71881908224,"totalSwapSpaceSize":0,"openFileDescriptorCount":350,"maxFileDescriptorCount":4096,"uname":"Linux
> ip-xxx-xxx-xxx-xxx 3.2.0-39-virtual #62-Ubuntu SMP Thu Feb 28 00:48:27
> UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n","uptime":" 11:24:39 up 4
> days, 23:03, 1 user, load average: 3.38, 3.10, 2.95\n"}
> 
> 
> 
> 2013/8/21 Stefan Matheis  (mailto:matheis.ste...@gmail.com)>
> 
> > Vladimir
> > 
> > As Shawn said .. there is/was a change in configuration - my explanation
> > was perhaps not the best.
> > if you try that one, it should work:
> > http://localhost:8983/solr/collection1/admin/system?wt=json
> > otherwise, let us know which is the url you're using to access the Admin UI
> > 
> > - Stefan
> > 
> > 
> > On Wednesday, August 21, 2013 at 11:50 AM, Vladimir Vagaitsev wrote:
> > 
> > > Stefan. the link still doesn't work.
> > > 
> > > I'm usiing solr-4.3.1 and I have the following solr.xml file:
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > >  > > hostPort="${jetty.port:8983}" hostContext="${hostContext:solr}">
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 
> > > 2013/8/20 Shawn Heisey mailto:s...@elyograg.org)>
> > > 
> > > > On 8/20/2013 9:49 AM, Stefan Matheis wrote:
> > > > 
> > > > > Vladimir
> > > > > 
> > > > > That shouldn't matter .. perhaps i did not provide enough
> > information?
> > > > > depends on which host & port you have solr running .. and the path
> > > > 
> > > 
> > 
> > you have
> > > > > defined.
> > > > > 
> > > > > based on the tutorial (host + port configuration) you would use
> > something
> > > > > like this:
> > > > > 
> > > > > http://localhost:8983/solr/**admin/system?wt=json<
> > http://localhost:8983/solr/admin/system?wt=json>
> > > > > 
> > > > > and that works in single- as well in multicore mode ..
> > > > > 
> > > > > Let me know if that still doesn't work? if so .. which is the address
> > > > > you're using to access the UI?
> > > > > 
> > > > 
> > > > 
> > > > 
> > > > That URL doesn't have a core name.
> > > > 
> > > > If defaultCoreName is missing from an old-style solr.xml, if it's not a
> > > > valid core name, or if the user is running 4.4 and has a new-style
> > > > solr.xml, that URL will not work.
> > > > 
> > > > The old-style solr.xml will continue to work in all 4.x versions, you
> > > > don't need to use the new style.
> > > > 
> > > > Thanks,
> > > > Shawn
> > > > 
> > > 
> > 
> > 
> 
> 
> 




Re: Different Responses for 4.4 and 3.5 solr index

2013-08-25 Thread Stefan Matheis
Kuchekar (hope that's your first name?)

You didn't tell us .. how do they differ? Do you get an actual error? Or does the 
result contain documents you didn't expect? Or the other way round, are some 
missing that you'd expect to be there?

- Stefan 


On Sunday, August 25, 2013 at 4:43 PM, Kuchekar wrote:

> Hi,
> 
> We get different response when we query 4.4 and 3.5 solr using same
> query params.
> 
> My query param are as following :
> 
> facet=true
> &facet.mincount=1
> &facet.limit=25
> &qf=content^0.0+p_last_name^500.0+p_first_name^50.0+strong_topic^0.0+first_author_topic^0.0+last_author_topic^0.0+title_topic^0.0
> &wt=javabin
> &version=2
> &rows=10
> &f.affiliation_org.facet.limit=150
> &fl=p_id,p_first_name,p_last_name
> &start=0
> &q=Apple
> &facet.field=affiliation_org
> &fq=table:profile
> &fq=num_content:[*+TO+1500]
> &fq=name:"Apple"
> 
> The content in both (solr 4.4 and solr 3.5) are same.
> 
> The solrconfig.xml from 3.5 an 4.4 are similarly constructed.
> 
> Is there something I am missing that might have been changed in 4.4, which
> might be causing this issue. ?. The "qf" params looks same.
> 
> Looking forward for your reply.
> 
> Thanks.
> Kuchekar, Nilesh
> 
> 




Re: Different Responses for 4.4 and 3.5 solr index

2013-08-26 Thread Stefan Matheis
Did you check the scoring? (use fl=*,score to retrieve it) .. additionally 
debugQuery=true might provide more information about how the score was 
calculated.
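
For example, something like this added to the query from your first mail should make the difference visible:

&fl=p_id,p_first_name,p_last_name,score
&debugQuery=true

The debug output contains an "explain" entry per returned document, which usually points straight at the field, boost or term that is scored differently between the two versions.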

- Stefan 


On Monday, August 26, 2013 at 12:46 AM, Kuchekar wrote:

> Hi,
> The response from 4.4 and 3.5 in the current scenario differs in the
> sequence in which results are given us back.
> 
> For example :
> 
> Response from 3.5 solr is : id:A, id:B, id:C, id:D ...
> Response from 4.4 solr is : id C, id:A, id:D, id:B...
> 
> Looking forward your reply.
> 
> Thanks.
> Kuchekar, Nilesh
> 
> 
> On Sun, Aug 25, 2013 at 11:32 AM, Stefan Matheis
> mailto:matheis.ste...@gmail.com)>wrote:
> 
> > Kuchekar (hope that's your first name?)
> > 
> > you didn't tell us .. how they differ? do you get an actual error? or does
> > the result contain documents you didn't expect? or the other way round,
> > that some are missing you'd expect to be there?
> > 
> > - Stefan
> > 
> > 
> > On Sunday, August 25, 2013 at 4:43 PM, Kuchekar wrote:
> > 
> > > Hi,
> > > 
> > > We get different response when we query 4.4 and 3.5 solr using same
> > > query params.
> > > 
> > > My query param are as following :
> > > 
> > > facet=true
> > > &facet.mincount=1
> > > &facet.limit=25
> > > 
> > 
> > &qf=content^0.0+p_last_name^500.0+p_first_name^50.0+strong_topic^0.0+first_author_topic^0.0+last_author_topic^0.0+title_topic^0.0
> > > &wt=javabin
> > > &version=2
> > > &rows=10
> > > &f.affiliation_org.facet.limit=150
> > > &fl=p_id,p_first_name,p_last_name
> > > &start=0
> > > &q=Apple
> > > &facet.field=affiliation_org
> > > &fq=table:profile
> > > &fq=num_content:[*+TO+1500]
> > > &fq=name:"Apple"
> > > 
> > > The content in both (solr 4.4 and solr 3.5) are same.
> > > 
> > > The solrconfig.xml from 3.5 an 4.4 are similarly constructed.
> > > 
> > > Is there something I am missing that might have been changed in 4.4,
> > which
> > > might be causing this issue. ?. The "qf" params looks same.
> > > 
> > > Looking forward for your reply.
> > > 
> > > Thanks.
> > > Kuchekar, Nilesh
> > > 
> > 
> > 
> 
> 
> 




Re: Little XsltResponseWriter documentation bug (Attn: Wiki Admin)

2013-09-05 Thread Stefan Matheis
Dimitri

I've added you to the https://wiki.apache.org/solr/ContributorsGroup - feel 
free to improve the wiki :)

- Stefan 


On Wednesday, September 4, 2013 at 11:46 PM, Dmitri Popov wrote:

> Upayavira,
> 
> I could edit that page myself, but need to be confirmed human according to
> http://wiki.apache.org/solr/FrontPage#How_to_edit_this_Wiki
> 
> My wiki account name is 'pin' just in case.
> 
> On Wed, Sep 4, 2013 at 5:27 PM, Upayavira  (mailto:u...@odoko.co.uk)> wrote:
> 
> > It's a wiki. Can't you correct it?
> > 
> > Upayavira
> > 
> > On Wed, Sep 4, 2013, at 08:25 PM, Dmitri Popov wrote:
> > > Hi,
> > > 
> > > http://wiki.apache.org/solr/XsltResponseWriter (and reference manual PDF
> > > too) become out of date:
> > > 
> > > In configuration section
> > > 
> > >  > > name="xslt"
> > > class="org.apache.solr.request.XSLTResponseWriter">
> > > 5
> > > 
> > > 
> > > class name
> > > 
> > > org.apache.solr.request.XSLTResponseWriter
> > > 
> > > should be replaced by
> > > 
> > > org.apache.solr.response.XSLTResponseWriter
> > > 
> > > Otherwise ClassNotFoundException happens. Change is result of
> > > https://issues.apache.org/jira/browse/SOLR-1602 as far as I see.
> > > 
> > > Apparently can't update that page myself, please could someone else do
> > > that?
> > > 
> > > Thanks! 



Re: Javascript StatelessScriptUpdateProcessor

2013-09-24 Thread Stefan Matheis
Luís would you mind sharing your findings for others / archive?

On Tuesday, September 10, 2013 at 6:49 PM, Luís Portela Afonso wrote:

> Solved
> On Sep 10, 2013, at 4:55 PM, Luís Portela Afonso  (mailto:meligalet...@gmail.com)> wrote:
>  
> > It's that possible to execute queries on a javascript script on 
> > StatelessScriptUpdateProcessor.
> > I'm processing data with a javascript i want to execute a query to the 
> > indexed data of solr.
> >  
> > I know that the javascript script, has an instance of SolrQueryRequest and 
> > SolrQueryResponse, but neither can be used. At least i'm not being able to 
> > use it.  
>  
>  
> Attachments:  
> - smime.p7s
>  




Re: explicite deltaimports by given ids

2013-09-24 Thread Stefan Matheis
Peter

You can access request params that way: ${dataimporter.request.command} (from 
https://wiki.apache.org/solr/DataImportHandler#Accessing_request_parameters) - 
although i'm not sure what happens if you provide the same param multiple times.

Perhaps i'd go with &oid=5,6 as url param and use ".. WHERE oid IN( 
${dataimporter.request.oid} ) .." in the query?
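
A rough, untested sketch of how that could look in the data-config, reusing the table and column names from your mail (the oid request parameter has to be present on every call that runs this query):

```
<entity name="my_table"
        query="select oid, att1, att2 from my_table
               where oid in (${dataimporter.request.oid})">
</entity>
```

called with something like
http://solr-server/solr/mycore/dataimport?command=full-import&clean=false&oid=5,6
(clean=false so the rest of the index isn't wiped before the import).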

-Stefan  


On Friday, September 13, 2013 at 3:37 PM, Peter Schütt wrote:

> Hallo,
> I want to trigger a deltaimportquery by given IDs.
>  
> Example:
>  
> query="select oid, att1, att2 from my_table"
>  
> deltaImportQuery="select oid, att1, att2 from my_table  
> WHERE oid=${dih.delta.OID}"
>  
> deltaQuery="select OID from my_table WHERE
> TIME_STAMP > TO_DATE
> (${dih.last_index_time:VARCHAR}, '-MM-DD HH24:MI:SS')"
>  
> deletedPkQuery="select OID from my_table
> where TIME_STAMP > TO_DATE(${dih.last_index_time:VARCHAR}, '-MM-
> DD HH24:MI:SS')"
>  
>  
> Pseudo URL:  
>  
> http://solr-server/solr/mycore/dataimport/?command=deltaImportQuery&&oid=5
> &&oid=6
>  
> to trigger the update or insert of the datasets with OID in (5, 6).
>  
> What is the correct way?
>  
> Thanks for any hint.
>  
> Ciao
> Peter Schütt
>  
>  




Re: [DIH] Logging skipped documents

2013-09-24 Thread Stefan Matheis
Jérôme

Just had a quick look at the source of 
http://svn.apache.org/viewvc/lucene/dev/trunk/solr/contrib/dataimporthandler/src/java/org/apache/solr/handler/dataimport/XPathEntityProcessor.java?view=markup#l324
 .. which looks like there is a LOG.warn(msg, e); statement on line 331 where msg 
should include the URL of the document that was tried?

Otherwise, if that's not the place where the exception happens .. you might be 
able to add LOG statements yourself and compile Solr from source (again) 
to make that work?

-Stefan  


On Monday, September 23, 2013 at 2:32 PM, jerome.dup...@bnf.fr wrote:

>  
> Hello,
>  
> I have a question: I index documents and a small part of them are skipped (I
> am in onError="skip" mode).
> I'm trying to get a list of them, in order to analyse what's wrong with
> these documents.
> Is there a way to get the list of skipped documents, and some more
> information (my onError="skip" is in an XPathEntityProcessor, the name of
> the processed file would be OK)?
>  
>  
> Cordialement,
> ---
> Jérôme Dupont
> Bibliothèque Nationale de France
> Département des Systèmes d'Information
> Tour T3 - Quai François Mauriac
> 75706 Paris Cedex 13
> téléphone: 33 (0)1 53 79 45 40
> e-mail: jerome.dup...@bnf.fr (mailto:jerome.dup...@bnf.fr)
> ---
>  
>  
>  
> Participez à la Grande Collecte 1914-1918 Avant d'imprimer, pensez à 
> l'environnement.  



Re: Implementing Solr Suggester for Autocomplete (multiple columns)

2013-09-26 Thread Stefan Matheis
That is because of jQuery's changes ..

jQuery.browser (http://api.jquery.com/jQuery.browser/)
Description: Contains flags for the useragent, read from navigator.userAgent. 
This property was removed in jQuery 1.9 and is available only through the 
jQuery.migrate plugin. Please try to use feature detection instead.

Those plugins (like autocomplete, for example) normally have version 
dependencies on jQuery .. to ensure their functionality.

-Stefan  


On Thursday, September 26, 2013 at 2:50 PM, JMill wrote:

> I managed to get rid of the query error by playing jquery file in the
> velocity folder and adding line: " src="#{url_for_solr}/admin/file?file=/velocity/jquery.min.js&contentType=text/javascript">".
> That has not solved the issues the console is showing a new error -
> "[13:42:55.181] TypeError: $.browser is undefined @
> http://localhost:8983/solr/ac/admin/file?file=/velocity/jquery.autocomplete.js&contentType=text/javascript:90";.
> Any ideas?
>  
>  
> On Thu, Sep 26, 2013 at 1:12 PM, JMill  (mailto:apprentice...@googlemail.com)> wrote:
>  
> > Do you know the directory the "#{url_root}" in  > type="text/javascript" src="#{url_root}/js/lib/
> > jquery-1.7.2.min.js"> points too? and same for ""#{url_for_solr}"
> >  > src="#{url_for_solr}/js/lib/jquery-1.7.2.min.js">
> >  
> >  
> > On Wed, Sep 25, 2013 at 7:33 PM, Ing. Jorge Luis Betancourt Gonzalez <
> > jlbetanco...@uci.cu (mailto:jlbetanco...@uci.cu)> wrote:
> >  
> > > Try quering the core where the data has been imported, something like:
> > >  
> > > http://localhost:8983/solr/suggestions/select?q=uc
> > >  
> > > In the previous URL suggestions is the name I give to the core, so this
> > > should change, if you get results, then the problem could be the jquery
> > > dependency. I don't remember doing any change, as far as I know that js
> > > file is bundled with solr (at leat in 3.x) version perhaps you could 
> > > change
> > > it the correct jquery version on solr 4.4, if you go into the admin panel
> > > (in solr 3.6):
> > >  
> > > http://localhost:8983/solr/admin/schema.jsp
> > >  
> > > And inspect the loaded code, the required file (jquery-1.4.2.min.js) gets
> > > loaded in solr 4.4 it should load a similar file, but perhaps a more 
> > > recent
> > > version.
> > >  
> > > Perhaps you could change that part to something like:
> > >  
> > >  > > src="#{url_root}/js/lib/jquery-1.7.2.min.js">
> > >  
> > > Which is used at least on a solr 4.1 that I have laying aroud here
> > > somewhere.
> > >  
> > > In any case you can test the suggestions using the URL that I suggest on
> > > the top of this mail, in that case you should be able to see the possible
> > > results, of course in a less fancy way.
> > >  
> > > - Mensaje original -
> > > De: "JMill"  > > (mailto:apprentice...@googlemail.com)>
> > > Para: solr-user@lucene.apache.org (mailto:solr-user@lucene.apache.org)
> > > Enviados: Miércoles, 25 de Septiembre 2013 13:59:32
> > > Asunto: Re: Implementing Solr Suggester for Autocomplete (multiple
> > > columns)
> > >  
> > > Could it be the jquery library that is the problem? I opened up
> > > solr-home/ac/conf/velocity/head.vm with an editor and I see a reference to
> > > the jquery library but I can't seem to find the directory referenced,
> > > line: 

Re: ContributorsGroup

2013-09-26 Thread Stefan Matheis
Mike

To add you as Contributor i'd need to know your Username? :)

Stefan 


On Thursday, September 26, 2013 at 6:50 PM, Mike L. wrote:

>  
> Solr Admins,
>  
>  I've been using Solr for the last couple years and would like to 
> contribute to this awesome project. Can I be added to the Contributorsgroup 
> with also access to update the Wiki?
>  
> Thanks in advance.
>  
> Mike L.
> 
> 




Re: Submitting Multiple JSON documents from Solr Admin Documents

2013-10-02 Thread Stefan Matheis
Dennis

Have a look at the wiki .. last code-block in the section: 
http://wiki.apache.org/solr/UpdateJSON#Example
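
For the archive, the gist of it: the plain "JSON" document type on that screen wraps your input into a single add/doc command (that's the '{"add":{ "doc":' prefix visible in your error messages), so for several documents either switch to the raw "Solr Command" document type and send one "add" per document, roughly like

```
{
  "add": { "doc": { "id": "change.me",  "title": "change.me" } },
  "add": { "doc": { "id": "change.me2", "title": "change.me" } }
}
```

or post a plain JSON array of documents ([{...},{...}]) straight to the /update handler.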

-Stefan 


On Wednesday, October 2, 2013 at 5:46 PM, Dennis Brundage wrote:

> Is it possible to submit multiple JSON documents through the Solr Admin
> Documents page?
> 
> The default json is: {"id":"change.me (http://change.me)","title":"change.me 
> (http://change.me)"}. If I try to add
> a second document such as {"id":"change.me2","title":"change.me 
> (http://change.me)"}, and
> submit them together, I get parsing errors. I have tried:
> 
> 1.
> {"id":"change.me (http://change.me)","title":"change.me 
> (http://change.me)"}{"id":"change2.me 
> (http://change2.me)","title":"change.me2"}
> 2.
> {{"id":"change.me (http://change.me)","title":"change.me 
> (http://change.me)"}{"id":"change2.me 
> (http://change2.me)","title":"change.me2"}}
> 3.
> {"id":"change.me (http://change.me)","title":"change.me 
> (http://change.me)"},{"id":"change2.me 
> (http://change2.me)","title":"change.me2"}
> 4.
> {{"id":"change.me (http://change.me)","title":"change.me 
> (http://change.me)"},{"id":"change2.me 
> (http://change2.me)","title":"change.me2"}}
> 
> The parsing errors for items 2, 3 and 4 are below (the parsing error for 1
> indicates the need for either "," or "}" after the first json document):
> 
> "msg": "Expected string: char={,position=16 BEFORE='{\"add\":{ \"doc\":{{'
> AFTER='\"id\":\"change.me (http://change.me)\",\"title\":\"change.me 
> (http://change.me)\"}{\"'",
> 
> "msg": "Expected string: char={,position=54 BEFORE='{\"add\":{
> \"doc\":{\"id\":\"change.me (http://change.me)\",\"title\":\"change.me 
> (http://change.me)\"},{'
> AFTER='\"id\":\"change.me2\",\"title\":\"change.me (http://change.me)\"},'",
> 
> "msg": "Expected string: char={,position=16 BEFORE='{\"add\":{ \"doc\":{{'
> AFTER='\"id\":\"change.me (http://change.me)\",\"title\":\"change.me 
> (http://change.me)\"},{'",
> 
> 
> 
> 
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Submitting-Multiple-JSON-documents-from-Solr-Admin-Documents-tp4093155.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> 
> 




Re: Submitting Multiple JSON documents from Solr Admin Documents

2013-10-02 Thread Stefan Matheis
Hm, actually this post made me think: perhaps it would be a good idea to 
integrate some kind of client-side validation, so we could report back at least 
those "basic" JSON-format errors?

-Stefan 


On Wednesday, October 2, 2013 at 6:15 PM, Stefan Matheis wrote:

> Dennis
> 
> Have a look at the wiki .. last code-block in the section: 
> http://wiki.apache.org/solr/UpdateJSON#Example
> 
> -Stefan 
> 
> On Wednesday, October 2, 2013 at 5:46 PM, Dennis Brundage wrote:
> 
> > Is it possible to submit multiple JSON documents through the Solr Admin
> > Documents page?
> > 
> > The default json is: {"id":"change.me","title":"change.me"}. If I try to add
> > a second document such as {"id":"change.me2","title":"change.me"}, and
> > submit them together, I get parsing errors. I have tried:
> > 
> > 1.
> > {"id":"change.me (http://change.me)","title":"change.me 
> > (http://change.me)"}{"id":"change2.me 
> > (http://change2.me)","title":"change.me2"}
> > 2.
> > {{"id":"change.me (http://change.me)","title":"change.me 
> > (http://change.me)"}{"id":"change2.me 
> > (http://change2.me)","title":"change.me2"}}
> > 3.
> > {"id":"change.me (http://change.me)","title":"change.me 
> > (http://change.me)"},{"id":"change2.me 
> > (http://change2.me)","title":"change.me2"}
> > 4.
> > {{"id":"change.me (http://change.me)","title":"change.me 
> > (http://change.me)"},{"id":"change2.me 
> > (http://change2.me)","title":"change.me2"}}
> > 
> > The parsing errors for items 2, 3 and 4 are (the parsing error for 1
> > indicates the need for either "," or "}" after the first json document.
> > 
> > "msg": "Expected string: char={,position=16 BEFORE='{\"add\":{ \"doc\":{{'
> > AFTER='\"id\":\"change.me (http://change.me)\",\"title\":\"change.me 
> > (http://change.me)\"}{\"'",
> > 
> > "msg": "Expected string: char={,position=54 BEFORE='{\"add\":{
> > \"doc\":{\"id\":\"change.me (http://change.me)\",\"title\":\"change.me 
> > (http://change.me)\"},{'
> > AFTER='\"id\":\"change.me2\",\"title\":\"change.me (http://change.me)\"},'",
> > 
> > "msg": "Expected string: char={,position=16 BEFORE='{\"add\":{ \"doc\":{{'
> > AFTER='\"id\":\"change.me (http://change.me)\",\"title\":\"change.me 
> > (http://change.me)\"},{'",
> > 
> > 
> > 
> > 
> > --
> > View this message in context: 
> > http://lucene.472066.n3.nabble.com/Submitting-Multiple-JSON-documents-from-Solr-Admin-Documents-tp4093155.html
> > Sent from the Solr - User mailing list archive at Nabble.com.
> > 
> > 
> > 
> 
> 



Re: Solr 4.4.0 on Ubuntu 10.04 with Jetty 6.1 from package Repository

2013-10-10 Thread Stefan Matheis
Is there a specific reason you are trying to use that jetty instead of the 
provided one?

-Stefan 
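
In case the bundled Jetty turns out to be an option after all: the stock 4.4.0 
download can usually be started straight from its example directory, without any 
distro package (paths assume the unpacked solr-4.4.0 archive):

```
cd solr-4.4.0/example
java -jar start.jar
```

Solr should then answer at http://localhost:8983/solr/.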


On Thursday, October 10, 2013 at 11:01 AM, Peter Schmidt wrote:

> Hey folks,
> for some days I have tried to get Solr 4.4.0 working as a webapp with Jetty 6.1
> from the Ubuntu repository, installed with apt-get. First I tried the
> installation according to the wiki http://wiki.apache.org/solr/SolrJetty. Then
> I found this example
> http://www.kingstonlabs.com/blog/how-to-install-solr-36-on-ubuntu-1204/ and
> tried the configuration according to the book Apache Solr 4 Cookbook by
> Rafal Kuc.
> But it seemed to be impossible to run Solr 4.4.0 as a webapp on Ubuntu's
> Jetty 6.1 :(
> Can somebody confirm that it's impossible or give me advice on how to run
> Solr 4.4.0 on Jetty 6.1?
> Regards
> 
> 




Re: Solr Wiki Account

2013-10-10 Thread Stefan Matheis
Sure :) I've added it to https://wiki.apache.org/solr/AdminGroup

-Stefan 


On Thursday, October 10, 2013 at 3:41 PM, Joel Bernstein wrote:

> Hi,
> 
> Can the account JoelBernstein be granted permission to edit the Solr Wiki?
> 
> Thanks,
> Joel
> 
> 




Re: dataimport

2013-03-31 Thread Stefan Matheis
Hey

It well never turn "green" since we have no explicit status for the Importer 
when it's done. But, what did you see when you hit the "Refresh" Button at the 
bottom of the page? are the numbers counting? 

Stefan 
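
As an aside, the importer can also be polled outside the UI. Assuming the handler 
is registered under /dataimport (as in the example configs) and a core named 
"collection1", something like:

```
curl 'http://localhost:8983/solr/collection1/dataimport?command=status'
```

should report "busy" while an import is running and "idle" together with the 
fetched/processed counters once it has finished.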


On Friday, March 29, 2013 at 5:38 PM, A. Lotfi wrote:

> Hi,
> 
> When I hit the Execute button in the Query tab I only see:
> 
> Last Update: 12:34:58
> Indexing since 01s
> Requests: 1 (1/s), Fetched: 0 (0/s), Skipped: 0, Processed: 0 (0/s)
> Started: about an hour ago
> 
> did not see any green entry saying Indexing Completed.
> 
>  Thanks 



Re: 4.2 Admin UI

2013-03-31 Thread Stefan Matheis
Chris

Are those "some" nodes perhaps those you used already before for viewing the 
admin UI? If so, we fixed the underlying Caching-Issue with the upcoming 4.2.1 
Release (https://issues.apache.org/jira/browse/SOLR-4311)

Otherwise, what's the definition of "some" in your case? When does it work, and 
when not? Assuming you're using the same OS/Browser in both cases .. what's the 
difference? one a master, the other one(s) slave(s) - for example? 

Stefan 


On Friday, March 29, 2013 at 9:28 PM, Chris R wrote:

> I've noticed in the Admin UI that on some of my nodes the Core Selector
> combo box doesn't populate. Known issue?
> 
> Chris 



Re: Understanding the Solr Admin page

2013-04-08 Thread Stefan Matheis
Dotan

On Monday, April 8, 2013 at 8:21 AM, Dotan Cohen wrote:
> I notice that some of the Args presented are in black text, and others
> in grey. Why are they presented differently? Where would I have found
> this information in the fine manual?
> 
> 


IIRC there is one ticket open which is related to this. Initially that was not 
meant to highlight specific values .. just a simple even/odd style to make it 
easier to read the different lines - at least that is what I thought it would 
be. Looks like you're the second one to be confused by them, so I'd say we'll 
take it out?


On Monday, April 8, 2013 at 8:21 AM, Dotan Cohen wrote:

> When I start Solr with nohup, the resulting nohup.out file is _huge_.
> How might I start Solr such that INFO is not output, but only WARNINGs
> and SEVEREs are. In particular, I'd rather not log every query, even
> the invalid queries which also log as SEVERE.
> 
> 


Since you're not telling us how you got it started .. it's just a guess :) For 
starters: http://wiki.apache.org/solr/LoggingInDefaultJettySetup otherwise, the 
more advanced one: http://wiki.apache.org/solr/SolrLogging
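
As a rough sketch of the first link: with the stock java.util.logging setup you 
can point the JVM at a properties file that raises the threshold (file name and 
location here are just examples):

```
# logging.properties -- log only WARNING and above
.level = WARNING
java.util.logging.ConsoleHandler.level = WARNING
```

and start Solr with something like:

```
java -Djava.util.logging.config.file=/path/to/logging.properties -jar start.jar
```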

HTH
Stefan


On Monday, April 8, 2013 at 8:21 AM, Dotan Cohen wrote:

> I am expanding my Solr skills and would like to understand the Admin
> page better. I understand that understanding Java memory management
> and Java memory options will help me, and I am reading and
> experimenting on that front, but if there are any concise resources
> that are especially pertinent to Solr I would love to know about them.
> Everything that I've found is either a "do this" one-liner or expects
> Java experience which I don't have and don't know what I need to
> learn.
> 
> I notice that some of the Args presented are in black text, and others
> in grey. Why are they presented differently? Where would I have found
> this information in the fine manual?
> 
> When I start Solr with nohup, the resulting nohup.out file is _huge_.
> How might I start Solr such that INFO is not output, but only WARNINGs
> and SEVEREs are. In particular, I'd rather not log every query, even
> the invalid queries which also log as SEVERE. I thought that this
> would be easy to Google for, but it is not! If there is a concise
> document that examines this issue, I would love to know where on the
> wild wild web it exists.
> 
> Thank you.
> 
> --
> Dotan Cohen
> 
> http://gibberish.co.il
> http://what-is-what.com
> 
> 




Re: /admin/stats.jsp in SolrCloud

2013-04-10 Thread Stefan Matheis
Hey Tim

Whether you're in SolrCloud mode or not does not really matter here .. in 4.x (and afaik 
as well in 3.x) you can find the stats at: 
http://host:port/solr/admin/mbeans?stats=true in xml or json (setting the 
response writer with wt=json) - as you like

HTH
Stefan
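
Spelled out with a concrete core name (collection1 here is only an example), that is:

```
curl 'http://localhost:8983/solr/collection1/admin/mbeans?stats=true&wt=json&indent=true'
```

The cache entries show up under the CACHE category, so adding cat=CACHE to the 
URL should narrow the output down to just those, if your version supports that 
parameter.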



On Wednesday, April 10, 2013 at 9:53 PM, Tim Vaillancourt wrote:

> Hey guys,
> 
> This feels like a silly question already, here goes:
> 
> In SolrCloud it doesn't seem obvious to me where one can grab stats
> regarding caches for a given core using an http call (JSON/XML). Those
> values are available in the web-based app, but I am looking for an http call
> that would return this same data.
> 
> In 3.x this was located at /admin/stats.jsp, and I used a script to grab
> the data, but in SolrCloud I am unclear and would like to add that to the
> docs below:
> 
> http://wiki.apache.org/solr/SolrCaching#Overview
> http://wiki.apache.org/solr/SolrAdminStats
> 
> Thanks!
> 
> Tim 



Re: /admin/stats.jsp in SolrCloud

2013-04-10 Thread Stefan Matheis
To complete my "as well in 3.x" phrase - what i wanted to say is: it was 
already there in the times of 3.x - but because there was stats.jsp .. you know 
:)

On Wednesday, April 10, 2013 at 10:19 PM, Stefan Matheis wrote:

> Hey Tim
> 
> SolrCloud-Mode or not does not really matter for this fact .. in 4.x (and 
> afaik as well in 3.x) you can find the stats here: 
> http://host:port/solr/admin/mbeans?stats=true in xml or json (setting the 
> responsewriter with wt=json) - as you like
> 
> HTH
> Stefan
> 
> 
> On Wednesday, April 10, 2013 at 9:53 PM, Tim Vaillancourt wrote:
> 
> > Hey guys,
> > 
> > This feels like a silly question already, here goes:
> > 
> > In SolrCloud it doesn't seem obvious to me where one can grab stats
> > regarding caches for a given core using an http call (JSON/XML). Those
> > values are available in the web-based app, but I am looking for a http call
> > that would return this same data.
> > 
> > In 3.x this was located at /admin/stats.php, and I used a script to grab
> > the data, but in SolrCloud I am unclear and would like to add that to the
> > docs below:
> > 
> > http://wiki.apache.org/solr/SolrCaching#Overview
> > http://wiki.apache.org/solr/SolrAdminStats
> > 
> > Thanks!
> > 
> > Tim 
> 


