Exception on distributed date facet SOLR-1709

2011-03-18 Thread Viswa S

Folks,
 
We are trying to do some date faceting in our distributed environment and have 
applied SOLR-1709 on trunk. A date facet query throws the exception below; I have 
attached the patched source for reference. Any help would be appreciated.
 
Other Info:
Java ver: 1_6_0_24
Trunk change list: 1022216
 
 
SEVERE: java.lang.ClassCastException: java.util.Date cannot be cast to 
java.lang.Integer
at 
org.apache.solr.handler.component.FacetComponent.countFacets(FacetComponent.java:294)
at 
org.apache.solr.handler.component.FacetComponent.handleResponses(FacetComponent.java:232)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:326)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1325)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
at 
org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547)
at 
org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at 
org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 
  /**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.solr.handler.component;

import java.io.IOException;
import java.net.URL;
import java.util.*;

import org.apache.solr.common.params.CommonParams;
import org.apache.solr.common.params.FacetParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;
import org.apache.solr.common.util.SimpleOrderedMap;
import org.apache.solr.common.util.StrUtils;
import org.apache.solr.common.SolrException;
import org.apache.solr.request.SimpleFacets;
import org.apache.lucene.util.OpenBitSet;
import org.apache.solr.search.QueryParsing;
import org.apache.solr.schema.FieldType;
import org.apache.lucene.queryParser.ParseException;

/**
 * TODO!
 *
 * @version $Id: FacetComponent.java 1004082 2010-10-04 01:54:23Z rmuir $
 * @since solr 1.3
 */
public class FacetComponent extends SearchComponent
{
  public static final String COMPONENT_NAME = "facet";

  @Override
  public void prepare(ResponseBuilder rb) throws IOException
  {
    if (rb.req.getParams().getBool(FacetParams.FACET, false)) {
      rb.setNeedDocSet( true );
      rb.doFacets = true;
    }
  }

  /**
   * Actually run the query
   * @param rb
   */
  @Override
  public void process(ResponseBuilder rb) throws IOException
  {
    if (rb.doFacets) {
      SolrParams params = rb.req.getParams();
      SimpleFacets f = new SimpleFacets(rb.req,
          rb.getResults().do

Re: Greek and English text into the same field

2011-03-18 Thread Upayavira
You're likely going to have to try it and see what works. But here are
some suggestions:

If the content is merged inseparably into one field, maybe you could
index that field twice, as text_en and text_gr using copyField. You
could then have a different analyser for each, and see what results you
get when you do your search against both fields.
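A minimal schema.xml sketch of that first approach (field and type names here are illustrative, not from the thread; text_greek is the type shipped in the default schema):

```xml
<!-- One stored source field, copied into two language-specific indexed fields -->
<field name="text"    type="string"     indexed="false" stored="true"/>
<field name="text_en" type="text"       indexed="true"  stored="false"/>
<field name="text_gr" type="text_greek" indexed="true"  stored="false"/>

<copyField source="text" dest="text_en"/>
<copyField source="text" dest="text_gr"/>
```

A search would then query both fields, e.g. q=text_en:foo OR text_gr:foo, and you can compare which analyser gives the better results.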

If your content is separable, you could have text_en and text_gr, each with
their own language-specific analyser chains, and index your content into
the field relevant for that language.

HTH

Upayavira


On Thu, 17 Mar 2011 18:18 -0700, "abiratsis" 
wrote:
> Hello everyone,
> 
> I have an index that contains text (several fields) that can be in English
> or
> in Greek. I have found the corresponding filters
> 
> solr.GreekLowerCaseFilterFactory
> solr.GreekStemFilterFactory
> 
> for the Greek language, along with the special type text_greek included in
> the default schema.xml file, although I need to know if I can use them
> with
> the existing filters for a text field (embed them into the existing
> configuration for English). 
> 
> So my 1st question is if I can simply add these two filters to the
> existing
> field types, or is extra configuration needed? 
> 
> And the 2nd question is about how to handle the greek
> synonyms-stopwords...
> should I simply add another solr.SynonymFilterFactory filter to the
> existing
> configuration? Should I merge both files (english-greek) together?
> 
> Basically I don't know what the best approach is for handling a
> multilingual
> case like mine, e.g. should I create a separate index for each language?
> 
> Any suggestions appreciated...
> 
> Thanx,
> Alex
> 
> 
> 
> 
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Greek-and-English-text-into-the-same-field-tp2696186p2696186.html
> Sent from the Solr - User mailing list archive at Nabble.com.
> 
--- 
Enterprise Search Consultant at Sourcesense UK, 
Making Sense of Open Source



Re: Retrieving Ranking (Position)

2011-03-18 Thread Upayavira
Not sure if there is a way to make Solr do it by itself. If you cannot
have your app do it (which would be best), you could add an XSLT
stylesheet to Solr that would enhance the result set using the XSLT
position() function.
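A sketch of that idea using Solr's XSLT response writer (drop the file into conf/xslt/ and request with wt=xslt&tr=rank.xsl; the file name and rank attribute are assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Copy each result doc and stamp its position within the page -->
  <xsl:template match="/response">
    <response>
      <xsl:for-each select="result/doc">
        <doc rank="{position()}">
          <xsl:copy-of select="*"/>
        </doc>
      </xsl:for-each>
    </response>
  </xsl:template>
</xsl:stylesheet>
```

Note that position() is relative to the current page of results, so the absolute rank would be the start parameter plus position().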

Upayavira

On Thu, 17 Mar 2011 17:25 -0400, "Jae Joo"  wrote:
> Hi,
> 
> I am looking for the way to retrieve a ranking (or position) of  the
> document matched  in the result set.
> 
> I can get the data, then parse it to find the position of the document
> matched, but am looking for the way if there is a feature.
> 
> Thanks,
> 
> Jae
> 
--- 
Enterprise Search Consultant at Sourcesense UK, 
Making Sense of Open Source



Re: Greek and English text into the same field

2011-03-18 Thread Robert Muir
On Thu, Mar 17, 2011 at 9:18 PM, abiratsis  wrote:
>
> Basically I don't know what the best approach is for handling a multilingual
> case like mine, e.g. should I create a separate index for each language?
>

In this particular case (Greek, English), the two languages use totally distinct
characters, so their terms will never conflate with each other, their
stemmers will never mess with the other language's text, etc.

So I would:
a. switch from LowerCaseFilter to GreekLowerCaseFilter... it
lowercases English the same way, don't worry.
b. add a Greek stopwords file to your StopFilter. StopFilterFactory can
take multiple file arguments... just separate them with a comma.
c. add the Greek stemmer right after the Porter stemmer.

then your field works fine for greek and english...
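Put together as a fieldType, those three steps would look roughly like this (a sketch; the type name and stopword file names are placeholders):

```xml
<fieldType name="text_en_gr" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- (a) GreekLowerCaseFilter lowercases English the same way -->
    <filter class="solr.GreekLowerCaseFilterFactory"/>
    <!-- (b) StopFilterFactory takes multiple files, comma-separated -->
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="stopwords_en.txt,stopwords_el.txt"/>
    <!-- (c) Greek stemmer right after the Porter stemmer -->
    <filter class="solr.PorterStemFilterFactory"/>
    <filter class="solr.GreekStemFilterFactory"/>
  </analyzer>
</fieldType>
```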


Re: Smart Pagination queries

2011-03-18 Thread Erik Hatcher
And don't forget about faceting (and now grouping), which depend upon the 
result set being entirely determined within the query component.  

So if you're filtering out docs client-side, your facet counts could be way 
wrong (and again, also grouping).

Erik

On Mar 17, 2011, at 20:57 , Chris Hostetter wrote:

> 
> 
> : In order to paint "Next" links the app would have to know the total number of
> : records that the user is eligible to read. getNumFound() will tell me that
> : there are 4K total records that Solr returned. If there weren't any
> : entitlement rules then it would have been easier to determine how many
> : "Next" links to paint and, when the user clicks on "Next", pass in the "start"
> : position appropriately in the solr query. Since I have to apply a post filter as
> : and when results are fetched from Solr, is there a better way to achieve
> 
> In an ideal world, you would do this using a custom plugin -- either a 
> SearchComponent or a QParser used in a filter query.
> 
> if you really have to do this client side, then a few basic rules come to 
> mind...
> 
> 1) always over-request.  if you estimate that your user can only view 1/X 
> docs in your total collection, and you want to show Y results per page, 
> then your rows param should be at least 2*X*Y (i picked 2 just for good 
> measure; just because you know the average doesn't mean you know the real 
> distribution)
> 
> 2) however many rows you get back, you need to keep track of the "real" 
> start param you used, and at what point in the current page you had enough docs 
> to show the user -- that will determine your next "start" param.
> 
> 3) whether you have a "next" link or not depends on:
> 3a) whether you had any left over the first time you over-requested (see 
> #2 above)
> 3b) whether numFound was greater than the index of the last item you got.
> ...if 3a and 3b are both false, you definitely don't need a "next" link. 
> if either of them is true then you probably *should* give them a next 
> link, but you still need to be prepared for the possibility that you won't 
> have any more docs (they might only be half way through the result set, 
> but every remaining doc might be something they aren't allowed to see)
> 
> there's really no clean way to avoid the possibility completely, unless 
> you really crank up how aggressively you over-request -- ultimately if you 
> over-request *all* matches, then you can know definitively whether to give 
> them a next link at any point.
> 
> -Hoss
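Hoss's three rules can be sketched as plain client-side arithmetic (a sketch; the class and method names are mine, not Solr API):

```java
// Client-side bookkeeping for over-requesting with post-filtered results.
public class PaginationSketch {

    /** Rule 1: if only ~1/x docs are viewable and a page shows y docs,
     *  request at least 2*x*y rows for good measure. */
    public static int rowsToRequest(int x, int y) {
        return 2 * x * y;
    }

    /** Rule 2: the next "start" param is the real start used for this
     *  request plus the offset at which the current page filled up. */
    public static int nextStart(int realStart, int offsetWherePageFilled) {
        return realStart + offsetWherePageFilled;
    }

    /** Rule 3: show a "next" link if viewable docs were left over after
     *  filling the page (3a), or numFound exceeds the index of the last
     *  doc fetched (3b). May still be a false positive. */
    public static boolean showNextLink(boolean leftOver, long numFound,
                                       long lastFetchedIndex) {
        return leftOver || numFound > lastFetchedIndex;
    }
}
```

As Hoss notes, showNextLink can return true even when every remaining doc is filtered out, so the UI still has to handle an empty next page.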



Re: SOLR building problems

2011-03-18 Thread Erick Erickson
On Thu, Mar 17, 2011 at 10:05 AM, royr  wrote:
> java version "1.6.0_21"
> Java(TM) SE Runtime Environment (build 1.6.0_21-b06)
> Java HotSpot(TM) Server VM (build 17.0-b16, mixed mode)
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/SOLR-building-problems-tp2692916p2693574.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>
Hmmm, then I'm puzzled. I was hoping it was something simple like you
weren't using the right Java version.

All I can say at this point is that lots of people (including me) are
compiling without any problem, your steps look correct and the error
*seems* to be related to somehow using an older Java compiler.

Is there any chance that you're using a different Java compiler than
you think, or you're somehow specifying that the compilation should be
compatible with an earlier version of Java?

Best
Erick


Re: DIH Issue(newbie to solr)

2011-03-18 Thread Gora Mohanty
On Fri, Mar 18, 2011 at 2:45 AM, neha  wrote:
> I am a newbie to Solr. I have an issue with DIH but am unable to pinpoint what
> is causing it. I am using the demo jetty installation of Solr and tried
> to create a project with new schema.xml, solrconfig.xml and data-config.xml
> files. When I run
> "http://131.187.88.221:8983/solr/dataimport?command=full-import" this is
> what I get:
> I am unable to index documents (it doesn't throw any error though).
[...]
> Indexing completed. Added/Updated: 0 documents. Deleted 0 documents.
[...]

Looks like no documents are getting selected. Could you show us
data-config.xml and schema.xml, preferably by uploading them to
pastebin.com, and posting the links here?

Regards,
Gora

P.S. You might want to hide the IP address of your Solr when posting
to a public list.


Re: Segments and Memory Correlate?

2011-03-18 Thread Erick Erickson
543M documents? On a single machine? How big is the index anyway?

I think you're running up against physical memory limitations,
the number of segments is a red herring. You're at a point where
you need to shard your index to multiple machines I'd guess. Or perhaps
that point was some time ago.

Best
Erick

On Thu, Mar 17, 2011 at 2:39 PM, danomano  wrote:
> Hi folks, I ran into problem today where I am no longer able to execute any
> queries :( due to Out of Memory issues.
>
> I am in the process of investigating the use of different mergeFactors, or
> even different merge policies all together.
> My question is if I have many segments (i.e. smaller sized segments), will
> that also reduce the total memory in RAM required for searching?  (my System
> is currently allocated 8GB ram and has a ~255GB index).  (I'm not fully up
> on the 'default merge policy' but I believe with a mergeFactor of 10, that
> would mean each segment should be approaching about 25Gb? with ~543 million
> documents
>
> of note: this is all running on 1 server.
>
> As seen below.
>
> SEVERE: java.lang.OutOfMemoryError: Java heap space
>        at
> org.apache.lucene.search.cache.LongValuesCreator.fillLongValues(LongValuesCreator.java:141)
>        at
> org.apache.lucene.search.cache.LongValuesCreator.validate(LongValuesCreator.java:84)
>        at
> org.apache.lucene.search.cache.LongValuesCreator.create(LongValuesCreator.java:74)
>        at
> org.apache.lucene.search.cache.LongValuesCreator.create(LongValuesCreator.java:37)
>        at
> org.apache.lucene.search.FieldCacheImpl$Cache.createValue(FieldCacheImpl.java:155)
>        at
> org.apache.lucene.search.FieldCacheImpl$Cache.get(FieldCacheImpl.java:188)
>        at
> org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:337)
>        at
> org.apache.lucene.search.FieldComparator$LongComparator.setNextReader(FieldComparator.java:504)
>        at
> org.apache.lucene.search.TopFieldCollector$OneComparatorNonScoringCollector.setNextReader(TopFieldCollector.java:97)
>        at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:207)
>        at org.apache.lucene.search.Searcher.search(Searcher.java:101)
>        at
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1389)
>        at
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1285)
>        at
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:344)
>        at
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:273)
>        at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:210)
>        at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1324)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
>        at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
>        at
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
>        at
> com.openmarket.servletfilters.LogToCSVFilter.doFilter(LogToCSVFilter.java:89)
>        at
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
>        at
> com.openmarket.servletfilters.GZipAutoDeflateFilter.doFilter(GZipAutoDeflateFilter.java:66)
>        at
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
> ...etc
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/Segments-and-Memory-Correlate-tp2694747p2694747.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Sorting on multiValued fields via function query

2011-03-18 Thread Erick Erickson
+1 for both Chris's and Yonik's comments.

On Thu, Mar 17, 2011 at 3:19 PM, Yonik Seeley
 wrote:
> On Thu, Mar 17, 2011 at 2:12 PM, Chris Hostetter
>  wrote:
>> As the code stands now: we fail fast and let the person building the index
>> make a decision.
>
> Indexing two fields when one could work is unfortunate though.
> I think what we should support (eventually) is a max() function that will also
> work on a multi-valued field and select the maximum value (i.e. it will
> simply bypass the check for multi-valued fields).
>
> Then one can utilize sort-by-function to do
> sort=max(author) asc
>
> -Yonik
> http://lucidimagination.com
>


Re: Adding the suggest component

2011-03-18 Thread Erick Erickson
What do you mean "you copied the contents...to the right place"? If you
checked out trunk and copied the files into 1.4.1, you have mixed source
files between disparate versions. All bets are off.

Or do you mean jar files? or???

I'd build the source you checked out (at the Solr level) and use that rather
than try to mix-n-match.

BTW, if you're just starting (as in not in production), you may want to consider
using 3.1, as it's being released even as we speak and has many improvements
over 1.4. You can get a nightly build from here:
https://builds.apache.org/hudson/view/S-Z/view/Solr/

Best
Erick

On Thu, Mar 17, 2011 at 3:36 PM, Brian Lamb
 wrote:
> Hi all,
>
> When I installed Solr, I downloaded the most recent version (1.4.1) I
> believe. I wanted to implement the Suggester (
> http://wiki.apache.org/solr/Suggester). I copied and pasted the information
> there into my solrconfig.xml file but I'm getting the following error:
>
> Error loading class 'org.apache.solr.spelling.suggest.Suggester'
>
> I read up on this error and found that I needed to checkout a newer version
> from SVN. I checked out a full version and copied the contents of
> src/java/org/apache/spelling/suggest to the same location on my set up.
> However, I am still receiving this error.
>
> Did I not put the files in the right place? What am I doing incorrectly?
>
> Thanks,
>
> Brian Lamb
>


Re: Greek and English text into the same field

2011-03-18 Thread abiratsis
OK thanx a lot guys, one last question: is there any need to download and
embed the stopwords-synonyms files, or does solr.war already contain them?

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Greek-and-English-text-into-the-same-field-tp2696186p2697795.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Info about Debugging SOLR in Eclipse

2011-03-18 Thread Erick Erickson
BTW, it's sometimes easier to debug if you forget the servlet
container and just debug through a JUnit test case. Just fire up
an individual test case in debug mode and step wherever you want.

Of course sometimes that's inadequate, but it's worth having in
your bag of tricks.

Best
Erick

On Thu, Mar 17, 2011 at 5:08 PM, Geeta Subramanian
 wrote:
> Hi All,
>
> Thanks for the help... I am now able to debug my solr. :-)
>
> -Original Message-
> From: pkeegan01...@gmail.com [mailto:pkeegan01...@gmail.com] On Behalf Of 
> Peter Keegan
> Sent: 17 March, 2011 3:33 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Info about Debugging SOLR in Eclipse
>
> The instructions refer to the 'Run configuration' menu. Did you try 'Debug 
> configurations'?
>
>
> On Thu, Mar 17, 2011 at 3:27 PM, Peter Keegan wrote:
>
>> Can you use jetty?
>>
>>
>> http://www.lucidimagination.com/developers/articles/setting-up-apache-
>> solr-in-eclipse
>>
>> On Thu, Mar 17, 2011 at 12:17 PM, Geeta Subramanian <
>> gsubraman...@commvault.com> wrote:
>>
>>> Hi,
>>>
>>> Can some please let me know the steps on how can I debug the solr
>>> code in my eclipse?
>>>
>>> I tried to compile the source, use the jars and place in tomcat where
>>> I am running solr. And do remote debugging, but it did not stop at
>>> any break point.
>>> I also tried to write a sample standalone java class to push the document.
>>> But I stopped at solr j classes and not solr server classes.
>>>
>>>
>>> Please let me know if I am making any mistake.
>>>
>>> Regards,
>>> Geeta
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> **Legal Disclaimer***
>>> "This communication may contain confidential and privileged material
>>> for the sole use of the intended recipient.  Any unauthorized review,
>>> use or distribution by others is strictly prohibited.  If you have
>>> received the message in error, please advise the sender by reply
>>> email and delete the message. Thank you."
>>> 
>>>
>>
>>
>
>
>
>
>
>
>
>
>
>
>
> **Legal Disclaimer***
> "This communication may contain confidential and privileged material
> for the sole use of the intended recipient.  Any unauthorized review,
> use or distribution by others is strictly prohibited.  If you have
> received the message in error, please advise the sender by reply
> email and delete the message. Thank you."
> 
>


Re: Exception on distributed date facet SOLR-1709

2011-03-18 Thread Peter Sturge
Hi Viswa,

This patch was originally built for the 3x branch, and I don't see any
ported patch revision or testing for trunk. A lot has changed in
faceting from 3x to trunk, so it will likely need a bit of adjusting
to cater for these changes (e.g. the deprecation of date faceting in favour
of range faceting). Have you tried this patch on the 3x branch?

Thanks,
Peter



On Fri, Mar 18, 2011 at 7:09 AM, Viswa S  wrote:
> Folks,
>
> We are trying to do some date faceting on our distributed environment,
> applied solr-1709 on the trunk. A date facet query throws the below
> exception, I have attached the patched source for reference. Any help would
> be appreciated.
>
> Other Info:
> Java ver: 1_6_0_24
> Trunk change list: 1022216
>
>
>
>
> SEVERE: java.lang.ClassCastException: java.util.Date cannot be cast to
> java.lang.Integer
>
>     at
> org.apache.solr.handler.component.FacetComponent.countFacets(FacetComponent.java:294)
>
>     at
> org.apache.solr.handler.component.FacetComponent.handleResponses(FacetComponent.java:232)
>
>     at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:326)
>
>     at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
>
>     at org.apache.solr.core.SolrCore.execute(SolrCore.java:1325)
>
>     at
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
>
>     at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
>
>     at
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
>
>     at
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
>
>     at
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>
>     at
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
>
>     at
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
>
>     at
> org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
>
>     at
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
>
>     at
> org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>
>     at
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>
>     at org.mortbay.jetty.Server.handle(Server.java:326)
>
>     at
> org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
>
>     at
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
>
>     at
> org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547)
>
>     at
> org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
>
>     at
> org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>
>     at
> org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
>
>     at
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
>
>
>
>
>


Re: SOLR building problems

2011-03-18 Thread royr
It works!! 

Ant was using an old java version from another directory. THANK you:)

--
View this message in context: 
http://lucene.472066.n3.nabble.com/SOLR-building-problems-tp2692916p2697973.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Adding the suggest component

2011-03-18 Thread Brian Lamb
That does seem like a better solution. I downloaded a recent version and
there were the following files/folders:

build.xml
dev-tools
LICENSE.txt
lucene
NOTICE.txt
README.txt
solr

So I did cp -r solr/* /path/to/solr/stuff/ and started solr. I didn't get
any error message but I only got the following messages:

2011-03-18 14:11:02.016:INFO::Logging to STDERR via
org.mortbay.log.StdErrLog
2011-03-18 14:11:02.240:INFO::jetty-6.1-SNAPSHOT
2011-03-18 14:11:02.284:INFO::Started SocketConnector@0.0.0.0:8983

Whereas before I got a bunch of messages indicating various libraries had
been loaded. Additionally, when I go to http://localhost/solr/admin/, I get
the following message:

HTTP ERROR: 404

Problem accessing /solr/admin. Reason:

NOT_FOUND

What did I do incorrectly?

Thanks,

Brian Lamb


On Fri, Mar 18, 2011 at 9:04 AM, Erick Erickson wrote:

> What do you mean "you copied the contents...to the right place"? If you
> checked out trunk and copied the files into 1.4.1, you have mixed source
> files between disparate versions. All bets are off.
>
> Or do you mean jar files? or???
>
> I'd build the source you checked out (at the Solr level) and use that
> rather
> than try to mix-n-match.
>
> BTW, if you're just starting (as in not in production), you may want to
> consider
> using 3.1, as it's being released even as we speak and has many
> improvements
> over 1.4. You can get a nightly build from here:
> https://builds.apache.org/hudson/view/S-Z/view/Solr/
>
> Best
> Erick
>
> On Thu, Mar 17, 2011 at 3:36 PM, Brian Lamb
>  wrote:
> > Hi all,
> >
> > When I installed Solr, I downloaded the most recent version (1.4.1) I
> > believe. I wanted to implement the Suggester (
> > http://wiki.apache.org/solr/Suggester). I copied and pasted the
> information
> > there into my solrconfig.xml file but I'm getting the following error:
> >
> > Error loading class 'org.apache.solr.spelling.suggest.Suggester'
> >
> > I read up on this error and found that I needed to checkout a newer
> version
> > from SVN. I checked out a full version and copied the contents of
> > src/java/org/apache/spelling/suggest to the same location on my set up.
> > However, I am still receiving this error.
> >
> > Did I not put the files in the right place? What am I doing incorrectly?
> >
> > Thanks,
> >
> > Brian Lamb
> >
>


Re: Solr admin page timed out and index updating issues

2011-03-18 Thread Ranma
Thank you, Markus for your reply.


..Well, in the end I think I found a way to make something work!

Apart from the Solr admin page only letting me access it intermittently, I was
eventually able to update the search index and get the web site's search
functionality working.

This is how:

- I had to set the java heap size in /etc/profile by assigning -Xms128M
-Xmx128M -XX:PermSize=128m -XX:MaxPermSize=128m to the _JAVA_OPTIONS variable

[root@xx ~]# vi /etc/profile
...
export _JAVA_OPTIONS="-Xms128M -Xmx128M -XX:PermSize=128m
-XX:MaxPermSize=128m"


- Started solr from ssh:

[root@xx ~]# cd /var/www/vhosts/.net/httpdocs/extension/ezfind/java
[root@xx java]# java -jar start.jar
Picked up _JAVA_OPTIONS: -Xms128M -Xmx128M -XX:PermSize=128m
-XX:MaxPermSize=128m
2011-03-16 16:53:58.759::INFO:  Logging to STDERR via
org.mortbay.log.StdErrLog
2011-03-16 16:54:01.260::INFO:  jetty-6.1.3
2011-03-16 16:54:03.018::INFO:  Extract
jar:file:/var/www/vhosts/xxx.net/httpdocs/extension/ezfind/java/webapps/solr.war!/
to /tmp/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp
...
INFO: SolrUpdateServlet.init() done
2011-03-16 16:54:13.445::INFO:  Started SocketConnector @ 0.0.0.0:8983


- With Solr running, opened a new shell window and launch the command:

[root@xx httpdocs]# php extension/ezfind/bin/php/updatesearchindexsolr.php
-s private --php-exec=php --allow-root-user
Starting object re-indexing
Using 1 concurent process(es)
Number of objects to index: 238
sh: pstotext: command not found
..
29.41%
..
58.82%
..
88.24%
  
100.00%
Optimizing. Please wait ...
Indexing took 23.989723920822 secs ( average: 9.9209145042901 objects/sec )
Finished updating the search index.


At least now, (of course while the server is running) the search seems to
work fine, finally giving the search results!

Note: with regard to the java installation, I also reconfigured the default java
alternative, beyond what was previously done:

[root@xx ~]# alternatives --install /usr/bin/java java
/usr/java/jdk1.6.0_24/bin/java 2
[root@xx ~]# alternatives --config java
There are 2 programs which provide 'java'.
 
  Selection    Command
---
 + 1   /usr/lib/jvm/jre-1.4.2-gcj/bin/java
*  2   /usr/java/jdk1.6.0_24/bin/java
 
Enter to keep the current selection[+], or type selection number: 2
[root@xx ~]# alternatives --config java
 
There are 2 programs which provide 'java'.
 
  Selection    Command
---
   1   /usr/lib/jvm/jre-1.4.2-gcj/bin/java
*+ 2   /usr/java/jdk1.6.0_24/bin/java
...



-
loredanaebook.it
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-admin-page-timed-out-and-index-updating-issues-tp2664429p2698408.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Full text hit term highlighting

2011-03-18 Thread lboutros
Hi,

It seems that we have the same problem. How did you solve it? Did you write
some code for it?

thx,

Ludovic

-
Jouve
France.
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Full-text-hit-term-highlighting-tp2020402p2698440.html
Sent from the Solr - User mailing list archive at Nabble.com.


How to get stopwords and synonyms files for several languages

2011-03-18 Thread abiratsis
Hello everyone,

I am developing a multilingual index, so there is a need to support several
languages. I need some answers to the following questions:

1. Which steps should I follow in order to get (download) all the
stopwords-synonyms files for several languages? 

2. Is there any site containing them? 

3. Should I download them somehow, or are they already embedded in the
solr.war?

Thanx,
Alex

--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-get-stopwords-and-synonyms-files-for-several-lanuages-tp2698494p2698494.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to get stopwords and synonyms files for several languages

2011-03-18 Thread Markus Jelsma
On Friday 18 March 2011 17:09:35 abiratsis wrote:
> Hello everyone,
> 
> I am developing a multilingual index so there is a need for different
> languages support. I need some answers to the follwing questions:
> 
> 1. Which steps should I follow in order to get(download) all the
> stopwords-synonyms files for several languages?

Synonyms largely depend on what you're indexing; there is no general list of 
synonyms. Also, if you expand synonyms at index time, the index grows 
to extreme proportions.

> 
> 2. Is there any site containing them?

The wiki has a nice list for many languages: which stemmer to use, whether 
special lowercasing is needed, and stopwords.

http://wiki.apache.org/solr/LanguageAnalysis

> 
> 3. Should I download them somehow or they are already embedded to the
> solr.war?

They're stored in your SOLR_HOME/conf directory.
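For example, the stock analyzer chains reference them by file name relative to that conf directory (the file names below are the defaults; adjust to taste):

```xml
<!-- stopwords.txt and synonyms.txt live in SOLR_HOME/conf -->
<filter class="solr.StopFilterFactory" ignoreCase="true"
        words="stopwords.txt"/>
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
        ignoreCase="true" expand="true"/>
```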

> 
> Thanx,
> Alex
> 
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/How-to-get-stopwords-and-synonyms-files
> -for-several-lanuages-tp2698494p2698494.html Sent from the Solr - User
> mailing list archive at Nabble.com.

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


RE: Exception on distributed date facet SOLR-1709

2011-03-18 Thread Viswa S


Peter

I haven't; our implementation needs ZK integration. It would be great if you could
give any additional pointers on porting this to trunk.

Thanks
Viswa


> Date: Fri, 18 Mar 2011 13:52:39 +
> Subject: Re: Exception on distributed date facet SOLR-1709
> From: peter.stu...@gmail.com
> To: solr-user@lucene.apache.org
> 
> Hi Viswa,
> 
> This patch was originally built for the 3x branch, and I don't see any
> ported patch revision or testing for trunk. A lot has changed in
> faceting from 3x to trunk, so it will likely need a bit of adjusting
> to cater for these changes (e.g. deprecation of date range in favour
> of range). Have you tried this patch on 3x branch?
> 
> Thanks,
> Peter
> 
> 
> 
> On Fri, Mar 18, 2011 at 7:09 AM, Viswa S  wrote:
> > Folks,
> >
> > We are trying to do some date faceting on our distributed environment,
> > applied solr-1709 on the trunk. A date facet query throws the below
> > exception, I have attached the patched source for reference. Any help would
> > be appreciated.
> >
> > Other Info:
> > Java ver: 1_6_0_24
> > Trunk change list: 1022216
> >
> >
> >
> >
> > SEVERE: java.lang.ClassCastException: java.util.Date cannot be cast to
> > java.lang.Integer
> >
> > at
> > org.apache.solr.handler.component.FacetComponent.countFacets(FacetComponent.java:294)
> >
> > at
> > org.apache.solr.handler.component.FacetComponent.handleResponses(FacetComponent.java:232)
> >
> > at
> > org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:326)
> >
> > at
> > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
> >
> > at org.apache.solr.core.SolrCore.execute(SolrCore.java:1325)
> >
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
> >
> > at
> > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
> >
> > at
> > org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
> >
> > at
> > org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
> >
> > at
> > org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> >
> > at
> > org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> >
> > at
> > org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
> >
> > at
> > org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
> >
> > at
> > org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> >
> > at
> > org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
> >
> > at
> > org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> >
> > at org.mortbay.jetty.Server.handle(Server.java:326)
> >
> > at
> > org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> >
> > at
> > org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:923)
> >
> > at
> > org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:547)
> >
> > at
> > org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> >
> > at
> > org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> >
> > at
> > org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
> >
> > at
> > org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> >
> >
> >
> >
> >
  

Re: How to get stopwords and synonyms files for several languages

2011-03-18 Thread abiratsis
OK thanx Markus, it's clear enough now

--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-get-stopwords-and-synonyms-files-for-several-lanuages-tp2698494p2698566.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to get stopwords and synonyms files for several languages

2011-03-18 Thread abiratsis
Basically I have one more question. By saying that "Synonyms largely depend
on what you're indexing", you mean that I probably need to implement a
mechanism for handling synonyms, right? If yes, do you have any suggestions
on how to implement this?

Thanx,
Alex

--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-get-stopwords-and-synonyms-files-for-several-lanuages-tp2698494p2698593.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: How to get stopwords and synonyms files for several languages

2011-03-18 Thread Markus Jelsma
No, it doesn't need an implementation; it's more dependent on your domain. I 
mean, there is no need to expand synonyms for terms in a biology field while 
you're indexing physics documents.

On Friday 18 March 2011 17:31:23 abiratsis wrote:
> Basically I have one more question. By saying that "Synonyms largely depend
> on what you're indexing", you mean that I probably need to implement a
> mechanism for handling synonyms, right? If yes, do you have any suggestions
> on how to implement this?
> 
> Thanx,
> Alex
> 
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/How-to-get-stopwords-and-synonyms-files
> -for-several-lanuages-tp2698494p2698593.html Sent from the Solr - User
> mailing list archive at Nabble.com.

-- 
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350


SolrServerException: java.net.SocketTimeoutException: Read timed out

2011-03-18 Thread Mike Franon
Hi,

I am seeing this error from another jboss server that connects to solr
for queries/updates

[SolrServiceImpl] Solr server exception
org.apache.solr.client.solrj.SolrServerException:
 java.net.SocketTimeoutException: Read timed out

On the Solr server itself, the only thing I am seeing is

 INFO org.apache.solr.core.SolrCore - SolrDeletionPolicy.onCommit: commits:num=3

commit{dir=/usr/local/jboss-4.0.5.GA/bin/solr/data/index,segFN=segments_oyk,version=1296502816641,generation=32348,filenames=[_tp9.prx,
_tpe.fdt, _tpg.tii, _tpc.prx, _tnk_1j.del, _tpd.fdx, _tp9.fnm,
_tnk.fdt, _tpc.fdx, _tpa.nrm, _tp9.frq, _tph.nrm, _tph.prx, _tpg.fnm,
_tpd.nrm, _tpg.frq, _tph.fdx, _tpa.tii, _tpb.prx, _tp9.tii, _tpa.tis,
_tp9_1.del, _tpe.fdx, _tpe.frq, _tpd.tis, _tp9.fdx, _tnk.tii,
_tpb.fdx, _tpd.fdt, _tnk.prx, _tpb.fdt, _tpd.frq, _tpe.fnm, _tpb.fnm,
_tpg.tis, _tpe.tis, _tpe.tii, _tpd.prx, _tpc.fdt, _tpb.frq, _tpd.fnm,
_tph.fnm, _tpg.fdx, _tpb.tis, _tpg.fdt, _tph.fdt, _tpc.tii, _tpa.prx,
_tpd.tii, _tpa.fdx, _tp9.fdt, _tpf.fdx, _tpe.prx, _tp9.nrm, _tpc.nrm,
_tpg.prx, _tph.tis, _tpb.nrm, _tpg.nrm, _tpf.nrm, _tpd_1.del,
_tnk.tis, _tnk.fdx, _tnk.nrm, _tpc.tis, _tpf.fnm, _tpe.nrm, _tnk.fnm,
_tpf.fdt, _tnk.frq, _tpa.fdt, _tpf.tis, _tpf.frq, _tpb_1.del,
_tpf.tii, _tpa.fnm, _tph.frq, _tpa.frq, segments_oyk, _tph.tii,
_tpc.frq, _tp9.tis, _tpf.prx, _tpb.tii, _tpc.fnm]

commit{dir=/usr/local/jboss-4.0.5.GA/bin/solr/data/index,segFN=segments_oyl,version=1296502816642,generation=32349,filenames=[_tpj.tis,
_tnk.tis, _tnk.fdx, _tnk.nrm, _tnk_1j.del, _tnk.fnm, _tnk.frq,
_tnk.tii, _tnk.fdt, _tpj.fdx, _tnk.prx, _tpj.tii, _tpj.prx, _tpj.nrm,
_tpj.frq, segments_oyl, _tpj.fdt, _tpj.fnm]

commit{dir=/usr/local/jboss-4.0.5.GA/bin/solr/data/index,segFN=segments_oym,version=1296502816643,generation=32350,filenames=[_tpj.tis,
_tpk.fdx, segments_oym, _tnk.fdt, _tpk.tis, _tpk.frq, _tpj.prx,
_tpj.tii, _tpk.nrm, _tpk.fnm, _tpj_1.del, _tpk.prx, _tnk_1k.del,
_tnk.fdx, _tnk.tis, _tnk.nrm, _tnk.fnm, _tnk.frq, _tnk.tii, _tpk.fdt,
_tnk.prx, _tpj.fdx, _tpk.tii, _tpj.nrm, _tpj.frq, _tpj.fnm, _tpj.fdt]


I see this on the Solr server every 30 seconds to a minute on average.

Could that be causing the timeouts?

Thanks


Re: Adding the suggest component

2011-03-18 Thread Darx Oman
Hi
Solr 3.x and 4.x (trunk) include a component called Suggester
http://wiki.apache.org/solr/Suggester
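
Following that wiki page, the component is wired into solrconfig.xml roughly as below (the source field name is only an example — pick a field that holds your suggestion terms):

```xml
<searchComponent class="solr.SpellCheckComponent" name="suggest">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
    <!-- field to pull suggestion terms from (example name) -->
    <str name="field">name</str>
  </lst>
</searchComponent>

<requestHandler class="org.apache.solr.handler.component.SearchHandler" name="/suggest">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggest</str>
    <str name="spellcheck.count">5</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```

Note that the `org.apache.solr.spelling.suggest` classes only exist in 3.x/trunk, which is why they cannot simply be dropped into a 1.4.1 install.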


Re: memory not getting released in tomcat after pushing large documents

2011-03-18 Thread Darx Oman
Hi guys
I'm facing a similar problem,
and I found out it is caused by MS SQL running on the same machine.
By just restarting the MS SQL service, memory goes down.


Re: Implementing Fuzzy Search using OWA operator and Fuzzy Linguistic Quantifier

2011-03-18 Thread Anurag
I have some sample code to implement it, written using Lucene. This code is
not final and needs many modifications. Now I want to embed it with Solr. How
is this possible?

the code is below
//package lia.searching;
import java.util.Arrays;
import java.util.Collections;
//import org.apache.lucene.analysis.standard.StandardAnalyzer;

import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.Scorer;
import org.apache.lucene.analysis.SimpleAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Explanation;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.store.FSDirectory;

public class Explainer {

public static void reverse(float[] array) {
  if (array == null) {
  return;
  }
  int i = 0;
  int j = array.length - 1;
  float tmp;
  while (j > i) {
  tmp = array[j];
  array[j] = array[i];
  array[i] = tmp;
  j--;
  i++;
  }
  }

public static float fun(float r, float a, float b)
{
    // Note: the comparison operators in the original posting were eaten by
    // the archiver's HTML stripping; this reconstructs the usual linear
    // fuzzy-quantifier membership function the surrounding code suggests.
    if (r < a) return 0.0f;
    if (r > b) return 1.0f;
    return (r - a) / (b - a);
}
  
  public static void main(String[] args) throws Exception {

    if (args.length < 3) {
      // placeholders reconstructed; the archiver stripped the originals
      System.err.println("Usage: Explainer <indexDir> <quantifier> <queryTerms...>");
      System.exit(1);
    }

    String indexDir = args[0];
    String options = args[1]; // atleasthalf, most, asmanyaspossible
    String[] queryExpression = new String[args.length - 2];
    for (int i = 2; i < args.length; i++) queryExpression[i - 2] = args[i];
    // ... (the rest of the method was cut off in the original posting)

Re: DIH Issue(newbie to solr)

2011-03-18 Thread neha
Here are the links to the file

SOlr Response
http://pastebin.com/3KJhAe2q

Schema.xml
https://github.com/projectblacklight/blacklight-jetty/blob/master/solr/conf/schema.xml

SOlrConfil.xml
https://github.com/projectblacklight/blacklight-jetty/blob/master/solr/conf/solrconfig.xml

Data-config.xml

http://pastebin.com/ncXr1LzV

An observation, not sure if it is helpful to pinpoint the issue: I get this
message when I try to do a full import
SolrDeletionPolicy.onInit: commits:num=1 error

Stack Trace:

INFO: SolrDeletionPolicy.onInit: commits:num=1
   
commit{dir=/local/home/abc/ruby/Solr/apache-solr-1.4.1/example/solr/data/index,segFN=segments_k,version=1300286691490,generation=20,filenames=[segments_k]
Mar 17, 2011 5:08:20 PM org.apache.solr.core.SolrDeletionPolicy
updateCommits
INFO: newest commit = 1300286691490
Mar 17, 2011 5:08:20 PM org.apache.solr.handler.dataimport.DocBuilder finish
INFO: Import completed successfully
Mar 17, 2011 5:08:20 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start
commit(optimize=true,waitFlush=false,waitSearcher=true,expungeDeletes=false)
Mar 17, 2011 5:08:20 PM org.apache.solr.core.SolrDeletionPolicy onCommit
INFO: SolrDeletionPolicy.onCommit: commits:num=2
   
commit{dir=/local/home/abc/ruby/Solr/apache-solr-1.4.1/example/solr/data/index,segFN=segments_k,version=1300286691490,generation=20,filenames=[segments_k]
   
commit{dir=/local/home/abc/ruby/Solr/apache-solr-1.4.1/example/solr/data/index,segFN=segments_l,version=1300286691491,generation=21,filenames=[segments_l]
Mar 17, 2011 5:08:20 PM org.apache.solr.core.SolrDeletionPolicy
updateCommits
INFO: newest commit = 1300286691491
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher
INFO: Opening Searcher@d1329 main
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming Searcher@d1329 main from Searcher@1dcc2a3 main
   
fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=8,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0,item_subject_topic_facet={field=subject_topic_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_subject_geo_facet={field=subject_geo_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_subject_era_facet={field=subject_era_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_pub_date={field=pub_date,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_language_facet={field=language_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_lc_b4cutter_facet={field=lc_b4cutter_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_lc_alpha_facet={field=lc_alpha_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2},item_lc_1letter_facet={field=lc_1letter_facet,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=2}}
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming result for Searcher@d1329 main
   
fieldValueCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=0,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming Searcher@d1329 main from Searcher@1dcc2a3 main
   
filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=2,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming result for Searcher@d1329 main
   
filterCache{lookups=0,hits=0,hitratio=0.00,inserts=0,evictions=0,size=2,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming Searcher@d1329 main from Searcher@1dcc2a3 main
   
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=2,evictions=0,size=2,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming result for Searcher@d1329 main
   
queryResultCache{lookups=0,hits=0,hitratio=0.00,inserts=2,evictions=0,size=2,warmupTime=0,cumulative_lookups=0,cumulative_hits=0,cumulative_hitratio=0.00,cumulative_inserts=0,cumulative_evictions=0}
Mar 17, 2011 5:08:20 PM org.apache.solr.search.SolrIndexSearcher warm
INFO: autowarming Searcher@d1329 main from Searcher@1dcc2a3 main
   
documentCache{lookups=0

Re: Adding the suggest component

2011-03-18 Thread Geert-Jan Brits
> 2011-03-18 14:11:02.284:INFO::Started SocketConnector@0.0.0.0:8983
Solr started on port 8983

instead of this:
> http://localhost/solr/admin/

try this instead:
http://localhost:8983/solr/admin/ 

Cheers,
Geert-Jan



2011/3/18 Brian Lamb 

> That does seem like a better solution. I downloaded a recent version and
> there were the following files/folders:
>
> build.xml
> dev-tools
> LICENSE.txt
> lucene
> NOTICE.txt
> README.txt
> solr
>
> So I did cp -r solr/* /path/to/solr/stuff/ and started solr. I didn't get
> any error message but I only got the following messages:
>
> 2011-03-18 14:11:02.016:INFO::Logging to STDERR via
> org.mortbay.log.StdErrLog
> 2011-03-18 14:11:02.240:INFO::jetty-6.1-SNAPSHOT
> 2011-03-18 14:11:02.284:INFO::Started SocketConnector@0.0.0.0:8983
>
> Where as before I got a bunch of messages indicating various libraries had
> been loaded. Additionally, when I go to http://localhost/solr/admin/, I
> get
> the following message:
>
> HTTP ERROR: 404
>
> Problem accessing /solr/admin. Reason:
>
>NOT_FOUND
>
> What did I do incorrectly?
>
> Thanks,
>
> Brian Lamb
>
>
> On Fri, Mar 18, 2011 at 9:04 AM, Erick Erickson wrote:
>
> > What do you mean "you copied the contents...to the right place"? If you
> > checked out trunk and copied the files into 1.4.1, you have mixed source
> > files between disparate versions. All bets are off.
> >
> > Or do you mean jar files? or???
> >
> > I'd build the source you checked out (at the Solr level) and use that
> > rather
> > than try to mix-n-match.
> >
> > BTW, if you're just starting (as in not in production), you may want to
> > consider
> > using 3.1, as it's being released even as we speak and has many
> > improvements
> > over 1.4. You can get a nightly build from here:
> > https://builds.apache.org/hudson/view/S-Z/view/Solr/
> >
> > Best
> > Erick
> >
> > On Thu, Mar 17, 2011 at 3:36 PM, Brian Lamb
> >  wrote:
> > > Hi all,
> > >
> > > When I installed Solr, I downloaded the most recent version (1.4.1) I
> > > believe. I wanted to implement the Suggester (
> > > http://wiki.apache.org/solr/Suggester). I copied and pasted the
> > information
> > > there into my solrconfig.xml file but I'm getting the following error:
> > >
> > > Error loading class 'org.apache.solr.spelling.suggest.Suggester'
> > >
> > > I read up on this error and found that I needed to checkout a newer
> > version
> > > from SVN. I checked out a full version and copied the contents of
> > > src/java/org/apache/spelling/suggest to the same location on my set up.
> > > However, I am still receiving this error.
> > >
> > > Did I not put the files in the right place? What am I doing
> incorrectly?
> > >
> > > Thanks,
> > >
> > > Brian Lamb
> > >
> >
>


Please reply "how to embed code to solr"

2011-03-18 Thread Anurag
http://lucene.472066.n3.nabble.com/Implementing-Fuzzy-Search-using-OWA-operator-and-Fuzzy-Linguistic-Quantifier-td2261469.html
Link 

-
Kumar Anurag

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Please-reply-how-to-embed-code-to-solr-tp2699218p2699218.html
Sent from the Solr - User mailing list archive at Nabble.com.


Surge 2011 Conference CFP

2011-03-18 Thread Katherine Jeschke
We are excited to announce Surge 2011, the Scalability and Performance
Conference, to be held in Baltimore on Sept 28-30, 2011. The event focuses
on case studies that demonstrate successes (and failures) in Web
applications and Internet architectures. This year, we're adding Hack Day on
September 28th.

The inaugural, 2010 conference (http://omniti.com/surge/2010) was a smashing
success and we are currently accepting submissions for papers through April
3rd. You can find more information about topics online:

http://omniti.com/surge/2011

2010 attendees compared Surge to the early days of Velocity, and our
speakers received 3.5-4 out of 4 stars for quality of presentation and
quality of content! Nearly 90% of first-year attendees are planning to come
again in 2011.

For more information about the CFP or sponsorship of the event, please
contact us at surge (AT) omniti (DOT) com.

-- 
Katherine Jeschke
Marketing Director
OmniTI Computer Consulting, Inc.
7070 Samuel Morse Drive, Ste.150
Columbia, MD 21046
O: 410/872-4910, 222
C: 443/643-6140
omniti.com
circonus.com


Search in all the documents

2011-03-18 Thread Juan Manuel Alvarez
Hello everyone! I would like to ask you a question.

I am trying to search in all documents using the dismax parser.

A sample query that works using the q parameter goes like this:
select/?q=air&qf=description%20name&start=0&rows=60&sort=name+asc&fq=(projectId:1)&defType=dismax&fq=(type:3)&fq=(folder:0)

But when I have nothing to put into the q parameter, I can't make it
work for all the indices.
I have tried using q=* but nothing happens.

Is there a way to search in all documents or it is a limitation?

Thanks in advance!
Juan M.


Solr, cURL, and Java runtime

2011-03-18 Thread rockholla
I'm experiencing something weird while trying to post updates to Solr docs
via cURL from exec in Java runtime.  I can't figure out if this is something
strange with Solr's update mechanism, cURL, or Java runtime oddities, so
please excuse if I'm posting here incorrectly.  Any help would be greatly
appreciated.  Here's what I'm doing in Java:


String updateDoc = docXml.transform("resources/doc-to-update.xsl",
affiliation);

logger.info("Update document: " + updateDoc);

Runtime rt = Runtime.getRuntime();
FileUtility.write("resources/" + aggregatorId.replace(":", "") + ".xml",
updateDoc);
String command = "curl " + SOLR_ROOT + "/update?commit=true -H
\"Content-Type: text/xml\" --data-binary 'ac:01school'";
logger.debug("Executing: " + command);
Process pr = rt.exec(command);

BufferedReader input = new BufferedReader(new
InputStreamReader(pr.getInputStream()));

String line=null;

while((line=input.readLine()) != null) 
{
 logger.debug(line);
}

int exitVal = pr.waitFor();
logger.info("Completed cURL process: " + exitVal);


So, executing the above does not update the Solr doc, however, I am getting
what looks like successful output from the BufferedReader:




<response>
  <lst name="responseHeader"><int name="status">0</int><int name="QTime">278</int></lst>
</response>



I'm logging the actual command.  To debug, I've simply copied and pasted the
exact command that's getting executed via Java's process into the terminal
and executed there.  The command updates the doc when executed from the
Terminal, but not via Java's process execution.  Any ideas?
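
A likely cause (a sketch of the behaviour, not a definitive diagnosis): Runtime.exec(String) tokenizes the command with a plain StringTokenizer on whitespace and does not interpret shell quoting, so the -H value and the quoted body reach curl mangled even though the same string works when a shell parses it in a terminal. A small self-contained illustration (the URL and payload here are placeholders, not the poster's actual values):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

public class ExecTokenizing {

    // Runtime.exec(String) splits a command with a plain StringTokenizer on
    // whitespace; it performs no shell-style quote handling at all.
    public static List<String> tokenize(String command) {
        List<String> tokens = new ArrayList<String>();
        StringTokenizer st = new StringTokenizer(command);
        while (st.hasMoreTokens()) {
            tokens.add(st.nextToken());
        }
        return tokens;
    }

    public static void main(String[] args) {
        String command = "curl http://localhost:8983/solr/update?commit=true "
                + "-H \"Content-Type: text/xml\" --data-binary '<optimize/>'";
        // The header value splits into two tokens, and every quote character
        // is passed to curl literally -- curl never sees the intended args.
        for (String token : tokenize(command)) {
            System.out.println(token);
        }
        // The usual fix: build the argv yourself so no quoting is needed, e.g.
        // new ProcessBuilder("curl", url,
        //         "-H", "Content-Type: text/xml",
        //         "--data-binary", payload).start();
    }
}
```

If building an argv by hand is inconvenient, posting through SolrJ from inside the JVM avoids shelling out to curl entirely.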

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-cURL-and-Java-runtime-tp2699317p2699317.html
Sent from the Solr - User mailing list archive at Nabble.com.

Re: Adding the suggest component

2011-03-18 Thread Brian Lamb
Sorry, that was a typo on my part.

I was using http://localhost:8983/solr/admin and getting the above error
messages.

On Fri, Mar 18, 2011 at 2:57 PM, Geert-Jan Brits  wrote:

> > 2011-03-18 14:11:02.284:INFO::Started SocketConnector@0.0.0.0:8983
> Solr started on port 8983
>
> instead of this:
> > http://localhost/solr/admin/
>
> try this instead:
> http://localhost:8983/solr/admin/ 
>
> Cheers,
> Geert-Jan
>
>
>
> 2011/3/18 Brian Lamb 
>
> > That does seem like a better solution. I downloaded a recent version and
> > there were the following files/folders:
> >
> > build.xml
> > dev-tools
> > LICENSE.txt
> > lucene
> > NOTICE.txt
> > README.txt
> > solr
> >
> > So I did cp -r solr/* /path/to/solr/stuff/ and started solr. I didn't get
> > any error message but I only got the following messages:
> >
> > 2011-03-18 14:11:02.016:INFO::Logging to STDERR via
> > org.mortbay.log.StdErrLog
> > 2011-03-18 14:11:02.240:INFO::jetty-6.1-SNAPSHOT
> > 2011-03-18 14:11:02.284:INFO::Started SocketConnector@0.0.0.0:8983
> >
> > Where as before I got a bunch of messages indicating various libraries
> had
> > been loaded. Additionally, when I go to http://localhost/solr/admin/, I
> > get
> > the following message:
> >
> > HTTP ERROR: 404
> >
> > Problem accessing /solr/admin. Reason:
> >
> >NOT_FOUND
> >
> > What did I do incorrectly?
> >
> > Thanks,
> >
> > Brian Lamb
> >
> >
> > On Fri, Mar 18, 2011 at 9:04 AM, Erick Erickson  > >wrote:
> >
> > > What do you mean "you copied the contents...to the right place"? If you
> > > checked out trunk and copied the files into 1.4.1, you have mixed
> source
> > > files between disparate versions. All bets are off.
> > >
> > > Or do you mean jar files? or???
> > >
> > > I'd build the source you checked out (at the Solr level) and use that
> > > rather
> > > than try to mix-n-match.
> > >
> > > BTW, if you're just starting (as in not in production), you may want to
> > > consider
> > > using 3.1, as it's being released even as we speak and has many
> > > improvements
> > > over 1.4. You can get a nightly build from here:
> > > https://builds.apache.org/hudson/view/S-Z/view/Solr/
> > >
> > > Best
> > > Erick
> > >
> > > On Thu, Mar 17, 2011 at 3:36 PM, Brian Lamb
> > >  wrote:
> > > > Hi all,
> > > >
> > > > When I installed Solr, I downloaded the most recent version (1.4.1) I
> > > > believe. I wanted to implement the Suggester (
> > > > http://wiki.apache.org/solr/Suggester). I copied and pasted the
> > > information
> > > > there into my solrconfig.xml file but I'm getting the following
> error:
> > > >
> > > > Error loading class 'org.apache.solr.spelling.suggest.Suggester'
> > > >
> > > > I read up on this error and found that I needed to checkout a newer
> > > version
> > > > from SVN. I checked out a full version and copied the contents of
> > > > src/java/org/apache/spelling/suggest to the same location on my set
> up.
> > > > However, I am still receiving this error.
> > > >
> > > > Did I not put the files in the right place? What am I doing
> > incorrectly?
> > > >
> > > > Thanks,
> > > >
> > > > Brian Lamb
> > > >
> > >
> >
>


Re: Search in all the documents

2011-03-18 Thread Ahmet Arslan
> I am trying to search in all documents using the dismax
> parser.
> 
> A sample query that works using the q parameter goes like
> this:
> select/?q=air&qf=description%20name&start=0&rows=60&sort=name+asc&fq=(projectId:1)&defType=dismax&fq=(type:3)&fq=(folder:0)
> 
> But when I have nothing to put into the q parameter, I
> can't make it
> work for all the indices.
> I have tried using q=* but nothing happens.
> 
> Is there a way to search in all documents or it is a
> limitation?

q.alt is used for that: &q.alt=*:*. It is better to hard-code this into 
solrconfig.xml. If no q is present, then q.alt is used.
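
Hard-coded in solrconfig.xml, that looks roughly like the fragment below (the handler name and qf fields are just taken from the query in this thread):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="qf">description name</str>
    <!-- used when no q parameter is supplied: match all documents -->
    <str name="q.alt">*:*</str>
  </lst>
</requestHandler>
```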


  


Re: Search in all the documents

2011-03-18 Thread Juan Manuel Alvarez
Thanks!!! That did the trick! =o)

On Fri, Mar 18, 2011 at 5:09 PM, Ahmet Arslan  wrote:
>> I am trying to search in all documents using the dismax
>> parser.
>>
>> A sample query that works using the q parameter goes like
>> this:
>> select/?q=air&qf=description%20name&start=0&rows=60&sort=name+asc&fq=(projectId:1)&defType=dismax&fq=(type:3)&fq=(folder:0)
>>
>> But when I have nothing to put into the q parameter, I
>> can't make it
>> work for all the indices.
>> I have tried using q=* but nothing happens.
>>
>> Is there a way to search in all documents or it is a
>> limitation?
>
> q.alt is used for that. &q.alt=*:* it is better to hard code this into 
> solrconfig.xml. If no q is present, then q.alt is used.
>
>
>
>


Re: Adding the suggest component

2011-03-18 Thread Ahmet Arslan
> downloaded a recent version and
> > > there were the following files/folders:
> > >
> > > build.xml
> > > dev-tools
> > > LICENSE.txt
> > > lucene
> > > NOTICE.txt
> > > README.txt
> > > solr
> > >
> > > So I did cp -r solr/* /path/to/solr/stuff/ and
> started solr. I didn't get
> > > any error message but I only got the following
> messages:

How do you start solr? using java -jar start.jar? Did you run 'ant clean 
example' in the solr folder?


  


Re: Search in all the documents

2011-03-18 Thread Bill Bell
q.alt=*:* should work

Bill Bell
Sent from mobile


On Mar 18, 2011, at 1:37 PM, Juan Manuel Alvarez  wrote:

> Hello everyone! I would like to ask you a question.
> 
> I am trying to search in all documents using the dismax parser.
> 
> A sample query that works using the q parameter goes like this:
> select/?q=air&qf=description%20name&start=0&rows=60&sort=name+asc&fq=(projectId:1)&defType=dismax&fq=(type:3)&fq=(folder:0)
> 
> But when I have nothing to put into the q parameter, I can't make it
> work for all the indices.
> I have tried using q=* but nothing happens.
> 
> Is there a way to search in all documents or it is a limitation?
> 
> Thanks in advance!
> Juan M.


Re: Different options for autocomplete/autosuggestion

2011-03-18 Thread Kai Schlamp-2
One autosuggestion solution would be to query normal text fields. That way
you have the whole feature set of Solr (like NGram-filtered text for infix
search, or field queries to scope the search). If you also store the data of
the text field, you directly have the results to use as autosuggestions.
Unfortunately, in some applications (like ours) different documents often
contain the same data for the same field,
so this approach may return many duplicates as search hits for exactly
this field. If one could collapse the results by that field, all duplicates
would be gone and only distinct hits would remain (which is what one would
expect from autosuggestions). At least this is how I understood field collapsing.
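
A fieldType of the kind described above might be sketched like this (all names are illustrative only; swap EdgeNGramFilterFactory for NGramFilterFactory if infix rather than prefix matching is wanted):

```xml
<fieldType name="text_suggest" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- index prefixes of each value so partial input matches -->
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="25"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```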

Kai

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Different-options-for-autocomplete-autosuggestion-tp2678899p2701055.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Problem with field collapsing of patched Solr 1.4

2011-03-18 Thread Kai Schlamp-2
Unfortunately I have to use Solr 1.4.x or 3.x as one of the interfaces to
access Solr uses Sunspot (a Ruby Solr library), which doesn't seem to be
compatible with 4.x.

Kai


Otis Gospodnetic-2 wrote:
> 
> Kai, try SOLR-1086 with Solr trunk instead if trunk is OK for you.
> 
> Otis
> 
> Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch
> Lucene ecosystem search :: http://search-lucene.com/
> 
> 
> 
> - Original Message 
>> From: Kai Schlamp 
>> To: solr-user@lucene.apache.org
>> Sent: Sun, March 13, 2011 11:58:56 PM
>> Subject: Problem with field collapsing of patched Solr 1.4
>> 
>> Hello.
>> 
>> I just tried to patch Solr 1.4 with the field collapsing patch of
>> https://issues.apache.org/jira/browse/SOLR-236. The patching and build
>> process seemed to be ok (below are the steps I did), but the field
>> collapsing feature doesn't seem to work.
>> When I go to `http://localhost:8982/solr/select/?q=*:*` I correctly
>> get 10 documents as result.
>> When going to
>> `http://localhost:8982/solr/select/?q=*:*&collapse=true&collapse.field=tag_name_ss&collapse.max=1`
>> (tag_name_ss is surely a field with content) I get the same 10 docs as
>> result back. No further information regarding the field collapsing.
>> What do I miss? Do I have to activate it somehow?
>> 
>> * Downloaded 
>>[Solr](http://apache.lauf-forum.at//lucene/solr/1.4.1/apache-solr-1.4.1.tgz)
>> *  Downloaded 
>>[SOLR-236-1_4_1-paging-totals-working.patch](https://issues.apache.org/jira/secure/attachment/12459716/SOLR-236-1_4_1-paging-totals-working.patch)
>>
>> *  Changed line 2837 of that patch to `@@ -0,0 +1,511 @@` (regarding
>> this  
>>[comment](https://issues.apache.org/jira/browse/SOLR-236?focusedCommentId=12932905&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-12932905))
>>
>> *  Downloaded 
>>[SOLR-236-1_4_1-NPEfix.patch](https://issues.apache.org/jira/secure/attachment/12470202/SOLR-236-1_4_1-NPEfix.patch)
>>
>> *  Extracted the Solr archive
>> * Applied both patches:
>> ** `cd  apache-solr-1.4.1`
>> ** `patch -p0 <  ../SOLR-236-1_4_1-paging-totals-working.patch`
>> ** `patch -p0 <  ../SOLR-236-1_4_1-NPEfix.patch`
>> * Build Solr
>> ** `ant clean`
>> ** `ant  example` ... tells me "BUILD SUCCESSFUL"
>> * Reindexed everything (using  Sunspot Solr)
>> * Solr info tells me correctly "Solr Specification  Version:
>> 1.4.1.2011.03.14.04.29.20"
>> 
>> Kai
>>
> 


--
View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-with-field-collapsing-of-patched-Solr-1-4-tp2678850p2701061.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr tuning parameters

2011-03-18 Thread Gerd W. Naschenweng
I am looking for feedback on your setup and current tuning parameters. 
Hopefully with your feedback we can enhance the Wiki to list common
tuning parameters. If you can provide the following info, I think it would help 
everyone starting off on Solr.

I would be specifically interested in your solrconfig.xml, application-server
and possibly OS tuning.

We are currently in the testing phase and still need to do quite a lot of work.

OS: CentOS 5.x / 64bit
RAM: 6GB
CPUs: Xen server with 1 x 4-core 2.66GHz processor
Hardware: IBM BladeServer HS22
Application Server: Jetty 7
SOLR Stats: 1M documents in index.
SolrMeter Stats: Average faceted query response time: 5-40ms; 1,200 queries
per minute
Solr Setup: 1 x Master and 1 x Slave. New documents are fed into the master 
every 5 minutes.
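For the master/slave feeding described above, Solr 1.4's built-in Java replication is configured per core in solrconfig.xml. A minimal sketch (the master host name and conf file list are placeholders; the pollInterval matches the 5-minute feed cycle):

```xml
<!-- On the master -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>

<!-- On the slave -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8983/solr/replication</str>
    <str name="pollInterval">00:05:00</str>
  </lst>
</requestHandler>
```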

OS Tuning:
- NONE -

JVM Tuning:
# Server config - switch between 32/64 bit environment
JAVA_OPTIONS="$JAVA_OPTIONS -server -d64 -Djava.awt.headless=true"

# Log4J Config
JAVA_OPTIONS="$JAVA_OPTIONS -Dlog4j.configuration=file:${JETTY_HOME}/resources/log4j.properties"

# Solr Config
JAVA_OPTIONS="$JAVA_OPTIONS -Dfile.encoding=UTF-8"
JAVA_OPTIONS="$JAVA_OPTIONS -Dsolr.solr.home=${SOLR_HOME} -Dsolr.data.dir=${SOLR_HOME}/data"

## Logging
#JAVA_OPTIONS="$JAVA_OPTIONS -XX:+PrintCommandLineFlags"
# turn on some GC debug output; only print the tenuring distribution when doing tuning
JAVA_OPTIONS="$JAVA_OPTIONS -verbose:gc"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+PrintGCTimeStamps"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+PrintGCDetails"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+PrintTenuringDistribution"
JAVA_OPTIONS="$JAVA_OPTIONS -Xloggc:${JETTY_HOME}/logs/jetty-gc.log"
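Once the GC log is being written, a quick sanity check is to total the stop-the-world time it records. A rough sketch, assuming the `[Times: user=... sys=..., real=... secs]` suffix that `-XX:+PrintGCDetails` appends to each event (the log path matches the `-Xloggc` setting above):

```shell
# Sum the "real" (wall-clock) time of every GC event in the log.
# GC_LOG is assumed to point at the file named in -Xloggc above.
GC_LOG="${GC_LOG:-${JETTY_HOME}/logs/jetty-gc.log}"
awk -F'real=' '/real=/ { split($2, a, " "); total += a[1]; n++ }
               END { printf "%d events, %.2f s real GC time\n", n, total }' "$GC_LOG"
```

If the total is a noticeable fraction of wall-clock uptime, the collector settings below are worth revisiting.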

## Garbage collection
# Keep each survivor space about 90% full 
JAVA_OPTIONS="$JAVA_OPTIONS -XX:TargetSurvivorRatio=90"
# A SurvivorRatio of 5 with a 128MB new space yields two ~18MB survivor spaces
# (survivor = NewSize / (SurvivorRatio + 2); the remainder is eden)
JAVA_OPTIONS="$JAVA_OPTIONS -XX:SurvivorRatio=5"
# Copy object between survivor space at most 16 times 
JAVA_OPTIONS="$JAVA_OPTIONS -XX:MaxTenuringThreshold=16"
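The arithmetic behind that survivor sizing can be checked directly; a small sketch in plain shell (integer MB, using the 128MB NewSize set further down):

```shell
# HotSpot splits the new generation into eden + 2 survivor spaces,
# with eden:survivor = SurvivorRatio:1, so survivor = NewSize/(SurvivorRatio+2).
new_mb=128
ratio=5
survivor_mb=$(( new_mb / (ratio + 2) ))   # each of the two survivor spaces
eden_mb=$(( new_mb - 2 * survivor_mb ))   # the remainder is eden
echo "eden=${eden_mb}MB survivor=${survivor_mb}MB"
# → eden=92MB survivor=18MB
```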
# Force CMS for the collector
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+UseConcMarkSweepGC"
# Use incremental mode since minor CPU overhead is better than potential pauses
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+CMSIncrementalMode"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+CMSIncrementalPacing"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+CMSParallelRemarkEnabled"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+UseParNewGC"
# Enable perm-gen class unloading (needed with UseConcMarkSweepGC)
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+CMSClassUnloadingEnabled"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+UseTLAB"

## Non Heap memory
# PermSize controls area of heap for Class/Method objects; 
#   Dynamic class loading/reflection (e.g., JSP) may require more space 
#   Note that this space is ABOVE and beyond the min/max heap size 
JAVA_OPTIONS="$JAVA_OPTIONS -XX:PermSize=32m"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:MaxPermSize=128m"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:CodeCacheMinimumFreeSpace=8m"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:ReservedCodeCacheSize=128m"

## Heap memory
# We set heap min/max to same size for consistent results
JAVA_OPTIONS="$JAVA_OPTIONS -Xms512m"
JAVA_OPTIONS="$JAVA_OPTIONS -Xmx2000m"
# We set new area to 1/4 heap
JAVA_OPTIONS="$JAVA_OPTIONS -XX:NewSize=128m"
JAVA_OPTIONS="$JAVA_OPTIONS -XX:MaxNewSize=128m"

## Solr-specific optimisations (especially to avoid GC pauses)
# Reduce the number of objects promoted into the old gen, reducing
# fragmentation and CMS frequency & time
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+UseStringCache -XX:+OptimizeStringConcat -XX:+UseCompressedStrings"
# Use compressed pointers on a 64-bit JVM, reducing the memory & performance
# penalty of using a 64-bit JVM
JAVA_OPTIONS="$JAVA_OPTIONS -XX:+UseCompressedOops"
# Play with this - perhaps the JVM defaults are too lenient here. Under high
# load CMS would stall
JAVA_OPTIONS="$JAVA_OPTIONS -XX:CMSInitiatingOccupancyFraction=75"

Application Server Tuning:
- NONE -

SOLR Tuning - solrconfig.xml:
  
5
...
  
  
  
5
...
  
  
  






   50

   2000

...
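The element names in the solrconfig.xml fragment above were lost in transit; for comparison only, cache entries in a Solr 1.4 solrconfig.xml take roughly this shape (the class and size values here are illustrative placeholders, not the values quoted above):

```xml
<filterCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="50"/>
<queryResultCache class="solr.LRUCache" size="2000" initialSize="512" autowarmCount="50"/>
<documentCache class="solr.LRUCache" size="512" initialSize="512"/>
```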