solr.TrieDoubleField deprecated with 7.1.0 but wildcard "*" search behaviour is different with solr.DoublePointField

2017-12-11 Thread Torsten Krah
Hi,

I have a question about the new DoublePointField, which should be used
instead of TrieDoubleField as of 7.1.

https://lucene.apache.org/solr/guide/7_1/field-types-included-with-solr.html

With the deprecated type it is possible to get a match for a
double field like this:

test_d:*

even in 7.1.0.

But with the new DoublePointField, which you should use instead, you
won't get that match - you have to use e.g. test_d:[* TO *].
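As a side note, if many stored queries use the bare-wildcard form, they can be rewritten client-side before being sent to Solr. A minimal sketch of such a shim - the class and method names are mine, purely illustrative, not part of Solr:

```java
// Hypothetical client-side shim: rewrite the bare-wildcard form that
// TrieDoubleField accepted into the range form that DoublePointField needs.
public class PointFieldQuery {

    static String rewriteBareWildcard(String q) {
        // "test_d:*"  ->  "test_d:[* TO *]"
        if (q.endsWith(":*")) {
            return q.substring(0, q.length() - 1) + "[* TO *]";
        }
        return q; // anything else passes through unchanged
    }

    public static void main(String[] args) {
        System.out.println(rewriteBareWildcard("test_d:*"));
        System.out.println(rewriteBareWildcard("title:solr"));
    }
}
```

This only rewrites the trailing ":*" form; everything else is left untouched.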

A short recipe to reproduce it yourself can be found here:

https://stackoverflow.com/questions/47473188/solr-7-1-querying-double-field-for-any-value-not-possible-with-anymore/47752445

Is this an intended change in query behaviour, or a bug - and is it
possible to restore that behaviour with the new field type too?

kind regards

Torsten


smime.p7s
Description: S/MIME cryptographic signature


IllegalStateException, response already committed - replication related

2011-10-06 Thread Torsten Krah
Sometimes I am seeing this in the logs, but I cannot tell what is
causing it or whether something may be broken. Does anyone have an idea
how to find the cause, or what is going wrong?

2011-10-06 14:19:00.333:WARN:oejs.Response:Committed before 500 org.eclipse.jetty.io.EofException
2011-10-06 14:19:00.334:WARN:oejs.HttpConnection:/app/solrmaster/replication
java.lang.IllegalStateException: Committed
    at org.eclipse.jetty.server.Response.resetBuffer(Response.java:1069)
    at org.eclipse.jetty.server.Response.sendError(Response.java:277)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:512)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:972)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:417)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:906)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:110)
    at org.eclipse.jetty.server.Server.handle(Server.java:350)
    at org.eclipse.jetty.server.HttpConnection.handleRequest(HttpConnection.java:442)
    at org.eclipse.jetty.server.HttpConnection$RequestHandler.content(HttpConnection.java:927)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:784)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:223)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:46)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:545)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:43)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:598)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:533)
    at java.lang.Thread.run(Thread.java:679)





Too many values for UnInvertedField faceting on field autocompleteField

2011-10-26 Thread Torsten Krah
I am getting the SolrException "Too many values for UnInvertedField
faceting on field autocompleteField".
I already added facet.method=enum to my search handler definition, but
this exception still happens.

Is there a known fix or workaround I can apply to get a result?

regards

Torsten




Re: Too many values for UnInvertedField faceting on field autocompleteField

2011-10-28 Thread Torsten Krah
On Wednesday, 26.10.2011 at 08:02 -0400, Yonik Seeley wrote:
> You can also try adding facet.method=enum directly to your request

I added

  query.set("facet.method", "enum");

to my Solr query at code level and now it works. I don't know why the
handler definition gets ignored or overridden, but it is fine for my
use case to specify it at query level.
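For what it's worth, a likely explanation: parameters under "defaults" in the handler definition can be overridden by request parameters, while "invariants" cannot. A sketch of the handler configuration (the handler name is illustrative):

```xml
<requestHandler name="/autocomplete" class="solr.SearchHandler">
  <!-- "defaults" can be overridden per request; "invariants" cannot -->
  <lst name="invariants">
    <str name="facet.method">enum</str>
  </lst>
</requestHandler>
```

With the parameter under invariants it applies to every request, regardless of what the client sends.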

thx

Torsten




solr - http error 404 when requesting solrconfig.xml or schema.xml

2011-11-29 Thread Torsten Krah
Hi,

I've got an interesting problem and don't know how to debug it further.
I am using an external Solr home configured via JNDI.
I deployed my war file (context is /apps/solrslave/) and if I want to look
at the schema:

/apps/solrslave/admin/file/?contentType=text/xml;charset=utf-8&file=schema.xml

the response is 404.

It doesn't matter whether I am using Jetty 7.x, 8.x or Tomcat 6.0.33 - 404
is the answer.

Does anyone have an idea where to look?

regards

Torsten




Re: IllegalStateException, response already committed - replication related

2011-11-29 Thread Torsten Krah
Does anyone have an idea?

regards





Re: solr - http error 404 when requesting solrconfig.xml or schema.xml

2011-11-29 Thread Torsten Krah
To answer myself (sorry for the noise): I had accidentally removed the
admin handler section (only ping was left), and that was causing the
issue. After fixing this error, all is fine again.
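For anyone hitting the same 404: in 3.x-era solrconfig.xml the /admin/file endpoint is registered by the admin handlers section, so a sketch of what must be present looks like this (register it alongside any ping handler):

```xml
<!-- registers /admin/file, /admin/luke, /admin/system, ... ;
     with only a ping handler configured, /admin/file returns 404 -->
<requestHandler name="/admin/" class="solr.admin.AdminHandlers" />
```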

Torsten





Question about optimize call - Request read Timeout

2011-12-02 Thread Torsten Krah
Hi,

I've got a question about index optimizing.
At midnight I am calling optimize(true, true) on my SolrServer instance.
However, this fails with:

org.apache.solr.client.solrj.SolrServerException: java.net.SocketTimeoutException: Read timed out
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:480) ~[solr-solrj-3.5.0.jar:3.5.0 1204988 - simon - 2011-11-22 14:55:27]
    at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:246) ~[solr-solrj-3.5.0.jar:3.5.0 1204988 - simon - 2011-11-22 14:55:27]
    at org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer.request(StreamingUpdateSolrServer.java:209) ~[solr-solrj-3.5.0.jar:3.5.0 1204988 - simon - 2011-11-22 14:55:27]
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105) ~[solr-solrj-3.5.0.jar:3.5.0 1204988 - simon - 2011-11-22 14:55:27]
    at org.apache.solr.client.solrj.SolrServer.optimize(SolrServer.java:205) ~[solr-solrj-3.5.0.jar:3.5.0 1204988 - simon - 2011-11-22 14:55:27]
    at org.apache.solr.client.solrj.SolrServer.optimize(SolrServer.java:191) ~[solr-solrj-3.5.0.jar:3.5.0 1204988 - simon - 2011-11-22 14:55:27]


The question is: may the optimize take so long that this request will
never be answered in time? More importantly, is the optimize still done
"correctly" on the server side, so that I can ignore those read timeout
exceptions?

What is the right way to do the optimize here?

regards

Torsten

PS: I am using solr 3.5.0.




Re: Question about optimize call - Request read Timeout

2011-12-05 Thread Torsten Krah
On Monday, 05.12.2011 at 08:11 -0500, Erick Erickson wrote:
> You can try bumping up the timeouts in your SolrJ program, the
> SolrServer has a bunch of timeout options.
> 
> You can pretty easily tell if the optimize has carried through
> anyway, your index files should have been reduced
> substantially. But I'm pretty sure it's completing successfully.
> 
> Why call it with true/true? Is your SolrJ program also responsible
> for queries and requires knowing about the state of the index?
> 
> 
> Best
> Erick 

I am using true/true because it blocks until the optimize has run. The
optimize task runs on a regular basis and must not run twice if an
optimize takes that long; it is only rescheduled once the previous task
has completed - that is the reason. Is there any other way to know via
SolrJ whether an optimize is actually running or done (other than using
the blocking true/true method)?
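Following Erick's hint, the timeouts live on the SolrJ server object. A non-runnable sketch against the 3.5-era API (URL and values are placeholders; requires solr-solrj on the classpath, and StreamingUpdateSolrServer inherits the same setters):

```java
CommonsHttpSolrServer server = new CommonsHttpSolrServer("http://localhost:8983/solr");
server.setConnectionTimeout(5000);    // ms to establish the connection
server.setSoTimeout(30 * 60 * 1000);  // ms to wait for the response; optimize can take long
server.optimize(true, true);          // still blocks, but no longer times out early
```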

regards

Torsten




snapshot rotation - replication handler argument available?

2011-12-09 Thread Torsten Krah
Configured my replication handler on the master with this option:

optimize

I am running an optimize call on a regular basis (e.g. every week or
every day - not the question here) and a snapshot is created.

I wonder where the option is to specify how many snapshots should
be kept?
The index is very big, and without rotation I am running out of space
sooner or later.
How can this be done with Solr (without an external self-made cronjob
which deletes them)?

regards

Torsten




doing snapshot after optimize - rotation parameter?

2012-01-03 Thread Torsten Krah
Hi,

I am taking snapshots of my master index after optimize calls (run once
each day) to get a clean backup of the index.
Is there a parameter to tell the replication handler how many snapshots
to keep, with the rest being deleted? Or must I use a custom script
via cron?

regards

Torsten




Re: doing snapshot after optimize - rotation parameter?

2012-01-05 Thread Torsten Krah
On Thursday, 05.01.2012 at 08:48 -0500, Erick Erickson wrote:
> Have you looked at deletionPolicy and maxCommitsToKeep?

Hm, but those are deletion policy parameters for the "running" index -
how many commit points should be kept, i.e. the internal ones from Lucene:

#

<deletionPolicy class="solr.SolrDeletionPolicy">
  <!-- keep only optimized commit points -->
  <str name="keepOptimizedOnly">true</str>
  <!-- the maximum number of commit points to keep -->
  <str name="maxCommitsToKeep">1</str>
</deletionPolicy>

#

A rotated snapshot is always out of scope here - it is like a backup, so
maxCommitsToKeep would not make any sense, right?
Reading https://issues.apache.org/jira/browse/SOLR-617 it sounds
like a different use case.

Are those parameters really meant to be used for rotating the snapshot
directories? Reading the comments it does not sound like what I am
looking for - am I right?

regards

Torsten

> 
> Best
> Erick
> 
> On Tue, Jan 3, 2012 at 8:32 AM, Torsten Krah
>  wrote:
> > Hi,
> >
> > i am taking snapshots of my master index after optimize calls (run each
> > day once), to get a clean backup of the index.
> > Is there a parameter to tell the replication handler how many snapshots
> > to keep and the rest should be deleted? Or must i use a custom script
> > via cron?
> >
> > regards
> >
> > Torsten





Re: doing snapshot after optimize - rotation parameter?

2012-01-06 Thread Torsten Krah
To answer myself after looking at the code:

public static final String NUMBER_BACKUPS_TO_KEEP = "numberToKeep";

So

<str name="numberToKeep">7</str>

should do it :-).
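For reference, the same parameter also works when triggering a backup explicitly via the replication handler (host, port and core path are placeholders):

```
http://localhost:8983/solr/replication?command=backup&numberToKeep=7
```

Only the seven most recent snapshot directories are kept; older ones are deleted.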

regards






Commit call - ReadTimeoutException -> usage scenario for big update requests and the IOException case

2012-02-06 Thread Torsten Krah
Hi,

I wonder whether it is possible to commit data to Solr without having to
catch SocketTimeoutExceptions.

I am calling commit(false, false) using a streaming server instance,
but I still have to wait more than 30 seconds and catch the timeout from
the HTTP method.
It does not matter whether it is 30 or 60 seconds - it will fail depending
on how long it takes until the update request is processed. Or can I
tweak things here?

So what is the way to go here? Is there any other option, or must I catch
those exceptions and carry on as I do now?
The operation itself does finish successfully on the server side - later
on, when it is done - and everything is committed and searchable.


regards

Torsten




Re: Commit call - ReadTimeoutException -> usage scenario for big update requests and the IOException case

2012-02-07 Thread Torsten Krah

On 07.02.2012 15:12, Erick Erickson wrote:

> Right, I suspect you're hitting merges.


Guess so.

> How often are you committing?


One time, after all work is done.

> In other words, why are you committing explicitly?
>
> It's often better to use commitWithin on the add command
> and just let Solr do its work without explicitly committing.


Tika extracts my docs and I fetch the results (memory, disk)
externally.
If all went OK as expected, I take those docs and add them to my Solr
server instance.
After I am done with the adds and deletes I do one commit for all
those docs - adding and deleting.
If something goes wrong before or between adding, updating or deleting
docs, I call rollback and everything is like before (I am doing the
update from one source only, so I can be sure that no one can call
commit in between).

CommitWithin would break my ability to roll things back; that is why I
want to call commit explicitly here.
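The batch pattern described above, as a rough sketch (SolrJ 3.x-style API; server, extractedDocs and staleIds are placeholders):

```java
try {
  for (SolrInputDocument doc : extractedDocs) {
    server.add(doc);               // buffered on the server, not yet visible
  }
  server.deleteById(staleIds);     // deletes are part of the same batch
  server.commit(false, false);     // one commit for adds and deletes together
} catch (Exception e) {
  server.rollback();               // discard everything since the last commit
  throw e;
}
```

Note this is only safe when a single writer talks to the index, as described above - a concurrent commit from elsewhere would defeat the rollback.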




> Going forward, this is fixed in trunk by the DocumentWriterPerThread
> improvements.


Will this be backported to the upcoming 3.6?



> Best
> Erick




Re: correct usage of StreamingUpdateSolrServer?

2012-02-13 Thread Torsten Krah
What is the output of jstack $PID?
If the program does not exit, there must be some non-daemon threads
still running.
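A concrete recipe (the process name and grep pattern are placeholders; requires a JDK):

```shell
PID=$(jps -l | awk '/MyIndexerApp/ {print $1}')   # find the JVM by its main class
jstack "$PID" | grep 'tid=' | grep -v ' daemon '  # list only the non-daemon threads
```

Any thread shown by the second command (other than the JVM's own) is what keeps the program alive.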





customizing standard tokenizer

2012-02-17 Thread Torsten Krah
Hi,

is it possible to extend the standard tokenizer, or to use a custom one
(possibly by extending the standard one), to make some "custom" tokens
like Lucene-Core come out as one token?

regards




RE: customizing standard tokenizer

2012-02-20 Thread Torsten Krah
Thanks, I will use the custom tokenizer. It is less error prone than the
"workarounds" mentioned.




Re: StreamingUpdateSolrServer Connection Timeout Setting

2012-06-18 Thread Torsten Krah
On Friday, 15.06.2012 at 18:22 +0100, Kissue Kissue wrote:
> Hi,
> 
> Does anybody know what the default connection timeout setting is for
> StreamingUpdateSolrServer? Can i explicitly set one and how?
> 
> Thanks. 

Use a custom HttpClient to set one (only snippets - it should be clear;
if not, just ask):

this.instance = new StreamingUpdateSolrServer(getUrl(), httpClient,
DOC_QUEUE_SIZE, WORKER_SIZE);

and use httpClient like this:

this.connectionManager = new MultiThreadedHttpConnectionManager();
final HttpClient httpClient = new HttpClient(this.connectionManager);
httpClient.getParams().setConnectionManagerTimeout(CONN_ACQUIRE_TIMEOUT);
httpClient.getParams().setSoTimeout(SO_TIMEOUT);

regards

Torsten




Re: StreamingUpdateSolrServer Connection Timeout Setting

2012-06-18 Thread Torsten Krah
Addendum: you can even set a custom protocol socket factory for
commons-httpclient (which is used by StreamingUpdateSolrServer) to
influence socket options. Example:

final Protocol http = new Protocol("http",
MycustomHttpSocketFactory.getSocketFactory(), 80);

where MycustomHttpSocketFactory.getSocketFactory() returns a factory which
extends

org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory

and overrides / implements methods as needed (direct socket access).

Call this e.g. in a ServletContextListener in contextInitialized() and
you are done.

regards

Torsten





Re: StreamingUpdateSolrServer Connection Timeout Setting

2012-06-18 Thread Torsten Krah
You should also call the glue code ;-):

Protocol.registerProtocol("http", http);

regards

Torsten




Re: IndexWrite in Lucene/Solr 3.5 is slower?

2012-06-19 Thread Torsten Krah
This may be related to https://issues.apache.org/jira/browse/LUCENE-3418,
which ensures that index files are really written (synced) on commit. If
you commit very often, you may see this sort of performance loss - at
least I did in my JUnit tests, where I commit very often; the switch from
3.3 to 3.4 really hurt at test time, but it is OK for tests to take
longer, because the real app uses batch commits.

You can try Solr 3.3.x and compare it against 3.4.0 (which includes the
fix for LUCENE-3418) if you want to find out whether this is related.

HTH

Torsten




Re: UN-SUBSCRIBE ME PLEASE - 2ND REQUEST...

2018-06-01 Thread Torsten Krah
http://lmgtfy.com/?q=solr-user+unsubscribe

 wrote on Fri., 1 June 2018, 18:11:

>
> THIS IS MY 2ND REQUEST - PLEASE UNSUBSCRIBE ME
>
>