I have Solr 1.4.1 set up with the following additions:
1. Basic authentication
2. SSL using a self-signed certificate
How do I use SolrJ to connect to this server? I think the solution might reside
in using HttpClient, but I can't figure out how. Any help will be appreciated.
Thanks!
- sky
I am trying to implement multi-accented search on Solr. Basically I am
using ASCIIFoldingFilter to provide this feature, but I have a problem:
http://localhost:8983/solr/select/?q=*francois*&version=2.2&start=0&rows=10&indent=on
http://localhost:8983/solr/select/?q=*francois**&version=2.2&sta
Think I got it. It looks something like the following; however, I can't figure out
where to get "EasySSLProtocolSocketFactory" from Maven from a known source:
URL solrUrl = new URL(uri);
if (solrUrl.getProtocol().equals("https") && isSSLSelfSigned) {
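If EasySSLProtocolSocketFactory can't be pulled from a known Maven source, the same trust-all behaviour can be built from the standard javax.net.ssl API alone. A development-only sketch (the class and method names here are mine, not SolrJ's); note that this disables certificate validation entirely, which is only acceptable against a known self-signed test server:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;

public class TrustAllSsl {
    // A TrustManager that accepts any certificate chain (self-signed
    // included) -- essentially what EasySSLProtocolSocketFactory does
    // for HttpClient 3.x.
    public static SSLContext trustAllContext() throws Exception {
        TrustManager[] trustAll = {
            new X509TrustManager() {
                @Override
                public void checkClientTrusted(X509Certificate[] chain, String authType) {}
                @Override
                public void checkServerTrusted(X509Certificate[] chain, String authType) {}
                @Override
                public X509Certificate[] getAcceptedIssuers() {
                    return new X509Certificate[0];
                }
            }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, trustAll, new SecureRandom());
        return ctx;
    }
}
```

The resulting context's socket factory can then be installed via HttpsURLConnection.setDefaultSSLSocketFactory, or adapted into a protocol socket factory for HttpClient.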
Hello list,
Took a while to get back to following the discussions after vacation.
We have recently stumbled upon an issue with distributed facet search. I
would appreciate any help before checking the source code of solr 1.4 we
currently use.
When shooting a distributed query, we use facet.limit
Hi all,
Can anyone specify the procedure for Solr scheduling on Windows OS?
http://wiki.apache.org/solr/DataImportHandler#HTTPPostScheduler - I know this
link, but I need a cron-job-like procedure on Windows.
Regards,
Ganesh.
--
View this message in context:
http://lucene.472066.n3.nabble.
I am not sure if current version has this, but DIH used to reload
connections after some idle time
if (currTime - connLastUsed > CONN_TIME_OUT) {
synchronized (this) {
Connection tmpConn = factory.call();
clos
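The truncated fragment above suggests the shape of the pattern; a self-contained sketch of the same idle-timeout reload idea (the names here are illustrative, not DIH's actual code, and the connection type is left generic):

```java
import java.util.concurrent.Callable;

public class ReloadingConnection<C> {
    private final Callable<C> factory;
    private final long timeoutMillis;
    private C conn;
    private long connLastUsed;

    public ReloadingConnection(Callable<C> factory, long timeoutMillis) {
        this.factory = factory;
        this.timeoutMillis = timeoutMillis;
    }

    // Returns the cached connection, recreating it if it has been idle
    // longer than the timeout -- the check the fragment above performs.
    public synchronized C get() throws Exception {
        long now = System.currentTimeMillis();
        if (conn == null || now - connLastUsed > timeoutMillis) {
            conn = factory.call();  // in real code, close the old one first
        }
        connLastUsed = now;
        return conn;
    }
}
```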
Hi,
I am using Apache Solr 3.3.0 with SolrJ on a Linux box.
I am getting the error below when indexing kicks in:
2011-09-02 10:35:01,617 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.lang.Exception: Not Implemented
Does anybody have any idea why this error may
Cool, I will check it out.
Are these changes present in the nightly build? Or do I have to make my own
build?
Hi,
no ideas? :(
kind regards,
Rene
From: Rene Lehmann
To: solr-user@lucene.apache.org
Date: 01.09.2011 15:36
Subject:Solr replication / repeater
Hi there,
I'm really new to Solr and have a question about Solr replication.
We want to use Solr in two data centers (
It generally helps if your solrconfig is correct. Thank you for your
tolerance.
-Original Message-
From: Herman Kiefus [mailto:herm...@angieslist.com]
Sent: Thursday, September 01, 2011 10:15 AM
To: solr-user@lucene.apache.org
Subject: MoreLikeThis assumptions
Given a document id:n sh
Hi all,
I have only seen ways of using delta import with a last_modified (timestamp)
column. Are there other ways to do delta imports without using timestamps?
If there are any possibilities, please specify them.
Regards,
vighnesh.
I am using SolrJ with Solr 3.3.0 over HTTPS and getting the following
exception:
2011-09-02 12:42:08,111 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.lang.Exception: Not Implemented
Just wanted to find out if there is anything special I need to do in order
to u
Hi
I have recently upgraded from Solr 1.4 to Solr 3.2. In Solr 1.4 only 3
files (one .cfs and two segments files) were created in the *index/* directory
(after doing optimize).
Now, in Solr 3.2, the optimize seems not to be working. My final number of
files in the *index/* directory is 7-8. Can an
> I have recently upgraded from Solr 1.4 to Solr 3.2. In Solr 1.4 only 3
> files (one .cfs & two segments) file were made in *index/* directory.
> (after
> doing optimize).
>
> Now, in Solr 3.2, the optimize seems not be working. My final number of
> files in *index/* directory are in 7-8 in numb
I am looking at this wiki but I am unable to use those classes in my application.
Meanwhile, I have a problem with the classes: where should these class files be
placed in my application, and which command can I use to know that scheduling is
being processed?
I need the procedure for where these class files are configured and how to e
That error has nothing to do with Solr - it looks as though you are trying
to start the JVM with a heap size that is too big for the available physical
memory.
-Simon
On Fri, Sep 2, 2011 at 2:15 AM, Rohit wrote:
> Hi All,
>
>
>
> I am using Solr 3.0 and have 4 cores build into it with the follo
Not sure about the exact reason for the error. However, there's a related
email thread today with a code fragment that you might find useful -- see
http://www.lucidimagination.com/search/document/a553f89beb41e39a/how_to_use_solrj_self_signed_cert_ssl_basic_auth#a553f89beb41e39a
-Simon
On Fri, Se
You need to give us more information. The code which throws this exception
will be most helpful.
-Simon
On Fri, Sep 2, 2011 at 5:43 AM, Kissue Kissue wrote:
> Hi,
>
> I am using apache solr 3.3.0 with SolrJ on a linux box.
>
> I am getting the error below when indexing kicks in:
>
> 2011-09-02
Hi Simon,
Thanks for your reply. I investigated this further and discovered that the
actual error was:
2011-09-02 12:42:06,673 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Nati
Under Administrative Tools, select Task Scheduler.
New task, action: Run program/script; then you can call a Java command line
like java -jar something.jar.
The scheduler itself is pretty good, but the tasks it can perform are too
few... but it can run Java programs via the command line.
2011/9/2 vighn
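As an alternative to the Task Scheduler, a cron-like trigger can also live inside a small always-running Java program. A sketch using ScheduledExecutorService, where the actual HTTP call to the (assumed) DataImportHandler URL is left as a placeholder:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DihScheduler {
    // Schedules `trigger` to run repeatedly, like a cron entry.
    public static ScheduledExecutorService every(long period, TimeUnit unit,
                                                 Runnable trigger) {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(trigger, 0, period, unit);
        return ses;
    }

    public static void main(String[] args) {
        // Placeholder trigger: in practice this would request the DIH URL,
        // e.g. http://localhost:8983/solr/dataimport?command=delta-import
        every(30, TimeUnit.MINUTES,
              () -> System.out.println("triggering delta-import"));
    }
}
```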
Hi, below is my Java program for indexing around 30 million records from
CSV. But this doesn't work for such a large file. It works perfectly for
smaller files. What's wrong with my code? Please let me know.
try{
/*SolrServer server = new
CommonsHttpSolrServer("http://localhost
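Without seeing the full program, a common cause at this scale is accumulating all documents in memory before sending them. A hedged sketch of batch-wise processing, where `indexBatch` stands in for a SolrJ server.add(...) call on each chunk (the class and method names are mine):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BatchedCsvIndexer {
    // Streams the file and hands rows to `indexBatch` in fixed-size chunks,
    // so memory use stays bounded regardless of file size.
    public static int index(BufferedReader in, int batchSize,
                            Consumer<List<String>> indexBatch) throws IOException {
        List<String> batch = new ArrayList<>();
        int total = 0;
        String line;
        while ((line = in.readLine()) != null) {
            batch.add(line);
            total++;
            if (batch.size() == batchSize) {
                indexBatch.accept(batch);   // e.g. convert rows, server.add(docs)
                batch = new ArrayList<>();
            }
        }
        if (!batch.isEmpty()) indexBatch.accept(batch);
        return total;
    }
}
```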
Never done that before but as far as I know, Tika does that job.
http://tika.apache.org/0.9/formats.html#Image_formats
2011/9/2 Jagdish Kumar
>
> Hi
>
> I am trying indexing and searching various type of files in Solr3.3.0, I am
> able to index image files but it fail to show these files in res
I think you're searching for both tokens, but do what Markus said first,
add &debugQuery=on to your query and you'll see exactly what the
search is.
You are searching against your default text field because you have not
specified any field in your query, is that what you expect?
You could add som
I don't quite get what you mean by results aren't across pages. It's possible
that all the results fit in one page because of grouping. What happens if you
specify just a few rows? e.g. &rows=3?
I suspect this is just a problem with the small number of docs.
Best
Erick
On Thu, Sep 1, 2011 at 12:
Please show how it "doesn't work", i.e. does the application throw an
exception and if yes, could you please post the stacktrace. If no,
please be more explicit.
Thanks,
Glen Newton
On Fri, Sep 2, 2011 at 10:35 AM, angel wrote:
> Hi below is my java program for indexing around 30million records
Bug (ahem, that is, nudge) the committers over on the dev list to pick
it up and commit it. They'll alter the status, etc.
Best
Erick
On Thu, Sep 1, 2011 at 2:37 AM, Bernd Fehling
wrote:
> Hi list,
>
> I have fixed an issue and created a patch (SOLR-2726) but how to
> change "Status" and "Resol
It looks good except for the "repeaters used to each other for replication".
Having the masters use each other for replication implies that you're
indexing data to both of them, in which case you wouldn't need them to
update from each other!
So you really have two choices I think
1> designate one
There's some talk of releasing it this fall, but nothing official yet.
But Solr releases are happening much more frequently lately,
so I'd say just go ahead and use the nightly from the 3x branch for
dev purposes, and swap in the official release when it happens.
Best
Erick
On Thu, Sep 1, 2011 a
What is your evidence that the text field isn't indexed? The default
schema does not store the text field data, so if you specify
fl=text you won't see anything. But you can still search on the
text field.
This confuses a lot of people. Try looking in the admin/schema browser
link and exploring th
You will need get the source, apply the patch and build solr for yourself.
There are some instructions at http://wiki.apache.org/solr/HowToContribute .
It would be great if you could try this out and provide feedback.
James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311
-Ori
The status of both fields is mentioned below, which clearly shows there are
no documents in the text field. I have also tried to search for some terms
mentioned in the documents, but there are no results.
*Field: id*
Field Type: string
Properties: Indexed, Stored, Omit Norms, undefined, Sort Missing Last
Setting multivalued="false" is probably a red herring.
This form probably works because you're searching on
two different fields, namely keywords and .
Appending &debugQuery=on to your query will show this.
q=keywords:symante* AND corporatio*
This form probably fails because of stemming
q=keywor
Hi,
In my search application, I sometimes get more than 200k matches for a
specific query.
Although I only present the first 20, I still run over the entire 200k
because I use my own Similarity function.
Is there a way to make this search smarter? I thought limiting the total
possible matches to
All,
I was wondering if anybody has any information on approaches to testing
and verification search results from Solr. Most of the time we end up
manually verifying the results from a search but the verification is not
necessarily scientific.
The main question is what are we verifying these s
Hi,
Thanks for your reply. I investigated this further and discovered that the
actual error was:
2011-09-02 12:42:06,673 ERROR
[org.apache.solr.client.solrj.impl.StreamingUpdateSolrServer] - error
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Me
OK, I can confirm that the spellchecker now has the correct behaviour.
Even though a misspelled word is found in the index, it now says it is
correctlySpelled=false and gives the proper suggestion.
Thanx a bunch!
There's no firm date, but last I knew there was some talk of releasing
it by the end of this year.
That's not in any way agreed upon by the committers, but it's been suggested.
Best
Erick
2011/9/1 陈葵 :
> Hi All
>
> Is there any body know when version 4.0 will be released?
> From https://issues.
On 9/1/2011 4:12 PM, Chris Hostetter wrote:
: I've got a consistent test failure on Solr source code checked out from svn.
: The same thing happens with 3.3 and branch_3x. I have information saved from
Shawn: sorry for the late reply.
I can't reproduce your specific problem, but the test in qu
Hi all-
I'm trying to set up a delta query for a parent entity query that has many
sub-queries. The table referenced in the parent query has a "last updated"
field, but none of the children do. The way the data is set up is that when a
child table is updated, the "last updated" field of the par
On 9/2/2011 10:25 AM, Shawn Heisey wrote:
if you do: can you tell us more about the filesystem you are using?
I am building this on an NFSv3 filesystem. The NFS server is a
Solaris 10 cluster, the underlying filesystem is ZFS, on a
fiberchannel SAN. I will check out the tree onto a local ext3
When searching for a misspelled word that is in the index and a misspelled word
that isn't, the collation doesn't use the suggestion for the word that is
misspelled in the index.
Hi Chris,
That makes sense. I was behind a firewall when running both builds. I
thought I was correctly proxied, but maybe the request was being squashed
by something else before it even got to the firewall.
I've just run tests again, but this time outside of the firewall, and all pass.
Thanks a lot
What does the Analysis page say? Put all the words in both the field value
(index) and field value (query) boxes and compare them, please.
Have you tried to encode it manually in the url just in case?
2011/9/2 deniz
> I am trying to implement multi accented search on solr... basically i am
> using asciifolderfilter to provide this f
: I am not sure if current version has this, but DIH used to reload
: connections after some idle time
:
: if (currTime - connLastUsed > CONN_TIME_OUT) {
: synchronized (this) {
: Connection tmpConn = factory.call();
:
On 9/2/2011 1:59 PM, Chris Hostetter wrote:
: I am not sure if current version has this, but DIH used to reload
: connections after some idle time
:
: if (currTime - connLastUsed> CONN_TIME_OUT) {
: synchronized (this) {
: Connection tmpConn =
On Sat, Sep 3, 2011 at 1:38 AM, Shawn Heisey wrote:
[...]
> I use DIH with MySQL. When things are going well, a full rebuild will leave
> connections open and active for over two hours. This is the case with
> 1.4.0, 1.4.1, 3.1.0, and 3.2.0. Due to some kind of problem on the database
> server,
Take care: "running 10 hours" != "idling 10 seconds" and trying again.
Those are different cases.
It is not dropping *used* connections (good to know it works that
well, thanks for reporting!), just not reusing connections that have been
idle for more than 10 seconds.
On Fri, Sep 2, 2011 at 10:26 PM, Gora Mohanty
Hi,
I am using the nightly build of Solr 4. Is there a way to let Solr
return just the frequency and offset of a particular word, for example
"war"? Right now I can only get the whole term vector for a field, which
causes a lot of overhead.
for instance, I am using"
http://localhost:/solr/select/
: when reloading a core, it seems that the execution of firstSearcher and
: newSearcher events will happen after the new core takes over from the
: old. This will effectively stall querying until the caches on the new
: core are warmed (which can take quite a long time on large
: installations
: i´m really new in Solr and have a question about the Solr replication.
: We want to use Solr in two data centers (dedicated fibre channel lane, like
: intranet) behind a load balancer. Is the following infrastructure possible?
:
: - one repeater and one slave per data center
: - the repeaters u
Hi Everyone,
I've got an Analysis question related to both Lucene and Solr (sorry for the
cross-posting).
I've created a custom analysis chain as part of a field type for the title field
in my schema representing Businesses.
I've created an additional field called title_sort where I copied the orig
looking at http://wiki.apache.org/solr/SpatialSearchDev
I would think I could index a lat,lon pair into a GeoHashField (that
works) and then retrieve the field value to see the computed geohash.
However, that doesn't seem to work. If I index: 21.4,33.5
The retrieved value is not a hash, but ap
On Fri, Sep 2, 2011 at 10:26 PM, Mattmann, Chris A (388J)
wrote:
> I'm left with childrenshospitallosangeles as a single token resultant from
> the chain.
> So, when I go to sort the titles in Solr, I use sort=title_sort asc, and I am
> getting all kinds of weird results when doing
> a query.
H
Hi Yonik,
On Sep 2, 2011, at 7:47 PM, Yonik Seeley wrote:
> On Fri, Sep 2, 2011 at 10:26 PM, Mattmann, Chris A (388J)
> wrote:
>> I'm left with childrenshospitallosangeles as a single token resultant from
>> the chain.
>> So, when I go to sort the titles in Solr, I use sort=title_sort asc, and
Thanks Simon, I did get that part; it was happening because Solr was not able
to reserve enough memory after it had hung once. The server has 24G of memory
and I am trying to start Solr with the "-Xms2g -Xmx16g -XX:MaxPermSize=3072m -D64"
options. But this is not my main concern: how do I find out why Solr h
Thanks for the guidance, but it did not work out. Although I am reading the
link you provided, could it be due to the write.lock file being created in
the "/index/" directory?
Please suggest.
- pawan
On Fri, Sep 2, 2011 at 6:34 PM, Michael Ryan wrote:
> > I have recently upgraded from Solr 1.4 to S
On Sep 2, 2011, at 8:53 PM, Mattmann, Chris A (388J) wrote:
>
> I think in spelling this out though, I might have elaborated my problem.
> Since
> the method I call in the constructor for my CombiningFilter is
> super(mergeStreamTokens(in))
> where mergeStreamTokens is a static method, I think
Rohit - for debugging hangs you can trigger a platform-specific dump and
analyze it.
On Sep 3, 2011, at 9:39 AM, "Rohit" wrote:
> Thanks Simon, did get that part, it was happening because solr was not able
> to reserve enough memory when it had hung once. The server has 24G of memory
> and